The Science-Politics Power Struggle

"When Science Meets Power" by Geoff Mulgan

Politicians often assume that “following the science” will help make choices more straightforward, public policy expert Geoff Mulgan writes in his book When Science Meets Power. “Yet my experience is that this is rarely the case,” he says. Inevitably, delving into a body of scientific research reveals it to be complicated, conflicting, and incomplete.

Mulgan, a professor at University College London (where I also teach), describes a “science-politics paradox”: breathtaking advances in science require governance to ensure those advances benefit society, but politics is unable to govern something so complex. The result is an uneasy power dynamic that Mulgan thinks should be better channeled.

Mulgan argues that science and politics need closer integration, which will require reinvention on both sides. As I understand his vision, national governments and the international community would be supported by a stronger knowledge infrastructure—a collection of bodies expert at providing the world with the right knowledge at the right time. Scientists also need to acknowledge their role in serving society, and politicians need to more systematically integrate research and other knowledge into governance.

Mulgan became interested in the “clashing logics of science and politics” during his years working in government, which included heading policy for UK prime minister Tony Blair. The Blair government was an early proponent of grounding policy in research, and Mulgan was tasked with developing evidence-based policy on issues such as climate change, crime, and drug addiction. But when his team brought in scientists to rapidly review the relevant evidence, they became so paralyzed by how much they didn’t know that they couldn’t advocate for a particular action. Meanwhile, policymakers just forged ahead, worryingly blasé about making decisions with limited knowledge. “This contrast between the ways of thinking stayed with me,” Mulgan writes. 

Mulgan marches briskly through several thousand years of science history to show how the relationship between rulers and research has evolved. Governments began directing science to meet their goals—for engineering bridges and winning wars—and have continued to fund research on the basis that it fuels economic growth and promotes the national interest. The power balance had shifted by the mid-1900s as it became clear that science and technology led to risks as well as transformative discoveries. Yes, there were benefits, such as vaccines and cars, but also dangers—pollution and nuclear war, for instance. Such concerns spurred international treaties as well as greater regulation and procedures to weigh these impacts, including risk assessments and ethical reviews.

And, as nations have become more dependent on scientific knowledge to solve problems like climate change, they have also found research harder to understand and manage. (Mulgan compares it to “steering a trolley with ever more items piled on top.”) The relationship keeps evolving. During the COVID-19 pandemic, some politicians looked to scientists to guide their response, while others confidently rejected what science showed. Scientists struggled to convey the uncertainties of research and to know where to draw the line between providing research findings and expressing an opinion on policy.

There are many signs that governments are increasingly turning to research evidence to guide policies. In 2018, US lawmakers passed the Evidence Act, which requires federal agencies to improve their efforts to evaluate whether and where policies work. The last couple of decades have seen a mushrooming of science advice systems for governments as well as other “knowledge brokers,” or bodies working to improve the use of research in policy. The field of international development policy, meanwhile, is being transformed by economists who use randomized trials to show experimentally which policies work to address poverty. And yet Mulgan says that there is more work to be done: science can no longer be seen as a simple pipeline of information into politics—“instead we need to interweave and synthesize the two.”

As a science journalist, some of my reporting over the last few years has focused on evidence synthesis—the important and often overlooked process by which researchers systematically assess entire landscapes of conflicting knowledge. This prevents people from being misled by a single study and knits different types of information together so it can be seen as a whole. For example, the United Nations’ Intergovernmental Panel on Climate Change (IPCC) attempts to synthesize studies on climate change, and the Cochrane Collaboration conducts systematic reviews of clinical trials to determine whether a treatment helps or harms.

Mulgan calls for more people and institutions with the relevant expertise to join knowledge synthesis efforts. One of his central arguments is that governments should apply metacognition, or thinking about how to think. A schoolchild who realizes that she learns spelling better with a mnemonic device rather than rote memorization is practicing metacognition. A government practicing metacognition would consciously recognize the best way to find knowledge needed to solve a particular problem and draw on a network of institutions to provide it—by synthesizing research evidence, say, or collecting the lived experiences of citizens.

On the flip side, Mulgan also highlights how surprising it is that society has not developed more efficient systems to reap the considerable benefits of scientific research while avoiding the harms. There are “remarkably few proposals for how to govern, shape, and guide powerful new fields,” he writes.

Researchers often argue that they are best placed to direct, judge, and govern their own work, but that only works up to a point. “It’s not obvious that [scientists] can be trusted to govern science, any more than the military can be put in charge of wars,” Mulgan writes. Serious discussions about how to regulate a new technology tend to occur only after it is racing around the world. A lot of talk from researchers about regulating gene editing didn’t stop the Chinese scientist He Jiankui from revealing in 2018 that he had edited babies’ genomes. And although AI leaders have talked about existential threats posed by AI and called for regulation, they’ve been short on concrete proposals—and some tech groups have protested the European Union’s Artificial Intelligence Act. So it makes sense for governments rather than researchers to govern science for the good of society, Mulgan argues. Governance is, after all, governments’ job.

One part of the book that I particularly liked highlighted the wide and seldom discussed disconnect between the research that is done (usually what interests researchers) and the research that societies want. This divide becomes obvious when, for example, groups undergo priority-setting partnerships, collaborative exercises in which patients and health professionals devise a list of questions they want answered. One such exercise on knee osteoarthritis showed that patients wanted research on physiotherapy and coping strategies, whereas 80% of clinical trials were on drugs. A vast amount of medical research is wasted because of this mismatch, and Mulgan rightly argues that scientists should engage more openly in democratic debate about research priorities. Scientists “will only be fully trusted if they are seen to care about the interests of the public,” he says.

One solution could lie in better integrating science advice and governance into global policymaking. (The United Nations announced the creation of a scientific advisory board in 2023.) Mulgan suggests a “global observatory for science and technology” that would assess where the world’s research and development budgets are going and whether they align with the global disease burden and the sustainable development goals. Such bodies would counter “the secrecy that surrounds R&D for military and intelligence purposes.” If the United Nations were invented today rather than in the 1940s, he suggests, then alongside the World Bank and related finance institutions, it would have bodies to help “mobilize knowledge of all kinds.”

He points to the IPCC, established in 1988, as the most visible example of an international body designed to synthesize scientific research that the world needs in order to tackle a shared problem—although some researchers now feel that assessing the vast global climate literature requires more rapid and systematic methods of evidence synthesis.

Mulgan’s book is itself a knowledge synthesis, and sometimes I wished he’d made it more like a pithy policy brief—with bullet points—than an academic tome that crams in the very impressive extent of his knowledge. A more concise summary would help further debate about his good ideas and how to put them into practice—which is a big task. None of it will happen if scientists, the public, and policymakers fail to challenge those who seek to undermine science entirely or twist it to support their purported truth. “In the face of these attacks it’s essential to be clear-headed and willing to fight,” he says.

How Can STEMM Do A Better Job of Caring for Its Caregivers?

Caregiving is a nearly universal human experience, but it’s not often thought of as an issue with implications for our nation’s science, technology, engineering, mathematics, and medicine (STEMM) enterprise. A new report from the National Academies of Sciences, Engineering, and Medicine, Supporting Family Caregivers in STEMM: A Call to Action, seeks to change that. In some academic STEMM environments, devoting time to care for family members is still seen as a taboo subject because it clashes with the idealized notion of scientists who focus exclusively on their work. The lack of legal and institutional support for caregivers drives many people to leave STEMM fields altogether. What can be done to change this inequity?

On this episode, Issues editor Sara Frueh talks to Elena Fuentes-Afflick, chair of the report committee and a professor of pediatrics and vice dean for the School of Medicine at Zuckerberg San Francisco General Hospital at the University of California San Francisco. Fuentes-Afflick talks about the pressures of balancing caregiving with a STEMM career; how complex and poorly implemented policies are hurting workers and the economy; and steps that the government, universities, and others could take to make a difference.

Transcript

Sara Frueh: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine and by Arizona State University.

Have you ever had to miss work to care for a loved one? Caregiving is a nearly universal human experience, but it’s not often thought of as an issue with implications for the nation’s STEMM enterprise. A new National Academies report, Supporting Family Caregivers in STEMM: A Call to Action, seeks to change that.

I’m Sara Frueh, an editor at Issues. I’m joined by Dr. Elena Fuentes-Afflick, chair of the report committee, and a professor of pediatrics and vice dean for the School of Medicine at Zuckerberg San Francisco General Hospital at the University of California San Francisco. Elena talks to us about the pressures of balancing caregiving with a career in academic STEMM; the impact that’s having on workers and on science overall; and steps that the government, universities, and others can take to make a difference.

Elena, welcome. Thank you for joining us.

Elena Fuentes-Afflick: Thank you, Sara, I’m glad to join you.

Frueh: I’d like to start talking today about what your study found in terms of the problems and barriers that caregivers in academic STEMM are facing now. In terms of laws and existing supports for caregivers or lack of supports, what does the current landscape look like?

Fuentes-Afflick: The committee considered the current landscape of legislative and other policies related to caregiving, and what we found was that while there are many supports and protections for caregiving, current policies are incomplete and fragmented, and this lack of cohesion creates confusion and makes it difficult for institutions to comply with the current laws. So we recommend a number of corrections or suggestions for these entities, but we were pleasantly surprised to find the scope and the type of supports that are in place. But we note that the communication is often difficult to find, the resources are hard to locate, and it’s difficult both for institutions and individuals to find information when they need it.

Frueh: Your report notes—I found this interesting—that there’s a high degree of non-compliance with the law across institutions. What types of laws aren’t they complying with, and what’s driving that? Is it just the complexity of all the different laws and rules, or what’s behind that?

Fuentes-Afflick: Part of the issue is that the applicable laws and policies depend in part on which constituent group we are referring to. So our focus was in academic STEMM and even within that community, we are talking about faculty; we’re talking about staff, students, postgraduate trainees; and in the case of medicine, residents and fellows. Each of those groups has slightly different policies and rules which apply and so it takes a very sophisticated level of knowledge and coordination to understand which policies apply and how to navigate them. So that’s part of what we mean by the fragmentation, that there isn’t a cohesive set of rules that applies to all people in academic STEMM. And we recommend that universities and other entities have a centralized resource with a specific person or office who can be a confidential resource when people have questions, and that that person be fully versed in the laws and policies to provide accurate and timely advice.

Frueh: Now, your report points to another problem, which is cultural. You point out that in some STEMM environments, it’s taboo to even bring up caregiving responsibilities and you talk about there being this ideal worker norm and expectation. I’m wondering if you can say more about what that ideal worker norm looks like on the ground for caregivers and how it affects them.

Fuentes-Afflick: The ideal worker norm is deeply embedded in the fields of STEMM. It is the prototype of someone who can be completely dedicated to their work in a STEMM field, who is available for round-the-clock work, who has no or minimal outside obligations. Clearly this prototype is difficult to achieve, but for someone who has caregiving responsibilities, there’s a direct conflict between requesting time or juggling caregiving responsibilities and the expectation of round-the-clock, nonstop availability and productivity. So while we recognize that this is a deeply embedded cultural norm, and that changing culture is very difficult, we have a number of suggestions for how institutions, and individuals in leadership roles in particular, can begin to identify this norm and try to change the culture to make it less reliant on the ideal worker as the ideal and more accepting of balancing caregiving and other responsibilities with one’s professional obligations.

Frueh: I’m wondering if during your career in academic medicine you’ve run into some of these issues. While you and the committee were exploring this topic and writing the report, were there particular things that resonated with you or made you think about your own experiences?

Fuentes-Afflick: As an academic physician, I’ve juggled caregiving responsibilities throughout my career as a faculty member. I have two children, and when they were born, our policies were not as generous as they are now. And so I’ve always been cognizant of balancing personal responsibilities with work. Although we identified the ideal worker norm as a concept and we discuss it in the report, it’s often much more subtle in the way that it plays out. I didn’t have the term “ideal worker norm,” but I was conscious of a high expectation of availability for clinical work or other research responsibilities. And it’s always been a struggle. I’m very glad that now we have better terminology. We have better policies so that we can talk about caregiving responsibilities without violating laws. We can talk about how we can adapt our expectations and how we can adapt our schedules to be more accommodating by focusing on the work that needs to be done, while being more realistic about the other responsibilities that people bring to their work.

Frueh: I want to go a little deeper into what this looks like in the real world for people who are trying to do a good job and progress in their careers when they run into these types of barriers. What effects is it having on them and their careers?

Fuentes-Afflick: What we know—both from research as well as the interviews that we conducted as part of this study—is that people experience a great deal of stress. We also know that from the published research that the birth of a first child is a moment of great vulnerability and a high proportion of both women and men in STEMM fields leave the field entirely at that time. We don’t know all the reasons why people leave the field, but one might speculate given the timing that it is that challenge of integrating a new set of caregiving responsibilities with the professional responsibilities that becomes either untenable, too stressful, unaffordable or somehow doesn’t work anymore, and people leave the field. So that’s a real challenge for our profession when we want to encourage people to pursue careers in STEMM, and we want to make the workplace inviting and receptive and accommodating to their needs.

So we know that there are major challenges, some of which people are even reluctant to share with their supervisor. It’s sad to think about someone leaving the field without ever really asking for help or accommodation, but sometimes people don’t know whom to ask, they receive incorrect information, or they just assume that no one will be accommodating and decide to leave. So we have a lot to do, but we also identified in our report and through our committee process that there are institutions that are undertaking innovative pilot programs. These include programs such as team-based science or team-based teaching, so that the responsibilities don’t fall to only one person. There are programs called time banking, where if you step up to the plate and help out a colleague, you can receive a kind of in-kind benefit in a different form. There are re-entry programs for people who take time away for caregiving responsibilities and then need a bit of a refresher or retooling, so that’s a way to retain people in the workforce.

And then we recognize, as you said in your question, that the STEMM workplace has high expectations. We expect people to be innovating, to be creating new knowledge, disseminating that, and that can be very stressful without caregiving responsibilities, but that our tenure process is often very rigid, very unforgiving, and some institutions have developed ways of adapting their tenure review and advancement review process to take more consideration of those who have caregiving responsibilities. So those are examples of innovative practices that we recommend institutions consider.

Frueh: I want to go back to something you said before about this affecting both women and men. And I know the report notes that these issues disproportionately affect women, and I’m wondering if you can talk a little bit about that, like the impact that caregiving and stresses around caregiving have on women’s ability to participate in the STEMM workforce?

Fuentes-Afflick: We know that women are very interested in STEMM fields. When we look at survey results on children’s and adolescents’ interest in STEMM fields, there’s widespread interest, and it continues along the continuum into college. There’s a great deal of interest among women and men, but we note that there is attrition as one proceeds along that professional pathway. Part of that, we believe, is caregiving, and these are societal norms that place a disproportionate burden on women, particularly women of color.

And our committee took a very expansive interpretation of what caregiving means. It is caregiving in the perhaps traditional sense, but it also involves transportation and financial management. It does not just involve children; there are elder care aspects to it. And it’s not just one’s nuclear family; there are other forms of caregiving for people outside the nuclear family. So we considered all those dimensions of caregiving. And again, society has a norm that places a disproportionate burden on women. But in our committee deliberations, we noted that these expectations also have a disproportionate impact on women of color and on ethnic minority and LGBTQ communities, where the definition of family or the scope of caregiving is often broader than it is in the majority population. So it’s an important issue, but it doesn’t apply equally to different groups, and it’s very important from our committee’s perspective to take what we call an intersectional lens to look at the way that different identities such as race, ethnicity, and gender affect caregiving expectations and experiences.

Frueh: It’s clear that this is affecting individuals and individuals in some groups disproportionately. I’m wondering if you can talk a little bit about the impact this lack of support for caregiving has on the workforce and the nation’s scientific capabilities? What’s at stake here for the rest of the country?

Fuentes-Afflick: The committee strongly believes that supporting caregiving represents a strategic investment in the labor force and is an important aspect to addressing and advancing equity. So we see caregiving as a major national priority. And that is based on our finding that lack of support for caregiving results in labor force issues like reduced participation in the labor force, reduced earnings and retirement savings for those who either cut back their labor force participation or drop out completely. And for those who remain and are juggling either the ideal worker norm or other pressures, they may experience reduced career opportunities or career growth. So each of these dimensions has a negative impact on individuals’ career trajectory. And when you consider the fact that caregiving as an experience is extraordinarily widespread in our society, you can understand that collectively there is a major impact.

Frueh: We’ve talked a little bit about the problem and its impacts, and I kind of want to switch to speaking a little bit more about solutions to this. And you noted that the committee really is encouraging innovative solutions and gave some great examples of that. And in addition to trying to think creatively and come up with other solutions, are there things that all academic institutions should be doing as a baseline to support caregivers?

Fuentes-Afflick: Our committee made some global recommendations and then some specific to colleges and universities. What we made as a global recommendation is the development of a centralized resource to which people can turn. That includes clear and easily accessible written communication about policies and procedures. We recommend that these be universal so that they apply to faculty, staff, and students, the various constituencies. And we recommend that they be opt-out rather than opt-in. Opt-in requires an affirmative choice that you want to participate in some of these flexible options or programs, whereas opt-out means the expectation is that you will participate and you have to actively disenroll.

We recommend that very strong protections be put in place against discrimination and bias; we heard in our interviews that people are sometimes very afraid to bring forward even a request for caregiving accommodation for fear of retribution. And we recommend that affordability and access be directly addressed by institutions, because the cost of caregiving services is often high and puts it out of reach, particularly for some of the earlier-career groups.

For colleges and universities specifically, we made some recommendations and these include ensuring compliance with legal requirements. We spoke earlier about the fragmented and the piecemeal nature of laws and policies, and so colleges and universities should be sure that they are complying with everything that is required of them. We also recommend that they provide 12 weeks of paid caregiving leave for all employees and leave for students that allows them to maintain their student status. Sometimes students are given accommodations, but they have to step away from their role as students to accept them and then that puts them behind in their educational progress. So we are encouraging colleges and universities to ensure that their policies continue to support students in their student role.

We recommend that colleges and universities institutionalize opportunities for flexibility in the location, time, and work intensity associated with employment. We didn’t focus specifically on the pandemic, but we know that the pandemic has encouraged us to be more flexible in our thinking about how and where and when we work. So we recommend institutionalizing opportunities for flexibility. We recommend providing centralized resources to support caregiving needs. We recognize that colleges and universities are major employers, and as large employers they have opportunities to create resources. We recommend that they collect and analyze data on caregivers and on the impact of the policies that they institute. Colleges and institutions are their own labs, if you will, and they can implement policies and see who takes advantage of them and what impact they have. And then we encourage them to take a scholarly approach and pilot innovative practices and evaluate them to understand their impact.

Frueh: I’m wondering about the culture change piece. With topics such as sexual harassment in academia and diversity and inclusion, one thing that often comes up is that culture change is both really important and really hard, and I’m wondering if you can talk about that in the context of caregiving. How can schools and individuals who are part of this system start to shift to a more supportive culture for caregivers? Who needs to start that and who needs to be involved in it?

Fuentes-Afflick: This is a really important area and one on which we did not find a great deal of research. So it comes more in the category of sharing practices and sharing experiences around the committee table. What we discussed was that the leader’s example can be very powerful. So if a major leader at a university says that they are taking caregiving leave either for the birth of a child, for example, or an elder care issue, and if they share their experience that they are stepping away, that can be a very powerful example to the remainder of the community.

Now, one can’t force that, it has to be authentic. But I will say that from my own experience, when my kids were young and I was a junior faculty member, I often felt that I had to be a little on the down-low about any kind of leaving early or taking them to a pediatric appointment or whatever because we didn’t have a vocabulary for that. And I’m very proud that although our culture still needs to change, I believe that it is changing and that we will continue to change. So I encourage leaders to the extent that they feel comfortable talking about their own personal experience with it. We believe that that is one important way to begin to change culture. But clearly understanding and following the policies and laws is another important way to shift some of those cultural expectations.

Frueh: What about other parts of the STEMM ecosystem? Are there things that policymakers and research funders should be doing to make sure that caregivers in STEMM are better supported?

Fuentes-Afflick: Our committee considered these various entities because we recognize that the educational institutions are one part of the STEMM ecosystem, but as you know, there are many others that have an important role in either the policies that we follow or the way that we can implement flexibility. So we see a role for the federal government, and we recommend that the federal government enact legislation that mandates 12 weeks of paid comprehensive caregiving leave. We believe that would be a major advancement for our country. We also were inspired by the recent CHIPS and Science Act which requires that institutions applying for funding under that act must provide on-site childcare. That we see as a very innovative practice, and we recommend that the federal government consider adopting that requirement for other federal opportunities.

For funders more generally, private and public, we recommend that they support flexibility in some of their deadlines. For example, they may define a junior faculty award very narrowly, say, as covering the first three years of a faculty career. Well, what happens if you’ve taken a caregiving leave during that period? We recommend that funders consider a bit of flexibility if a caregiving issue has disrupted that zero-to-three-year window. We also recommend that funders provide support for caregiving needs that arise during the granting period, assist in developing reentry programs for grantees, and fund innovative scholarship on family caregiving, such as research on the policies that best support reentry programs. There are a number of important research topics that remain in need of additional study.

Frueh: While you were conducting this study, thinking about both the problem and solutions and examining both parts of that, did you run into anything that surprised you?

Fuentes-Afflick: I would say that nothing really was a surprise, but I will say it was hard for me, when we were reviewing the interviews that were conducted, to hear people speak so painfully about their fears of requesting accommodation, their fears of retaliation, their fears of discrimination. We talked in our committee about the maternal wall bias, biases that assume that professional mothers are less committed to their work or less committed to being a good employee if they are juggling caregiving responsibilities. These are norms and biases that I’m familiar with, but I guess I had hoped that in 20 or 30 years they might be less prevalent. So it was sad for me to realize that these are ongoing timely issues even now, and so we need to focus on this and acknowledge it and try to develop policies and programs to address it.

Frueh: So given that it has been a problem that’s been so persistent and things haven’t entirely changed in the last 20 years, how optimistic are you or aren’t you that this can really change in the near future and that the STEMM community, policymakers, everyone can start to see this for the important issue that it is and act like it? And if you are hopeful, where does that hope lie?

Fuentes-Afflick: I am very hopeful about making meaningful progress on these issues. First of all, I think we have strong data, and while we need further research, we have a good amount of solid data that we can use to identify challenges as well as best practices and innovative programs. So we need to continue to build on that. We also, I believe, understand more clearly the economic impact that caregiving has on individuals, institutions, and also our whole country. We have significant problems in STEMM that we need to address and we need everyone to engage in that. We need them to participate to create solutions. We can’t afford to lose people. So I hope that the policy imperative is clear.

We have significant problems in STEMM that we need to address and we need everyone to engage in that. We need them to participate to create solutions. We can’t afford to lose people.

I also am very encouraged in my own field of medicine, and as I work with junior faculty and trainees, they bring an urgency around these issues. They speak up, they talk about what they need. They may be somewhat reluctant because of fears of discrimination or retaliation, but they still speak up and they push us to be better. I’m encouraged by the bravery that they often show, and so I’m optimistic that we are creating momentum for change and that we will see meaningful improvement.

Frueh: Is there anything that you think is really important about this report that we haven’t covered so far that you want listeners to know about?

Fuentes-Afflick: What I want to be sure to convey is that we consider caregiving a universal phenomenon. Certainly when we were little, we needed caregiving. If you become a parent, you get firsthand experience. If you have parents for whom you are caring, you understand that. Sometimes you’re caring for a spouse, sometimes it’s a sibling or a neighbor or some other person. So caregiving is really an equalizing experience over our lives. But also, even if one is not a direct caregiver, the caregiving experience impacts families and couples and communities. It is truly an issue that should matter to all of us in our professional and in our personal lives, and we can all speak up and advocate for policies, for individuals, and for innovation. I think it truly touches everyone, and I hope that everyone will be engaged in the effort.

Frueh: Thank you. That was so helpful. I appreciate your delving into this issue, and that is all the time we have. So I just want to thank you for joining us, Elena.

Fuentes-Afflick: Thank you so much for the invitation.

Frueh: Check out our show notes to find links to the report, Supporting Family Caregivers in STEMM: A Call to Action, and other resources. And if you’re listening to this, you’re probably passionate about science policy. Please visit issues.org/survey to participate in our survey of the science policy community.

Please subscribe to The Ongoing Transformation wherever you get your podcasts. Thanks to our podcast producer, Kimberly Quach, and our audio engineer Shannon Lynch. I’m Sara Frueh, an editor at Issues in Science and Technology. Thank you for listening.

Inviting Civil Society Into the AI Conversation

Karine Gentelet’s proposals for fostering citizen contributions to the development of artificial intelligence, outlined in her essay, “Get Citizens’ Input on AI Deployments” (Issues, Winter 2024), are relevant to discussions on the legal framework for AI, and deserve to be examined. For my part, I’d like to broaden the discussion on ways of encouraging the contribution of civil society groups to the development of AI.

The amplification of existing social inequalities, or the emergence of new ones, is one of the fears of those calling for more effective oversight of AI. How can we prevent AI from having a negative impact on inequalities, and why not encourage a positive one instead?

Involvement of civil society groups, notably from the community sector, that work with impoverished, discriminated, or vulnerable populations in consultations or deliberations about AI and its governance is currently very marginal, at least in Quebec. The same holds true for the involvement of individuals within these populations. But civil society groups, just like people, can be affected by AI—and as drivers of social innovation, they can also make positive contributions to the evolution of AI.

Even more concretely, the expertise of civil society groups can be called upon at various stages in the development of AI systems. This may occur, for example, in analyzing development targets and possible biases in algorithm training data, in testing technological applications against the realities of marginalized populations, and in identifying priorities to help ensure that AI systems benefit society. In short, civil expertise can help identify issues that those guiding AI development at present fail to raise because they are far too remote from the realities of marginalized populations.

The expertise of civil society groups can be called upon at various stages in the development of AI systems.

Legal or ethical frameworks can certainly make more room for civil society expertise. But for civil society groups to play their full role, they must have the financial resources to develop their expertise and dedicate time to studying certain applications. Yet very often, these groups are asked to offer in-kind contributions before being allowed to participate in a research project!

And beyond financial challenges, some civil society groups remain out of the AI conversation. For example, the national charitable organization Imagine Canada found that 61% of respondents to a survey of charities indicated that they didn’t understand the potential applications of AI in their sector. The respondents also highlighted the importance of and need for training in AI.

Legislation and regulation are often necessary to provide a framework for working in or advancing an industry or sector. However, other mechanisms—including recourse to the courts, research, journalistic investigations, and collective action by social movements or whistleblowers—can also contribute significantly to the evolution of practices and respect for the social consensus that emerges from deliberative exercises. Efforts of this kind concerning AI are still very fragmentary.

Executive Director

Observatoire Québécois des Inégalités

Montréal, Québec, Canada

Existing approaches to governance of artificial intelligence in the United States and beyond often fail to offer practical ways for the public to seek justice for AI and algorithmic harms. Karine Gentelet correctly observes that policymakers have prioritized developing “guardrails for anticipated threats” over redressing existing harms, especially those emanating from public-sector abuse of AI and algorithmic systems.

This dynamic plays out every day in the United States, where law enforcement agencies use AI-powered surveillance technologies to perpetuate social inequality and structural disadvantage for Black, brown, and Indigenous communities.

Police departments routinely use historically marginalized communities as testing grounds to experiment with controversial AI and big data surveillance technologies such as facial recognition, drone surveillance, and predictive policing. For example, reporters at WIRED magazine found that nearly 12 million Americans live in neighborhoods where police have installed AI audio sensors to detect gunshots and collect data on public conversations. They estimate that 70% of the people living in those surveilled neighborhoods are either Black or Hispanic.

As Gentelet notes, existing AI policy frameworks in the United States have largely failed to create accountability mechanisms that address real-world harms such as mass surveillance. In fact, recent federal AI regulations including Executive Order 14110 have actually encouraged law enforcement agencies “to advance the presence of relevant technical experts and expertise [such] as machine learning engineers, software and infrastructure engineering, data privacy experts [and] data scientists.” Rather than redress existing harms, federal policymakers are laying the groundwork for future injustice.

Police departments routinely use historically marginalized communities as testing grounds to experiment with controversial AI and big data surveillance technologies.

Without AI accountability mechanisms, advocates have turned to courts and other traditional forums for redress. For example, community leaders in Baltimore brought a successful federal lawsuit to end a controversial police drone surveillance program that recorded the movements of nearly 90% of the city’s 585,000 residents—a majority of whom identify as Black. Similarly, a coalition of advocates working in Pasco County, Florida, successfully petitioned the US Department of Justice to terminate federal grant funding for a local predictive policing program while holding school leaders accountable for sharing sensitive student data with police.

While both efforts successfully disrupted harmful algorithmic practices, they failed to achieve what Gentelet describes as “rightful reparations.” Existing law fails to provide the structural redress necessary for AI-scaled harms. Scholars such as Rashida Richardson of the Northeastern University School of Law have outlined what more expansive approaches could look like, including transformative justice and holistic restitution that address social and historical conditions.

The United States’ approach to AI governance desperately needs a reset that prioritizes existing harm rather than chasing after speculative ones. Directly impacted communities have insights essential to crafting just AI legal and policy frameworks. The wisdom of the civil rights icon Ella Baker remains steadfast in the age of AI: “oppressed people, whatever their level of formal education, have the ability to understand and interpret the world around them, to see the world for what it is, and move to transform it.”

Senior Policy Counsel & Just Tech Fellow

Center for Law and Social Policy

Drowning in a Mechanical Chorus

In her thoughtful essay, “How Generative AI Endangers Cultural Narratives” (Issues, Winter 2024), Jill Walker Rettberg writes about the potential loss of a beloved Norwegian children’s story alongside several “misaligned” search engine results. The examples are striking. They also point to even more significant challenges implicit in the framing of the discussion.

The fact that search results in English overwhelm those in Norwegian, which has far fewer global speakers, reflects the economic dominance of the American technology sector. Millions of people, from Moldova to Mumbai, study English in the hope of furthering their careers. English, despite, and perhaps because of, its willingness to borrow from other cultures, including the Norse, has become the de facto lingua franca in many fields, including software engineering, medicine, and science. The bias toward English in the search therefore reflects the socioeconomic realities of the world.

Search engines of the future will undoubtedly do a better job in localizing the query results. And the improvement might come exactly from the kind of tightly curated machine learning datasets that Rettberg encourages us to consider. A large language model “trained” on local Norwegian texts, including folk tales and children’s stories, will serve more relevant answers to a Norwegian-speaking audience. (In brief, large language models are trained, using massive textual datasets consisting of trillions of words, to recognize, translate, predict, or generate text or other content.) But—and here’s the crucial point—no amount of engineering can make a model more fair or more equitable than the world it is meant to represent. To improve it, we must improve ourselves. Technology encodes global politics (and economics) as they are, not as they should be. And we humans tend to be a quarrelsome bunch, rarely converging on the same shared vision of a better future.

No amount of engineering can make a model more fair or more equitable than the world it is meant to represent. To improve it, we must improve ourselves.

The author’s conclusions suggest we consider a further, more troubling, aspect of generative AI. In addition to the growing dominance of the English language, we have yet to contend with the increasing mass of machine-generated text. If the early large language models were trained on human input, we are likely soon to reach the point where generated output far exceeds any original input. That means the large language models of the future will be trained primarily on machine-generated inputs. In technical terms, this results in overfitting, where the model follows too closely in its own footsteps, unable to respond to novel contexts. It is a difficult problem to solve, first because we can’t really tell human and machine-generated texts apart, and second, because any novel human contribution is likely to be overwhelmed by the zombie horde of machine outputs. The voices of any future George R. R. Martins or Toni Morrisons may simply drown in a mechanical chorus.

Will human creativity survive the onslaught? I have no doubt. The game of chess, for example, became more vibrant, not less, with the early advent of artificial intelligence. The same, I suspect, will hold true in other domains, including the literary—where humans and technology have long conspired to bring us, at worst, countless hours of formulaic entertainment, and, at their collaborative best, the incredible powers of near-instantaneous translation, grammar checking, and sentence completion—all scary and satisfying in any language.

Associate Professor of English and Comparative Literature

Columbia University

How to Build Less Biased Algorithms

In “Ground Truths Are Human Constructions” (Issues, Winter 2024), Florian Jaton succinctly captures the crucial importance of the often-overlooked aspects of human interventions in the process of building new machine learning algorithms through operations of ground-truthing. His observations summarize and expand his previous systematic work on ground-truthing practices. They are fully aligned with the views I have developed while researching the development of diagnostic artificial intelligence algorithms for Alzheimer’s disease and other, more contested illnesses, such as functional neurological disorder.

Much of the current critical discourse on machine learning focuses on training data and their inherent biases. Jaton, however, fittingly foregrounds the significance of how new algorithms, both supervised and unsupervised, are evaluated by their human creators during the process of ground-truthing. As he explains, this is done by using ground-truth output targets to quantify the algorithms’ ability to perform the tasks for which they were developed with sufficient accuracy. Consequently, the accuracy thus assessed is not an objective measure of the algorithms’ performance in real-world conditions but a relational and contingent product of tailor-made ground-truthing informed by human choices.

Even more importantly, shifting the focus to how computer scientists perform ground-truthing operations enables us to critically examine the processuality of the data-driven evaluation as a context-specific sociocultural practice. In other words, to understand how the algorithms that are increasingly incorporated across various domains of daily life operate, we need to unpack not only how their specific underlying ground truths have been constructed but also how such ground truths have been operationally deployed from case to case.

We need to unpack not only how their specific underlying ground truths have been constructed but also how such ground truths have been operationally deployed from case to case.

I laud in particular Jaton’s idea that we humanities scholars and social scientists should not stop at analyzing the work of computer scientists who develop new AI algorithms but should instead actively build new transdisciplinary collaborations. Based on my research, I have concluded that many of computer scientists’ decisions on how to build and deploy ground-truth datasets are primarily driven by pragmatic goals of solving computational problems and are often informed by tacit assumptions. Broader sociocultural and ethical consequences of such decisions remain largely overlooked and unexplored in such constellations.

In future transdisciplinary collaborations, the role of humanities scholars could be to systematically examine and draw attention to the otherwise overlooked sociocultural and ethical implications of various stages of the ground-truthing process before their potentially deleterious consequences become implicitly built into new algorithms. Such collaborative practices require additional time investments and the willingness to work synergistically across disciplinary divides—and are not without their challenges. Yet my experience as a visual studies scholar integrated into a transdisciplinary team that explores how future medical applications of AI could be harnessed for knowledge production shows that such collaborations are possible. In fact, transdisciplinary collaborations may indeed be not just desirable but necessary if, as Jaton suggests, we want to build less biased and more accountable algorithms.

Postdoctoral Researcher, Institute for Implementation Science in Health Care, Faculty of Medicine, University of Zurich

Visiting Researcher, Department of Social Studies of Science and Technology, Institute of Philosophy, History of Literature, Science, and Technology, Technical University Berlin

Boost Opportunities for Science Learning With Regional Alliances

By many metrics, Tennessee struggles in science education. The number of candidates finishing teacher preparation programs fell nearly 40% from 2014 to 2020; if that trend continues, the state will produce no new teachers by 2030. But there are lessons in the Volunteer State that other regions could learn from. For example, in southeastern Tennessee, an alliance of local schools, businesses, universities, and other groups has come together to improve science teaching and learning. Volkswagen, a member of the local alliance, provides the Chattanooga Fab Institute, where teachers can learn 3D printing, microcomputing, and other technologies, and then use these experiences, along with a technology lending library, to build skills and creativity in their classrooms. In partnership with the Public Education Foundation (PEF) of Chattanooga, the alliance supports a cohort of teacher fellows every year to work across community partners and within their schools and classrooms, thus building and reinforcing regional connections.

All of this work has helped reset school expectations such that schools involved in the alliance scored 76% on a 2020–21 Ready Graduate indicator report—in contrast to an average of 40% for other Tennessee schools. Although many districts were not initially strong in science education, researchers have found clear improvements. One teacher said the collective work of improvement “made me a better teacher and kept me in the classroom longer.” When Vanderbilt University researchers asked students in a small focus group whether the community supported their learning, one described how “cool” it was when people from local manufacturing companies visited their engineering classes. The cooperation found in southeastern Tennessee is also formalized and sustained through the PEF STEM Innovation Hub, which leads initiatives across the state.

Tennessee’s experience exemplifies an idea championed in the National Academies of Sciences, Engineering, and Medicine’s 2021 report Call to Action for Science Education: Building Opportunity for the Future. That report, which all three of us worked on, called for better, more equitable science education from kindergarten through postsecondary education (K–16). It emphasized the need to both prepare a workforce and build foundational science literacy for everyone, regardless of race, ethnicity, home language, geographic location, or financial circumstances. The report also identified a key strategy to reach that goal: regional alliances for STEM opportunity, in which K–12 schools, postsecondary institutions, informal education, business, industry, philanthropies, and other stakeholders all join forces to align local needs to local assets. In the years since the release of that report, we have engaged in conversations with a wide range of educators, community organizers, policymakers, and other stakeholders, which has provided compelling examples of how such alliances can come together to achieve tremendous advances. 

Science education for today’s priorities

This is not the first time that the United States has recognized a need to improve science education. The Sputnik moment of 1957, with calls for better science education, led the National Science Foundation (NSF) to exponentially increase funding for a strong science curriculum, which translated into new K–12 textbooks. The focus was on preparing the best and brightest for the growing science, math, and engineering workforce. Equity and inclusion were not priorities in the space race.

The fresh insight from Call to Action for Science Education is for regions to embed formal education within their own specific context, with an emphasis on access and opportunity.

A quarter century later, in 1983, the National Commission on Excellence in Education released A Nation at Risk: The Imperative for Educational Reform. Advances in the Japanese auto industry and declining academic performance among US students triggered the concern, but the report called for an educated citizenry, not just a scientific elite, to have a sound understanding of scientific thinking. It even briefly alluded to “the voluntary efforts of individuals, businesses, and parent and civic groups to cooperate in strengthening educational programs”—what we now call regional alliances. Still, the recommendations focused elsewhere: on improving curriculum, increasing student time for learning, and correcting the shortage of math and science teachers.

The fresh insight from Call to Action for Science Education is for regions to embed formal education within their own specific context, with an emphasis on access and opportunity. Across the nation, we have seen a path to achieve both an informed citizenry and capable workforce by recruiting local industry, community, and philanthropy into supporting science education and allowing learners’ experiences to be tailored to their local context. The best way to identify local priorities, secure local resources, and improve communities is to draw on connections across the breadth of community stakeholders such as local business, colleges, citizen groups, and both the formal and informal education sectors. We call these models Alliances for STEM Opportunity.

The potential for this regional model can be seen in the national Defense Science, Technology, Engineering, and Mathematics Education Consortium (DSEC). This consortium is part of the Department of Defense Education Activity, the umbrella agency that provides education for children of military members stationed on bases in the United States and abroad. It serves nearly 70,000 students in 160 schools, each of which partners with local businesses, industry, and postsecondary institutions. The alliances operate by five guiding principles: engage K–16 students in meaningful experiences, serve students who are underrepresented, connect learning to Department of Defense workforce needs, use DSEC as a lever to amplify the work of regional hubs, and use data to improve over time.

Though DSEC is a national organization, region-specific connections are key. The Dayton Regional STEM Center in Ohio connects 57 schools and community partners, including Wright-Patterson Air Force Base, which offers an Air Camp for students, and the center’s teachers benefit from its STEM Fellows program. Another hub, Center for Research on Educational Equity, Assessment, and Teaching Excellence, at the University of California, San Diego, has a range of distinctive features, including a summer math academy as well as student internships and apprenticeships with the Naval Information Warfare Systems Command. This school system placed well above the national average in the 2022 National Assessment of Educational Progress. It’s hard to know how much the regional alliance approach contributed to this performance, but the fact that a high-performing national school system has adopted the regional alliance approach may be its own endorsement.

Helping success breed success

A regional alliance can help strengthen teacher training and ensure that lessons are relevant to students’ lives as well as collect data to assess weaknesses and guide iterative improvement. The approach also strengthens communities by creating a more engaged citizenry and able workforce.

Although regional alliances will look very different in terms of specific priorities, resources, and projects, their initiatives tend to fall into a common set of actions.

Stable regional alliances help success breed success, particularly when they strengthen an educational continuum for K–16. High school students who go on to higher education generally do so close to home, so coordination between regional schools and postsecondary schools can help to better align curriculum, expectations, and other requirements. High school teachers involved in alliances will have a better sense of what higher-education instructors expect their students to bring into the classroom, and those instructors will have a better idea of what students have already been taught. Such coordination can keep students from feeling stranded as they move from high school into higher education. It can also create opportunities including dual enrollment in high school and community college, early college coursework, and more effective community college programs.

Engaging postsecondary schools is also an important strategy to counter teacher shortages. Labor markets for teachers are primarily local rather than national, so postsecondary schools should consider how they can help train enough teachers for their own communities. Links with local industry (say, practicums around mining, biotech, manufacturing, or agriculture) can produce further synergies, creating a workforce that keeps businesses in the region and jobs that keep young people from moving away.

The twin goals of equity and high-quality schooling are crucial for the economy and for society, and we think regional Alliances for STEM Opportunity are the best way to serve both. 

When these alliances are most successful, they provide students with a strong science education in their earliest years that can continue without disconnects as they move through the K–16 continuum. When students do not have access to robust science learning experiences in the elementary grades, they are not well positioned to pursue advanced science courses in high school. If students are not ready for those courses or their high schools do not offer them, it is harder for students to navigate STEM paths after high school graduation. Many students, particularly Black, Latino/a, and Indigenous students, as well as those living in poverty or rural areas, lack support to transition from high school to postsecondary science courses and may not know how to pursue their STEM-related career goals. Stronger alliances among local philanthropies, schools, postsecondary institutions, and industry can reduce these gaps to help keep students from falling through them.

Opening paths toward STEM careers has advantages that go beyond creating more scientists: those who venture down this path, or even know that the possibility exists, are more likely to view science as accessible and relevant to their world. A robust comprehension of science is crucial for individual, societal, and global well-being. All students deserve the opportunity to experience the wonder and joy that understanding the world around them can bring and to acquire skills in scientific thinking that enable them to participate in society and democracy. The twin goals of equity and high-quality schooling are crucial for the economy and for society, and we think regional Alliances for STEM Opportunity are the best way to serve both. 

Leaning into alliances

The United States’ K–12 and postsecondary education systems are decentralized, meaning there is no unifying driver to maintain quality and ensure equity. STEM alliances can elevate the importance of science in community conversations about education, the workforce, and quality of life. While it is crucial to ensure that the people closest to the work of learning and teaching are included—teachers, principals, district leaders, faculty, lecturers, department chairs, and students—the formal K–16 enterprise alone cannot provide all students with the broad STEM learning possibilities that should be available to them. An alliance’s collective understanding of education infrastructures, unique needs, and local assets can marshal the appropriate resources along the K–16 continuum, providing relevant context and richer experiences to motivate learners.

An alliance’s collective understanding of education infrastructures, unique needs, and local assets can marshal the appropriate resources along the K–16 continuum, providing relevant context and richer experiences to motivate learners.

One thing that we’ve noticed in our conversations is that it is crucial that Alliances for STEM Opportunity have a coordinating hub or formal convener to integrate efforts. More than a decade ago, Battelle’s nonprofit science education organization began supporting regional alliances in its home state of Ohio in collaboration with the Ohio Department of Education. It has since begun managing and supporting regional alliances in 20 states, often through a series of public-private partnerships. For example, the Tennessee STEM Innovation Network, created in 2010, supports eight regional alliances, including the one described at the beginning of this article. The opportunity to learn across states is facilitated through Battelle’s national STEMx network.

In Idaho, hubs were created by Idaho Business for Education, a group of regional businesses that helped launch STEM alliances in multiple regions of the state, around which communities can coalesce. Each alliance connects K–16 formal and informal education, industry, and other community leaders who support quality instruction, teacher education, and teacher professional development. The alliances explicitly prioritize reaching those most distant from STEM resources, using the local knowledge and expertise of hub partners to deliver programming. Idaho Business for Education’s board of directors includes representatives from Idaho National Lab, regional medical organizations, the mining industry, ranchers, and tribal councils.

Our handful of examples does not imply that every regional alliance needs to be sponsored by a larger organization—it simply represents the ones that are the easiest to find. Indeed, we suspect that many independent STEM alliances doing very good work may only be known within their own communities and may not use terms like “innovation hub” or “STEM Opportunity Alliance.”

Articulating a regional vision           

The first step of a regional STEM alliance is to articulate a vision for its community and develop a plan of action to meet it. Decades of improvement efforts show how important it is to have a vision that aligns policies with practices and focuses on areas most needing improvement, such as lack of access.

Importantly, efforts should aim to strengthen connections across the alliance; one-off initiatives are often less effective. For example, although classroom visits can be good outreach, scientists can be more effective by forging stable alliances between schools and other institutions rather than making isolated presentations in classrooms. Multiple, ongoing connections within an alliance are possible: scientists who work for local employers can share the kind of scientific knowledge and skills that students will need to work in their industry; formal and informal educators can draw on regional features to engage students in scientific practices and concepts; and postsecondary institutions can design pre-service teacher programs that prepare future teachers to understand how people learn, framed in the contexts of local schools.

Support for alliances

Regional alliances are powerful because education is a local issue in the United States. But successful widespread adoption of the alliance model will require support from communities outside of the education sector as well. States can provide infrastructure to support local or regional alliances, whether from state agencies or state-level organizations such as the Idaho example above, and philanthropies can provide financial support and foster links to their own networks.

The federal government also has multiple crucial roles to play in supporting alliances. The White House Office of Science and Technology Policy can further elevate the importance of science education by making regional learning through STEM Opportunity Alliances a key goal of fostering a STEM ecosystem rooted in equity, inclusion, and scientific excellence. Congress could increase attention to science in the next Elementary and Secondary Education Act reauthorization. The National Science Foundation can prioritize alliances in the research projects it funds. Another opportunity for support could come through the NSF-funded Regional Innovation Engines, launched in May 2022. These engines do not currently emphasize education, but they do engage other science stakeholders across sectors, which could help incubate people and efforts in nascent regional STEM alliances.

We believe that NSF Engines could be more valuable to a region, and enjoy greater community support and engagement, if they expanded to include support for the alliances outlined in the Call to Action for Science Education. For example, central Florida was awarded one of the 10 inaugural NSF Engine grants to build semiconductor capacity. To ensure the community has workers with the expertise to staff semiconductor plants and the companies that will service them, it needs to prioritize high-quality opportunities for student learning. Valencia College, a local postsecondary institution that primarily serves associate degree students, is well positioned to contribute. The college integrates seamlessly across K–16, serving as a strategic link between local schools and postsecondary institutions, specifically the University of Central Florida. The college also works with BRIDG, a public-private partnership that matches work in government and academic labs with industry needs, including workforce development. That experience connecting across sectors could apply in larger ways if linked to the innovation engine.

With wide regional variation in economies of innovation as well as in STEM education, creating sustained community support and engagement through Alliances for STEM Opportunity is a powerful way to bring about local improvements to science education. The federal government is well positioned to conduct research and provide resources, but regional alliances are essential to empower communities to find their most effective route to better science education. Importantly, they provide a venue for people to find common ground so that progress does not get lost to political polarization. Galvanizing regional STEM alliances offers a powerful lever to deliver better, more equitable K–16 science education in service of not just a competitive workforce, but also a better civic society. 

Celebrating the Centennial of the National Academy of Sciences Building

This is a special year for the National Academy of Sciences (NAS) as its beautiful headquarters at 2101 Constitution Avenue, NW, in Washington, DC, turns 100 years old. Dedicated by President Calvin Coolidge in April 1924 and designed by architect Bertram Grosvenor Goodhue, the building’s architecture synthesizes classical elements with Goodhue’s preference for “irregular” forms. It harmoniously weaves together Hellenic, Byzantine, and Egyptian influences with hints of Art Deco, giving the building a modern aspect—which is consistent with Goodhue’s assertion that it was meant to be a “modern and scientific building, built with modern and scientific materials, by modern and scientific methods for a modern and scientific set of clients.”

Goodhue, celebrated for his Gothic Revival and Spanish Colonial Revival designs, developed a late-career interest in Egyptian Revival architecture around the time that King Tutankhamun’s tomb was discovered. The NAS building’s design references ancient Egypt with its battered, or inwardly sloping, façade, giving the building an air of monumentality. It depicts the Egyptian god Imhotep, the Great Pyramid of Giza, the Museum of Alexandria, the ancient lighthouse on the island of Pharos, and hieroglyphic decorations. The structure reflects Goodhue’s distinctive aesthetic, and it also harmonizes with the nearby neoclassical Lincoln Memorial, which was under construction when the NAS building was planned.

A Tool With Limitations

In the Winter 2024 Issues, the essays collectively titled “An AI Society” offer valuable insight into how artificial intelligence can benefit society—and also caution about potential harms. As many other observers have pointed out, AI is a tool, like so many that have come before, and humans use tools to increase their productivity. Here, I want to concentrate on generative AI, as do many of the essays. Generative AI is a special kind of tool designed to improve human productivity, but like all tools it has limitations. Growth, innovation, and progress in AI are inevitable, and the essays provide an opportunity to invite professionals in the social sciences and humanities to collaborate with computer scientists and AI developers to better understand and address the limitations of AI tools.

The rise of generative AI, and public awareness of it, have been nothing short of remarkable. The generative AI-powered ChatGPT took only five days to reach 1 million users. Compare that with Instagram, which took about 2.5 months to reach that mark, or Netflix, which took about 3.5 years. Additionally, ChatGPT took only about two months to reach 100 million users, while Facebook took about 4.5 years and Twitter just under 5.5 years to hit that mark.

Why has the uptake of generative AI been so explosive? Certainly one reason is that it helps productivity. There is of course plenty of anecdotal evidence to this effect, but there is a growing body of empirical evidence as well. To cite a few examples, in a study involving professional writing skills, people who used ChatGPT decreased writing time by 40% and increased writing quality by 18%. In a study of nearly 5,200 customer service representatives, generative AI increased productivity by 14% while also improving customer sentiment and employee retention. And in a study of software developers, those who were paired with a generative AI developer tool completed a coding task 55.8% faster than those who were not. With that said, we are also beginning to understand the kinds of tasks and people that benefit most from generative AI and those that don’t benefit or may even experience a loss of productivity. Knowing when and why it doesn’t work is as important as knowing when and why it does.

Unfortunately, one of the downsides of today’s class of generative AI tools is that they are prone to what are called “hallucinations”—they output information that is not always correct. The large language model technology upon which the systems are based is good at producing fluent and coherent text, but not necessarily factual text. While it is hard to know how frequently these hallucinations occur, one estimate puts the figure at between 3% and 27%. Indeed, currently there seems to be an inherent trade-off between creativity and accuracy.

So we have a situation today where generative AI tools are extremely popular and demonstrably effective. At the same time, they are far from perfect, with many problems identified. Just as we drive cars and use the internet, there are risks, but we use these tools anyway because we decide the benefits outweigh the risks. Apparently people are making a similar judgment in deciding to use generative AI tools. With that said, it is critically important that users be well informed about the potential risks of these tools. It is also critical that policymakers—with public input—work to ensure that AI safety and user protection are given the utmost priority.

Professor Emeritus

Department of Computer Science

Southern Methodist University

The writer chaired a National Academies of Sciences, Engineering, and Medicine workshop in 2019 on the implications of artificial intelligence for cybersecurity.

The essays on artificial intelligence provide interesting and informative insights into this emerging technology. All new technologies bring both positive and negative results—what I have called the yin and yang of new technologies. AI will be no exception. Advocates for a new technology usually emphasize its advantages and dismiss consideration of possible adverse effects. It is only later, when the technology has been allowed to operate widely, that actual positive and negative effects become apparent. As Emmanuel Didier points out in his essay, “Humanity is better at producing new technological tools than foreseeing their future consequences.” The more disruptive the new technology, the greater will be its effects of both kinds.

With AI, the concern is not just bias and machine learning gone amok, the current criticisms levied against the technology. AI’s influences can go far beyond what we envision at this time. For example, users who rely on AI to produce outputs reduce their opportunities for growth of creative abilities and development of social skills and other functional capabilities that we normally associate with well-adjusted human adults. A graphic example of what I mean can be seen in a recent entry in the comic strip Zits, in which a teenager named Jeremy is talking with his friend. He says, “If using a chatbot to do homework is cheating … but AI technology is something we should learn to use … how do we know what’s right or wrong?” And his buddy responds, “Let’s ask the chatbot!” By relying on the AI program to answer their ethical quandary, the boys lose the opportunity to think through the issue at hand and develop their own ethos. It is not hard to imagine similar experiences for AI users in the real world who are otherwise expected to grow in wisdom and social abilities.

It will probably not be the use of AI in individual circumstances that becomes problematic, but the overreliance on AI that is almost bound to develop. Similarly, social media are not, by themselves, a bad thing. But social media have now overtaken a whole generation of users and fostered antisocial and asocial behaviors. The potential for similar negative outcomes when AI use becomes widespread is very strong.

Back when genetic modification was a new and potentially disruptive technology, it was foreseen as possibly dangerous to society and to the environment. In response, policymakers and concerned scientists put safeguards in place to prohibit the unfettered release of gene-edited organisms into the environment, as well as the editing of human germ cells that transfer genetic traits from one generation to the next. Most of these restrictions are still in effect. AI could possibly be just as disruptive as genetic modification, but there are no similar safeguards in place to allow us time to better understand the extent of AI influences. And it is not very likely that the do-nothing Congress we have now would be able to handle an issue as complex as this.

Professor Emeritus

Fischell Department of Bioengineering

University of Maryland

The Bondage of Data Tyranny

In “The Limits of Data” (Issues, Winter 2024), C. Thi Nguyen identifies key unspoken assumptions that pervade modern life. He skillfully illustrates the problems associated with reducing all phenomena to data and ignoring those realities that cannot be captured by data, especially when it comes to human beings. He identifies examples of how the focus on quantification frequently strips data of context and introduces bias in the name of objectivity. Here, I offer some thoughts that complement the essay’s essential points while approaching them from slightly different perspectives.

While forcing people into groups to enable better data collection may lead to unwanted outcomes, some social categorization is necessary. Society needs legal thresholds to enable the equal treatment of citizens under the law. Sure, there are responsible 15-year-old geniuses and immature 45-year-old fools, but society has to offer some reasonable, but ultimately arbitrary, dividing line in allowing people to vote, or drive, or drink, or serve in the army. The need to codify legal standards for society remains an imperative, but, as Nguyen argues, those standards need not be strictly quantitative.

The universal drive for quantification and reducing phenomena to data is driven by the architecture of the digital databases that process that data. Storing and analyzing the data demand that all information inputs be in a format that ultimately translates to 1s and 0s. This assumption itself, that all information is reducible to 1s and 0s, contains within it the conclusion that concepts, and by extension human thinking, can be reduced to binary terms. An attitude emerges that information that cannot be reduced to 1s and 0s is not worthy of attention. Holistic notions such as art, human emotion, and the soul must be either reduced to strict mathematical patterns or treated as a collection of examples from the internet or other databases.

A further motivation for the universal embrace of data and the fixation with quantification lies deep in the roots of Anglo-Saxon, and particularly American, culture. Early in the eighteenth century, the ideas of the British philosopher John Locke initiated a tradition that placed far greater value on practical facts that can be sensed (i.e., measured) rather than spiritual beliefs or cultural traditions that are the products of human reflection. By the end of the century, America’s founding fathers, including Benjamin Franklin and Thomas Jefferson, followed Locke’s tradition by emphasizing practicality and measurement. The advent of mass production and consumption—capitalism—only further sharpened the focus on the practical and obtainable. Entering the twentieth century, the great British physicist Lord Kelvin summed up his commitment to empiricism by declaring: “To measure is to know.”

Society leverages the power of current data processing technologies but is subject to their limits. An enduring fixation with data stems from modern beliefs about what type of knowledge is worthwhile. Freeing society from the bias and bondage of data tyranny will require responding to these deeply embedded technological and behavioral factors that keep society limited by contemporary data structures.

Senior Research Associate, Program for the Human Environment

The Rockefeller University

Harvesting Minnesota’s Wind Twice

The University of Minnesota West Central Research and Outreach Center (WCROC) is located in the city of Morris, in a region of the state where the winds howl across the plains of the Dakotas, bringing thunderstorms or arctic air, depending on the season. In 2003, the center received support from the Xcel Energy Renewable Development Fund to install a utility-scale wind turbine at its 1,100-acre research farm. But as the project ramped up, WCROC (where Reese works) faced a challenge—the local utility companies showed no interest in purchasing the power the turbine could produce. The center’s staff began exploring other opportunities to use and monetize the farm’s wind energy.

Producing nitrogen fertilizer, in the form of anhydrous ammonia, quickly rose to the top of the center’s list. There was an elegance to the concept. Most farmers rely on nitrogen fertilizer produced from fossil fuels to ensure their yields, but it is expensive. If those farmers could produce their own synthetic nitrogen fertilizer using wind power harvested on their land, they would gain an essential input for nourishing their crops while also benefiting the climate.

This idea laid the groundwork for WCROC’s Wind-to-Ammonia pilot project—the first venture of its kind. The research conducted through that effort has since led to an innovative partnership between WCROC, the Minnesota Farmers Union, clean energy advocates, commodity groups, and ethanol co-ops to support public investment in cooperatively owned ammonia production. Recent policies in the Inflation Reduction Act (IRA) and investments from the Department of Energy (DOE) Regional Clean Hydrogen Hubs program have improved the economic competitiveness of regionally produced “green” hydrogen.

In Minnesota, the emphasis on local ownership of these new production facilities provides an opportunity to change the dynamics in the fertilizer market, which now subjects farmers to volatile prices and frequent supply chain disruptions. Policies like the IRA that make green hydrogen economically feasible offer Minnesota’s farmer-owned cooperatives the opportunity to harness the region’s winds twice—for both energy and fertilizer—potentially building wealth in rural communities while lowering the carbon intensity of the region’s agricultural production. The pilot also shows how regional energy research and demonstration can develop uniquely local solutions that ensure the benefits of the green transition reach all corners of the country.

History of the Wind-to-Ammonia pilot

Western Minnesota has a unique history as one of the most productive agricultural landscapes in the United States, benefiting from deep prairie topsoil and plentiful water. Here, farmers raise cattle and poultry and grow corn, soybeans, sugar beets, wheat, and dry beans. As early as the 1880s, Minnesota farmers organized themselves against monopoly power through the Grange, which was founded in 1867 and became the first successful national farming organization in the United States, and the Minnesota Farmers Union (where Kagan works), which was founded in 1918. Farmers, many of them European immigrants, used cooperatives to build market power, buy inputs, and sell crops. But co-ops were also a way to build and maintain community and mobilize to counter the power of the railroad and grain monopolies. Minnesota’s agricultural cooperatives still rely on farmer members for leadership and governance to advance their goal of retaining wealth in rural communities. This long local history explains why cooperative ownership of the Wind-to-Ammonia pilot is so important to the partners involved.

When the pilot started, there were no synthetic nitrogen fertilizer producers in Minnesota. Even today, most of the world’s supply comes from China, Ukraine, the Middle East, India, and Russia, with some domestic production from states near the Gulf of Mexico. Fertilizer continues to be a large expenditure for farmers, with prices fluctuating widely from $400 per ton to nearly $1,600 per ton. Minnesota farmers buy around 800,000 tons of fertilizer per year—so localized renewable nitrogen production would significantly lower their overhead costs as well as their carbon footprint.

For WCROC, building the world’s first wind-to-hydrogen-to-ammonia pilot plant required overcoming multiple barriers involving costs and technology. At the time, few attempts had been made to use an intermittent source of electricity like wind to produce ammonia. Conventional production methods used hydroelectric power and natural gas; wind energy was still seen as too expensive.

In 2005, WCROC installed a 1.65 megawatt Vestas V82 wind turbine at the research farm. The state of Minnesota and the University of Minnesota then contributed $3.3 million to construct a hydrogen plant, commissioned in 2010, followed by an ammonia plant, which went online in 2013. The entire operation includes the turbine, three small buildings, and an ammonia storage tank. 

At the hydrogen plant, electricity generated by the wind turbine is used to electrolyze water (separating hydrogen from oxygen) and separate nitrogen from air. At the ammonia plant, the two pure gases—hydrogen and nitrogen—are then combined in a conventional Haber-Bosch process to produce anhydrous ammonia that is condensed and stored.
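For readers who want the chemistry, the two stages correspond to the standard textbook reactions (general stoichiometry, not project-specific figures):

```latex
% Electrolysis of water, powered by the wind turbine:
2\,\mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{H_2} + \mathrm{O_2}

% Haber-Bosch ammonia synthesis:
\mathrm{N_2} + 3\,\mathrm{H_2} \;\longrightarrow\; 2\,\mathrm{NH_3}
```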

The initial startup and operation of the facility was challenging, but it quickly provided critical information on operation, technical constraints, and the initial economics of small-scale green ammonia production. Once operational in 2013, the project could produce 3 kilograms of ammonia per hour, a small volume relative to the requirements of an average acre of corn, but the annual yield was enough to cover the research farm’s fertilizer needs and more. The remainder was shared with the farm’s local agriculture cooperative. With process improvements, supportive policy, and increases in electrolyzer capacity, the cost of producing ammonia at small scale soon reached a realistic range.

Collaboration with local scientists and engineers at the University of Minnesota has been key to the project’s success. Our colleagues within the university’s Department of Chemical Engineering and Materials Science have developed separation technologies that have greatly improved the overall efficiency and costs of the century-old Haber-Bosch process. They recently published a techno-economic and supply chain analysis to explore paths toward commercial deployment. Additionally, researchers from the Thomas E. Murphy Engine Research Laboratory in the Department of Mechanical Engineering developed an ammonia-fueled tractor and grain dryer, which we tested at WCROC’s research farm. We continue to find and demonstrate new ways to use ammonia, opening doors to more opportunities for commercial applications and meaningful use of clean energy.

What began as an elegant concept and a unique project at a rural agricultural experiment station in western Minnesota is now getting attention around the world. Today, the project demonstrates a vision of broader localized production of green ammonia to reduce the carbon intensity of synthetic nitrogen fertilizer production and to develop further industrial, agricultural, and energy applications for anhydrous ammonia. Here in Minnesota, we’re also seeing interest from other important sectors of the state’s economy, including the steel, mining, and shipping industries. The project shows that the low-carbon energy transition can bring meaningful benefits and real opportunities to rural communities.

Decarbonizing nitrogen fertilizer and agricultural production

The transition to green ammonia fertilizers could significantly lower the carbon intensity of farming and farm products. Traditional fossil fuel–based production of synthetic nitrogen fertilizer is responsible for roughly 2% of global greenhouse gas emissions, largely due to the massive energy requirements of the Haber-Bosch process. Using green ammonia to feed crops that are heavy fertilizer users—including corn and small grains such as oats, barley, and wheat—significantly reduces their carbon intensity without compromising productivity.

Likewise, using green ammonia as a fuel for grain dryers—the large burners and fans that reduce moisture in stored grains—also shows promise. Nitrogen fertilizer is responsible for 36.42% of the fossil energy of corn produced on WCROC’s land, while grain drying is responsible for 41.63%. When green ammonia is used for both fertilizer and dryer fuel, the fossil energy footprint is reduced by over 78%. 
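The arithmetic behind that combined figure is simply the sum of the two shares reported above, a back-of-the-envelope check rather than a full lifecycle analysis:

```python
# Shares of fossil energy in corn produced on WCROC's land,
# as reported in the article.
fertilizer_share = 0.3642   # nitrogen fertilizer
drying_share = 0.4163       # grain drying

# If green ammonia displaces fossil inputs for both uses, the fossil
# energy footprint falls by the sum of the two shares: just over 78%.
combined_reduction = fertilizer_share + drying_share
print(f"{combined_reduction:.2%}")
```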

Transitioning to green ammonia could also create opportunities for regional production facilities to produce essential products locally. In recent years, the biofuel industry has shown interest in reducing the carbon intensity of ethanol production. Building green ammonia plants near ethanol plants could reduce net emissions if biogenic carbon dioxide from ethanol production was used to produce urea fertilizer. Farmers tend to prefer urea to ammonia because the granular fertilizer is safer, easier to store, and can be applied to fields more quickly and easily. 

Beyond agricultural uses, green anhydrous ammonia has been promoted as a potential hydrogen carrier and energy storage option because it is significantly cheaper to store and transport above ground than hydrogen gas. In 2016, the DOE established the REFUEL program to pursue promising hydrogen carriers, and most of the research and development efforts have focused on the production of low carbon intensity anhydrous ammonia.

Still, green ammonia requires further research to reach its full potential. Today, considerable water usage is required for green ammonia production, and there are concerns about controlling nitrous oxide emissions (also a potent greenhouse gas) that necessitate further investment in research and testing. Although green ammonia may become a revolutionary fuel for shipping, mining, and other applications, today there is a clear benefit to small-scale, distributed, localized production models that are easily monitored and adapted to local constraints.

Who owns the green transition?

Until recently, commercializing wind-to-ammonia production faced significant hurdles, despite its benefits. Heavy up-front building costs coupled with the volatile conventional nitrogen fertilizer market simply didn’t pencil out. That fundamental calculation changed with the IRA’s 45V Clean Hydrogen Production Tax Credit, which provides up to $3 per kilogram of clean hydrogen produced, depending on the lifecycle greenhouse gas emissions rate. Additionally, the 2021 Bipartisan Infrastructure Law authorized the DOE’s Regional Clean Hydrogen Hubs program, which is investing $7 billion to grow the hydrogen economy across the United States. The Heartland Hydrogen Hub in North Dakota, South Dakota, and Minnesota will be developing the infrastructure needed to create and commercialize green ammonia and clean hydrogen in the region.
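To give a rough sense of scale, a hypothetical back-of-the-envelope estimate (our illustration, not a figure from the program): ammonia is about 18% hydrogen by mass, so at the full $3-per-kilogram rate the 45V credit could be worth on the order of $500 per metric ton of green ammonia, before any adjustments for the producer's actual emissions rate.

```python
# Hypothetical illustration: value of the 45V credit per metric ton of
# ammonia, assuming the full $3/kg hydrogen rate applies.
H_MASS, N_MASS = 1.008, 14.007             # atomic masses, g/mol
nh3_molar_mass = N_MASS + 3 * H_MASS       # ~17.03 g/mol
h2_mass_fraction = 3 * H_MASS / nh3_molar_mass  # ~0.178

kg_h2_per_ton_nh3 = 1000 * h2_mass_fraction     # ~178 kg hydrogen per ton
credit_per_ton = 3.0 * kg_h2_per_ton_nh3        # roughly $530 per ton
print(round(credit_per_ton))
```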

This influx of investment and development comes at a time when the domestic fertilizer industry is highly concentrated. With just four companies controlling 75% of the US market, consolidation leads to higher prices and fewer choices for farmers. For example, when Russia invaded Ukraine, fertilizer prices more than doubled. With few alternatives, farmers were forced to absorb the cost to ensure sufficient crop yields.

We see a huge opportunity to build this new clean hydrogen and green ammonia sector with farmer ownership at the center. During the 2023 state legislative session, a coalition of partners, including the Minnesota Farmers Union, WCROC, commodity groups, ethanol co-ops, and agricultural retailers, helped advance a pilot grant program for farmer cooperatives to buy equity shares in green fertilizer production facilities. The $7 million grant program, administered by the Minnesota Department of Agriculture, is the first of its kind in the nation to incentivize farmer ownership of green ammonia. State policymakers are enthusiastic about building a new sector in Minnesota that uses homegrown resources for a high-value market. Agricultural retail cooperatives that sell seeds, fertilizer, chemicals, and equipment are exploring the possibility of carrying green fertilizer products, while clean fuel markets—including sustainable aviation fuel—are driving biofuel companies to look at green hydrogen as a key part of their decarbonization strategy. 

From a co-op perspective, having a locally produced, fixed-price fertilizer to offer members is a valuable addition to a business plan. For farmers, using a low-carbon fertilizer can open new markets to reduce carbon emissions across the supply chain. And for fertilizer developers, having a relationship with farmer cooperatives means a guaranteed market that isn’t dependent on global market dynamics. We see green nitrogen fertilizer as the first rung on the green hydrogen development ladder, with possible high-value applications across industrial sectors such as steel and low-carbon fuels. The state’s pilot grant program for green fertilizer is one way to ensure that the benefits of these green investments stay rooted with farm families and the communities where they live.

Of equal importance is the role that green hydrogen can play in reducing the state’s greenhouse gas emissions. Climate resilience is a priority for Minnesota farmers on the front lines of the changing climate—as is finding new ways to manage their land and operations in the face of increased storms, droughts, and heat waves. Partnerships between farmers and energy and agriculture researchers, coupled with favorable federal and state policies, are creating new opportunities to reshape who benefits from the green transition.

Let Rocket Scientists Be Rocket Scientists: A New Model to Help Hardware Start-ups Scale

For hardware start-up companies, growth can be dangerous. Scaling up production, coupled with expanding physical space to meet quickly rising production targets, is a challenge unique to companies that make complex physical objects, such as solid rocket motors or lithium-ion batteries. For hardware companies supplying products to the defense and space industries, the squeeze is often more severe, exacerbated by the difficulty of obtaining the appropriate infrastructure for testing, prototyping, and manufacturing. This has real consequences for national security.

Consider the recent experience of a dual-use military and commercial robotics company in Texas. Rapid growth in demand from customers caused the company to expand so quickly that it needed to relocate three times in just three years. With each move, the company had to break lease agreements, pay substantial moving costs, and relocate heavy and difficult-to-calibrate pieces of machinery—all while navigating multiple disruptions to its production process.

Hardware start-ups face different challenges than software companies and firms in other industries; typically, they require long lead times to ensure robust development and significant capital investment to demonstrate market potential, making emerging companies less attractive to both potential investors and commercial landlords. In 2017, CB Insights looked at nearly 400 consumer hardware start-ups and found that they were half as likely to raise second-round funding as tech companies in general; by their fifth year, 97% had failed. New strategies are needed if we want to see more promising hardware companies succeed and reach their market potential.

Hardware start-ups face different challenges than software companies and firms in other industries.

The American Center for Manufacturing & Innovation (ACMI), where I am founder and CEO, is advancing a new industry campus-based model to minimize risk in the process of hardware scaling and help small businesses in critical industries establish secure supply chains within the United States. Our model aims to reduce hardware companies’ growing pains by building campuses where they and other members of their supply chain can grow together. Since 2022, we’ve been working with the Department of Defense (DOD) to overcome systemic barriers facing hardware start-ups with an eye toward maintaining a resilient and secure defense manufacturing ecosystem.

The goal of ACMI’s campus model, in essence, is to free up rocket scientists to be rocket scientists, rather than burdening them with other business tasks. Recently, we met with a solid rocket motor manufacturer that had struggled to acquire real estate for production facilities. That company finally resorted to selling equity in the business—an unwise move because unlevered returns for real estate development (the return a property investment produces for its owner if funded solely with equity) are roughly 70% less than the returns typically demanded by start-up company investors. Subsequently, the chief technology officer (CTO), a rocket engineer, was assigned to manage the development project, an area in which he had little experience. This situation exemplifies a typical predicament for companies at this stage, struggling with how to make the best use of scarce resources. With ACMI’s campus model, we could have helped the company find the right space while freeing its CTO to focus on rocket design and engineering.

Start-ups play a crucial role in security and defense supply chains

The DOD has long recognized the vital role of small business as a driver of innovation. In the wake of pandemic-driven supply chain disruptions, and in the face of escalating cybersecurity threats and active warfare in many parts of the world, helping small businesses in the defense sector bridge supply chain gaps and accelerate the deployment of new and innovative technology has become a major national security concern.

Since February 2022, Heidi Shyu, the under secretary of defense for research and engineering, has been focusing resources on creating a resilient defense industrial ecosystem. This strategy can help start-ups cross the divide from development to deployment in critical technologies, including advanced engineering materials, clean energy generation and storage, biotechnologies, semiconductors and microelectronics, directed energy, hypersonics, and space technologies and systems. The ecosystem approach is rooted in the recognition that early-stage technology development must be accompanied by investing in manufacturing and deploying new technology at scale.

In March 2023, William LaPlante, under secretary of defense for acquisition and sustainment, emphasized the need to help companies achieve production scale in a statement to the House Armed Services Subcommittee on Cyber, Innovative Technologies, and Information Systems: “A paradigm shift is required: we have traditionally thought about ‘innovation in technology’ whereas now we must think about innovation not only in prototyping but also in development and production as well. This means rethinking the intersection of traditional design and manufacturing phases; the more we can collapse the two together, the more successful we’ll be in accelerating capability delivery at scale.”

Early-stage technology development must be accompanied by investing in manufacturing and deploying new technology at scale.

In December 2022, the DOD established the Office of Strategic Capital to facilitate connections between the DOD and companies developing critical technologies with national security applications and to align funding sources across this network. In addition to the well-established and competitive Rapid Innovation Fund, DOD created the Rapid Defense Experimentation Reserve, a collaborative effort among military branches, industry, combatant commands, and joint partners to promote experimentation in new technologies to fill joint warfighting capability gaps. Another pilot program, called Accelerate the Procurement and Fielding of Innovative Technologies (APFIT), seeks to help companies that are developing innovative defense hardware and technology—especially small businesses and nontraditional defense contractors—fast-track development, production, and delivery.

These recently established resources, along with existing Small Business Innovation Research and Small Business Technology Transfer grants, provide a welcome amplification of government support for defense-related hardware start-ups. Standard technology incubator and accelerator programs are generally designed for software companies, but hardware start-ups face different challenges in finding the support and resources they need to scale up once they outgrow these programs. Government-funded programs can contain bureaucratic hurdles that limit innovation and extend timelines. For example, the Department of Energy’s Seeding Critical Advances for Leading Energy Technologies with Untapped Potential (SCALEUP) program has a requirement for companies to surrender some of their intellectual property rights, which can be a significant barrier for innovators who are also seeking commercial applications for their technology.

Amid renewed national focus on the innovation ecosystem, ACMI grew out of my 20-year career in financial services, where I recognized the enormous need for infrastructure and capital investment in the hardware sector. Later, I started a high-precision hardware company to address an unmet need in the specialty vehicle and motorsports industry, an enterprise that provided me with firsthand experience of the challenges of bringing a new hardware product to market.

Standard technology incubator and accelerator programs are generally designed for software companies, but hardware start-ups face different challenges in finding the support and resources they need to scale up once they outgrow these programs.

ACMI, through three affiliated companies, is working to create a holistic solution where hardware companies can grow. ACMI Federal, which is focused on managing government programs, supports the domestic supply chain and innovative commercial companies that seek to work with the federal government. ACMI Capital, which invests in early-stage companies, provides private investment and guidance to hardware technology companies during the critical period when they are scaling up. The third affiliate, ACMI Properties, develops shared industry campuses, providing a manufacturing-suitable scaling-up space—a foundational service for start-ups requiring shared infrastructure to foster innovation. 

The industry campus model

At ACMI, we developed our industry campus model to bring together start-up company tenants within a specific sector to share specialty infrastructure, be near their large corporate counterparts, and have access to facilities that are scalable and adaptable to their needs. Niche industry start-ups often require specialized spaces, which demand capital investment. Scale-up and production then require even more capital and expertise. Co-locating start-ups within a dynamic ecosystem that includes established manufacturers, technical experts, service providers, and other companies creates natural connection points for collaboration, joint ventures, rapid growth, and acquisitions. The campus model provides the benefits of vertical integration and efficient use of resources without losing specialization.

ACMI’s first DOD contract, in 2022, was for the Critical Chemical Pilot Program, which sought to generate a domestic manufacturing base for chemicals needed for munitions manufacturing. The two-year program, funded through the Defense Production Act Title III Program, aimed to use DOD funding to leverage private capital at a 10-to-1 ratio to support the effort. In October 2023, ACMI was awarded an extension of the program. The extension noted that the project had achieved a private-to-public funding ratio of 16 to 1 and that we anticipated eventually reaching a ratio of 25 to 1. In the pilot extension, ACMI will expand the number of chemicals and add new academic and commercial partners to the team.

During the extension, we intend to build upon our successes with domestic production of critical chemicals in the initial pilot program. For example, we worked with a commercial chemical company to produce a key chemical that has not been available domestically in nearly two decades. In another example, we worked to support the certification of a lower-cost, commercially available material for use in place of one that met military specifications but was not available domestically. And finally, we worked with a company on process innovation to enable future domestic production of several critical chemicals with fewer waste products.

The campus model provides the benefits of vertical integration and efficient use of resources without losing specialization.

Building on our experience with critical chemicals, ACMI is branching out with two new campuses to help hardware companies bridge the gap between traditional incubator or accelerator programs and full production scale. In September 2023, ACMI was competitively awarded a $75 million contract by the Office of the Deputy Assistant Secretary of Defense for Industrial Base Resilience, through its Manufacturing Capability Expansion and Investment Prioritization office, to establish a state-of-the-art munitions campus. The campus will foster innovation clusters in support of companies specializing in production at different points in the domestic supply chain. We are also working to develop a space systems campus to support commercial businesses in the space economy in NASA’s Exploration Park in Houston. In addition to supporting start-ups, we intend to make these campuses attractive places for workforce development of specialized talent and powerful engines for regional economic growth.

Establishing a vibrant manufacturing ecosystem

ACMI’s industry campus model aims to provide emerging companies with the structure and support they need to achieve production scale. Instead of focusing on individual companies and innovators one at a time, our regional hubs have the potential to move an entire industry forward. The intention of ACMI industry campuses is to act as a force multiplier, accelerating growth for individual companies and maximizing the impact of both private and government investment while also spurring job creation and economic development.

The US innovation ecosystem is a vibrant cauldron. Harnessing its energy—and mitigating its risks—requires strategic, operational, and financial expertise as well as a high degree of collaboration. Although some nations rely on substantial government funding for technological innovation, as seen in the East Asian chip industry, the campus model leverages modest federal funds to gain a more sizable investment from aligned private capital sources. By taking advantage of these uniquely American, market-driven resources, ACMI is establishing new ways to efficiently transition critical technologies from laboratories to end users. This approach aims to rebuild the US industrial base organically by cultivating a dynamic, domestic defense manufacturing ecosystem.

Although ACMI’s initial award successes have primarily revolved around DOD priorities, the versatility and benefits of the industry campus model may prove to be fruitfully applied to more commercially oriented hardware industries. Numerous hardware innovations have applications in both the defense and commercial realms. Facilitating the introduction of these innovations not only enables the DOD to maintain its technological superiority in critical sectors, but also lays the foundation for a broader commercial manufacturing renaissance—and subsequent economic growth—in the United States.

Strategies to Govern AI Effectively

Advances in artificial intelligence are accelerating scientific discoveries and analyses, while at the same time challenging core norms and values in the conduct of science, including accountability, transparency, replicability, and human responsibility—difficulties that are particularly apparent in recent advances in generative AI.

Bioliteracy, Bitter Greens, and the Bioeconomy

The success of biotechnology innovations is predicated not only on how well the technology itself works, but also on how society perceives it, as Christopher Gillespie eloquently highlights in “What Do Bitter Greens Mean to the Public?” (Issues, Winter 2024), paying particular attention to the importance of ensuring that diverse perspectives inform regulatory decisions.

To this end, the author calls on the Biden administration to establish a bioeconomy initiative coordination office (BICO) to coordinate between regulatory agencies and facilitate the collection and interpretation of public acceptance data. This would be a much-needed improvement to the current regulatory system, which is fragmented and opaque for nonexperts. For maximum efficiency, care should be taken to avoid redundancy between BICO and other proposals for interagency coordination. For example, in its interim report, the National Security Commission on Emerging Biotechnology formulated two relevant Farm Bill proposals: the Biotechnology Oversight Coordination Act and the Agriculture Biotechnology Coordination Act.

In addition to making regulations more responsive to public values, as Gillespie urges, I believe that increasing the general public’s bioliteracy is critical. This could involve improving K–12 science education and updating it to include contemporary topics such as gene editing, as well as amending civics curriculums to better explain the modern functions of regulatory agencies. Greater bioliteracy could help the public make more informed judgments about complex topics. Its value can be seen in what befell genetic use restriction technology (GURT), commonly referred to as terminator technology. GURTs offered solutions to challenges such as the efficient production of hybrid seeds and the prevention of pollen contamination from genetically modified plants. However, activists early on seized on the intellectual property protection aspect of GURT to turn public opinion against it, resulting in a long-standing moratorium on its commercialization. More informed public discourse could have paved a path toward leveraging the technology’s benefits while avoiding potential drawbacks.

Greater bioliteracy could help the public make more informed judgments about complex topics.

Gillespie began his essay by examining how some communities and their cultural values were missing from conversations during the development of a gene-edited mustard green. The biotech company Pairwise modified the vegetable to be less bitter—but bitterness, the author notes, is a feature, not a flaw, of a food that is culturally significant to his family.

This example resonated keenly with me. I have attended a company presentation on this very same de-bittered mustard green. Like Gillespie, I do not oppose the innovation itself. Indeed, I’m excited by how rapidly gene-edited food products have made it into the market, and by the general lack of public freakout over them. But like Gillespie, I was bemused by this product, though for a different reason. According to the company representative, Pairwise’s decision to focus on de-bittering mustard greens as its first product was informed by survey data indicating that American consumers wanted more diversity of choice in their leafy greens. My immediate thought was: just step inside an Asian grocery store, and you’ll find a panoply of leafy greens, many of which are not bitter.

Genetic engineering has opened the doors to new plant varieties with a dazzling array of traits—but developing a single product still takes extensive time and money. Going forward, it would be heartening to see companies focus more on traits such as nutrition, shelf stability, and climate resilience than on reinventing things that nature (plus millennia of human agriculture) has already made.

PhD Candidate, Stanford University

Policy Entrepreneurship Fellow, Federation of American Scientists

Christopher Gillespie notes that inclusive public engagement is needed to best advance innovation in agricultural biotechnology. As an immigrant daughter of a smallholder farmer at the receiving end of products stemming from biotechnology, I agree.

Growing up, I witnessed firsthand the challenges and opportunities that smallholder farmers face. So I am excited by the prospect that innovations in agricultural biotechnology can bring positive change for farming families like mine. At the same time, since farming practices have been passed down in my family for generations, I directly feel the importance of cultural traditions. Thus, the author’s emphasis on the importance of obtaining community input during the early development process resonates deeply.

Such public consultation, however, often gets overlooked—to common detriment. In the author’s example of gene-edited mustard greens, the company behind the innovation could have greatly benefited from a targeted stakeholder engagement process, soliciting input from the very communities whose lives would be impacted. Such a collaborative effort can not only enhance the relevance of an innovation but also address cultural concerns. I believe that many agricultural biotechnology companies are already doing public engagement, but how it is being done makes a difference.

Such a collaborative effort can not only enhance the relevance of an innovation but also address cultural concerns.

In this regard, while the participatory technology assessment methods that Gillespie describes represent an effective way to gather input from members of the public whose opinions are systemically overlooked, it is important to recognize that this approach may hold certain challenges. Companies might encounter roadblocks in getting communities to open up or welcome their innovation. This resistance could be due to historical reasons, past experiences, or a perceived lack of transparency. Public engagement programs should be created and facilitated through a decentralized approach, where a company chooses a member of a community to lead and engage in ways that resonate with the community’s values. Gillespie calls this person a “third party or external grantee.” This individual should ideally adopt the value-based communication approach of grassroots engagement, where stories are exchanged and both the company and the community connect on shared values and strategize ways forward to benefit equally from the innovation.

Another step that the author proposes—establishing a bioeconomy initiative coordination office within the White House Office of Science and Technology Policy, focusing on improved public engagement—would also be a step in the right direction. But here again, it is crucial that this office adopt a value-based inclusive and decentralized approach to public engagement.

Though challenges remain, I look forward to a future filled with advancements in agricultural biotechnology and their attendant benefits in areas such as improved crop nutrition, flavor, and yield, as well as in pest control and climate resilience. And I return to my belief that fostering a transparent dialogue among innovators, regulators, and communities is key to building and maintaining the trust needed to ensure this progress for all concerned.

PhD Candidate, Department of Horticultural Science

North Carolina State University

She is an AgBioFEWS Fellow of the National Science Foundation and a Global Leadership Fellow of the Alliance for Science at the Boyce Thompson Institute

Kei Koizumi Advises the President

In this installment of Science Policy IRL, Kei Koizumi takes us inside the White House’s Office of Science and Technology Policy, or OSTP. As the principal deputy director for policy at OSTP, Koizumi occupies an unusual position at the very heart of science policy in the United States. OSTP provides science and technology advice to the president and executive office, works with federal agencies and legislators to create S&T policy, and helps strengthen and advance American science and technology. Koizumi talks to Issues editor Lisa Margonelli about what he does at OSTP, how he got there, and the exciting developments in S&T policy that get him out of bed every day. 

Are you involved in science and technology policy? From science for policy to policy for science, from the merely curious to full-on policy wonks, we would love to hear from all of you! Please visit our survey page to share your thoughts and provide a better understanding of who science policy professionals are, what they do, and why—along with a sense of how science policy is changing and what its future looks like.


Transcript

Lisa Margonelli: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academy of Sciences and by Arizona State University.

I’m Lisa Margonelli, editor-in-chief at Issues. We launched our Science Policy IRL series to explore what science policy is by talking directly to the people who do it. Before we get started on this week’s interview, I’d like to invite you to participate in a survey of all of the people who do science policy, which I think might include you. Our survey has its roots in an old observation. Back in 1968, journalist Dan Greenberg tried to estimate the “remarkably small number of people involved in science policy,” and landed on somewhere between 200 and 1,000 people. That’s a big range, and the world has changed a lot since 1968, but we still don’t have a good idea of who does science policy and what it is that they do, so we’re launching this survey to better understand that, and we would love to hear from listeners like you. Go to issues.org/survey to participate. Now, back to our show.

On this episode, I’m joined by Kei Koizumi, principal deputy director for policy for the Office of Science and Technology Policy, or OSTP, at the White House. Kei has an unusual position in the science policy world, because the OSTP provides science and technology advice to the president, and it also works with federal agencies and legislators to create science and technology policy and to steer the general mission of science. Kei talks to us about what he does at OSTP, how he got there, and the exciting developments in S&T policy that get him out of bed every day.

Kei, welcome.

Kei Koizumi: Thank you. It’s a pleasure to be here talking to Issues listeners, and your podcast listeners.

Margonelli: You are at an absolutely central place in science policy in the United States, so I’m very excited to ask you some questions, but I’m going to start with the usual ones, which is the very first one that we always ask. How do you define science policy?

It’s our job as all of us—as citizens, as policy scholars, as voters—to try to steer the wonderful things we get from science and technology.

Koizumi: That’s a good question. And it’s actually a question that I ask my students to answer when I’m teaching science policy classes, and it’s different for each person. Without giving away the answers that listeners might come up with, what science policy means to me breaks out into two categories. First is what I like to call science for policy. That is, scientific information, advice, and evidence that is used to make policy decisions in every field imaginable, whether it’s healthcare or national security or economic competitiveness, and so on.

The other category is policy for science. Those are the policy decisions that governments make that affect our US science and engineering enterprise, and that is government funding of research and development. That is all the policy that’s around how we do research, and how we make sure that results of that research get translated into impacts that we can all believe in and see. That includes things like human subject research protections, intellectual property policies, or open science and public access policies, to make sure that as many people as possible have access to the results of federally funded research. I could go on, but those are how I think about science policy, and what it comes down to is that policy is any action that an organization can take that has some kind of impact on the world. It’s our job as all of us—as citizens, as policy scholars, as voters—to try to steer the wonderful things we get from science and technology, like new knowledge, new technological options, toward making progress on the things that we all care about.

Margonelli: That’s an interesting and a common breakdown between the science for policy and policy for science. You’ve also said that science policy is a contact sport. What does that mean?

Koizumi: Well, it means that every day in my role here at the White House Office of Science and Technology Policy, I have to work with a lot of people and have contact with them, talk to them about where they’re coming from, their hopes and aspirations, their interests. Together, through a lot of contact—and it’s meetings, it is written documents, it is emails, phone calls—we try to get to a policy that not only makes sense, but has the impacts that we’re seeking.

I also say that science policy is a team sport, because if you’re going to have contact with people, you need a lot of people to be on the team. I like that metaphor because in my work—in my office especially—like a sports team, we have lots of different positions and skills. I’m surrounded by physicists, life scientists, astronomers, social scientists like myself, but I also have the opportunity to work with communicators, lawyers, policy wonks and specialists, and students, professors from all walks of life, and people who have had business backgrounds. It does take a team, all of us having contact and communication with each other, to make what I hope is wise S&T policy.

Margonelli: That’s interesting. There’s essentially a finite group of people who are involved. They may represent lots and lots of people from around the country and around the world, but you’re constantly bumping up against, and talking with, and working through a community.

If science policy is a team sport, being able to field a team from all across the nation makes us a stronger team.

Koizumi: One important thing that I think we all have to do, but especially in places like the White House, where, I mean, there’s a big fence around my office and it is hard for people to get to. That means that we have to make the extra effort to make sure we are communicating with, engaging with people all across the country. Here is where I’m really grateful for what came out of the pandemic in terms of digital technologies, Zoom, WebEx, et cetera. Because of these technologies, we are able to reach people all across the country that we never were able to before.

I served at OSTP in the Obama administration, 2009 through 2016, and back then, we didn’t have those technologies. We were able to engage with people who could take the time to get on a plane and come to Washington, DC, and meet with us, or on our occasional travels out. Now, we can and do engage with scientists, engineers, students, community members all across the country, and for the first time, we are able to reach people we never were able to reach before. So OSTP, we’ve had tribal consultations, we’ve had listening sessions with Native Hawaiian communities, with Alaska Natives, and with students in every part of the country far away from Washington, DC. That helps us make better policy, because if science policy is a team sport, being able to field a team from all across the nation makes us a stronger team.

Margonelli: Huh. That’s really interesting. I didn’t realize that was going on. I want to go to the next question, which is, what does doing science policy look like in your daily life? Do you start the morning with a huddle? How does a Tuesday morning start for you?

Koizumi: This morning I was at the General Services Administration, and we had a federal government workshop on public engagement in science. That’s an important topic that we in the Biden administration and OSTP have been trying to push forward. We want to have two-way communication between scientists and the public, and we want the public to have opportunities to not only benefit from science and technology, but to participate in science and technology. That means putting together policies and tools such as citizen science, crowdsourcing, prizes, challenges, participatory technology assessment, citizen science forums, etc. to enable people from all walks of life to participate not only in communication about science, but actually doing research, and collecting data, and becoming more scientific.

We want to have two-way communication between scientists and the public, and we want the public to have opportunities to not only benefit from science and technology, but to participate in science and technology.

This workshop was about bringing together federal agencies to learn from each other: What have we been doing with our communities? What’s working? What’s not? How can we as a White House policy office help to clear away obstacles, or provide encouragement from the top, from the White House, for what the agencies are doing? That’s one part of it. Another part of my day is usually working on some piece of legislation, because we are on the same team, the Congress and the executive branch, or at least we try to be. That means we are thinking of ways in which existing policies can work better for science and technology.

Right now, we are working on reauthorizing the National Quantum Initiative. Many of the listeners have heard quantum information sciences, quantum computing. That’s still a frontier, but that frontier is getting closer to our daily lives. We want to make sure that the research we support as a US government is expanding our frontier and expanding the possibilities for quantum computing to eventually benefit all of us, for commercial, security and scientific applications. In working with Congress, we’re hoping to provide that legislative framework to allow our US scientists and engineers to make the breakthroughs that will translate into impact in quantum.

Margonelli: You spent part of today also working on legislation?

Koizumi: Yes.

Margonelli: And then, what is the third thing that you do in a day?

Koizumi: The third thing I’m doing today is … well, next week I’m going to the Organisation for Economic Co-operation and Development, or OECD. It’s a multilateral group of developed nations who get together to work on cooperation in economics, science and technology, and many other matters. I’m getting ready for that because there’s a lot of science and technology issues on our agenda. One of those is AI, artificial intelligence. We know AI has already had a transformative impact on our lives, because most of us have either experienced or used these large language models like ChatGPT. We know that AI is already transforming recommendation engines or the ways that we interact on social media platforms, and we also know it’s a global technology.

The United States has done a lot in terms of governance of AI and other emerging technologies, but we know that if we’re going to be really effective, we need a more global governance system. OECD is one of the forums in which nations like-minded with the United States get together and discuss how we can intelligently govern AI so that it is safe, trustworthy, and secure, and preserves privacy and rights for people all across the world.

Margonelli: One of the interesting things about the OSTP, the Office of Science and Technology Policy, is that you can’t make anybody do anything. It’s often described as convening power. You have the power to get people together. You can go to the General Services Administration and talk about goals, and get everybody paddling in the same direction, and you can go to the OECD, and you can talk with other ministers so that everyone is coordinating and knows each other, and can communicate about what needs to be done. You also are working on legislation, although the OSTP, obviously, doesn’t vote that legislation into place, and it gets sorted out in Congress. What is your role in that? At the end of a day, do you feel like you’ve been cheerleading into the wind, or do you feel energized? What does it feel like to do that kind of work?

Koizumi: I usually feel very energized by doing science policy, because it’s about using whatever tools I have to make a difference in people’s lives. Most of the time, I’m trying to make a difference in scientists’, engineers’ and students’ lives, but I’m also trying to make a difference in all our lives, including my life as someone who lives in the United States. I’m part of American society. I’m able to be so optimistic because I have a lot of tools. You’re right, OSTP has a fairly small budget. I tell people, we don’t give out research funding and we don’t have any labs, but we do help to set the direction for federal research funding and that’s a lot of money. $200 billion a year is what the federal government invests in research and development, or R&D. $100 billion of that is research, the majority of which goes to our colleges and universities.

We all have the potential to have some impact in policy, because we’re all part of this policy enterprise. We’re all part of this democracy.

That is a lot of leverage and power, and shaping that research funding helps shape the direction of research throughout the United States, and indeed the world, because the world does look to “What does the US think is important?” as a clue to “Maybe my nation should be thinking about that as an important topic as well.” Also, I’m very fortunate to have a pretty powerful tool. I work at the White House, so I can bring people to meet with me and say, “I’m inviting you to come to the White House to have a discussion with other scientists, other engineers and policy people about a topic that President Biden thinks is important.” That means, usually people say, “Okay, I’ll come talk. I may have to dial in by WebEx, but I want to be there.”

That convening power, it’s a convening power that I did not have at other points in my career when I was not working in the White House, so I really appreciate it, and I can appreciate the impact that I’ve had. I had impact when I was not at the White House as well. We all have the potential to have some impact in policy, because we’re all part of this policy enterprise. We’re all part of this democracy.

Margonelli: I wanted to ask you how you got involved in this. I guess I would just start by saying that one of the things that’s interesting about science policy is that outside the field, it has a reputation for being a little bit dry, or perhaps abstract, or involved with very, very big things over long timeframes. Particle accelerators and massive budgets, and the Department of Defense doing research and all of these things, but so many people in it bring an intense sense of passion, and an intense personal sense to it. I wondered: how did you get involved in science policy?

Koizumi: I got involved at George Washington University in Washington, DC. I came to Washington thinking I’d be in an international affairs program, but little did I know that program had a program within it of international science and technology policy, and that was just fascinating to me. I still don’t know exactly why, but I just know that it’s like, “Oh, this is something I want to do, help shape the direction of science and technology here in the United States.” I was fortunate, having been turned on to it, that I had some opportunities to contribute. I was fortunate enough that I was able to make it a career. It spoke to me. I’ve worked in my career either at the American Association for the Advancement of Science, or AAAS, or at OSTP here in the White House, with a few side gigs along the way including teaching at GW and working as a consultant for some other organizations.

One of the policy issues that I’m proud to have worked on is working with the National Science Foundation to double the number of graduate research fellowships that NSF offers each year from 1,000 a year to 2,000 a year.

What’s kept me engaged is other people’s passion. I get to help scientists explore their deepest curiosities of far out things, sometimes literally far out, as in galaxies that are billions of light years away. I’m also able to really make an impact on students. One of the policy issues that I’m proud to have worked on is working with the National Science Foundation to double the number of graduate research fellowships that NSF offers each year from 1,000 a year to 2,000 a year. That means that thanks to work I’ve done, 1,000 more American students are able to have their graduate educations in science and engineering supported. It’s 1,000 more dreams that I’m able to help fulfill. That’s the kind of impact that I love being involved with. I’ve been really lucky that I have been able to stay involved for now 30 years since I first landed at GW, and follow it to the White House Office of Science and Technology Policy.

Margonelli: Let’s talk a little bit about your time at AAAS, the American Association for the Advancement of Science, where you became famous for your analysis of the federal budget.

Koizumi: Well, I was also very lucky in that money gets people’s attention. And if you talk to scientists and engineers, they do worry about, and are very interested in, what the federal government is doing in terms of research funding. I had people’s attention. Also, I’m lucky that that fit right into one of my research interests: the impact and influence government funding can have on the shape and directions of the research enterprise. I was able to actually watch behavior in action from working on the budget end. Most people think of budgets as fairly dry, and in some cases, they are. A budget is rows of numbers, but these numbers do represent real dollars, real research projects, and of course, real people.

These numbers do represent real dollars, real research projects, and of course, real people.

That is a way in which I could find my way in, to balance what I really am comfortable with, which is the quantitative, and also the things that I was initially less comfortable with, which is, I guess you would say the qualitative, or the people dimension. Now, I’m able to balance those both in my life and in my career, and I think I was able to get a lot of skills from working on the federal budget and explaining it, guiding scientists and engineers through it, and talking to policymakers about this strange world of, how the hell do we decide how to spend this $200 billion that the federal government invests in R&D?

Margonelli: I want to dive a little bit deeper into that. You alluded to this, the federal budget numbers have real practical outcomes for different labs, but they also have a psychological role, where people feel good—scientists in particular feel like the world is secure—and it’s going in the right direction when we’re spending a certain percentage of GDP on science. In some ways that’s totally understandable, but you must’ve been watching this for close to 30 years, watching this profound relationship between the spending and the feeling in the scientific enterprise. I wondered if you had thoughts about that.

Koizumi: As a lapsed economist, I understand that people do assign other values to money and funding. Money shouldn’t be how we get self-worth or validation, but yet, it is. For the United States to be investing in some research project, it’s not just about the money, it is about validation. It’s a signal that, this must be important. Conversely, if the federal government invests less money in research, or in a certain field, researchers can’t help but feel, “That must mean my research is less important to the nation.” A lot of what I do is say, “Well, no.” It’s a valid reaction, but the real cause is often something else, like budget caps, and I could go into all sorts of explanations about what is happening in Washington, DC, that caused a research budget to go down.

Margonelli: Can you give us a little bit of detail into how you went from being the budget person at AAAS to joining OSTP? One month, you’re outside, you’re analyzing the budget, you’re explaining it to reporters and stuff, and a couple of months later, you’re helping prepare the president’s budget.

Koizumi: Well, I’m going to take everyone back to 2008. Barack Obama had just been elected president, and he—as most presidents-elect do—put together a transition team between November and January. That transition team’s job was to put together policy proposals and an agenda for the new incoming administration. At that time, it was the Great Recession of 2008 and 2009, and the president-elect said that we needed to reinvest and recover in America. He put together what ended up being a nearly $800 billion Recovery Act. The transition team recognized early on that research and development needed to be part of the recovery, and they were looking for someone who knew something about the R&D budget of the United States. They found me, and they asked me to be part of the transition team. There I was after the election, putting together proposals for how the federal government might invest through the Recovery Act in research and development and research infrastructure.

I am happy to say that about $22 billion of those ideas made it into the final Recovery Act. I must have done a good job, but I was also lucky in that during the transition, President-elect Obama named John Holdren to be his director of OSTP. John Holdren was a recent president of the AAAS, so I knew him. I reached out and said, “When you get to OSTP, there is a job, Assistant Director for Federal Research and Development, and I hope you’ll consider me for that job.” He did, and the rest is history. I joined him at OSTP in early 2009. That’s how I went from a nonprofit to the White House, and that was my first federal job, to be at OSTP. The rest has been a wild ride ever since, for the eight years I was at OSTP during the Obama administration, four years away, and now, three years and three months in for the Biden administration.

Margonelli: That’s a really long period of time with the OSTP. I wanted to ask you a little bit about how, during this time, you’ve watched things evolve. New ideas have become the norm. You’ve alluded to this a little bit, but if you consider, for example, the National Nanotechnology Initiative, it started with legislation in the early 2000s. I think it was passed into law in 2003, and that whole project has matured. For people who are listening to the podcast, the very first episode of Science Policy IRL was with Quinn Spadola, who is in the National Nanotechnology Coordination office. That initiative has evolved, and involved lots of community, or different ways of doing science, and it’s also inspired things, the Quantum Initiative has parts of that in it. Can you talk a little bit about how the conduct of science has evolved through this different legislation over your time in this world?

We need AI research to be open to everyone. AI research is too important to be left just to leading AI companies with billions of dollars in resources.

Koizumi: That’s a big question, and I think I can only get at fragments. What I’ve observed—and I hope we have tried to help through policy—is that new fields are always emerging. The data show that research these days in most disciplines has to be more collaborative: more people, larger teams, and larger international collaborations. That means that we as a policy enterprise have to try to keep up, to make sure that we are able to provide large scale research infrastructure, that we are able to support teams working in very different locations, connected through digital infrastructure, and that we are able to give access to research opportunities to people all over the country, and not just in a few places in the country. Our focused efforts, like on nanotechnology, have really benefited from that. The Nanotechnology Initiative looks very different from how it looked 20 years ago because it has responded to these changes.

That means that we are not done focusing on research efforts. Most recently, we at OSTP stood up a national AI research initiative office, and that is because, obviously AI has flowered, and it’s become the thing, and we are trying to adapt the tools that we have always had to this emerging discipline. That’s why, for example, President Biden asked us, in his AI executive order, to ask the National Science Foundation to set up a national AI research resource pilot. That’s going to be a research infrastructure, a data infrastructure for AI research, and it’s deliberately designed to be able to take in ideas and people and capabilities from all across the country. We need AI research to be open to everyone. AI research is too important to be left just to leading AI companies with billions of dollars in resources. We need smaller institutions, academic institutions, students, civil society to be able to participate in AI research for public missions as well. That’s an example of how we are trying to keep adapting to how research continues to change, not only the topics, but the ways in which we do research.

Margonelli: All right, this is really interesting. I want to wind down a little bit with the last question, which is, what are the big questions for you about science policy? You started as a social scientist, thinking about science and scientists. What are the big, looming questions that keep you up at night or get you out of bed in the morning?

Koizumi: I prefer to stay on the optimistic side, so I’m going to answer in the things that get me out of bed in the morning. Some of my research questions are … I’ve talked about how this research enterprise is changing, and especially that global dimension. We have a global research enterprise in ways that we didn’t have in the twentieth century. My question is: What does that mean for national science and technology policies like the ones I work on? It used to be that the United States government would invest in basic research in full confidence that the benefits of that research would stay in the United States. We can’t be so sure of that anymore. How do we as a nation respond to that changing character, where insights and discoveries made in one place could instantly be distributed all around the world? Conversely, a discovery that’s made in Australia could be replicated in a US lab within a matter of days. That’s one question.

Another question is: How do we make sure that people all across the country have the opportunity to benefit from and participate in science and technology? That’s more of a very big picture question, and I can only take a bite, a chunk at a time, but that’s still a question that animates me. I know that I’ve been very lucky. I grew up in Columbus, Ohio. I grew up in an academic family connected to Ohio State, and I had the opportunity to participate in science, math, and so many other opportunities. I know we all need to do better to make sure that kids growing up today have opportunities like the ones that I was able to access because of where I was and who I was. Those are some of the questions that keep me going and keep me energized.

I hope you get to experience this intersection of where science and technology meets public policy, because again, public policy is about translating our visions and our aspirations for a future into some action that can have an impact on making it possible.

Day to day, it’s not a question, but just the opportunity to meet people really energizes me now. Right now, I’m feeling very energized because I’m talking to you, whom I had not met before, and I’m talking, I know, to listeners who probably have never heard of me before. I hope I’m able to offer some of my story, some of my questions, and some of my experiences to help them make their own decisions. What I have told students, and what I tell people now, is that science policy is for everyone. Some of us will make a full-time career out of it. For some students and scholars, it’ll mean a congressional visits day once a year. It could be a community project that they devote 10% of their time to, or it could be a short-term opportunity, like a fellowship that brings them into a science policy organization for a year or two, and a year or two could turn into a lifetime, or a career.

Whatever it is, whatever model works for you, I hope you get to experience this intersection of where science and technology meets public policy, because again, public policy is about translating our visions and our aspirations for a future into some action that can have an impact on making it possible.

Margonelli: I think that’s a great place to end. Thank you.

Koizumi: Thank you so much for the chance to talk.

Margonelli: If you would like to learn more about Kei Koizumi’s work at the OSTP, check out the resources in our show notes, and please visit issues.org/survey to participate in our survey of who does science policy.

Is there something about science policy you’d like to know? Let us know by emailing us at podcast@issues.org, or by tagging us on social media using the hashtag #SciencePolicyIRL.

Please subscribe to The Ongoing Transformation wherever you get your podcasts, and thanks to our podcast producer, Kimberly Quach, and our audio engineer Shannon Lynch. I’m Lisa Margonelli, editor-in-chief at Issues in Science and Technology. Thank you for listening.

Governing AI With Intelligence

Artificial intelligence is now decades in the making, but especially since the emergence of ChatGPT, policymakers and global publics have been focused on AI’s promise and its considerable, even existential, risks. Driven by machine learning and other advanced computational methods, AI has become dramatically more capable. Benefits have already been realized in areas such as transportation, health care, and sustainable growth, and there are more to come. However, the benefits are matched by mounting concerns over safety, privacy, bias, accountability, and the spread of increasingly compelling misinformation created by generative AI. Lurking as well is the possibility that AI might outperform humans in some contexts, shrinking the sphere of human agency as more and more decisionmaking is left to computers.

While there is a growing consensus on the challenges of AI and the opportunities it offers, there is less agreement over exactly what sort of guardrails are needed. What instruments can we use to unlock the technology’s promise while mitigating its risks? Across the globe, myriad initiatives attempt to steer AI in socially desirable directions. These approaches come in different shapes and sizes and include ethics principles, technical standards, and legislation. While no universal strategy is likely to emerge, certain patterns stand out amid the diversity—patterns that constitute a thickening web of AI norms. There are hints here as to what it might mean to govern this evolving technology intelligently.

A Global Quest for AI Guardrails

Over the past few years, governments have been exploring and enacting national strategies for AI development, deployment, and usage in domains such as research, industrial policy, education, healthcare, and national security. While these plans reflect material priorities, they typically also acknowledge the need for responsible innovation, grounded in national values and universal rights. The responsibilities of AI developers may be articulated in regulatory frameworks or ethics guidelines, depending on the state’s overall approach to technology governance.

While no universal strategy is likely to emerge, certain patterns stand out amid the diversity—patterns that constitute a thickening web of AI norms.

In parallel to the enactment of national policies, private and public actors have crafted hundreds of AI ethics principles. There are commonalities among them—in particular, shared areas of concern—but also much nuanced distinction across geographies and cultures. The ethics push has been accompanied by standards-setting initiatives from organizations bearing acronyms like NIST (the US National Institute of Standards and Technology) and CEN (the Comité Européen de Normalisation, or European Committee for Standardization). Professional associations such as IEEE promulgate best practices. Meanwhile, legislative and regulatory projects aim to manage AI through “hard” rather than “soft” law. In the United States and Europe alone, hundreds of bills have been introduced at all levels of government, with the European Union’s newly approved AI Act being the most comprehensive.

Now add in the efforts of international institutions. The Organisation for Economic Co-operation and Development, with its AI Principles, and UNESCO, with its Recommendation on the Ethics of Artificial Intelligence, have established normative baselines that inform national AI governance arrangements. The UN adopted its first resolution on AI, highlighting the respect, protection, and promotion of human rights in the design and use of AI. G7 and G20 countries are attempting to coordinate on basic safeguards. Among the most ambitious international projects is the Council of Europe’s framework convention to protect human rights, democracy, and the rule of law in the age of AI.

Governance Today: More Tropical Rainforest Than Formal Garden

The landscape of AI governance, as these examples suggest, is no jardin à la française. Rather, it is as dense and intertwined as the Amazon. Importantly, from a governance perspective, the diversity isn’t limited to the mere number of efforts underway. It is also reflected in the fact that governments and other rule-making institutions have pursued vastly different approaches.

Some countries, including Japan, Singapore, and India, rely to a large extent on the power of self-regulation and standard-setting to strike the balance between AI risks and opportunities. Canada, Brazil, and China, among others, take a more heavy-handed and government-led approach by enshrining rules guiding the development and use of AI in laws and regulations. Some jurisdictions are taking a “horizontal” approach by crafting rules intended to apply across most or all AI systems; others take more of a sector-specific approach, tailoring norms to industries and use cases.

One of the most comprehensive examples of the horizontal approach, targeting a wide range of AI applications, is the EU AI Act. Over dozens of pages, the law details requirements that developers and deployers of AI systems must meet before putting their products on the European market. The substantive and procedural requirements increase as one scales a pyramid of risks, with particularly strong safeguards for high-risk AI systems in sensitive areas such as critical infrastructure, education, criminal justice, and law enforcement. The AI Act, supplemented by sector-specific regulations, creates a complex oversight structure to monitor compliance and enforce rules by means of potentially hefty fines.

The United States has taken an alternative path. With gridlock in Congress, the Biden administration has issued the far-reaching Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This order outlines a whole-of-government approach by establishing new standards for AI safety and security and launching programs across bureaucracies to safeguard privacy, advance equity and civil rights, protect consumers and workers, and promote innovation and competition. The initiative’s hundreds of action items vindicate certain norms—for instance, against algorithmic discrimination—and, in general, aim to realize the high-level principles found in the White House’s Blueprint for an AI Bill of Rights.

These and many other national and international governance initiatives form a complex and thickening canopy of principles, norms, processes, and institutions influencing the development and use of AI. The plurality of approaches, instruments, and actors involved, as well as the interactions among these, make AI governance a messy normative field that is increasingly difficult to navigate.

But while the rainforest teems with diversity at ground level, from a bird’s-eye-view, some functional patterns start to emerge. 

Patterns in the Landscape

One recognizable pattern across AI-governance arrangements relates to the different functions AI norms can play. Three kinds of norms are in operation today. First, and perhaps most intuitively, there are norms of constraint: AI norms typically place limits on the development and use of the technology. Such norms have been codified in rules such as bans on use of facial-recognition technology in some US cities and the premarket obligations for high-risk AI systems under the EU AI Act. A second category of norms, in contrast, is enabling. These norms permit or even promote the development and use of AI. Funding and subsidies reflect such norms. So do pro-innovation measures such as the creation of regulatory sandboxes—legal contexts in which private operators can lawfully test innovative ideas and products without following all regulations that might otherwise apply. Finally, a third category of norms attempts to create a level playing field. Such norms underlie, for example, transparency and disclosure obligations, which seek to bridge information gaps between tech companies and users and society at large; AI literacy programs in schools; and workforce training.

The plurality of approaches, instruments, and actors involved, as well as the interactions among these, make AI governance a messy normative field that is increasingly difficult to navigate.

Assorted AI governance systems may emphasize constraining, enabling, and leveling norms to varying extents, but typically the aims of these systems are quite similar. That is, another pattern lies in the goals of diverse AI-governance arrangements, even as the means diverge. The protection of established rights is usually a top priority, so that governments use the norms available to them to shield citizens against discrimination, fraud, and privacy invasions that emerge from AI use. A related objective is protection of established interests, such as the economic goals of certain interest groups or industries. Many governance arrangements include rules designed to promote economic activity, stimulate technological development through market mechanisms, and support the creation of new markets and business models. More generally, innovation is a core theme of most AI arrangements, which often include provisions that seek to promote AI research and development.

Guardrails on the Guardrails: Constraints Affecting AI Governance

The good news is that there is no shortage of approaches and instruments in the AI-governance toolbox. But, at the same time, policymakers, experts, and concerned members of the broader public cannot simply snap their fingers and see their governance goals realized. Contextual factors such as path dependencies, political economy, and geopolitical dynamics cannot help but shape the design and implementation of AI governance everywhere.

Whether in the United States, Europe, or China, the influence of national security interests on AI governance is becoming increasingly clear. The global powers are engaged in an AI arms race, with ramifications for the choices these leading states will make concerning the promotion of, and constraints upon, innovation. In particular, competitive dynamics dampen prospects for truly global governance—universally accepted and enforced AI rules. A case in point is the stalled discussion about a ban on lethal autonomous weapons systems.

Moreover, the norms, institutions, and processes constituting any approach to AI governance are deeply embedded in preexisting economic, social, and regulatory policies. They also carry cultural values and preferences in their DNA, limiting what is feasible within a given context. In other words, path dependencies caution against copying and pasting rules from one jurisdiction to another. For example, the EU AI Act cannot be transplanted wholesale into the laws of all countries.

Despite geopolitical tensions, economic competition, and national idiosyncrasies, there are islands of cooperation in the ocean of AI governance. International initiatives such as the UN AI Resolution, the G7 Hiroshima Process on Generative Artificial Intelligence, and the Global Partnership on AI seek to advance collaboration. These global efforts are supplemented by regional and bilateral ones, including, for instance, the transatlantic partnership facilitated by the US-EU Trade and Technology Council.

No Formulas, but Some Insights

By this point, it is clear that there won’t be a universal formula for AI governance any time soon. But I see three core insights emerging from a deeper analysis of the current state of affairs, which may inform initiatives in the near term and more distant future.

Contextual factors such as path dependencies, political economy, and geopolitical dynamics cannot help but shape the design and implementation of AI governance everywhere.

The first of these insights concerns learning. The rapid progress of technological development and AI adoption, combined with the lack of empirical evidence concerning what kinds of governance interventions are likely to produce which outcomes, makes AI a strong candidate for tentative governance. Tentative governance is a novel regulatory paradigm in which rules leave space for exploration and learning. In practical terms, whatever institutions take charge of AI governance—whether these are state institutions or industrial players—need to ensure that the rules they put forward are flexible enough to adjust in light of changing circumstances. It should be easy to update rules and eventually also to revise them heavily or revoke them when they are no longer fit for purpose. In addition, it is important to carve out spaces—think of controlled experiments—where certain guardrails can be lifted so that new AI applications can be tested. This is how all parties will find out about the risks of particular technologies and create ways to mitigate those risks. In short, learning mechanisms must be baked into AI governance arrangements because we often don’t know enough about tomorrow to make lasting decisions today.

The second broad insight we might discern in today’s fragmented AI-governance landscape is that great promise resides in interoperability among different regimes. Originally a technical concept, interoperability can be understood as the capacity of software applications, systems, or system components to work together on the basis of data exchange. But interoperability is also an advantageous design principle in the field of AI governance, as it allows for different arrangements—or, more likely, components of such arrangements—to work together without aiming for total unification or harmonization. Emerging AI-governance arrangements introduce and legitimize an assortment of tools and practices that may be thought of as modules subject to cross-border, multistakeholder cooperation. For instance, risk-assessment and human rights–evaluation tools could be aligned across otherwise-divergent AI-governance schemes.

The final insight speaks to capacity-building. Private- and public-sector actors who seek to develop or deploy AI systems in their respective contexts—healthcare, finance, transportation, and so on—are confronted with the challenge of translating high-level policies, abstract legal requirements, emerging best practices, and technical standards into real-life use cases. In order to support this translation, AI-governance initiatives should invest in implementation capacity, which includes AI literacy and technical assistance. Such capacity-building demands—once again—multistakeholder and increasingly cross-border cooperation and has significant implications for education systems. Experience with previous cycles of innovation suggests that these on-the-ground capacities are often as important as the policy choices made in halls of power. What’s needed, ultimately, is governance in action, not only on the books.

Embracing Opportunities for Innovation—In Both Technology and Governance

There can be little doubt that AI will have long-term effects on the inner workings of our societies. Right now, in universities and public- and private-sector laboratories alike, scientists and engineers with a zeal for innovation are creating new possibilities for AI, and public interest is high. There is little chance that this collective enthusiasm will abate any time soon.

But while technological innovation is propelled in whatever direction our desires and interests take it, governance largely follows the narrow passage allowed by realpolitik, the dominant political economy, and the influence of particular political and industrial incumbents. We must begin to think beyond this narrowness, so that path dependencies do not overly constrain options for governance. This is a historic opportunity, a moment to engage fully and in a collaborative manner in the innovation not just of AI but also of AI governance so that we can regulate this transformative technology without squandering its potential. Traces of such innovation in current debates—outside-the-box proposals for new types of international AI institutions—should be recognized as invitations to embrace a worthwhile challenge: to design future-proofed guardrails for a world shaped by AI.

Novel Technologies and the Choices We Make: Historical Precedents for Managing Artificial Intelligence

Scientific and technological innovations are made by people, and so they can be governed by people. Notwithstanding breathless popular descriptions of disempowered citizens cowed by technical complexity or bowing to the inevitable march of the new, history teaches that novel technologies like artificial intelligence can—indeed, must—be developed with ongoing and meaningful democratic oversight. Self-policing by technical experts is never enough to sustain an innovation ecosystem worthy of public trust. Contemporary AI might be a distinct technological phenomenon, but it too can be governed in the public interest.

History provides insights on how governance might proceed today. There is a robust empirical record of efforts to manage transformative technologies—a record of fits and starts, as wide-ranging constituencies work to make policies that advance the greater good. In this essay, we consider three examples: governance of the early nuclear weapons complex during the 1940s and 1950s, of novel biotechnology in the 1970s, and of polygraph testing and other forensic technologies that emerged over the last century.

In each instance, leaders of the scientific and technical communities sought to define and protect the public interest. Yet in none of these instances did scientists and technologists hold unilateral sway over how the new technologies would be assessed, deployed, or governed. The same is true for AI: technical experts will have their say, but their voices will be joined by others. And while no historical case offers a perfect analogy for present-day challenges, all three of these examples, understood side-by-side, help to identify realistic options for citizens and policymakers struggling to govern AI now.

Keeping Nuclear Secrets

Many commentators today compare the generative-AI rush with the dramatic efforts to build nuclear weapons during the Second World War, often calling for a “Manhattan Project” for AI. To some, the analogy with the Manhattan Project summons a coordinated, large-scale effort to surmount technical challenges. To others, it signals a need for careful control over the flow of information given the risks surrounding a high-stakes technology. Yet the history of nuclear secrecy reveals the limits of such a model for managing AI today.

Research in nuclear science quickly became sensitive, as the path from basic discoveries to sprawling weapons programs was dizzyingly short. The first indication of nuclear fission came in December 1938; by April the following year, the German Reich Ministry of Education was banning uranium exports and holding a secret meeting on military applications of fission. That same month, the Japanese government launched a fission-weapons study, and several British physicists urged their government to jumpstart a weapons project by securing uranium ore from the Belgian Congo. In August 1939 US president Franklin Roosevelt received a letter drafted by émigré physicists Leo Szilard and Eugene Wigner and signed by Albert Einstein alerting the White House that nuclear weapons could exploit runaway fission chain reactions. A few weeks later, the Leningrad-based physicist Igor Kurchatov informed the Soviet government about fission’s possible military applications.

Amid worsening international relations, some scientists tried to control the flow of information about nuclear science. Beginning in spring 1939, Szilard urged a voluntary moratorium on publication of new findings in nuclear fission. When credit-hungry physicists refused, Szilard concocted a different plan: allow researchers to submit their articles to scientific journals—which would enable clear cataloging of discovery claims—but coordinate with journal editors to hold back certain papers until their release could be deemed safe. This scheme proved difficult to implement, but some journals did adopt Szilard’s recommendation. The physicists’ communication moratorium yielded some unexpected consequences: when Kurchatov and his Soviet colleagues noticed a distinct reduction in Physical Review papers regarding nuclear fission, they considered the grave potential of nuclear weapons confirmed and doubled down on efforts to convince Moscow that the matter must be taken seriously.

Szilard’s proposals focused on constraining access to information rather than regulating research itself. That distinction disappeared in June 1942, when the Allies’ patchwork of nuclear study groups was centralized under the auspices of the Manhattan Project. Officials exerted control over the circulation of information, materials, and personnel. The FBI and the Military Intelligence Division conducted background checks on researchers; commanding officer General Leslie Groves imposed strict compartmentalization rules to limit how much information any single individual knew about the project; and fissionable materials were produced at remote facilities in places like Oak Ridge, Tennessee, and Hanford, Washington.

After the war, secrecy routines were formalized with passage of the US Atomic Energy Act. Under the new law, whole categories of information about nuclear science and technology were “born secret”: classified by default and released only after careful review. The act also established a government monopoly on the development and circulation of fissionable materials, effectively foreclosing efforts by private companies to generate nuclear power. (Several of these provisions were amended in 1954 in order to foster private-sector efforts in nuclear power, with mixed results.)

Like Szilard in 1939, postwar scientists and engineers worked hard to shape the practices and norms of nuclear science and technology. But their illusions of control quickly collapsed amid Cold War pressures. For example, the newly established Federation of Atomic Scientists had some initial success lobbying lawmakers in favor of a civilian nuclear complex, but members soon became targets of a concerted campaign of intimidation. The FBI and the US House Committee on Un-American Activities pursued the federation, smearing several members with selective leaks and allegations of communist sympathies. Their attorneys were often denied access to information relevant to their cases under the pretext of protecting national security. The elaborate system of nuclear classification became a cudgel with which to silence critics.

Beyond its impact on individuals, the postwar nuclear-classification regime strained relationships with US allies—most notably Britain—even as it was ineffective in halting proliferation. Within a few years after the war, the Soviet Union built fission and fusion bombs of its own—efforts aided by wartime espionage that had pierced US military control. Arguably, overzealous secrecy accelerated the arms race.

Amid today’s calls to hold back the tide of new computational models and techniques, nuclear secrecy serves as a cautionary tale of bureaucratic overreach and political abuse. Undoubtedly there were good reasons to safeguard some nuclear secrets, but the postwar system of classification and control was so byzantine that legitimate research inquiries were cut off, responsible private-sector investment was stymied, and political debate was quashed. The academic community served as a weak but visible counterbalance, seeking to maintain the openness necessary for scientific progress and democratic oversight.

Controlling Biotechnology

Szilard’s first impulse was to persuade fellow scientists to stop publishing their most potent findings. In the mid-1970s, molecular biologists went further. Led by Stanford’s Paul Berg and colleagues at other elite universities and laboratories, scientists pressed for a halt not only to publication but also to research in the new area of recombinant DNA (rDNA). Their efforts included the famous Asilomar meeting of February 1975, which is routinely cited to this day as the preeminent example of scientists successfully and responsibly governing risky research. Yet, much like Szilard’s calls for nuclear scientists to self-censor, biologists’ self-policing was actually a small part of a much larger process. Responsible governance was achieved, but only after careful, protracted negotiation with stakeholders well beyond the scientific community.

Berg and his fellow biologists appreciated the potential benefits of rDNA techniques, which allowed scientists to combine fragments of genetic material from multiple contributors to create DNA sequences that did not exist in any of the original sources. But the group also foresaw risks. Pathogenic bacteria might acquire antibiotic-resistant genes, or carcinogenic genes might be transferred to otherwise harmless microorganisms. And if the Manhattan Project was carried out in remote, top-secret sites, rDNA experimentation involved benchtop apparatus found in nondescript laboratories in urban centers. What would protect researchers and their neighbors from leaks of dangerous biological materials? As Massachusetts Institute of Technology (MIT) biologist David Baltimore recalled after meeting Berg and others to brainstorm, “We sat around for the day and said, ‘How bad does the situation look?’ And the answer that most of us came up with was that …, for certain kinds of limited experiments using this technology, we didn’t want to see them done at all.” Berg, Baltimore, and the rest of their group published an open letter calling for a voluntary moratorium on rDNA research until risks were assessed and addressed. The request was met with considerable buy-in.

By the time their letter appeared in Science, Nature, and the Proceedings of the National Academy of Sciences, the Berg group had been deputized by the National Academy to develop recommendations for the National Institutes of Health (NIH). They convened again, this time with more concerned colleagues, in February 1975 at the Asilomar Conference Grounds in Pacific Grove, California. The Asilomar group, consisting almost entirely of researchers in the life sciences, recommended extending the voluntary research moratorium and proposed a framework for assessing risks and establishing containment facilities for rDNA experiments. In June 1976 the Asilomar recommendations became the backbone of official guidelines governing rDNA studies conducted by NIH-funded researchers.

On the very evening in June 1976 when the NIH guidelines were announced, the mayor of Cambridge, Massachusetts—home to famously difficult-to-govern research institutions like Harvard University and MIT—convened a special hearing on rDNA experimentation. “No one person or group has a monopoly on the interests at stake,” Mayor Alfred Vellucci announced. “Whether this research takes place here or elsewhere, whether it produces good or evil, all of us stand to be affected by the outcome. As such, the debate must take place in the public forum with you, the public, taking a major role.” And so began a months-long effort by area scientists, physicians, officials, and other concerned citizens to devise a regulatory framework that would govern rDNA research within city limits—under threat of a complete ban if the new Cambridge Experimentation Review Board failed to agree on rules that could pass muster with the city council.

The local board held public meetings twice weekly throughout autumn 1976. During the sessions, Harvard and MIT scientists had opportunities to explain details of their proposed research to nonspecialists. The board also hosted public debates over competing proposals for safety protocols. Similar civic groups hashed out local regulations in Ann Arbor, Michigan; Bloomington, Indiana; Madison, Wisconsin; Princeton, New Jersey; and Berkeley and San Diego, California. In none of these jurisdictions did citizens simply adopt the Asilomar/NIH guidelines. Rather, there was thorough scrutiny and debate. Cambridge residents, for example, called for the formation of a biohazards committee along with regular inspections of rDNA labs, exceeding federal requirements. Only after the board’s extensive, sometimes thorny negotiations did the city council vote to adopt the Ordinance for the Use of Recombinant DNA Molecule Technology within Cambridge. This was February 1977, two years after the Asilomar meeting.

With the ordinance in place, Cambridge quickly became a biotechnology juggernaut, earning the nickname Genetown. City officials, university administrators, laboratory scientists, and neighbors had worked together to construct a regulatory scheme within which innovative scientific research could thrive, both at universities and at spin-off companies that soon emerged. Public participation took time and was far from easy, but it proved essential for building trust while avoiding Manhattan Project–style monopolies.

Unregulated Forensic Science

Whereas Szilard and Berg tried to craft guardrails around the scientific work they were developing, in 1921 physiology student and police officer John Larson was eager to deploy his latest innovation: the cardio-pneumo-psychograph device, or polygraph.

The result, over the course of decades, has been the unchecked propagation of an unreliable technology. Courts, with input from scientific experts, have worked in their ad hoc way to push polygraphy to the margins of criminal justice. But the polygraph has not been subject to the sorts of democratic oversight and control that helped to ensure the safety and utility of rDNA research. Courts might similarly clamp down on algorithmic facial recognition, the AI-driven forensic technology of the moment; but facial recognition, too, is already commonplace and unregulated. Indeed, public narratives about seemingly miraculous yet flawed technologies can aid in their escape from oversight, creating havoc. There is a lesson here in the importance of both public intervention by concerned scientists—before risky technologies become commonplace—and the need for continuing regulatory scrutiny.

Havoc was not Larson’s goal. Like many early twentieth-century intellectuals, he was convinced that measurements of the body could surface what was buried in the mind. The nineteenth-century physician Étienne-Jules Marey took physical measurements of stress, in hopes that these would in turn reveal interior truths. And by 1917, psychologists and married couple William Moulton Marston and Elizabeth Holloway Marston invented a form of the polygraph. Within a few years, as historian Ken Alder has carefully documented, Larson made two crucial upgrades to the Marstons’ approach. First, Larson’s machine took continuous blood pressure measurements and recorded them as a running line, so that a polygraph operator could monitor changes relative to a baseline. Second, Larson partnered with law enforcement.

In the spring of 1921, Larson tried out his technology to solve a real crime, a potboiler drama involving a missing diamond presumed stolen by one of 90 women living in a boardinghouse. The thief, whose recorded blood pressure did drop precipitously during her interrogation, eventually confessed after days of additional questioning. Eager for gripping narratives, journalists gravitated to the cardio-pneumo-psychograph—except, to Larson’s chagrin, the press renamed his device the “lie detector.” And some law enforcement figures were as enthusiastic as the reporters covering their police departments. August Vollmer, chief of police in Berkeley, California, was an early adopter of the cardio-pneumo-psychograph, believing that the technology could help his department overcome its poor reputation. The public viewed police as corrupt and overly reliant on hunches and personal relationships; Vollmer thought Larson’s methods, though unproven, might lend policework the patina of scientific expertise, bolstering support.

As the press attention suggests, the polygraph was a charismatic technology. Having captured the public interest, the so-called lie detector found ready purchase beyond formal legal settings. Some uses were benign—for instance, market researchers turned to polygraphs in hopes of understanding what drew audiences to particular films or actors. But the stakes of deploying this unreliable technology grew in other domains, as when employers turned to the polygraph to screen for job suitability.

Judges were less willing to accept the polygraph, which became clear during the 1922 trial of one James Frye. Frye had confessed to murder but later claimed that his statement had been coerced. A polygraph test validated Frye’s claim, but a judge rejected the result, ruling that it could not serve as evidence. This led to the so-called Frye test, in which a federal court held that scientific evidence was inadmissible unless it was derived from methods enjoying “general acceptance” within the scientific community. This judicial test, which the polygraph failed, reflected a belief that juries would be swayed by supposedly objective scientific evidence. Such evidence, then, had to be held to a high standard.

For its part, the scientific community repeatedly mobilized to limit the use of polygraphs in court. As the US Office of Technology Assessment (OTA) concluded in a 1983 report, there was “limited scientific evidence for establishing the validity of polygraph testing.” But resistance to polygraph testing in the criminal justice sphere was matched by exuberance elsewhere. The same OTA report estimated that, outside of the federal government, more than a million polygraph tests were administered annually within the United States for hiring purposes. In 2003 the National Academies led another effort to scrutinize the reliability of the polygraph. The resulting report has played a crucial role in keeping polygraphs out of courtrooms.

In contrast to polygraphy, other science-based techniques such as fingerprint analysis have a more secure place in US legal proceedings. Fingerprint-based identification is far from perfect, but it has for decades been subject to standardization and oversight, and expert witnesses must be trained in the technique. Moreover, high-profile mistakes have catalyzed meaningful ameliorative review. Expert panels have responded to errors by reassessing the scientific bases for fingerprint identifications, updating best practices for their use, and developing new methods for training practitioners.

Algorithmic facial recognition has followed a trajectory more like that of the polygraph than of fingerprinting. Despite its significant and well-documented flaws, facial recognition technology has become ubiquitous in high-stakes contexts outside the courtroom. The US National Institute of Standards and Technology (NIST) recently evaluated nearly 200 facial recognition algorithms and found that almost all demonstrated enormous disparities along demographic lines, with false positives arising a hundred times more often when the technologies were applied to images of Black men from West Africa as compared to images of white men from Eastern Europe. The NIST tests also found systematically elevated rates of false positives when the algorithms were applied to images of women across all geographical regions as compared to men. Given such clear-cut biases, some scholars have called for more inclusive datasets, which in theory could broaden the types of faces that can be recognized. Other commentators have argued that inclusion would simply put more people at risk.

Many research papers have focused on ways to mitigate biases in facial recognition under pristine laboratory conditions, but uncorrected commercially available algorithms are already having substantial impact outside the walls of research facilities. In the United States, law enforcement jurisdictions can and do purchase commercial facial recognition technologies, which are not subject to regulation, standardization, or oversight. This free-for-all has led to multiple reports of Black men being wrongfully arrested. These real-world failures, which exacerbate long-standing inequities in policing, are likely to worsen in the absence of oversight. There exist today more than a billion surveillance cameras across 50 countries, and within the United States alone, facial images of half the adult population are already included in databases accessible to law enforcement.

Like the polygraph, facial recognition technologies have created a certain amount of chaos beyond law enforcement settings. There have been sensational claims that far outstrip technical feasibility—for instance, that algorithmic analysis of facial images can determine an individual’s sexual orientation. Meanwhile, private vendors are scooping up as many facial images as they can, almost always from platforms whose users have not granted permission for, and are unaware of, third-party data collection. In turn, facial surveillance is now deployed in all sorts of contexts, including schools, where the technology is used to monitor students’ behavior. Facial recognition is also being used to prevent access to venues and even for job screening.

Three Principles for Researchers in AI Governance

AI policy is marked by a recurring problem: a sense that AI itself is difficult or even impossible to fully understand. Indeed, scholars have shown how machine learning relies on several forms of opacity, including corporate secrecy, technical complexity, and unexplainable processes. Scientists have a special obligation to push against claims and realities of opacity—to demonstrate how the consequences of complex technologies can be explainable and governable. As our historical examples show, at its best the scientific community has worked to assemble coalitions of researchers and nonresearchers to understand, assess, and respond to risks of novel technologies. History offers reason to hope that building such collective processes around AI is possible, but also reasons to worry that such necessary work will be hard to sustain.

Three principles are apparent across these historical examples—principles that should inform how scientists contribute to present-day AI governance. First, self-policing is not enough. Researchers’ voluntary moratoria have rarely, if ever, proven sufficient, especially once high-impact technologies escape controlled laboratory settings. Scientists and engineers—though eager to act justly by putting bounds around novel technologies and mitigating risks—have never been good at anticipating the social and political lives of their innovations. Researchers did not predict the rise of an elaborate Cold War secrecy infrastructure, the robust public debate surrounding rDNA experiments, or popular enthusiasm for the polygraph. Because accurate predictions concerning real-life responses to novel technologies are beyond the scope of scientific expertise, scientists and engineers cannot be expected to know where exactly the boundaries around novel technologies should lie.

Second, oversight must extend beyond the research community. Broad input and regulatory supervision have repeatedly proved necessary to sustain innovation ecosystems. Extended debate and negotiation among researchers and nonspecialists can build public trust and establish clear regulatory frameworks, within which research can extend across academic and private-sector spaces.

Finally, recurring reviews are necessary. Specialists and nonexpert stakeholders should regularly scrutinize both evolving technologies and the shifting social practices within which they are embedded. Only then can best practices be identified and refined. Such reviews are most effective when they build upon existing civic infrastructures and expectations of civil rights. Civic organizations with long histories of advancing rights and liberties must have an empowered role in review processes.

Today’s AI technologies, like many predecessors, are both exciting and fraught with perils. In the past, when scientists and technologists have spoken decisively about risks, articulated gaps in knowledge, and identified faulty claims, they have often found collaborators beyond their research communities—including partners in government, the legal system, and the broader public. Together, these communities at times have successfully established governance frameworks within which new technologies have been developed, evaluated, and improved. The same commitment to genuine partnerships should guide governance of AI technologies. Any other approach would put at risk the enormous potential of AI as well as the societies that stand to gain from it.

A Justice-Led Approach to AI Innovation

“As it is useful that while mankind are imperfect there should be different opinions, so it is that there should be different experiments of living; that free scope should be given to the varieties of character, short of injury to others; and that the worth of different modes of life should be proved practically, when any one thinks fit to try them.”

—John Stuart Mill, On Liberty

Innovation is disruptive; it changes either the ends we can achieve or the means of achieving established ends. This is certainly true of innovation in artificial intelligence. AI has been used, for example, to increase the capacity of persons with disabilities to access and share information—but has also enabled novel forms of deception, so that now one can create realistic photos, audio, and video of political figures doing or saying whatever one wishes.

To ensure that innovation enhances freedom and promotes equality, research and development should be governed by a sound ethical framework. Such a framework should fulfill at least the following three criteria. First, it should provide normative guidance elucidating which disruptions are morally permissible, and which call for remediation because they are unfair or unjust. Second, the framework should facilitate accountability by identifying who is responsible for intervening to address injustice and unfairness caused by the disruptions of innovation.

Third, the innovation-governance framework should address social relationships and social structures: it should consider how innovation influences divisions of labor and distributions of goods and harms across society over time, not only with respect to the immediate conduct of individuals. Current frameworks for applied ethics fall short in this regard because they focus on first-order interactions—the direct effects of discrete interactions between specific parties, such as scientific investigators and participants in research studies. Responsible governance of AI innovation, however, will have to address not just first-order interactions but also higher-order effects of portfolios of transactions among wide-ranging parties, such as effects of algorithmic policing tools on oppressed communities.

This entails that the framework for responsible governance be grounded in something more than ethics guidance. Instead, it must be grounded in a conception of justice fit for a pluralistic, open society. Exactly what constitutes justice is difficult to describe in brief, but we can at least get at the basics here. A just society, crucially, is not one in which all people agree on what constitutes a morally good life; such societies cannot exist, and efforts to create them are necessarily oppressive. Rather, a just society is one in which all people are equally free to pursue their own ideas about what a good life might be. To achieve justice, then, we need social institutions that promote the freedom and moral equality of all people. And so, to the extent that innovations in AI and other technologies might threaten some people’s freedom or their standing as moral equals, we need institutions capable of correcting these wrongs and promoting social arrangements that better secure people’s freedom in the face of technological change.

Existing Ethical Frameworks Neglect Justice

Perhaps the most influential approach to responsible regulation of innovation in the United States is that of the Belmont Report, published in 1979 by the US National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research in response to revelations of abuse in biomedical and behavioral research. The report articulates principles of nonmaleficence, beneficence, respect for autonomy, and justice, and it provides guidance on how these principles should regulate interactions between scientific investigators and the humans who participate in their research projects. These principles and guidelines underlie the specific regulatory requirements governing institutional review boards, which oversee federally funded investigations in biomedicine and behavioral health.

Although the Belmont approach has never been perfect in implementation, the ethics framework it created provides credible assurance that regulated research studies will respect study participants’ rights and interests. Given the esteem the Belmont system has earned, it should be no surprise that concerned parties increasingly argue for its extension to AI innovation. While the Belmont principles were designed to govern innovation in biological and behavioral sciences, many proposed AI-ethics frameworks now promote Belmont-style governance.

Yet the Belmont principles are insufficient to govern AI innovation because they do not specify the requirements of justice in a way that captures the role of social institutions and distributions of social labor in shaping innovation and its impacts. Rather, Belmont-type frameworks respond only to a limited set of ethical issues: harms or wrongs that result from discrete interactions between investigators and study participants in the course of research. The Belmont principles are unable to address ethical problems that arise over time and at scale—patterns of harm in larger portfolios of interactions, resulting from the conduct of a range of agents that affects the functioning of important social institutions.

Current ethical frameworks for innovation governance face four challenges. First, such frameworks, because they focus on the responsibilities of individuals, struggle to address unfairness within social institutions and may exacerbate such unfairness. Yes, discrete actions may cause unfairness, but unfairness also accrues over time from patterns in the operation of institutions. Consider the workings of health care and public health systems. Whether these systems can respond effectively, efficiently, and equitably to the needs of diverse populations is determined in part by long histories of inclusion and exclusion, including histories of neglect, indifference, oppression, and racism. The responsiveness of such systems is also profoundly influenced by choices concerning which research questions to pursue and how to allocate funding. But in an open society, individual researchers are permitted to make these sorts of choices, chasing down the questions that interest them. No individual researcher has the ability to rectify problems of exclusion and oppression long since built into health systems, or indeed other social institutions on which people rely.

Second, current ethical frameworks do little to ensure accountability among the full range of relevant stakeholders within the innovation ecosystem. Many critical decisions about what research to pursue, how to allocate funding, and whose needs will be the focus of innovation are made not by researchers subject to ethics oversight but by politicians, philanthropic officers, corporate executives, and leaders of government agencies such as the US National Institutes of Health. However, existing ethical frameworks rarely specify the ethical obligations of these government, civil society, and private-sector stakeholders. These frameworks, then, have scant influence over the individuals most empowered to decide whether innovation strengthens or undermines important social institutions, contributes to justice and equity, or exacerbates injustice and undeserved inequalities.

Third, current ethical frameworks do a poor job addressing portfolio-level issues. These are ethical problems that arise across interrelated sets of decisions. There can be cases in which any given interaction in a portfolio of interactions, evaluated on its own merits according to Belmont principles, could be seen as ethically permissible even as the larger portfolio is morally problematic. In the case of biomedical research, for example, individual oncology trials might exhibit scientific and social value, but the portfolio of such trials might systematically favor the health needs of advantaged groups. The risk-benefit ratio of each of a dozen individual trials may appear reasonable, but the totality of these trials may expose participants to risks that could be avoided through coordination across studies. The same holds for decisions affecting the relevance and breadth of information produced from sets of studies, the degree to which sets of studies advance the pecuniary interests of firms rather than responding to the needs of clinicians and patients, and the extent to which firms offload risk onto consumers rather than addressing it during development. This shortcoming at the portfolio level is partly a function of the previous two dynamics. But it also derives from the myopic, case-by-case evaluation of study protocols and individual technologies.

Finally, most current frameworks struggle to address the problem of distributed responsibility, instead presuming a one-to-one correspondence between the actions of a party and the responsibility of that party for any morally problematic consequences of their actions. In contrast, a framework of justice recognizes the reality of a division of social labor, which distributes rights, prerogatives, and responsibilities across many parties, with consequences for responding to effects of innovation. Innovation produces a social dynamic in which one era’s technological marvel, such as the telegraph or the steam engine, is eclipsed by subsequent advances. The displacement of legacy technology often produces unemployment, which in turn reduces the freedom of displaced workers. That displaced workers deserve support and assistance in transitioning to new forms of gainful employment is widely understood as a social obligation that falls to institutions of government and civil society and is not the sole responsibility of the innovators whose advances cause the decline of preexisting industries.

Each of the above challenges is relevant to issues of fairness in machine learning and AI. The AI ethics literature tends to conceptualize fairness as equal treatment relative to local norms for the distribution of a good or service and to focus on mitigation measures that target the statistical model employed by a given system. This approach, like the ethics review of individual research protocols, assumes that societies are patchworks of independent interactions, whereas in fact societies are interconnected systems in which social institutions affect overlapping aspects of people’s opportunities, capabilities, rights, and interests. As a result, unjust disparities and social inequalities arising from histories of oppression can be perpetuated by the enforcement of local fairness norms. The reason is that prior injustice in one domain creates disadvantage that can affect the prospects of the disadvantaged in other domains. Consider how past and present injustices in housing, finance, or policing can have profound detrimental impact on the health of oppressed populations, the quality of education available to them, their ability to take advantage of educational opportunities, their career prospects, their ability to vote or hold political office, their freedom to move about and associate, their financial prospects, and other important rights and interests. There need be no violation of local fairness norms in, say, schooling in order for injustices in housing to translate into worse outcomes in education.

In such cases, there may not be a one-to-one correspondence between the actions of particular individuals, unjust outcomes, and responsibility for ameliorating unfair disadvantage. As a result, a narrow focus on local norms of fairness in discrete transactions cannot discern larger patterns of injustice across portfolios of interactions, cannot facilitate the process of ascertaining how to intervene to improve matters, and cannot help to identify who should be responsible for doing so.
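The cross-domain dynamic described above can be made concrete with a toy simulation. Everything in the sketch below is invented for illustration: two groups differ in resources only because of a stipulated upstream injustice (say, in housing), while the downstream institution (a school) applies one identical rule to everyone. No local fairness norm is violated in schooling, yet outcomes diverge.

```python
# Hypothetical sketch: a locally "fair" rule in one domain still transmits
# disadvantage created by injustice in another domain. All numbers invented.
import random

random.seed(0)

def resources(group):
    # Group B's lower baseline reflects stipulated prior injustice in housing,
    # not anything that happens within the schooling domain itself.
    base = 10.0 if group == "A" else 6.0
    return base + random.gauss(0, 2)

population = ["A" if i % 2 == 0 else "B" for i in range(10_000)]
scores = [(g, resources(g)) for g in population]

# The school applies a single, identical threshold to every applicant --
# equal treatment relative to the local norm.
THRESHOLD = 8.0
admitted = {"A": 0, "B": 0}
count = {"A": 0, "B": 0}
for g, s in scores:
    count[g] += 1
    if s >= THRESHOLD:
        admitted[g] += 1

rate_a = admitted["A"] / count["A"]
rate_b = admitted["B"] / count["B"]

# The local rule is facially fair, yet admission rates diverge sharply,
# because the disadvantage originated upstream of this institution.
print(f"Group A admitted: {rate_a:.0%}")
print(f"Group B admitted: {rate_b:.0%}")
```

The point of the sketch is the one made in the text: auditing the school's rule in isolation detects no violation, so a framework confined to local norms of fairness cannot see, let alone remedy, the pattern.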

Toward a Justice-Led Approach

Effective governance of innovation, in AI and other areas, requires that we broaden the set of stakeholders whose conduct is subject to accountability as well as oversight and intervention by institutions committed to a framework of justice. That framework must be substantive enough to generate normative guidance while also being widely acceptable to individuals who embrace diverse conceptions of the good and the good life. My book For the Common Good: Philosophical Foundations of Research Ethics covers this sort of framework in detail. Here, I’ll introduce three key elements of my proposed justice-led approach to innovation.

First, justice should be understood as fundamentally concerned with establishing, fostering, or restoring the freedom and moral equality of persons. Second, to respect people as free and equal requires specification of the space in which individuals are equal and in which they have a claim to equal treatment. In a diverse, open society, individuals embrace and follow different conceptions of the good. These different conceptions of the good often include competing hierarchies of value, divergent senses of worth, and inconsistent lists of virtues and vices. But amidst this diversity, every individual shares a higher-order interest in having the real freedom to formulate, pursue, and revise a life plan based on their considered conception of the good. This higher-order interest constitutes a compelling ground for claims of equal standing and equal regard because it is universally shared and because, from this higher-order perspective, there are no grounds on which to deem any individual, or set of individuals, superior to or more deserving than any other. That is, relative to their universally shared interest in having real freedom to formulate and pursue their favored conception of the good—their life plan—all persons are morally equal.

Third, justice is fundamentally concerned with the operation of basic social institutions—basic in the sense that they structure the division of social labor, distribute important rights and opportunities, and operate in a way that determines whether individuals have the real freedom to formulate, pursue, and revise a life plan of their own. Basic institutions include the organs of national, state, and local government because these determine the mechanics of political representation, make and enforce laws, and set the terms on which individuals access all-purpose goods such as employment, education, and the fruits of scientific progress in the areas of individual and public health. Basic institutions also include a network of private organizations that perform socially important tasks such as delivering health care, providing legal services, engaging in scientific inquiry, delivering education, and providing services in the market.

These three elements of justice can provide normative guidance aimed at rectifying the effects of prior injustice and ensuring that basic institutions function fairly and effectively, even in the face of technological change. To be clear, it is impossible to guarantee that innovation never disadvantages anyone relative to their considered life plan, but we can advance justice by safeguarding the ability of basic social institutions to create conditions that enable all persons to develop and exercise the capabilities they need to formulate, pursue, and revise a life plan of their own.

One significant feature of a justice-led approach to governing innovation is the establishment of incentives that encourage technological advance in service of people’s capacities to secure their shared, higher-order interest in freedom and moral equality. In particular, incentives should be used to align the interests of a wide range of parties with the goal of enhancing the ability of social institutions to maintain or promote freedom and equality. For instance, market forces alone are unlikely to incentivize commercial entities to create AI medical tools that address the distinctive health needs of people historically underserved by the health care system. Promoting a more equitable distribution of the benefits advanced by health care innovation will require a mix of approaches that reward this type of distribution while discouraging practices that increase disparities. In general, identifying gaps in the capacities of social institutions is a first, critical step toward adjusting funding priorities, regulatory requirements, and other incentives to better align the narrow interests of public and private stakeholders with opportunities to secure the freedom and equality of persons.

Furthermore, when innovation threatens the capacity of social institutions to effectively, efficiently, or equitably perform their unique functions, justice requires that there be intervention to strengthen those institutions. For example, the proliferation of machine learning and AI systems raises concerns about justice because the data on which these systems are built—and therefore the functions they perform—often reflect patterns of unfair treatment, oppression, marginalization, and exclusion. Algorithms used in policing, sentencing and bail calculations, or parole decisions that recapitulate histories of exclusion or oppression are unjust because of the strong claim of each individual to equal standing and equal regard before criminal justice institutions. The same is true of disparities fostered by AI systems that make decisions regarding employment, lending, banking, and the provision of social services. Such disparities are unjust even if they are not connected to prior histories of exclusion, indifference, or subjugation, because of the important roles these social institutions play in securing persons’ higher-order interest in freedom. But such disparities can be, and often are, doubly concerning precisely because they are connected to, and do recapitulate or compound, histories of subjugation.

When innovation creates or exacerbates inequalities in the ability of individuals to advance their shared higher-order interest—when it promotes the freedom of individuals seen as normal and restricts the freedom of individuals seen as lesser on the basis of features such as sex, race, or creed—then social institutions should intervene to avert and rectify these undeserved inequalities. Such inequalities are, unfortunately, embedded in deployments of AI today. The widespread use of data in which marginalized groups are not well-represented, or are represented in ways that are associated with negative stereotypes or valuations, recapitulates patterns of subordination. And even when innovative technologies do primarily support individuals’ pursuits of their distinctive life plans, concerns of justice arise to the extent that patterns in the performance of these technologies recapitulate historical disparities. Widespread acceptance of these disparities signals that some individuals have lower standing or status than others—a message that is antithetical to justice. When social institutions act to reduce these disparities, they advance an important cause of justice: ensuring that all people are treated as free and equal.

These elements of a justice-led approach help to address portfolio-level issues. When we place justice first, we augment our focus on discrete interactions among a narrow set of stakeholders. We focus as well on broad patterns that emerge over time within larger sets of decisions and on the effects of strategies for dividing social labor among a wide range of interested parties. Relatedly, this justice-led approach can promote accountability among the full range of relevant stakeholders and address the problem of distributed responsibility. That is because this justice-led approach attends to the functioning of basic social institutions whose role is precisely to secure the freedom and equality of persons, regardless of the sources of injustice. Because the free and equal status of technology developers, scientific researchers, corporate leaders, government officials, and other stakeholders must also be respected, a central tool for advancing accountability and freedom is the construction of incentives designed to better align their parochial interests with the ends of justice.

This discussion only sketches the work required of us, but it points to a perspective that is sensitive to the broad range of growing concerns about the social impact of AI systems. Already AI technologies are proliferating in ways that threaten the ability of citizens to hold political leaders accountable, to distinguish truth from fabrication, to ensure the integrity of elections, and to participate in democratic deliberation. These threats, alongside the discriminatory outputs of some AI systems now on the market, implicate matters of justice. And so it is through the pursuit of justice that we may also head off these threats, governing the use of AI in the interest of the common good.

A Human Rights Framework for AI Research Worthy of Public Trust

In 2014, researchers at Cornell University and Facebook joined forces for an experiment. They wanted to find out whether emotional contagion occurs on social media—whether the expressions of emotion showing up in our newsfeeds influence the way we ourselves feel. The study results were clear and important, confirming that emotional contagion is common on social media, as it is in in-person interaction. But while scientists celebrated a significant finding, the public became incensed. The project involved manipulating the feeds of nearly 700,000 Facebook users and studying their responses without their knowledge or informed consent, leading to widespread accusations of ethical lapses.

A decade later, the commercial launch of generative AI has provoked similar uproar. After all, many of the most popular and useful AI tools involve social experiments relying on participants who haven’t agreed to take part. Researchers in corporate and academic settings use AIs to build statistical models of human behavior based on user data and on users’ real-time interactions with AIs. Yet users are seldom consulted about their willingness to be analyzed by machine-learning algorithms or to be manipulated in the process. Whether or not you use AI tools, you have probably been “opted in” to an experiment on people. And if you are using an AI tool, you may be a sort of lab rat yourself.

When the hidden experiments of scientists come into view, the public tends to see them as beyond the bounds of decency—and possibly illegal. AI-based research, in other words, has a public trust problem, with potentially grave consequences. A collapse in trust would not only stifle AI development but also undermine the huge range of research projects that might use AI to advance scientific inquiry.

If AI is to fulfill its promise, the researchers who develop and take advantage of it must build and maintain public trust. And the more that scientific and technological innovation depends on learning from people’s data and decisionmaking, the more trust researchers will need to call on. Yet the path they are currently taking risks failure. How can AI researchers earn public trust?

Achieving an AI research paradigm deserving, and nourishing, of public trust will require reforms. For one thing, the deidentification techniques intended to protect the privacy and security of those contributing data to research projects should be fundamentally overhauled. These techniques have never been reliable, and it is past time that corporate and academic researchers relying on information scraped from the internet commit to a higher standard of data stewardship. But that alone will not be enough. Scientists must be, more generally, committed to respect for the human rights of individuals and groups wittingly and unwittingly participating in their experiments.

A key means of reorienting computational research toward respect for human rights is the adoption of a robust AI ethics protocol. Existing protocols, such as the Belmont principles for human subjects of research, are not sufficient unto themselves. But they can be updated and expanded for the AI age, in hopes of establishing a relationship of care and trust between researchers and society at large.

Moving Beyond Failed Data Privacy and Security Measures

Remember AOL? In 2006 the company released a snapshot of users’ search data: 20 million queries from over 650,000 users were collected in a single text file and posted to a publicly accessible webpage. The file contained no user names or email addresses but did include numerical identifiers for each user and their queries. The assumption was that this would be sufficient to protect users’ identities while still providing the research community a bounty of data from which to learn. But it took two New York Times reporters only a few days to crack the case. Soon they were interviewing a Georgia woman whom they identified on the basis of her queries, which obliquely revealed details about who she might be.

Nearly 20 years later, little progress has been made toward strengthening deidentification methods, even as computer scientists have long understood how brittle they are. And even less progress has been made toward a more expansive ethical vision for computing research.
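The brittleness is easy to demonstrate. The sketch below, with entirely invented records and names, shows the classic linkage attack that undid the AOL release and many releases since: a "deidentified" dataset that retains quasi-identifiers such as ZIP code, birth year, and sex can be joined against a public roster, and any unique match re-identifies a person.

```python
# Hypothetical linkage attack. All records and names below are invented.

# Research release: names stripped, numeric IDs substituted, but common
# quasi-identifiers (ZIP, birth year, sex) retained for "research utility."
released = [
    {"pid": 101, "zip": "30047", "birth_year": 1942, "sex": "F", "diagnosis": "diabetes"},
    {"pid": 102, "zip": "30047", "birth_year": 1985, "sex": "M", "diagnosis": "asthma"},
    {"pid": 103, "zip": "10002", "birth_year": 1985, "sex": "M", "diagnosis": "hypertension"},
]

# A public dataset (e.g., a voter roll) carrying the same fields plus names.
public_roster = [
    {"name": "A. Rivera", "zip": "30047", "birth_year": 1942, "sex": "F"},
    {"name": "B. Chen", "zip": "10002", "birth_year": 1985, "sex": "M"},
]

QUASI = ("zip", "birth_year", "sex")

def link(released_records, roster):
    """Join on quasi-identifiers; any unique match defeats the deidentification."""
    hits = []
    for record in released_records:
        matches = [p for p in roster if all(p[k] == record[k] for k in QUASI)]
        if len(matches) == 1:
            hits.append((matches[0]["name"], record["diagnosis"]))
    return hits

# Two of the three "anonymous" records are re-identified, sensitive
# attribute and all, without any breach of the release itself.
print(link(released, public_roster))
```

No cryptography is broken and no system is hacked here; the release simply fails on its own terms, which is why treating deidentification as the whole of data ethics is untenable.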

Security and privacy are important aspects of data protection, but they cannot address the ethical and social implications of, say, training an AI to model group behaviors based on health and social data. Security and privacy measures also do not prevent the misuse of data by authorized parties. For example, a retailer could use data to which they are legally entitled to find out information about customers that they do not wish to reveal, such as their health status or whether they are pregnant. There need be no breach of data security in such cases, which is another way of saying that protections do not ensure respect for the preferences and expectations of people who contribute their data.

Yet many researchers continue to treat data security—with the limited goal of deidentification—as their principal and perhaps only ethical obligation. This narrow approach was arguably defensible when computing research did not engage deeply in people’s everyday lives. But today’s scientists, spanning scholarly and corporate domains, use big data to model the physical and social worlds, especially human decisionmaking. Under these circumstances, old-style data security is even less effective than it used to be.

A Human Rights Approach: Taking Trust and Participation Seriously

Researchers’ norms around security and privacy are clearly inadequate—both to meet society’s expectations and to protect people from AI-driven mishaps. Computing-dependent research must be subject, then, to a higher standard, which I submit is the standard of human rights. Researchers must respect the human rights of individuals who contribute to AI models and of those groups presumed to have been modeled. That is to say, respect for human rights must extend not just to research participants but to society at large. Public confidence, and with it the future of AI development and AI-driven science, hinges on the adoption of this high standard.

Respect for human rights in research means more than providing for data security. This has been a consensus view since at least 1964, when the World Medical Association adopted the Declaration of Helsinki, recognizing the right of individuals to decide whether or not to participate in medical experiments. Under a human rights framework, researchers actively promote and protect the inherent dignity of every human being by taking responsibility for securing their fundamental rights and freedoms. Respect for human rights requires, for instance, that the health and other social data collected for AI development are used in ways that are consistent with the values, interests, and needs of those generating data—and it requires that the resulting products not harm or discriminate against people, whether they contributed to a specific AI model or not. Respect for human rights also entails that data contributors and the communities they may represent are empowered to participate in research that affects their lives.

Participation necessitates, at the very least, informed consent. It can also involve deeper forms of exchange between researchers and data contributors, such as consultation, co-creation of research projects, and co-ownership of research outputs. Participation can enhance trust in and acceptance of AI by fostering transparency and therefore accountability. And broad participation, cutting across socioeconomic and demographic groups, can enhance the quality and relevance of AI models by ensuring that the social data on which they are based reflect the diversity and complexity of human experience.

Only when the effects of AI models reflect the interests of the communities on which they are based will we know that research is truly in line with the obligation to respect human rights. If this seems like an impossibly high bar to clear, that only goes to show how little care is given to data contributors and the groups they are said to represent right now.

The award-winning ASL Citizen study exemplifies the sort of human rights–centered science I have in mind: it is a rare case of computing research that both respects community norms surrounding data collection and model-building and produces results that the data-contributing community values. This study aims to address the relative absence of native signers from American Sign Language (ASL) datasets used by machine learning models. The ASL Citizen study created and released the first machine-readable sign language dataset drawn from native signers, containing about 84,000 videos of 2,700 distinct signs from ASL. This dataset could be used for ASL dictionary retrieval and to improve the accessibility of what are otherwise voice-activated technologies.

The ASL Citizen study respects the autonomy and dignity of contributing signers by obtaining their informed consent, ensuring their privacy and security, and giving them the option to withdraw their data at any time. The study also respects the diversity and complexity of the US signing community by involving researchers and collaborators who sign and by collecting data from signers with varied backgrounds, experiences, and signing styles. Further, the study is clearly beneficial: it creates a large and diverse dataset that can advance the state of the art in sign language recognition, enabling new technologies that can improve communication among signers and make useful tools available to them. Meanwhile, harms are minimized by ensuring the quality and validity of data and models through the use of rigorous methods for data collection, annotation, evaluation, and dissemination. By releasing their dataset and code under open licenses, and by providing detailed documentation of data and models, the study authors invite scrutiny and accountability. Finally, the study promotes fairness and equity by addressing the needs and interests of a community underrepresented in computing research to date.

The contrast between the ASL Citizen study and the 2014 Facebook-Cornell emotional contagion study could hardly be starker. Both of these studies produced valuable results, but the emotional contagion project, by failing to respect people’s fundamental rights to determine their role in scientific studies, did so at the cost of undercutting trust in the scientific community and in computing research.

Principles and Tools for Ethical AI-Driven Research

A good starting point for an ethics of AI research—guidance that, if followed, would promote public trust—might be the Belmont principles. Produced by a federally chartered panel of experts over the second half of the 1970s, the Belmont principles are the philosophical foundations of human-subjects research ethics in biomedicine and behavioral health in the United States. But the principles could also apply to a wider range of research involving people contributing data to AI models.

The Belmont framework includes three core principles: respect for persons, beneficence, and justice. First, respect for persons requires that human subjects are treated as autonomous agents who can freely consent or decline to participate in research. In addition, vulnerable people and those who experience diminished autonomy—such as children, incarcerated people, and people with certain cognitive challenges—are to be protected from coercion and exploitation. Second, beneficence requires that human subjects are not exposed to unnecessary or excessive harms or risks and that the potential benefits of research outweigh the potential harms or risks. Finally, justice requires that human subjects are selected fairly and equitably, and that the benefits and burdens of research are likewise distributed fairly and equitably across society.

Computing research could be significantly improved on the basis of these guidelines, but they cannot be the whole of AI ethics. It has been widely noted that the Belmont principles, which were designed to guide ethical interaction between discrete researchers and research participants, are too individualistic to address social life. This criticism is well taken and is certainly applicable to AI research, given its focus on modeling human decisionmaking at scale. Even rigid conformity to Belmont principles may not protect the interests of groups said to be represented by AI models.

That being the case, we might look to complementary perspectives from humanist scholars whose work bears on research ethics. These ideas might enrich our sense of what constitutes ethical research in the public interest, motivating investigators—in particular, those using AI—to respect human rights in society broadly and carry out their work in a manner that leaves the public confident in the probity of scientists.

One enriching perspective comes from the concept of mutuality, which emphasizes the interdependence and reciprocity of human beings and the moral significance of caring for others as well as ourselves. Mutuality challenges individualistic assumptions of the liberal tradition and proposes a more relational approach to moral reasoning and action. Whereas liberal ethics emphasizes each person’s possession and vindication of rights, mutuality emphasizes negotiation across barriers of difference and disagreement. With this in mind, a researcher dedicated to mutuality might convene their project’s multiple stakeholders, who together would determine exactly what the risks and rewards of the research are and how they will be distributed. A commitment to mutuality has the potential to foster an inclusive and democratic AI research culture, where the voices and interests of diverse data contributors and their communities are heard and respected.

Scientists could also embrace a more useful and trustworthy research paradigm through a commitment to the ethics of care, a feminist theory associated above all with the philosopher Joan Tronto. Tronto defines care as “a [human] activity that includes everything we do to maintain, continue, and repair our ‘world’ so that we can live in it as well as possible.” We care in all sorts of ways—we care about, take care, give care, and receive care from others. All of these involve ethical responsibilities that Tronto describes and that are worthy of careful consideration. As the feminist anthropologist Ramona Perez has argued, researchers might turn to care ethics to orient themselves to their own motivations and obligations and to the effects of their work on both research participants and society at large.

Finally, researchers might invest in the ethics of dwelling. The social theorist Jarrett Zigon contrasts dwelling—a mode of being-in-the-world that is attuned to the ethical demands and possibilities of one’s situation—with acting, a mode of being-in-the-world that is guided by norms and rules. Zigon argues that dwelling is in fact the primary mode of ethical life, as people respond creatively to circumstances as they encounter them, seeking to transform themselves and their worlds accordingly. As a framework for scientific inquiry, dwelling involves building open and ongoing relationships between researchers and research participants, with scientists learning how the contexts of participants’ daily lives affect their behavior and their needs. In the AI research field, strong relationships with data contributors will help scientists understand what they are modeling so that they can develop more sophisticated systems that reflect the complexity of human experience. Imagine both the ethical and scientific benefits of building on rather than ignoring the diversity and unpredictability of human life!

The Belmont principles, which were designed to guide ethical interaction between discrete researchers and research participants, are too individualistic to address social life.

These may seem like abstract ideas, but fortunately there are also concrete projects that researchers can look to for tools and inspiration—projects that foster participatory and human rights–respecting approaches to AI model-building. Investigators working with people and their data should take note of Data for Black Lives, which mobilizes scientists, activists, and organizers to use data in ways that foster beneficial change for Black people. The Our Data Bodies project traces effects of AI models on marginalized peoples’ opportunities, such as their ability to obtain decent housing and public assistance. The AI Blindspot framework offers a method for identifying and mitigating AI-based discrimination by engaging diverse stakeholders and perspectives. And the Data Nutrition Project promotes transparency in AI model-building by evaluating the completeness, accuracy, and representativeness of datasets and assembling the findings in an easy-to-digest label somewhat like those found on food packaging.

Importantly, these projects are informed by the participation of subject matter experts outside the data-science community, including data contributors, social scientists, and humanists. To rethink their narrow attention to security and privacy and instead consider human rights broadly, scientists must be willing to learn from the rest of society. Indeed, they should be more than willing—they should be eager to embrace the opportunity. Collaboration builds trust and, in AI research, improves outcomes by helping scientists better understand the real-world effects their models might have.

A Future Built on Trust

From where I sit, with one foot in academia and the other at an industry-based computing research center, it is clear that AI governance is a near-term problem. It needs to be addressed now; we can’t wait to evaluate what happens after more AI models have been built and deployed. By then, public trust in computing research may have run out, and opportunities to do real good with AI studies could be squandered.

We need governance on the basis of sound ethical principles now because we need to be building public trust—now. This is the resource that will underlie the robustness of AI systems, especially in sensitive domains such as health, education, and security. Scientific communities should therefore be working overtime to build public confidence in AI. Measures of public trust in our practices—how we engage study participants, communicate the value of our study questions, and steward contributors’ data—might serve as some of our best benchmarks of research success. Then too, the public will be focused on outcomes: scientists should prioritize the development of AI models that, beyond simply incorporating bias mitigation, support the distribution of benefits to the least advantaged.

In other words, both the process and effects of research should be governed through an ethical framework designed to secure human rights—those of research participants and of society at large. Belmont-style principles can help with the process side, framing researchers’ obligations to data contributors. And other perspectives help us understand how to realize good outcomes at social scale, especially among the groups modeled by AI. On the basis of this richer ethical framework, researchers can preserve their ability to do science using advanced computational techniques, generating both valuable knowledge and the public trust that is the basis of their best work.

Reform Federal Policies to Enable Native American Regenerative Agriculture

Over the last five years, the number of bison on the Great Plains has increased significantly. Today, more than 20,000 bison roam the ancestral homelands of 82 tribes in the United States. This is a small number compared to the 30 million or more that grazed these vast prairie ecosystems during the nineteenth century, before federal incentives and land settlement policies drove them to near extinction. The bison’s promising recovery is the direct result of continuous restorative efforts led by generations of tribal members.

The restoration of this keystone species has multiple documented benefits: bison graze in a way that improves the root structure of the grasses and soil health by, among other things, increasing the soil’s retention of rainwater. Their shaggy coats distribute seeds across the landscape, and the wet spots where they wallow support birds and other species. This knowledge is embedded in tribal historical relations, demonstrating the cultural as well as ecological significance of efforts to support the return of bison.

Tribal Nations across the United States have implemented other culturally significant regenerative agricultural practices on the land, including the use of fire and waterscaping, both of which improve soil health and encourage native species to flourish. While much of the world is wondering how to best sequester carbon as a response to climate change, Native Peoples’ relational and integrative approach to land stewardship is just one example of their capacity to lead carbon-conscious land and agriculture management.

In an effort to mitigate carbon emissions, the federal government recently began incentivizing agricultural techniques that increase carbon content in soil, which is measured as soil organic carbon (SOC). The 2024 Farm Bill, for instance, includes $3 billion in federal funds for what are called climate-smart practices on agricultural land. Future funding for carbon sequestration projects is likely to grow. But without deliberate changes in policy and awareness of the potential of Native land stewardship, it is likely that little of that money will support projects where the full range of Native regenerative agricultural practices are used—such as tribal-based bison recovery efforts.

While much of the world is wondering how to best sequester carbon as a response to climate change, Native Peoples’ relational and integrative approach to land stewardship is just one example of their capacity to lead carbon-conscious land and agriculture management.

Fully bringing the power of traditional Native agricultural practices to bear on local and national climate goals requires addressing two significant barriers. The first barrier has its roots in over a century of federal data collection and governance that continues to prevent Indigenous communities from making informed decisions about their own land. A second barrier is that “climate-smart” practices are currently defined in ways that overlook the full extent of Native land stewardship—in part by failing to fully recognize Native knowledge production as valid science. Addressing both of these barriers will require investment in resources to increase tribal data sovereignty as well as a redefinition of what climate-smart practices mean.

As a collective of Indigenous and allied scholars interested in data, the environment of Indigenous Peoples’ lands, and climate research, we have gathered data to explore the potential role of Native-led agriculture in carbon sequestration. The 574 sovereign Tribal Nations in the United States steward 56.2 million acres of land (approximately the size of Kansas), which is spread out over 703 territories in 35 states. Many of the natural resources within these jurisdictions and beyond would benefit from the revitalization of Indigenous knowledge in land planning.

However, today the US government defines carbon sequestration according to a belief system that prioritizes conservation or focuses on forestry management. This definition does not formally recognize the potential of Native-led efforts such as bison restoration, fire, and waterscaping. We advocate that the government recognize (and fund) tribally supported data sovereignty efforts and acknowledge and integrate these data into non-Indigenous ways of quantifying conservation and the environment at the federal level. We also offer recommendations to promote self-determination in Native agricultural practices.

Stolen land and missing data

Colonial land policies and a legacy of exploitative transactions have drastically altered Native Peoples’ ownership and stewardship of land in the United States. In particular, the 1887 Dawes Act forcibly privatized a vast majority of Native lands by dividing reservations into individual allotments, ranging from 40 to 160 acres. Allotment, which President Theodore Roosevelt deemed “a mighty pulverizing engine to break up the tribal mass,” has ultimately prevented many Native landowners from working their lands. Because land not assigned to an allottee was typically taken out of the hands of Native ownership, the process further dispossessed Native Peoples from their lawfully granted landbase. As a result, between 1890 and 1934, Native landownership dropped from 117 million acres to 34 million acres.

Tribes and decisionmakers still lack access to relevant and accurate information about Native lands because some data are inaccessible and others are not collected at all.

Today, Native agriculture continues to be hampered by these colonial policies. As an example, on the Great Plains, land leases born from the policies of allotment are still primarily held by white farmers and ranchers, and the leases are typically negotiated by the Bureau of Indian Affairs (BIA). This further privileges resource extraction and the cash-crop industry on Native lands. Recognizing historical land mismanagement and racial discrimination, two historical court settlements—Cobell v. Salazar (2009) and Keepseagle v. Vilsack (2011)—have compelled the federal government to pay more than $4 billion to individual Native landowners, farmers, and ranchers, as well as Native organizations. But that (insufficient) reparation does little to repair generations of damage, including to Native Peoples’ ability to farm sustainably.

A further legacy of colonialism is that most land-use data today are still produced and stored by non-Native institutions, particularly the BIA. Such data curation is limited by what Western agricultural worldviews consider important information. As a result, tribes and decisionmakers still lack access to relevant and accurate information about Native lands because some data are inaccessible and others are not collected at all.

Lack of data puts Native land stewards at a disadvantage. For example, in the case of the Pine Ridge Reservation in South Dakota, land holdings of Native owners were checkerboarded, and much of the productive agricultural land is still leased out by the BIA. Today, tribal decisionmakers lack access to ownership data and leasing records. Without this knowledge, they cannot make long-term plans to engage in tribal climate management, such as carbon sequestration plans, nor be rewarded by federal incentive schemes. More generally, without access to data, tribal decisionmakers are unable to chart their own course in a rapidly changing environment.

A quiet revolution in data

The democratization of digital humanities is providing groups such as ours with tools—including geographic information system technology and data visualization dashboards—that allow new analyses. To take advantage of them, we need ways to format public data that fit tribal agendas. Federal support for such datasets could help remediate the historical lack of available planning data and promote better agriculture. To this end, the Native Lands Advocacy Project (NLAP) has developed new ways for tribes to access various soil, climate, land, and agricultural data from the public domain to help support sovereign land planning.

For example, the NLAP created an interactive dashboard from the US Geological Survey National Land Cover Database (NLCD) that enables users to see general patterns across Native lands and filter by individual tribal geography. The dashboard reveals that tribal lands consist of 24% grasslands, 29% forests (deciduous and evergreen), 6.7% open water and wetlands, and 10% cultivated crops. Importantly, this tool can be used to monitor the evolution of land cover over time—particularly to assess deforestation, loss of natural cover to land development, and the long-term effectiveness of conservation policies. Viewed at the continental level, it also makes visible the untapped and undeniable potential for carbon sequestration in Native lands in the United States.

Figure 1. VISUALIZING THE POTENTIAL FOR CARBON SEQUESTRATION ACROSS ALL NATIVE LANDS

The analyses enabled by the dashboard put some key questions in high relief: Are these lands valued for the benefit of Native Peoples? Who or what value system determines their potential?

The democratization of digital environmental humanities opens up opportunities to answer some of these questions. For example, using data from the Census of Agriculture for American Indian Reservations, researchers found that a striking 87% of the total agricultural revenue on Native land is still captured by white farmers and ranchers, even though 75% of these farmlands are managed by Native operators. Thus, the dashboard demonstrates quantitatively the long-term effects of allotment policies, giving more information on the distribution of resources in Indian Country.

Examining this contrast between revenue extraction and acreage of land farmed also reveals hidden possibilities. While most agricultural revenue is currently extracted by white farmers, the fact that the majority of the land on reservations is operated by Native farmers contributes to the argument that Native-led agriculture could address land-use issues in a substantial way.

Native agriculture could have the potential to synergistically address a wide variety of social and ecological issues, provided it is given the space to do so.

The dashboard also demonstrates how excavating important details in data can shift perceptions and possibilities. For example, using the dashboard, researchers uncovered a higher proportion of Native women operators leading agricultural practices, which may stem from culturally specific understandings of the land as shown in the story of Navajo/Diné agriculture. This suggests that Native agriculture could have the potential to synergistically address a wide variety of social and ecological issues, provided it is given the space to do so.

The data on ownership are important because today’s incentive schemes for carbon mitigation are likely to reward large landowners due to the high cost of planning, auditing, and issuing carbon credits. For example, the majority of credits for voluntary forest carbon projects are issued to entities getting more than a million credits at a time. It will take a different lens to shift incentives to reward many diverse smallholders, better supporting local communities, preserving biodiversity, and encouraging culturally important practices.

Finally, while figures about income generation do demonstrate the propensity of Western-style farming to extract revenue from land, they reveal little about how to sustainably manage and steward land. And if stewardship is the true priority, then the information available might not be the information needed.

Exploring regenerative scenarios

Today, the Western-led conversation around carbon and land use in the United States is profoundly shaped by the assumption that humans are separate from nature. For example, soil organic carbon (SOC) is sometimes conceptualized as a “debt” incurred by long-term human use. According to this worldview, it is human activities—where “human” is used in a generic way—that have degraded lands and stripped soil of nutrients and organic carbon stocks. Accordingly, many SOC incentives aim to leave croplands fallow, encourage forest growth, and avoid human-led agriculture. This way of thinking is based on a discredited but still active model of fortress conservation, in which nature is “protected” by displacing human inhabitants.

In contrast, regenerative agriculture aims to maintain and restore soil and ecosystem health through a model of land use and management that includes long-term observation and deep care—concepts that have long informed Native land knowledge and stewardship. This premise shifts the conversation from blaming human-caused land use to supporting practices based on stewardship. Thus, land degradation is not inherent to soil use per se, but the result of misguided relations to soil. More broadly, many Native practices open possibilities for humans to foster a healthy and durable relationship with the land.

Humans are seen as an important part of, but not central to, the complex micro- and macro-relationships of healthy ecosystems, including water health, predator-prey relationships, and soil health.

For diverse Native Peoples, food cultivation is part of a tightly woven relationship with the living universe that is tied to each tribal community’s very existence. These relationships have transformed over time into contemporary community-centered agricultural approaches where “successful” agriculture ideally focuses on humans’ interconnection with an entire ecosystem. Humans are seen as an important part of, but not central to, the complex micro- and macro-relationships of healthy ecosystems, including water health, predator-prey relationships, and soil health. Native knowledge often recognizes this interconnection, and it’s becoming more widely recognized by mainstream soil science.

Today, some food sovereignty initiatives led by Tribal Nations adhere to traditions of Native regenerative agriculture that maintain soil health. For example, the Oneida Nation of Wisconsin is now 20 years into its food sovereignty initiatives, which have been carried out in concert with exemplary water and soil quality programs funded and monitored by the tribe. Oneida’s large-scale investment in traditional food crops and food networks has resulted in a number of exciting, innovative, and culturally rooted projects. One example is the certified food handlers program, which uses innovative technology to welcome learners of all backgrounds to a comprehensive approach to Oneida foodways and community food safety. Oneida has been so successful with its food sovereignty strategy that it often provides free consultations to other tribes that are trying to start their own programs. 

Incentives for regeneration

US colonial policies continue to have practical consequences for Native land stewards today. Climate-smart initiatives are structured to achieve national goals without acknowledging or furthering tribal goals, which may include establishing rights and sovereignty and centering Native knowledge. National goals around carbon, by contrast, often reflect ideology around fostering markets for carbon credits and offsets that may end up rewarding extractive industries and fossil fuel producers. Changing federal priorities for land practices is an important step in building a more just response to climate change.

If climate-smart incentives were written to include these broader goals of Native regenerative agriculture, and if appropriate data were available to tribes, we believe that Native operators would be eager to assist in meeting national climate goals. To estimate the size of this opportunity, we used the Soils Revealed project, a dataset that provides estimates for SOC changes under various agricultural scenarios. Across the 703 tribal territories of the United States, we compared today’s business-as-usual practice of moving forestland to crops with regenerative schemes over a 15-year period. We selected three scenarios from the Soils Revealed database that most resembled Native land management practices: improved cropland management with high organic input and minimal disturbance, improved management of grassland, and increased land rewilding.

Our simulations suggested significant differences between these scenarios. Following the mainstream agricultural model in which forestland is converted to crops, Native lands are predicted to suffer an additional loss of 14.52 tons of carbon per hectare (tC/ha). The three more regenerative scenarios instead show an increase in SOC, from 2.78 tC/ha for rewilding to a peak of 7.17 tC/ha for organic cropland management with minimal disturbance.
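To get a feel for the scale these per-hectare figures imply, here is a rough, purely illustrative calculation. It assumes, as a simplification the study itself does not make, that each scenario’s 15-year SOC change applies uniformly across all 56.2 million tribal acres; the scenario names and per-hectare figures come from the text above, and everything else is a hypothetical sketch.

```python
# Illustrative back-of-the-envelope only: applying per-hectare SOC figures
# uniformly across all tribal acreage is a simplifying assumption for scale,
# not the method used in the analysis described above.
ACRES_TO_HECTARES = 0.404686
tribal_acres = 56.2e6  # total acreage stewarded by Tribal Nations

# 15-year soil organic carbon change, tonnes of carbon per hectare (tC/ha)
scenarios = {
    "business as usual (forestland to crops)": -14.52,
    "rewilding": 2.78,
    "organic cropland, minimal disturbance": 7.17,
}

hectares = tribal_acres * ACRES_TO_HECTARES
for name, delta_tc_per_ha in scenarios.items():
    total_mt = delta_tc_per_ha * hectares / 1e6  # million tonnes of carbon
    print(f"{name}: {total_mt:+.0f} Mt C over 15 years")
```

Under that uniform-coverage assumption, the spread between business as usual and the best regenerative scenario is on the order of hundreds of millions of tonnes of carbon, which is why the definition of climate-smart practices matters so much.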

These results are consistent with other predictions demonstrating the positive impact of regenerative agriculture on soil health and its potential for efficiently sequestering carbon. They suggest that scenarios involving anthropogenic land use—particularly Indigenous stewardship practices—could be powerful and effective tools for sequestering carbon while nourishing communities. Such analysis also deepens the carbon conversation by recentering it around human relationships with the land, confirming a Native worldview that humans have a stewardship duty toward soil via the maintenance of kinship relations with it. Finally, this analysis shows how the historical marginalization of Indigenous knowledge and data can be challenged when digital humanities encompass data sovereignty and operate from Native worldviews.

Figure 2. COMPARING AGRICULTURAL SCENARIOS FOR SOIL ORGANIC CARBON CHANGE ACROSS NATIVE LANDS

How to support tribal regenerative agriculture

Effectively mitigating carbon emissions, righting historical injustices to Native communities, and stewarding land for the future will require a shift in federal worldviews and policies. Decisionmakers should ensure that Indigenous agriculture, whether practiced by individuals or Tribal Nations, is free from federal obstructions. The authority of Native farmers and land managers over their land should be recognized, both in data policies and in knowledge and practices. In particular, Native voices should be centered in policies that define and incentivize regenerative practices, such as the Farm Bill.

Policies that place Native regenerative agriculture in a position to grow have the potential not only to transform Native lands and communities—a good end in itself—but also to remodel ideas about land stewardship and carbon sequestration to build a better future for the planet.

Unlocking the potential of Native stewardship requires changing the way data are gathered and handled by the federal government. The success of Native land planning depends on informed decisionmaking, which requires access to appropriate data. The US government should take into account tribal interests when collecting data and should support tribes’ efforts to gather longitudinal data. Native communities should be empowered to gather the data they need to use for local decisionmaking and land stewardship.

Finally, the federal government needs to honor its trust responsibilities by defending Native Peoples’ control over their lands. As carbon offsets generate more income, predatory practices could harm Native stewards if the federal government does not attend to environmental justice and other power disparities. Policymakers should listen to Native voices on tribal land management to determine which policies are truly needed to enhance soil carbon and support Native communities.

Policies that place Native regenerative agriculture in a position to grow have the potential not only to transform Native lands and communities—a good end in itself—but also to remodel ideas about land stewardship and carbon sequestration to build a better future for the planet. Bringing back the bison, as well as global efforts like #LandBack, could be the beginning of Native Peoples leading a shift toward non-harmful ways of inhabiting the earth.

Design for a “Mess”

"Design for a Better World" by Don Norman

“The world is a mess,” reads the opening sentence of the blurb for Don Norman’s latest book, Design for a Better World. Compelled by that phrase, I was left wondering: Does Norman, an influential voice on user-centered design, perhaps best known for his seminal book The Design of Everyday Things, have workable solutions to offer so we can design our way out of the mess?

Thirty years ago, I read Norman’s The Design of Everyday Things, which was originally published in a hardcover version as The Psychology of Everyday Things and retitled for the paperback edition. In his preface to that new edition, the author suggested the title change was a “lesson in design.” I could not agree more—many readers may find a book on design less intimidating than a book on psychology. By changing the title, Norman was practicing what he was preaching: making its design more user-centric.

In The Design of Everyday Things, Norman preached effectively. He offered a distinctive perspective on something commonplace (everyday things), with an approachable style and a persuasive pitch to casual readers who otherwise may not have given much thought to the good, the bad, and the ugly of the designs of the many things they interact with in their daily lives. His message helped bring user centricity to the front and center of product design and was part of a widespread shift toward more intentional design.

In his new book, Norman shifts the focus to something much more ambitious: the role of design in transforming the world from its present “mess” into something “better”—more sustainable, meaningful, and centered on humanity. While I applaud the author’s ambition, a shift from a relatively narrow focus on the design of tangible everyday objects to something as vast as a moral reform of the economy and its relationship to the environment is a tall order, requiring more than a call to action on multiple fronts.

His message helped bring user centricity to the front and center of product design and was part of a widespread shift toward more intentional design.

Design for a Better World begins with a compelling observation: almost everything we see, interact with, and are immersed in is not natural. Institutions; ways of observing and measuring; assessment of success and failure; day-to-day conduct; our spaces and our habitat; and all of our social constructs are “artificial” in the sense that they are designed. And the designs have been intuited, conceptualized, instituted, and evolved over tens or hundreds or thousands of years as humans made design choices—be they conscious or unconscious, explicit or implicit, formal or informal. Reimagining design, in this broad sense of the word, should then be considered a necessary step in reconfiguring our choices toward creating a world that is more sustainable, meaningful, and humanity centered.

Start with the call for sustainable design. In a chapter titled “We Live in the Age of Waste,” Norman addresses the unfortunate reality that many products are designed for fast or forced obsolescence. Waste is produced by economic systems that only reward revenue and profit growth, consequently incentivizing product design that compels the consumer to purchase, discard, and purchase again, with little hesitation. In short, “design” as practiced to date is the antithesis of the concept of a circular economy.

This applies to the design of not just products, but entire sociotechnical systems with substantial economic consequences and social, political, and economic interdependencies. Changes to these systems are difficult: they are hard to understand and are usually underpinned by visible and invisible power dynamics. Good luck retooling design to embrace the circular economy! Then there is the anything-but-trivial issue of cleaning up all the mess already created from past consumption. Who is to take care of that?

Waste is produced by economic systems that only reward revenue and profit growth, consequently incentivizing product design that compels the consumer to purchase, discard, and purchase again, with little hesitation.

Next is the call for meaningful design. Traditionally, we—society at large—tend to focus on things, constructs, system states, and flows that can be measured. Engineers, managers, administrators, policymakers, consultants, leaders, and observers all focus on measuring; if something cannot be measured, it is not worthy of study. But Norman argues that many of the measurements we pursue (for example, gross domestic product) mean little to ordinary people in their daily lives. What is lost is a deeper sense of meaning: What do all of these metrics mean for me and my day-to-day life?

This question underscores the role of design in choosing, framing, and measuring what we value. The goals we pursue, the progress we assess, and the impact of that progress should be designed to be meaningful to those affected, so that meaning can be effectively communicated. “Communication” in this sense goes beyond simple messaging: it encompasses execution, assessment, feedback, and interventions aimed at change, directed both at those directly impacted and at the wider world.

Finally, Norman calls for design that is humanity centered—an argument for shifting the focus of design from the individual user (the thrust of The Design of Everyday Things) to humanity in its larger, more holistic sense. This means democratizing design and engaging humanity at large in design; it also requires an expanded, more universal articulation of the construct and application of design. We are all designers now—designers not just of objects but of our worldviews, constructs, mores, metrics, institutions, processes, systems, ideas of well-being, futures, and much, much more. This is an ambitious vision, given the narrow interpretation of the word “design” and the many messy complexities of the real world. At the same time, some might also question why Norman stops at design that is solely humanity centered. Why not Earth centered, thus including biodiversity, our limited resource base, and the planet?

On completing the book, I found myself convinced by Norman’s diagnosis of the design problem, and I looked eagerly for a comprehensive set of prescriptions. But here, I was disappointed. To be sure, the author effectively characterizes the messiness of our world and convincingly identifies many of the underlying reasons for the mess. But it nagged at me that this would be true even if the focus of the book were simply “toward a better world,” without any reference to design.

This is an ambitious vision, given the narrow interpretation of the word “design” and the many messy complexities of the real world.

However, since the book is actually a proposition that design can contribute to a better world, I walked away with two big questions. First, how exactly, in nonobvious ways and with the requisite expansive, granular detail, should design be changed to create a more sustainable, meaningful, humanity-centered world? And second, how should those changes be implemented? I wish Norman had answered these two questions with specificity and in considerable depth.

As it is, many of us already see and understand the problems and contributing causes of the mess. What we need are substantive, workable, prescriptive solutions to help navigate the really hard choices we must make to redesign how we interact with each other and operate in the world. Paradoxically, the design of such a redesign will take an enormous amount of action, collaboration, and coordination at both the individual level and throughout the collective population of 8 billion. Execution of this vision is likely a task beyond the reach of any one human being, Norman’s expertise and capabilities notwithstanding. I believe the author is cognizant of this limitation: he makes a reference to political scientist Charles E. Lindblom’s classic 1959 paper “The Science of ‘Muddling Through,’” praising an approach of “incremental, small attacks on the issues, enabling continual flexibility guided by the feedback from the early results.” Absent clear, specific, comprehensive, workable prescriptions, muddle through we must.

“Ghosts” Making the World a Better Place

In “Bring on the Policy Entrepreneurs” (Issues, Winter 2024), Erica Goldman proposes that “every graduate student in the hard sciences, social sciences, health, and engineering should be able to learn some of the basic tools and tactics of policy entrepreneurship as a way of contributing their knowledge to a democratic society.” I wholeheartedly support that vision.

When I produced my doctoral dissertation on policy entrepreneurs in the 1990s, only a handful of scholars, most notably the political scientist John Kingdon, mentioned these actors. I described them as “ghost like” in the policy system. Today, researchers from across the social sciences are studying policy entrepreneurs, and many new contributions are being published each year. Consequently, we can now discern regularities in what works to increase the likelihood that would-be policy entrepreneurs will meet with success. I summarized these regularities in an article in the journal Policy Design and Practice titled “So You Want to be a Policy Entrepreneur?”

When weighing the prospects of investing time to build the skills of policy entrepreneurship, many professionals in scientific, technological, and health fields might worry about the opportunity costs involved. If they work on these skills, what will they be giving up? It’s legitimate to worry about trade-offs. And, certainly, none of us want highly trained professionals migrating away from their core business to go bare knuckle in the capricious world of political influence.

But to a greater extent than has been acknowledged so far, building skills to influence policymaking can be consistent with becoming a more effective professional across a range of fields. The same skills it takes to be a policy entrepreneur are those that can make you a higher performer in your core work.

Building skills to influence policymaking can be consistent with becoming a more effective professional across a range of fields.

My studies of policy entrepreneurship show collaboration is a foundational skill for anyone wanting to have policy influence. Policy entrepreneurs do not have to become political advisers, lobbyists, or heads of think tanks. But they do need to be highly adept at participating in diverse teams. They need to find effective ways to connect and work with others who have different knowledge and skills and who come from different backgrounds than their own. Thinking along these lines, it doesn’t take much reflection to see that core skills attributed to policy entrepreneurs are of enormous value for all ambitious professionals, no matter what they do or where they work.

We can all improve our productivity—and that of others—by improving our teamwork skills. Likewise, it’s well established that strategic networking is crucial for acquiring valuable inside information. Skills in framing problems, resolving conflicts, making effective arguments, and shaping narratives are essential for ambitious people in every professional setting. And these are precisely the skills that, over and over, we see are foundational to the success of policy entrepreneurs.

So, yes, let’s bring on the policy entrepreneurs in the hard sciences, social sciences, health, and engineering. They’ll have a shot at making the world a better place through policy change. Just as crucially, they’ll also build the skills they need to become leaders in their chosen professional domains.

Professor of Public Policy

Monash University

Melbourne, Victoria, Australia

Erica Goldman makes the important case that we need to better enable scientists and technologists to seek to impact policy. She asserts that by providing targeted training, creating a community of practice, and raising awareness, experts can become better at translating their ideas into policy action. We should build an academic field around policy entrepreneurship as a logical next step to support this effort.

One key reason why people don’t pursue policy entrepreneurship is, as Goldman suggests, “they often have to pick up their skills on the job, through informal networks, or by serendipitously meeting someone who shows them the ropes.” This is in part because these skills are not regularly taught in the classroom. The academic field of policy analysis relies on a client-based model, which assumes that the student already has or will obtain sufficient connections or professional experience to work for policy clients. But how do you get a policy client without a degree or existing policy network?

How do you get a policy client without a degree or existing policy network?

Many experts in science, technology, engineering, and mathematics who have tremendous professional experience—precisely the people we should want to be informing policy—do not have the skills to take on client-based policy work. Take a Silicon Valley engineer who wants to change artificial intelligence policy, or a biochemist who wants to reform the pharmaceutical industry. Most such individuals will not enroll in a master’s degree program or move to Washington, DC, to build a policy network. As Goldman emphasizes, we instead need “a practical roadmap or curriculum” to “empower more people from diverse backgrounds and expertise to influence the policy conversation.”

What if we instead developed a field designed specifically to teach subject matter experts how to impact policy from the outside, how to help them get a role that will give them leverage from within, or how to reach both goals? At the Aspen Tech Policy Hub, we are working with partners such as the Federation of American Scientists to kick-start the development of this field. We focus on teaching the practical skills required to impact policy—such as how to identify key stakeholders, how to develop a policy campaign that speaks to those stakeholders, and how to communicate ideas to generalists. By investing in the field of policy entrepreneurship, we will make it more likely that the next generation of scientists and technologists has a stronger voice at the policy table.

Director, Tech Policy Hub

The Aspen Institute

Engineering on Shaky Ground: Lessons From Mexico

The Third International Conference on Earthquake Early Warning in 2014 drew roughly 100 attendees from around the world, enough to pack a small lecture hall. The first speaker, University of California, Berkeley seismologist Richard Allen, saw great potential for better systems. No longer would earthquakes take people by surprise. Seconds of warning before earthquakes struck would create new opportunities to protect vulnerable people—sirens would wake them and help them evacuate, and automated signals would slow factory lines, elevators, and commuter trains.

At the time of the conference, United States-based scientists were working hard to secure the support necessary to launch a public earthquake early warning system for California, Oregon, and Washington (a project I have studied and written about elsewhere). Allen’s introductory talk used California’s southern neighbor as a touchstone. He explained: “Mexico City has a warning system, built after 10,000 people were killed in 1985. The question is, therefore, what would it take to build a public earthquake early warning system in the United States?”

The warning system Allen referred to is the Sistema de Alerta Sísmica Mexicano (SASMEX). It relies on a network of accelerometers strung along the central western coast of Mexico to automatically register quakes as they start and then send out warnings. Mexico City can sometimes get more than a minute of notice before shaking that starts on the Pacific coast of Mexico reaches them, and SASMEX can also offer at least 10 or 20 seconds of warning to places elsewhere in Mexico. 

When SASMEX went online in 1991, it was the first system of its kind in the world, and at that point it used just 12 stations. Since then, the state-funded system, maintained and championed by a small community of Mexican engineers, has expanded to include 98 seismic field stations that send alerts to six cities. Proponents suggest that this system, and others like it around the world, can help ordinary people and automated systems prepare for oncoming earthquakes, saving lives and limiting economic losses.

Mexico City can sometimes get more than a minute of notice before shaking that starts on the Pacific coast of Mexico reaches them.

Early warning technologies were the focus of the 2014 conference, which primarily brought together scientists and engineers but also emergency managers, policymakers, and me, an inquisitive anthropologist. Attendees hailed from universities, businesses, NGOs, and government offices across East Asia, Western Europe, and North America. The audience represented professional diversity as well as an abundance of training in geophysics and engineering, and a concomitantly shared confidence that earthquake early warning made sense in the effort to save lives, property, and money.

Conference presentations often centered on technological developments, and they discussed early warning technologies as if the social benefits were obvious and the positive impacts inevitable—as if only a few remaining issues still needed to be worked out.

To me, those characterizations of earthquake early warning development seemed detached from reality. As described in many papers and posters at that conference, getting from alert to response sounded easy. Having studied SASMEX’s development and implementation for years, “easy” was not a word I would use to describe it. But technologists in this community often elaborate on the benefits of earthquake early warning systems and foreground their promises while neglecting mention of the challenges involved in practical use.

Seismicity

These challenges were on full display three years later, when a terrible earthquake shook central Mexico on September 19, 2017. SASMEX worked, I was told, but not as well as it might have. Though exact numbers of those impacted are always hard to determine in a disaster, more than 200 people were killed in Mexico City alone and almost 150 elsewhere. It was the kind of complicated “success” that can happen when a technology designed to support public safety is released into the wider world.

Technologists in this community often elaborate on the benefits of earthquake early warning systems and foreground their promises while neglecting mention of the challenges involved in practical use.

The complications with the warning system in September 2017 arose from intersecting issues: unpredictable earthquakes, Mexican social practice, and unevenly maintained technologies. For example, the system was designed to prevent a repetition of the 1985 disaster by anticipating the range of earthquake possibilities inherent in the region’s geology and geography. The engineers at the Center for Seismic Instrumentation and Registry (or CIRES, from its Spanish-language initials) planned the earthquake early warning system based on the likelihood that another large earthquake would originate from the west coast of Mexico, not 150 miles inland like the quake in September 2017. It is not clear that a substantial warning could have been generated even if stations had been positioned differently, but as it was, the system offered comparatively little advance notice before the ground began to shake.

The magnitude 7.1 quake originated outside the central Mexican city of Puebla, less than 100 miles away from Mexico City. Only a few of SASMEX’s networked stations were positioned nearby, so the earthquake early warning system could only generate an alert 12 seconds before serious shaking began in Mexico City at 1:15 p.m. By that time, some of the fastest-moving seismic waves—the comparatively weak compression waves, or P-waves—had already hit the city.
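To see why epicentral distance and station placement matter so much for lead time, consider a toy back-of-the-envelope model. The wave speeds and processing delay below are assumed illustrative values, not SASMEX’s actual parameters: an alert can go out once the fast but weaker P-wave reaches a sensor and is processed, while the damaging S-waves travel more slowly toward the city.

```python
# Toy estimate of earthquake early warning lead time.
# Illustrative only: assumed typical wave speeds and latency,
# not SASMEX's actual detection algorithm or parameters.

P_WAVE_SPEED_KM_S = 6.0   # fast, comparatively weak compression waves
S_WAVE_SPEED_KM_S = 3.5   # slower, damaging shear waves

def warning_time_s(station_to_epicenter_km: float,
                   city_to_epicenter_km: float,
                   processing_delay_s: float = 3.0) -> float:
    """Seconds between alert issuance and S-wave arrival at the city.

    The alert can be issued once the P-wave reaches the nearest sensor
    and processing finishes; the alert itself travels at radio speed,
    effectively instantaneous by comparison.
    """
    alert_issued_s = station_to_epicenter_km / P_WAVE_SPEED_KM_S + processing_delay_s
    s_wave_arrival_s = city_to_epicenter_km / S_WAVE_SPEED_KM_S
    return s_wave_arrival_s - alert_issued_s

# A coastal quake ~300 km from Mexico City with a sensor near the source
# leaves ample notice; an inland quake ~160 km away, with the nearest
# sensor farther from the epicenter, leaves far less.
print(round(warning_time_s(20, 300)))   # → 79
print(round(warning_time_s(60, 160)))   # → 33
```

Even this crude sketch shows how an inland epicenter, combined with sparse nearby stations, compresses the warning window—consistent with the short notice Mexico City received in 2017.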

Also relevant was an unfortunate coincidence of timing and how the disastrous quake of 1985 had been used as a lesson. On the morning of September 19, 2017—32 years to the day after the 1985 earthquake—the nationwide earthquake drill was conducted, an activity coordinated each year in remembrance of the quake. An estimated 7.5 million people responded to an earthquake alert at 11:00 a.m., practicing their response as if a magnitude 8.0 quake were rushing inland from Mexico’s west coast.

The actual earthquake that followed barely two hours later was not part of the plan. Coming so soon after the drill, people who did hear the alert for the real quake reported confusion and doubt. This uncertainty did not encourage them to take advantage of what little time they had to drop, cover, and hold on before the shaking escalated.

Finally, next-step systems responded to SASMEX alerts in ways that created additional challenges: some sirens blared the warning out for all to hear, while others remained silent. Their upkeep, tasked to disparate agencies with varying priorities, was simply inconsistent.

This decentralized way of managing earthquake risk mitigation runs counter to its presentation as a single intervention. CIRES is officially tasked with developing and maintaining SASMEX’s instruments; its engineers take care of the technologies that generate alerts, and municipal governments handle emergency responses. Mexico’s National Civil Protection System, with emergency managers in Mexico City and all state governments, is supposed to integrate earthquake detection with relevant infrastructure such as sirens and response teams. But, like so many engineering projects worldwide, Mexico has approached early warning as a technical system with social implications, rather than as a system integrated into social life with environmental conditions.

When technologists presented their sensor technologies and algorithms at the 2014 conference, they focused on the speed of calculations to generate an alert, not the strength of connections needed to produce a response from various communities that might make use of an alert. Among the attendees were a few fire and emergency service workers—to me, their presence was a reminder that even the speediest, smartest data processing does not necessarily guarantee a successful warning. That would require collaboration: technoscientists forging relationships and integrating their alert systems with social institutions. 

Part of a seismic culture

Similar earthquakes can have very different impacts. Consider the consequences of the magnitude 7.0 earthquake that struck Haiti in 2010 compared to the 7.1 quake that shook New Zealand that same year. The Haitian earthquake had a hypocenter 13 kilometers underground, just 25 kilometers west of the capital city of Port-au-Prince. New Zealand’s quake was also close to a city—just 10 kilometers underground and 40 kilometers from Christchurch. But while the Haitian government’s official tally shows that 316,000 people lost their lives—representing over half of all earthquake deaths globally between 1996 and 2015—not a single person died in the New Zealand quake. According to the reporting agencies, this terrible difference can be attributed to the comparative wealth of people living in these places, the condition of the built environment, and the involvement and effectiveness of government. These inequities extend far beyond the immediate impact of an earthquake event and into recovery and redevelopment.

Mexico has approached early warning as a technical system with social implications, rather than as a system integrated into social life with environmental conditions.

Emergency managers in Mexico’s National Civil Protection System are charged to integrate earthquake alerts with other technical and social infrastructures, and although they can see what a difference adequate resources make, civil protection offices often have fewer resources than they need. Officials told me that the Mexican state of Guerrero had one of the more respected civil protection institutions, but this was not immediately obvious. For instance, as we met in Guerrero’s office of civil protection in Chilpancingo, dirty rainwater that had pooled on the roof from the building’s clogged storm drains drizzled an inconsistent spatter onto us from the ceiling. Guerrero is one of the most seismically active and poorest states in the nation. In fact, Guerrero was the site of the first field stations in the Mexican earthquake early warning system, and, after Mexico City and Oaxaca, it was the third to start disseminating earthquake early warnings. Yet Guerrero must also deal with storms, mudslides, tsunamis, floods, and dangers from incendiary materials like gas canisters for cooking, which is a lot for its people and civil protection services to manage at any given time.

In Oaxaca and Guerrero, which are a thousand miles apart, civil protection officials gave me the same glossy, stapled booklets explaining hazards and how to minimize risks. Oaxaca officials also shared an internally produced document about Oaxacan earthquakes. I visited the SASMEX servers in the back of the civil protection offices, enclosed by dark glass panels beside a screen displaying the status of the loudspeakers that broadcast earthquake sirens through the city, and I saw earthquake early warning as just one effort among many. But the real stuff of emergency rescue was also all around: motorcycles receiving maintenance and an organized mix of departmental and various staffers’ personal tools on shelves and along walls, ready for emergency evacuation or first aid.

All the civil protection offices I visited were busy with people working to do things like respond to immediate emergencies, conduct activities that support crucial services in the event of a crisis, and change individual priorities and social behavior. There, my questions about earthquakes and earthquake risk mitigation were put in the context of broader projects of risk mitigation focused explicitly on social life. “We cannot predict quakes,” one thoughtful official in Oaxaca calmly explained, “but our job is to build a culture.” He and his colleagues explained “culture” as a key concept they used in their work to make Mexicans safer.

Many civil protection officials and commenters in the popular media felt there was a long way to go to reach such goals. They described a public uninformed about and unprepared for hazards as “uncultured.” In a community with an adequate “culture,” officials told me, ordinary people understand their roles and responsibilities and incorporate strategies for risk mitigation into their lives. People with adequate cultural awareness would know how earthquakes occur and where they are likely to be felt and would respond appropriately to that knowledge. Taking part in drills, knowing to implement general strategies to make buildings safer, and developing emergency plans and the ability to stay calm in an emergency were all key activities that civil protection officials associated with a culture of preparedness. By and large, civil protection officials told me, most Mexicans were not aware of or committed to seismic safety—or, indeed, any form of risk mitigation. And all the work they did to cultivate such a culture, they felt, never seemed to have satisfactory results.

“We cannot predict quakes,” one thoughtful official in Oaxaca calmly explained, “but our job is to build a culture.”

It’s not uncommon for people professionally concerned with earthquake risk mitigation to describe culture as deficient, with outreach envisioned as a one-way communication process in which experts simply distribute information to try to instill correct priorities and elicit certain behaviors. Science education and communication researchers have critiqued so-called deficit models like this and found that treating people as passive recipients of knowledge, rather than engaging with how they encounter the world, is not only a misunderstanding of how people learn new information—it’s ineffective for communication.  

There are further implications, though. By this logic of culture as a deficit, vulnerability to earthquakes becomes the result of failures of the Mexican people, not the state. Mexican disaster scholar Jesús Manuel Macías has critiqued civil protection institutions on precisely this issue, writing that they “transfer responsibility for the protection of life and property from a state authority to the population at risk.”

It is true that the Mexican government, charged with creating and enforcing safety regulations, suffers from corruption and a limited ability to enforce building codes, and that many Mexicans lack access to the resources needed to build and maintain safe spaces. Many civil protection officials, however, including those I met in the dripping and dirty office in Guerrero, spoke passionately about what they wanted to do for people and their careful efforts to engage communities in appropriate risk mitigation practices. They want to do more than “transfer responsibility,” though they do not have many resources to work with. 

An engineer and civil protection official who had been involved in seismic risk mitigation work for many years explained that the use of the “culture” concept he observed was a sort of heuristic way of drawing attention to issues that could be addressed in the context of incredibly limited resources and funding. “Our culture,” he told me, referring to the beliefs and practices of those who work within civil protection, “is to identify problems and work for the future.”

An earthquake early warning system makes sense as one of many efforts to reduce vulnerability—but is particularly revealing when it’s operationalized as an effort to put responsibility for personal risk mitigation on ordinary people in the absence of other resources. Taken on its own, an earthquake early warning system may put the onus on ordinary people for preserving their own safety. But there is little they can do if the agencies and organizations around them are not also working to reduce vulnerability and support resilience by hardening infrastructure, enforcing regulations, or providing financial support to struggling people.

The technology exists, but…

It seems to me that well-meaning technologists limit their effectiveness by defining their responsibilities as purely technical, without overlaps with policymakers, emergency managers, and other potential collaborators. But even excellent performance within a narrow task will not yield broad success when it comes to early warning. Instantaneous, accurate detection is not enough to warn people out of harm’s way. And educational campaigns and a responsive culture cannot generate the safety that comes from well-constructed buildings.

With the proliferation of seismic monitors, technologists can now render the earth as a place that’s constantly moving, crisscrossed by faults and tectonic plate interfaces. They can study the deep composition of the planet and detect secret bomb tests. Data generated by seismometers and accelerometers have allowed twentieth- and twenty-first-century researchers to collect masses of information about earth motion and put it to use in ways that would have been unimaginable in previous environmental monitoring regimes. The proliferation of technologies designed for risk mitigation seems inevitable.

Instantaneous, accurate detection is not enough to warn people out of harm’s way. And educational campaigns and a responsive culture cannot generate the safety that comes from well-constructed buildings.

But experts need to do better if they want to understand—and maybe even succeed at developing—environmental monitoring and risk mitigation technologies that have more of an impact. While I remain guardedly optimistic about the potential of earthquake early warning, fulfilling its promises is no small undertaking. Technocentric approaches are an impediment. When technologists take on alerting on their own, when interagency and interdisciplinary collaborations are not built, when potential user communities are not meaningfully involved in alert decisionmaking (or even consistently taught about alert utility), when the infrastructures that support early warning dissemination are not well integrated, and when funding for earthquake early warning is unreliable at best, it should be no surprise when earthquake early warning falters.

On the other hand, if experts in communication and social research are brought in from the very beginning, if education and community involvement are made high priorities, if efforts at warning are controlled and coordinated, and if funding is reliable—well, earthquake early warning might help protect people and property precisely in the ways that advocates hope.

Ten Years Into the Gulf Research Program

On April 20, 2010, an explosion ripped through the Deepwater Horizon, an oil rig operating in the Gulf of Mexico, and triggered the worst oil spill to occur in US waters and one of the worst environmental disasters in US history. Eleven workers lost their lives, and 134 million gallons of oil flowed from the wellhead before it was finally capped 87 days later. The tragedy put a national spotlight on the risks associated with offshore drilling and exploration as well as on the Gulf, a unique American landscape rich in economic, natural, ecological, and cultural resources.

In the aftermath of Deepwater Horizon, a criminal settlement agreement led to the creation of the Gulf Research Program (GRP) at the National Academies of Sciences, Engineering, and Medicine in 2013. The agreement set aside $500 million in penalties for an endowment at the National Academy of Sciences to “carry out studies, projects, and other activities” focused on offshore energy production, human health, and environmental protection in the Gulf of Mexico and along the US Outer Continental Shelf. The endowment and its interest must be spent over a 30-year timeline for the benefit of the people of the Gulf region.

The GRP has just marked its tenth anniversary. Looking forward, our work remains focused on supporting this dynamic region as it navigates three far-reaching transitions related to its energy sector, coastline, and communities.

Offshore safety and the energy transition

The offshore oil and gas industry remains an important part of the regional and national economy, and current Gulf of Mexico federal offshore oil production surpasses levels recorded at the time of the disaster. This production now predominantly occurs in deep water as shallow-water production has continued to decline.

Assessing the current safety of offshore operations is a complex task. Vigilance is important because a major lesson of the BP oil spill is that the underlying risks of offshore operations are not static and shift over time. Although there hasn’t been a subsequent blowout similar to Deepwater Horizon, there have been spills associated with aging infrastructure. “Legacy” offshore oil and gas infrastructure—such as decommissioned or retired pipelines, platforms, and other structures that remain in place after abandonment or the end of use—poses a particularly complex challenge. Eighteen thousand miles of pipelines have been decommissioned in place, for example, since the start of oil and gas production in the Gulf. And whole new categories of threats could emerge in the future, such as terrorism through cyberattacks.

There is not yet a visible, industry-wide, industry-led commitment to a culture that supports safety.

A 2023 National Academies consensus study commissioned by the GRP determined that the offshore industry in the Gulf of Mexico has shown considerable improvement in systemic risk management, thanks to reforms implemented in response to the Deepwater Horizon spill. The study also found, however, that there is still work to be done: there is not yet a visible, industry-wide, industry-led commitment to a culture that supports safety.

Offshore oil and gas operations are only one part of the energy complex in the Gulf region, which hosts a dynamic, world-leading energy enterprise. It is very possible the offshore industry will contract during the GRP’s existence, which is slated to end by 2043. Leaving aside that Gulf offshore oil and gas is a nonrenewable resource and that technically recoverable resources (admittedly a moving target) may be exhausted over the next few decades, the dramatic increase in US onshore production spurred by advances in hydraulic fracturing has dampened investment in relatively expensive deepwater exploration.

The issue that looms over the future of the energy sector in the Gulf is climate change. Compared with 2013, there is greater agreement in the United States about the need to transition away from carbon-intensive energy sources if we are to meet the climate challenge. The growing consensus is apparent in not just official pronouncements but also major investment decisions by government and industry. Although the magnitude and speed of the transformation are uncertain and depend on several factors, including political and policy developments, the long-term direction is clear.

But this growing acceptance of the need for an “energy transition” masks important disagreements about how it should unfold, and even what the term means. The potential for natural gas to serve as a lower-polluting “bridge” fuel on the way to a clean energy future is one source of contention, as is the economic feasibility of emerging technologies such as carbon capture, utilization, and storage (CCUS). Tristan Baurick’s article in the Fall 2023 edition of Issues highlighted the complexities of initiating CCUS projects in the region, as well as the excitement and skepticism about their potential from different quarters. Stories like this show how the energy transition, whatever its form, is far more than a technical issue.

The changing coastline

Oil and gas production is just one part of what makes the Gulf region a working coast. In the energy sector, production and exploration are complemented by refining, and the Gulf is home to over 45% of US refining capacity. In addition, more than 90% of US primary petrochemicals capacity is located in just two Gulf Coast states—Texas and Louisiana. Other key pillars of the region’s economy—fisheries, navigation, and tourism—depend on or occur in the coastal zone.

The Gulf Coast is constantly changing as sea level rise affects low-lying communities and coastal ecosystems. Recent scientific findings provide cause for alarm. Over the past 10 years, the Gulf has experienced some of the fastest rates of sea level rise in the world, posing dire threats to coastal ecosystems. Long-range planning in the region depends on better understanding the future trajectory of this phenomenon.

Strategic management and design approaches could help maintain a productive and sustainable delta. How these options are chosen and implemented will have massive implications.

This challenge is acute in the Mississippi River Delta as it undergoes rapid deterioration, abetted by levees and the scars of hydrocarbon extraction and navigation channels. Stemming the sea level rise is out of reach for policymakers in the short term, but strategic management and design approaches could help maintain a productive and sustainable delta. How these options are chosen and implemented will have massive implications for the economy, environment, and local communities, and raise complex and often controversial questions. To help decisionmakers chart a path forward, the GRP has made a $20 million-plus investment in a research consortium in partnership with Tulane University and Louisiana State University to develop an integrated modeling program to look at the future sustainability of the lowermost Mississippi River Delta, known as the Birdfoot Delta.

More of these kinds of collaborative and integrated efforts will be needed. But they are not simple. Virginia Gewin’s article in the Winter 2024 Issues on the Gulf of Mexico’s Loop Current demonstrates just how challenging it can be to put together the diverse teams required to produce and deliver the kind of information that policymakers need.

Resilient communities

A better understanding of the Loop Current can potentially improve researchers’ ability to predict storm intensity and pathways, providing coastal communities with greater advance warning of catastrophic weather events. This capability is important, as recent studies suggest that stronger and rapidly intensifying storms could be increasing in frequency due to climate change.

The science is not yet at the point where we can definitively say that the experience of the last few years represents a “new normal” of extreme weather in the Gulf region—but it is possible that we are looking at a future of compounding and sequential disasters similar to the 2020 hurricane season, which was the most active Atlantic hurricane season ever experienced in the United States. In that season, a record 11 named storms made landfall; a record 5 named storms made landfall in the Gulf of Mexico states; and 10 storms underwent rapid intensification, a process that requires extremely warm water (near or above 30°C/86°F).

“Resilience” is not distributed evenly among communities, and this inequality is correlated with other long-standing inequalities centered around wealth, education, and access to housing and health care.

There has been a rise in billion-dollar disasters as well as an increase in modest or even smaller events occurring in rapid sequence, the aggregate of which can create more damage, costs, and trauma than a single large event. The increased threat of compounding disasters (which can result from hurricanes, heat waves, wind events, and other hazards) requires greater adaptive capacity within communities to understand hazards and reduce exposure and vulnerabilities. This effort brings us face to face with an uncomfortable reality: “resilience” is not distributed evenly among communities, and this inequality is correlated with other long-standing inequalities centered around wealth, education, and access to housing and health care. On these dimensions, the Gulf region lags behind the nation, exhibiting even higher levels of inequality.

The way forward involves moving beyond a backward-looking “disaster” mindset and engaging in proactive efforts to shore up the key components that support the ability of communities to absorb, recover from, and adapt to adverse events and disasters. This will require transcending traditional silos and taking a systems approach that looks across multiple and intersecting community “capitals.”

Partnering is essential to the GRP’s health and resilience work, which inherently plays out locally. The GRP is currently collaborating with the National Academy of Medicine’s Climate Communities Network, a nationwide endeavor to elevate community expertise, experience, and efforts to address the structural drivers of climate-related health inequities at the community level. We’re also working with the Robert Wood Johnson Foundation to support community-engaged research on the role that data on the social determinants of health could play in improving public health data systems and better addressing health disparities.

As Samantha Montano outlines in her article in this edition of Issues, the long, tangled history of national disaster response is inextricably tied to the Gulf.

A focus on the future

The challenges sketched out here are daunting, but within the Gulf region there is a palpable desire and energy to meet them head-on. The research community is mobilizing and coming together via new and exciting partnerships and collaborations to develop integrated solutions for the future. Growing networks are facilitating the sharing of knowledge, data, and information beyond academia to decisionmakers and the communities impacted by their decisions. The region’s young people, many of whom are supported through the GRP’s educational programming, provide a particular source of inspiration, demonstrating a remarkable passion for shaping the region’s future.

Massive disruptions brought by climate change, the need to transition to a new energy economy, and the potential collapse of vital ecosystems are on the horizon for the nation as a whole, not just the Gulf region.

The GRP’s programming is designed to capitalize on these emerging strengths. It supports the complex collaborations needed to expand the set of solutions that address the region’s intersecting and integrated challenges, builds and sustains the networks needed for information to empower the people of the Gulf, and invests in the next generation of engaged leaders and the future workforce.

In many respects, the Gulf is on the front lines. Massive disruptions brought by climate change, the need to transition to a new energy economy, and the potential collapse of vital ecosystems are on the horizon for the nation as a whole, not just the Gulf region. The smart application of scientific, engineering, and medical knowledge is vitally necessary and provides the best hope for the future. If we can get it right in this unique and challenging setting, we can use that experience to inform the path forward for the nation and the world.

To Fix Health Misinformation, Think Beyond Fact Checking

When tackling the problem of misinformation, people often think first of content and its accuracy. But countering misinformation by fact-checking every erroneous or misleading claim traps organizations in an endless game of whack-a-mole. A more effective approach may be to start by considering connections and communities. That is particularly important for public health, where different people are vulnerable in different ways.

On this episode, Issues editor Monya Baker talks with global health professionals Tina Purnat and Elisabeth Wilhelm about how public health workers, civil society organizations, and others can understand and meet communities’ information needs. Purnat led the World Health Organization’s team that strategized responses to misinformation during the coronavirus pandemic. She is also a coeditor of the book Managing Infodemics in the 21st Century. Wilhelm has worked in health communications at the US Centers for Disease Control and Prevention, UNICEF, and USAID.


Transcript

Monya Baker: Welcome to the Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academy of Sciences and by Arizona State University.

How many of these have you heard? “Put on a jacket or you’ll catch a cold.” “Don’t crack your joints or you’ll get arthritis.” “Reading in low light will ruin your eyes.” Health misinformation has long been a problem, but the rise of social media and the COVID-19 pandemic has escalated the speed and scale at which misinformation can spread and the harm that it can do. Countering this through fact-checking feels like an endless game of whack-a-mole. As soon as one thing gets debunked, five more appear. Is there a better way to defuse misinformation?

My name is Monya Baker, senior editor at Issues. On this episode, I’m joined by global health professionals Tina Purnat and Elisabeth Wilhelm. We’ll discuss how to counter misinformation by building trust and by working with communities to understand their information needs and then deliver an effective response. Tina Purnat led the team at the World Health Organization that strategized responses to misinformation during the coronavirus pandemic. And Elisabeth Wilhelm has worked in health communications at the US Centers for Disease Control and Prevention, UNICEF, and USAID. Tina, Lis, welcome!

Tina Purnat: Hi.

Elisabeth Wilhelm: Thanks.

Baker: Could each of you tell me what you were doing during the pandemic and what did you see in terms of misinformation?

Wilhelm: So, during the pandemic, I was working at CDC and Tina was working at WHO. We’re going to talk a little bit about our experiences, but those don’t represent our former employers or our current ones. They’re just from our own personal experiences and our own personal war stories.

So, early on, I was sent as a responder to Indonesia to support the COVID-19 response in February of 2020. And at the time, there were officially no cases in Indonesia, but several colleagues in several different agencies were quite worried about this. And so, they asked for support. I saw huge challenges regarding COVID there, specifically about misinformation, lack of information, too much information. And all of this really affected the government’s ability to respond and build trust with a really anxious public because so little information was available.

Information overload was causing this anxiety and panic. And that was paralyzing not just for the public, but for the government and public health institutions that were trying to respond to it.

And at the end of March, I had already ended up in quarantine and was sent to my hotel because I was in a meeting with too many high-level officials in a very poorly ventilated room. And at the time, I reconnected with Tina because she had decided to set up a mass consultation from her dining room table through WHO to really understand and unpack this new phenomenon of misinformation. At the time, it was being called the infodemic, and it was that information overload that was causing this anxiety and panic. And that was paralyzing not just for the public, but for the government and public health institutions that were trying to respond to it.

Purnat: I mean, we both saw the writing on the wall, how big of a problem this was going to become globally. So, in February 2020, I was actually pulled from my day job at WHO. I was working in artificial intelligence and digital health, and I was pulled into surge support for the WHO emergency response team. And initially, the focus of my work was on how to quickly and effectively understand the different concerns and questions and narratives people were sharing online about COVID. More broadly, it really looked at how the digitized, chaotic information environment is impacting people’s health.

So, I collaborated with Lis and many, many other people from practically all over the world, both on the science of infodemiology and building new public health tools, training more people, and also building collaborative networks. That later became known as infodemic management in health emergencies.

Baker: Just to sum up, Lis was in Indonesia before any cases of COVID had been reported publicly. And you, Tina, were called to manage a lot of things from your kitchen table as WHO tried to ramp up a response. What surprised both of you in terms of misinformation?

Wilhelm: Well, I could say that in Indonesia, it was really clear that everyone was caught flat-footed. But this was, of course, I think the story all over the world: how fast misinformation grew and spread, where people’s questions and concerns were not getting answered, and then trying to understand who felt they were responsible for trying to address this misinformation or who was in a position to do something about it.

There’s really no vaccine against misinformation, although people would like there to be. There isn’t one simple answer.

I learned it’s not just government policymakers who play a role in addressing this problem, but it’s also journalists. It’s working with community-based organizations. It’s working with doctors, with nurses, with other health workers and with digital tech experts. And actually, a lot of the lessons that we learned in Indonesia I would bring back to apply in the US context. There are a lot of global lessons learned on addressing misinformation that we were able to bring back home.

And it’s just part of, I think, the larger story that misinformation is a complex phenomenon. The information environment is increasingly complex. No country is unaffected by it. And health systems are just starting to understand and wrestle with how to deal with it, recognizing that there isn’t one silver bullet. There’s really no vaccine against misinformation, although people would like there to be. There isn’t one simple answer. And I think that became increasingly clear during the pandemic.

And a lot of it has to do with trust. You have to build trust and do that in the middle of a pandemic. And it’s really hard to do that when you’re trying to address misinformation where people have laid their trust in others and not necessarily those that are in front of a bank of cameras and are an official spokesperson speaking to the public every day during a press conference. And so, that to me was a big revelation.

Baker: And Tina, I think I heard this phrase from you first, that instead of taking this very content-focused approach to misinformation, that a more effective way would be a public health approach to information. What does that mean?

If they find the information, the right information at the right time from the right person, then there’s much less opportunity or a chance that they would actually turn to a less credible source. So, we need to really be thinking much further upstream in this evolution of, well, what does actually create rumors and misinformation.

Purnat: One of the principles in public health, for example, is doing no harm. Another principle is really focusing on prevention instead of only mitigation or just treating disease, but actually preventing it. And I think actually what we’ve learned really most during the pandemic is the need to really understand how the information environment works, how misinformation actually takes hold, how it spreads, and what actually drives the creation and spread of it.

So, if you want to be really proactive, really what we’ve learned is that you need to be paying attention to what are actually people’s questions and concerns, or what is the health information that they cannot find because that basically are the needs that they’re trying to address. If we meet them, if they find the information, the right information at the right time from the right person, then there’s much less opportunity or a chance that they would actually turn to a less credible source. So, we need to really be thinking much further upstream in this evolution of, well, what does actually create rumors and misinformation. And not only basically play whack-a-mole chasing different posts.

Baker: How do you go about figuring out what a community’s information needs are?

Wilhelm: Ask them. Just don’t assume that a survey is really going to fully encapsulate what people’s information needs are. The best way is to ask them directly. And there are ways of engaging with communities, understanding their needs, and then designing better health services to meet those needs. And that really is a community-centered approach that I hope becomes far more the norm than it has been. It’s the whole idea of not for us without us.

And so, blasting messages at communities that we think are going to be important or relevant to their context and that they’re more likely to follow, that’s the way of doing public health from 50 years ago. And we’ve got to change how we understand and work with communities and involve them in the entire process of getting people healthy.

Blasting messages at communities that we think are going to be important or relevant to their context and that they’re more likely to follow, that’s the way of doing public health from 50 years ago. And we got to change how we understand and work with communities and involve them in the entire process.

Public health is about the fact that your individual decisions can have population-level impacts. I like to think of it in this way: everybody should wash their hands after they use the bathroom, but there are policies that also encourage that in places where people eat food. When you go to a restaurant and you go to the bathroom, there’s a big sign on the side of the door that says, “Employees must wash their hands.” So, while there might be social norms and healthcare providers recommending that people wash their hands after using the bathroom, there also are policies and regulations in place that encourage and enforce that so that everybody stays healthy and you can get a burger without getting food poisoning.

One of the projects I worked on at Brown really tried to understand people’s experiences on a health topic through stories. We tell each other’s stories. We understand the world through stories. Stories are incredibly motivating and powerful, and they’re usually emotionally based. They’re not fact-based necessarily. My story is my experience. But if I share it with you, you might be convinced of a certain thing because I’ve had this experience. If you can look at stories like that in aggregate, you can start identifying, well, are there common experiences that people in this community have and what can that tell us about how they’re being bombarded by the information environment or the common kinds of misinformation they’re seeing or the concerns they have? Or what are some of the social norms here that might be helpful or harmful for people protecting their health? And what can we do to better design services to meet people’s needs? It’s not just understanding how people are affected by misinformation, but it’s the totality of the information environment and when they want to do the healthy thing, is it easy to do?

Misinformation is often spread by people successfully when their values align with what they’re saying.

Purnat: Misinformation is often spread by people successfully when their values align with what they’re saying, that narrative. So, if a person values autonomy and their own control over their health, then they’re much more likely to discuss and share misinformation or health information that is underpinned by protecting people’s freedoms and rights. Or if people have historically had bad experiences with their physicians or their health service, then they might discuss and share health information and misinformation that offers alternative treatments or remedies that don’t require a visit to the doctor’s office.

That’s literally where you could say vulnerabilities also come in. And this is where the challenge of addressing health misinformation lies, because it requires solutions that go beyond only communicating: you actually need to understand and address the underlying reasons, contexts, and situations that lead people to share or believe specific health information narratives.

So, in public health, we’re often organized around a particular disease, a specific health topic, et cetera, but that’s not how people or their communities actually experience things day-to-day. So, when we plan on meeting their information and service needs, we have to look at the big picture and then work with all the relevant services and organizations that may meet the community where they’re at.

Baker: I wonder if you could have examples of situations where a community’s information needs were met well and situations where community needs were not met well?

Purnat: What’s happening right now in the US, it’s the H5N1 bird flu outbreak in cows. Just yesterday, I did a short search on what people are searching for on Google related to the bird flu. And there are plenty of questions that people have from their day-to-day life that are not being answered yet by any big credible source of health information. The first questions people have when Googling it are about the symptoms of H5N1 infection. But then the next concern is how this is affecting their pets. And then there are various questions about food safety related to consuming milk and eggs and beef, and also questions about the risk of infection to farmers via handling animal manure.

And these are all information voids that the Googling public and affected workers have, but it’s likely just the tip of the iceberg. And the challenging part here is that it’s not only the public that isn’t getting the information; public health workers and other trusted messengers don’t know what’s going on either. They’re complaining about slow and incomplete access to data and lack of communication from animal and public health agencies. So, this is a very common situation in outbreaks. And, I don’t know, Lis, can you think of any successful examples?

Wilhelm: I really struggled to think of examples. And I don’t think there’s a single health topic where absolutely everyone’s information needs were met, because if that were true, then we would have 100% coverage of all the things your healthcare provider recommends for you. I mean, I think the example I gave, that there are 30,000 books on pregnancy and childbirth on Amazon and yet more keep getting published, points to the fact that even though by the year 2024 you would think every single question that could be asked about pregnancy and childbirth has probably been asked, apparently there’s still demand for more information. And that’s just books.

I don’t think there’s a single health topic where absolutely everyone’s information needs were met because if that were true, then we would have 100% coverage of all the things your healthcare provider recommends for you.

I mean, the most trusted source of information on health, regardless of the topic, is almost always going to be your healthcare provider. And so, it’s that relationship that people have with their healthcare providers that’s also really critically important, if you’re lucky enough to have a primary healthcare provider.

I think the other side of the coin here is what we are doing for doctors and nurses and midwives and all kinds of health professionals, including pharmacists, who became increasingly important during the pandemic because they started vaccinating people for more than just the flu. These are people who are having direct one-on-one conversations with individuals who have questions and concerns. What are we doing to ensure that they’re getting the training they need to have those effective conversations on health topics, but also to recognize that their patients are having all kinds of stuff show up in their Facebook and social media feeds? How do they address the questions, concerns, and misinformation their patients are seeing on their screens, and how do we get health workers to recognize that that’s also part of their job? The information environment is starting to affect how doctors and nurses and other healthcare providers provide care.

And I don’t think even medical education has really caught up to the fact that the majority of people get their health information through a small screen. That is also going to mediate how they understand and take that information on board, and that also might affect their health behavior. How many people do you know that you regularly see for a checkup or to discuss a medical topic who are digital natives or understand how to send out a tweet? We’re working in a space that’s increasingly digital, but sometimes the people who are in charge of our public health efforts and our healthcare systems are not digital natives.

Baker: Yeah. Lis, you had said sometimes in public health, we are our own worst enemies. And I wonder if each of you could tell me what are the one or two things that you’ve seen that just frustrate you?

Purnat: There’s a long list actually.

Wilhelm: I want to take out a banjo and sing a song and tell you a story. I think the biggest challenge in public health is that science translation piece: What does the research tell us? How do we talk to the general public about it? How do we talk to patients about it and make sure that it’s understood? And sometimes things break down in that translation process.

There is a bible for people who are communicators, who do risk communication, who do crisis and emergency communication. And there are seven principles in this bible of how you’re supposed to communicate to the public. The first three are be first, be right, be credible. The problem is that if you spend all of your efforts trying to ascertain whether or not you’re right, you might not be first. You end up being second, third, fourth, or fifth.

We’re really bad at exploring complex information. And we tend to believe the first thing we hear.

And the problem is that we know from psychology and science that in emergencies and during outbreaks and crises, people’s brains operate differently. The way they work differently is that we tend to seek out new sources of information. We’re really bad at exploring complex information. And we tend to believe the first thing we hear, which means if we’re not the first thing you heard, but the second, third, fourth, or fifth, it’s really, really difficult to dislodge the first thing that you heard. So, that to me is shooting ourselves in the foot.

It’s really difficult to work as a communicator when you’re trying to balance a lack of evidence and science and being able to speak from a place of evidence. And when you’re trying to talk to an anxious public that has questions that we don’t have great answers to yet. And that is a problem that we’re experiencing every single time that there’s a new outbreak or a new disease or a new health threat, where we are racing against time to catch up.

Unfortunately, the internet will always move faster than that. And those questions and concerns will mushroom and turn into misinformation extremely quickly before someone with credibility could step in front of those cameras and deliver those remarks at a press conference. And at the end of the day, who actually listens to that press conference and who believes what is said by that spokesperson?

Purnat: I mean, just to build on what Lis said, one thing that I think we’re not yet appreciating is that this swirl of information and the conversations and reactions impacts our ability to promote public health. Literally, it cuts across individual people and communities, but it also impacts health systems and even health workers themselves, and we’re not fully appreciating that this is a systemic challenge.

So, think about the teen vaping epidemic that basically seems to have caught everyone by surprise. It’s been propagated by very, very effective lifestyle-based social marketing campaigns and attractive design of the vapes that specifically spoke to teens. And while we were working to understand the epidemiological picture and putting in the effort to get reliable evidence around the teen vaping problem, the marketing targeting teens continued for many years, unaddressed.

Baker: One thing I have heard is that too often when planning a response, people focus on this—I think it’s Lis who called them magic messages. Tell me about that and why it’s not going to be the most effective thing.

Wilhelm: So, maybe to put it this way, when was the last time that you had a conflict or disagreement with someone and you were searching for the right words and you found the right words and you said your magic words and it solved the problem immediately? This doesn’t happen in real life. If messages were in fact magical and you just had to find and identify them, the entire marketing industry would be out of a job and everyone would follow their healthcare provider’s advice on getting adequate exercise and protein in their diets, right?

If you want to understand what a person’s thinking or feeling, ask them. Just don’t make assumptions because that’s how poorly designed messages are developed and those can be actually harmful.

That’s just not how humans work. We’re not empty brains walking around waiting for messages to be filled in our brains that we then follow. We come with our own basket case of experiences, of biases, of our own literacies or lack thereof, our own perspectives on the world, our culture, our religious beliefs, our values. Those all color how we interact with the world and how we seek and get health services.

And so, there’s no magic messages that’s going to cut through that. People are different. Every community is different, and we have to recognize that in that diversity, trying to identify what people’s information needs are is going to look very different from place to place and from topic to topic, which goes back to if you want to understand what a person’s thinking or feeling, ask them. Just don’t make assumptions because that’s how poorly designed messages are developed and those can be actually harmful.

Purnat: And actually, this links also to how the media environment in general that we live in has changed. The days when people sat around the living room and listened to the nightly newscast, that’s like from a hundred years ago. Nowadays, we don’t receive information on health or other topics from one single source that we trust. We’re more like information omnivores. We consume information from different sources online and offline. We trust some more than others. So, when you attempt to blast out health messages into the world like a radio signal, and then you’re hoping that people are tuning in, that’s destined to fail.

But the problem there is that there’s also no longer any one organization or person that has a monopoly on speaking about credible health information. And that challenges how we need to be dealing with or interacting with information environments. We wouldn’t recommend that you hire a beauty influencer to talk about vaccine safety. And that’s just because they may be credible to their audience because of their beauty know-how, but probably won’t really move the needle in terms of public health outcomes. But we could work with beauty influencers probably about things that relate to social media because they’re experts in that.

Baker: So, not just the message, also the messengers?

Wilhelm: It’s the medium. It’s the message. It’s the messengers. It’s everything. I mean, think about it. For example, when you get an alert on your phone saying that a tornado watch has just become a tornado warning and that you should go seek shelter or shelter in place, you’re getting the right information at the right time at the right place because geographically, the phone knows where you’re located, and it overlaps with where this event is occurring. But also when we think about magic messages and we think about trust, we assume that people trust the messenger. What if people don’t trust the Weather Channel or the National Weather Service that provides those alerts to their phone?

And if we kind of extrapolate that to other areas of health, people’s trust in their doctor and the CDC and the National Pediatric Association might all be very different. We know that these are credible sources of information, but if these are not trusted, people will seek information from other alternative sources that better align with their values and their information needs. And that’s the real issue.

It’s not about needing to improve trust in these big institutions. It’s just recognizing that people have varying levels of trust in different groups of people, different voices, different messengers, different platforms, and recognizing that people get information and work with trusted information from different spaces.

Baker: And Tina, you’ve been thinking about how it’s not just information that needs to be supplied, that it’s not just messages that need to be supplied. It’s important to also know how the services will be delivered or make sure that services are being delivered.

Purnat: In ways that actually meet the needs, yes. So, for example, during the pandemic, when the vaccine rollout started happening, many different countries used digital portals, digital tools that people could use to schedule their vaccine shot. But some communities either didn’t have internet access, didn’t have devices they could use to schedule an appointment, or they were just too far from locations that were providing the vaccine. That meant that even though on paper the arrangement and the logistics sounded really well thought out, some people missed out because they weren’t able to actually take advantage of what the health system was asking them to do and offering.

Baker: Right. So, the message was delivered, but the services not really, not so much?

Purnat: And probably generated some frustration, which led to erosion of trust and frustration with the health authorities.

Wilhelm: A colleague of ours would say, “You want to make a health service fun, easy and accessible.” And so, just recognizing that if you want people to do something, you want to make it as easy as possible for them to do it. And so, the example that Tina gave is a really great one, where there’s a mismatch.

Or early in the pandemic, you are instructing people who might have family members that may have been exposed to the COVID virus that they should isolate at home, that they should take these precautions so they don’t transmit the virus to other family members. But how exactly is that supposed to work if you are living in a multigenerational household in a slum somewhere where you don’t have access to running water? So, the public health guidance might be very nice, but completely incomprehensible and completely unactionable for the average person that’s living in that type of community.

And so, we also have to recognize you don’t want to set people up to fail. If you’re talking to the general public about what they should do, you really need to be specific as to, “Well, what do I do if I have an elderly person that has accessibility issues or somebody who’s immunocompromised in my family,” or “What do I do if a family member has recovered from COVID? Are they eligible to receive the COVID vaccine?” I mean, these are common questions that people were asking, and the guidance wasn’t always really clear as to what people were supposed to do in those situations.

Baker: You said that just improving communications is not going to make everything better. So, what else could people be doing systematically?

Wilhelm: My pet peeve really is this tendency to jump to solutions, which actually can do, I think, more damage in the long run, and which tend to be coercive in nature, like content takedowns, versus the harder and necessary work of building trust and improving the breadth and depth of how healthcare workers and health systems engage with communities and with patients. There’s no magic button you can push, just like there’s no magic message, that increases trust. And there’s no magic button that you can push that can defeat all the underlying reasons why someone might believe misinformation instead of what you’re telling them.

People believing misinformation, and communities acting on misinformation, represent a failure—not of that individual or that community—but of a government and a health system that is not worthy of trust. If people believe misinformation instead of their healthcare provider, that tells me that something has gone horribly wrong, and it isn’t on the individual.

We need to understand that this is a systemic public health problem. And we as public health professionals are on the hook to address these complex problems, just like we’ve addressed other complex societal problems such as drunk driving or smoking cessation, where it requires a lot of levers at a lot of different levels.

Baker: I’ve really enjoyed learning more about this. I guess I’ll just ask each of you for one thing that you think could be done or that must be understood to move from sort of a less effective narrow approach to a more effective, broader approach.

Wilhelm: You know, the power of the internet is in your hands. As a consumer, as an individual, what you say and what you do and how you interact with people in your online communities and your offline communities can be extremely powerful. And so, take advantage of that power. Have conversations with family members and friends when they have questions or concerns. Point people in the direction of credible information. Engage with people. Do so respectfully. Not everything has to be a shouting match on the internet.

And that can go a long way to creating a much healthier information environment where people feel like they can voice their questions and concerns without being shouted down or talked over or dismissed just because they have legitimate concerns. And so, if we can bring some of that into our online and offline interactions every day, I think that would make things a little bit healthier.

Purnat: We do need public health leadership that understands the critical and integral role that the digital information environment has in health. And we need to be able to deal with how technology might be misdirecting people to the wrong health advice. All too often, different health authorities still treat their websites like digital magazines. But in reality, they need to publish health information in ways that get picked up and disseminated automatically online and used by people.

So, one thing that we need to recognize in public health is that this isn’t just in the domain of one or two functions or offices in a CDC or a National Institute of Public Health or a Ministry of Health or a health department. This is actually something that is challenging every role within the health system. And that means patient-facing and community-facing roles, as well as researchers and analysts and even policy advisors.

And that means we need to recognize that we need to invest in updating our tools and the way that we understand commercial information and socioeconomic determinants of health, and that needs to trickle into and be integrated both into our tools and the way that we support our health workforce, as well as how it informs policy. It’s a bit of a tough nut to crack, but we can mobilize and use the expertise of practically every person that works in public health, and beyond, actually.

Wilhelm: This is a global problem. This affects every country from Afghanistan to the US to Greece to Zimbabwe. Everybody’s got the same issues trying to understand and address this complex information environment. And so, we can all learn from one another and recognize that this is a truly global new public health problem that we need to come up with better strategies to address. So, I think we need to pay attention to this ever-smaller planet that we live on: what happens in other countries affects what happens in ours, especially when it comes to how information is shared and amplified online.

Baker: I’d like to end by asking you about the Infodemic Manager training program that you worked on with the World Health Organization. You have called it a unicorn factory. Why do infodemic managers call themselves unicorns?

Wilhelm: It’s the idea that the perfect infodemic manager is someone who has public health experience that understands how the internet works, understands digital health, understands communication and social and behavioral science. They understand public health, epidemiology, outbreak response, emergency management. And there are very few humans on the planet who have all these skill sets in one body.

And so, when we developed this training, we invited a very large group of humans from many different backgrounds to come together to learn some of these skills. And so, the joke became that the trainings were unicorn factories, where people went in with their existing skills and picked up a few new ones, and then they came out the other end with a little bit more sparkle and a little bit more ability to address health misinformation. And this took on a life of its own. These people decided to call themselves unicorns. They’re out there in the world, and you will see them with little unicorn buttons and stickers that they’ll have. And it’s kind of cool.

Purnat: And they were extremely committed and found this so valuable that we had people who still wanted to participate while their country was experiencing massive flooding and monsoons or, for example, while dealing with family tragedy. And this was a testament to the fact that people who worked in the communities, who worked in the COVID-19 response, were recognizing that when they talked to people from other countries, they were actually seeing the same challenges. They were not alone in experiencing this. This was not specific to their country. And it was a big revelation to everyone that actually we can help each other a lot by talking to each other, supporting each other, sharing what we’re experiencing and what we’re doing, and trying out ways to address these issues.

We’ve trained people from 132 countries over the course of several years throughout this process. And it’s a small moment of joy in what was otherwise a very difficult, complex and horrifying outbreak response, because many of the people being trained were doing this at all hours of the night, in all parts of the world, on crappy internet connections, sitting together to try and solve this problem and learn together for four weeks while also, in their day jobs, responding to their country’s COVID outbreak.

Wilhelm: So, you would have the Canadian nurse talking to the polio worker in Afghanistan, talking to the behavioral scientists in Australia, talking to the journalists in Argentina who all were taking the training and saying, “Let’s compare notes,” and then realizing how similar the challenges were that they were facing, but also a great way to come up with new solutions to some of those problems together.

Baker: Tina, Lis, thank you for this wonderful conversation. I hope it has inspired more people to become unicorns. Find out more about how to counter health misinformation by visiting our show notes.

Please subscribe to the Ongoing Transformation wherever you get your podcasts. And thanks to our podcast producer, Kimberly Quach, and our audio engineer, Shannon Lynch. My name is Monya Baker, Senior Editor at Issues in Science and Technology. Thank you for listening.

Missing Links for an Advanced Workforce

Recent investments in the US advanced manufacturing industry have generated national demand for workers. However, meeting this demand—particularly for technicians—is inhibited by a skills gap. In the sector of microelectronics manufacturing, it is critical that we not only pursue effective technician education but also minimize barriers that hinder quality of education and program completion. For example, there are limited accessible avenues for students to gain hands-on industry experiences. Educational programs also face difficulties coordinating curriculum with local workforce needs. In “The Technologist” (Issues, Winter 2024), John Liu and William Bonvillian suggest an educational pathway targeting these challenges. Their proposals align with our efforts at the Micro Nano Technology Education Center (MNT-EC) to effectively train microelectronic industry technicians.

As the authors highlight, we must strengthen the connective tissue across the workforce education system. MNT-EC was founded with the understanding that there is strength in community bonds. We facilitate partnerships between students, educators, and industry groups to offer support, mentoring, and connections to grow the technician workforce. As part of our community of practice, we partner with over 40 community colleges in a coordinated national approach to advance microelectronic technician education. Our programs include an internship connector, which directs students toward hands-on laboratory education; a mentorship program supporting grant-seeking educators; and an undergraduate research program that backs students in two-year technical education programs.

These programs highlight community colleges’ critical partnership role within the advanced manufacturing ecosystem. As Liu and Bonvillian note, community colleges have unique attributes: connections to their local region, diverse student bodies, and workforce orientations. Ivy Tech Community College, one of MNT-EC’s partners, is featured in the article as an institution utilizing its strengths to educate new technologists. Ivy Tech, as well as other MNT-EC partners, understands that modern manufacturing technicians must develop innovative systems thinking alongside strong technical skills. To implement these goals, Ivy Tech participates in a partnership initiative funded by the Silicon Crossroads Microelectronics Commons Hub. Ivy Tech works with Purdue University and Synopsys to develop a pathway that provides community college technician graduates with a one-year program at Purdue, followed by employment at Synopsys. This program embodies the “technologist” education, bridging technical education content taught at community colleges with engineering content at Purdue.

As we collectively develop this educational pathway for producing technologists, I offer two critical questions for consideration. First, how can we recruit and retain the dedicated technicians who will evolve into technologists? MNT-EC has undertaken strategic outreach to boost awareness of the advanced manufacturing industry. However, recruitment and retention remain a national challenge. Second, how can we ensure adequate and sustained funding to support community colleges in this partnership? Investing in the nation’s manufacturing workforce by building effective educational programs that support future technologists capable of meeting industry needs will take a team and take funding.

Principal Investigator, Micro Nano Technology Education Center

Professor, Pasadena Community College

Reports & Communications,
MNT-EC

Communications & Outreach,
MNT-EC

Anyone concerned about the state of US manufacturing should read with care John Liu and William B. Bonvillian’s essay. They propose a new occupational category that they maintain can both create opportunities for workers and position the United States to lead in advanced manufacturing.

Their newly coined job, “technologist,” requires “workers with a technician’s practical know-how and an engineer’s comprehension of processes and systems.” This effectively recognizes that without an intimate connection between innovation (where the United States leads) and manufacturing (where it lags), the lead will dissipate, as recent history has demonstrated. In this context, the authors lament the US underinvestment in workforce education and particularly the low funding for community colleges, which can serve as critical cogs in training skilled workers.

Indeed, the availability of a skilled workforce ready to support twenty-first century production is the most significant and immediate problem the United States faces in trying to restore its overall manufacturing capability. And semiconductors are on the front line in the struggle. A report released in December 2023 by the Commerce Department’s Bureau of Industry and Security, Assessment of the Status of the Microelectronics Industrial Base in the United States, which summarizes industry responses to a survey, found that respondents “consistently identified workforce-related challenges as the most crucial to their business,” most frequently citing workforce-related issues (e.g., labor availability, labor cost, and labor quality) as important to expansion or construction decisions.

Other data support this perception. A July 2023 report from the Semiconductor Industry Association, Chipping Away: Assessing and Addressing the Labor Market Gap Facing the U.S. Semiconductor Industry, projects that by 2030 the semiconductor industry’s workforce will grow to 460,000 jobs from 345,000 jobs, with 67,000 jobs at risk of going unfilled at current degree completion rates. And this problem is economywide: by the end of 2030, an estimated 3.85 million additional jobs requiring proficiency in technical fields will be created—with 1.4 million jobs at risk of going unfilled.

The US CHIPS and Science Act, passed in 2022, appropriated over $52 billion in grants, plus tens of billions more in tax credits and loan authorization, through new programs at the Department of Defense, the National Institute of Standards and Technology (NIST), and the National Science Foundation. Central to these new initiatives is workforce development. For example, all new CHIPS programs must include commitments to provide workforce training. In addition, NIST’s National Semiconductor Technology Center proposes establishing a Workforce Center of Excellence, a national “hub” to convene, coordinate, and set standards for the highly decentralized and fragmented workforce delivery system.

To rapidly scale up regionally structured programs to meet the demand, it is wise to examine existing initiatives that have demonstrated success and can serve as replicable models. Two examples with a national footprint are:

  • NIST’s Manufacturing Extension Partnership has built a sustained business model in all states by helping firms reconfigure their operations through lean manufacturing practices, including shop floor reorganization. And the market for this service is not just tiny machine shops, but also enterprises with up to 500 employees, which represent over 95% of all manufacturing entities and employ 50% of all workers.
  • The National Institute for Innovation and Technology, a nonprofit sponsored by the Department of Labor, has developed an innovative Registered Apprenticeship Program in collaboration with industry. Several leading semiconductor companies are using the system to attract unprecedented numbers of motivated workers.

Liu and Bonvillian have described a creative approach to the major impediment to restoring US manufacturing. Rapid national scale-up is essential to success.

Senior Advisor

American Manufacturing Communities Collaborative

Former NIST Associate Director for Innovation and Industry Services

Effective recruitment and training programs are often billed as the key to creating the deep and capable talent pool needed by the nation’s industrial base. The task of creating them, however, has proven Sisyphean for educators. Pathways nationwide are afflicted with the same trio of problems: lagging enrollment; high attrition; and disappointing problem solving, creative thinking, and critical reasoning skills in graduates.

In response to these anemic results, the government has increased funding for manufacturing programs, hoping educators can produce the desired talent through improved outreach and instruction. Looking at the causes of the key problems, however, reveals that even the best programs, such as the one at the Massachusetts Institute of Technology that John Liu and William B. Bonvillian describe, are limited in their potential to solve them.

Recruitment is primarily hamstrung by the sector’s low wages (particularly at the entry level for workers with less than a bachelor’s degree). In many markets, entry-level technician compensation is on par with that offered by burger chains and big box stores. Technologist salaries ring in higher, but many promising candidates (especially high schoolers) opt for a bachelor’s degree instead, because the return on investment is often better. Until that math changes, technician/technologist pathways will never outmatch the competition from other sectors or four-year degrees, which pay more, provide a more attractive job structure, or both.

Furthermore, educators cannot easily teach skills such as aptitude for innovation and technical agility in class: students master theory in school and practical application on the job. As a former Apple engineer explained to me, it is not until entering the workforce that people are routinely exposed to the conditions that develop diversity of thought: open-ended problems that require workers to engage with an infinite solution space to arrive at an answer. While approaches like project-based learning can help students acquire a foundation prior to graduation, companies must accept that the bulk of the learning that drives creativity and problem solving will take place on the factory floor, not in the classroom.

This means that to address the nation’s manufacturing workforce shortcomings, we must turn to industry, not education. Compensation needs to be raised to reflect the complexity and effort demanded by manufacturing jobs when compared with other positions that pay similar wages. Companies also need to embrace their role as a critical learning environment. Translating classroom-based knowledge into real-world skill takes time and effort by both students and industry. Many European countries with strong manufacturing economies run multiyear apprenticeship programs in recognition of this fact. To date, the United States has resisted the investment and cooperation required to create a strong national apprenticeship program. Unless and until that changes, we should not expect our recent graduates to have the experience and skill of their European counterparts.

In sum, programs such as the one at MIT should be replicated in every manufacturing market across the nation. But in the absence of competitive compensation and scaled apprenticeships, educators cannot create a labor pool with the quantity of candidates or technical chops to shore up the country’s industrial sector.

Senior Fellow and Director of Workforce Policy

The Century Foundation

John Liu and William B. Bonvillian make a compelling case for bridging the gap between engineers and technicians to support the US government’s efforts for reshoring and reindustrialization. They call for new training programs to produce people with a skill level between technician and engineer—or “technologists,” in their coinage. But before creating new programs, we should examine how the authors’ vision might fit within the nation’s existing educational system.

It is surprising that Liu and Bonvillian don’t explain how their new field differs from one that already bridges the technician-engineer gap: engineering technology. Engineering technology programs offer degrees at the associate’s, bachelor’s, master’s, and even PhD levels. And the programs graduate substantial numbers of students. According to the US Department of Education, more than 50,000 associate’s and 18,000 bachelor’s degrees in engineering technology were awarded in 2021–22. The number of bachelor’s degrees represents about 15% of all engineering degrees awarded during that period. The field also has a strong institutional foothold. Programs are accredited by the Accreditation Board for Engineering and Technology and the field has an established Classification of Instructional Programs code (15.00).

Engineering and engineering technology programs have roots that go back to the late nineteenth century. They were not completely distinct from one another until the 1950s, when engineering schools adopted many of the curricular recommendations made by the American Society for Engineering Education’s 1955 report, commonly known as the Grinter Report, and made engineering education more “scientifically oriented.” Engineering technology programs tend to require less advanced mathematics and science but much more applied and implementation work with real-world equipment.

A more recent report from the National Academies, Engineering Technology Education in the United States, published in 2017, describes the state of the field, its evolution, and the need to elevate its branding and visibility among students, workers, educators, and employers. The report describes graduates of engineering technology programs as technologists, the same job title Liu and Bonvillian use for their new type of worker who possesses skills that combine what they term “a technician’s practical know-how and an engineer’s comprehension of processes and systems.”

The preface of the National Academies report provides a warning to those taking a “build it and they will come” approach. It states that engineering technology, despite its importance, is “unfamiliar to most Americans and goes unmentioned in most policy discussions about the US technical workforce.” Liu and Bonvillian are advocating that a new, apparently similar, field be created. How do they ensure it won’t suffer the same fate?

The market gap that the authors identify, along with the lack of awareness about engineering technology, point to a deeper problem in the US workforce development system: employers are no longer viewed as being responsible for taking the lead role in guiding and investing in workforce development. Employers are the ones that can specify skills needs, and they profit from properly trained workers, yet we have come to expect too little from them. Until we shift the policy conversation by asking employers to do more, creating programs that develop technologists will fail to live up to Liu and Bonvillian’s hopeful vision.

Associate Professor

Department of Political Science

Howard University

John Liu and William Bonvillian put forth a thoughtful proposal that US manufacturing needs a new occupational category called “technologist,” a mid-level position sitting between technician and engineer. To produce more of this new breed, the authors encourage community colleges to deliver technologist education, particularly by adopting the curricular framework used in an online program in manufacturing run by the Massachusetts Institute of Technology. And in a bit of good news, the US Defense Department has started funding its adaptation for technologist education.

But more is needed. In scaling up technologist programs across community colleges, Liu and Bonvillian propose focusing first on new students, followed by programs for incumbent workers. I might suggest the inverse strategy to center job quality in the creation of technologist jobs. In this regard, the authors state something critically important: “to incentivize and enable workers to pursue educational advances in manufacturing, companies need to offer high-wage jobs to employees.” Here, the United States might take some lessons from Germany, where manufacturers pay their employees 60% more than US companies do, have a robust apprenticeship system, and generally prioritize investments in human capital over capital equipment purchases.

For too long, US workforce policy has prioritized primarily employer needs. It’s time to put workers back at the heart of workforce policy, as my colleague Mary Alice McCarthy recently argued in a coauthored article in DC Journal. Efforts by community colleges can be important here. By partnering with employers, labor unions, and manufacturing intermediaries such as the federal Manufacturing Extension Partnerships to upskill incumbent technicians to become technologists, community colleges can expand upward mobility for workers who are part of the 40 million “some college, no degree” population and set the stage for discussing competitive wages and job quality with employers. Plus, they can ensure that these bold new programs are aligned with employers’ needs—especially critical for emerging jobs.

Indeed, the million-plus workers already employed across 56,000 companies within the US industrial base represent an opportunity to recruit program enrollees and provide mobility in a critical sector of manufacturing that arguably ought to be at the forefront of technologist-enabled digital transformation. Then, with the technologist role cemented in manufacturing—with fair pay—community colleges can turn to recruiting new students for the new occupation.

Policymakers should also consider ways to promote competitive pay and job quality as they fund and promote technologist education. Renewing worker power in manufacturing is one such avenue. Here, labor unions can prove useful. The politics of unions have changed. An August 2023 Gallup poll found that 67% of respondents approved of labor unions on the heels of a summer when both President Biden and former President Trump made history by joining picket lines during the United Auto Workers strike.

The time is right for manufacturing technologists. New federal funding, such as through the National Science Foundation’s Enabling Partnerships to Increase Innovation Capacity program and the Experiential Learning for Emerging and Novel Technologies program, is optimally suited to boost technologist program creation at community colleges. But even with such added support, ensuring that technologist jobs are quality jobs ought to be an imperative for employers who will benefit by bringing the authors’ sensible and needed vision to fruition.

Senior Advisor on Education, Labor, and the Future of Work

Head, Initiative on the Future of Work and the Innovation Economy

New America

An Elusive and Indefinable Boundary

Seven years before the release of Silent Spring in 1962, marine biologist and writer Rachel Carson wrote The Edge of the Sea. Part field guide to the Atlantic seashore, part meditation on Carson’s love for the evanescent world between land and water, it was an idea that came to her while working for the United States Fish and Wildlife Service. The book begins:

The edge of the sea is a strange and beautiful place. All through the long history of Earth it has been an area of unrest where waves have broken heavily against the land, where the tides have pressed forward over the continents, receded, and then returned. For no two successive days is the shore line precisely the same.… Today a little more land may belong to the sea, tomorrow a little less. Always the edge of the sea remains an elusive and indefinable boundary.

As a photographer, my work explores that indefinable boundary, often by visiting sites multiple times over the course of many years. I photograph certain structures repeatedly to capture a perspective of change and time that’s larger than the frame itself.

I photograph certain structures repeatedly to capture a perspective of change and time that’s larger than the frame itself.

The development of the American shoreline reflects our ideas of living with the natural world—a world which, it was once believed, could be manipulated and maneuvered without consequence. The Army Corps of Engineers, the agency responsible for designing and implementing the infrastructure of many of these water-ruled landscapes, describes their mission as relating to “the desire of many people to live near the coast,” which, along with economic opportunities, “led to extensive development of coastal areas and the need to protect lives and property from waves, storms, and erosion.”

In South Louisiana, where much of my work has been based over the past 10 years, the control of water permeates all aspects of life. In geological terms, floods formed Louisiana as snowmelt from as far west as the Rocky Mountains drained into the Mississippi River. When the development of the Mississippi Valley accelerated in the early 1900s, people sought ways to lock the river in place. The Flood Control Acts of 1928 and 1936 authorized the Army Corps to construct thousands of miles of levees—structures that were monumental in shaping the Louisiana landscape and would have massive environmental impacts in the decades to come.

Since the 1930s, approximately 2,000 square miles of the state’s coast have sunk into the Gulf of Mexico. A plethora of maps and aerial surveys document this figure in attempts to convey the magnitude of what has already been lost to the sea.

A particular visual lexicon has emerged out of the desire to understand a “disaster” at this scale, one that seeks to make the complexity of changing landscapes legible in a compact form. The violence of climate change is often represented and communicated through images of flooding, destroyed buildings, and wildfires. But these events are seen through a narrow temporal lens that omits the many social, political, economic, and scientific reasons for the way disasters unfold.

In my work, I engage with the ongoing environmental crisis by looking at the ways architecture and infrastructure symbolize our beliefs about inhabiting space. I’m drawn to projects like the Lake Borgne Surge Barrier, which is nicknamed the “Great Wall of Louisiana” and is the largest design-build project in the history of the Army Corps of Engineers. It is a physical monument of our relationship with the natural world.

I’m also drawn to invisible infrastructures—like flood insurance—that continue to alter the landscape and built environment. Beginning in the 1960s, the National Flood Insurance Program further encouraged and incentivized the development of floodplains, and the downstream effects of these policies are now clear—for example, the flooding in Houston after Hurricane Harvey. Flood insurance also dictates the elevation of structures throughout South Louisiana, which is why so many houses are raised 12–20 feet in the air. 

These subtle changes in the built environment speak to how we view our relationship to the earth and with each other. The boundaries always seem to be shifting, and yet they also stay the same. Although our tools and strategies for building have become more complex, we still all need shelter, security, water, and community.

The Roots That Ward Off Disaster

Every summer I take my undergraduate emergency management students to Louisiana to learn about disasters in the Gulf. One of our most valuable experiences is volunteering with local organizations that are replanting coastal wetlands. On these trips, we take a boat from the shore and then jump out onto unsolid ground. Standing is impossible; instead, we army crawl our way through shallow water. Every few feet, we pause to punch a hole into the mud with our fists while our free hands push in plugs of California bulrush so that they run in rows parallel to the shore. This plant is well suited to prevent erosion in places where land and water meet. Eventually, they will grow to be 5–10 feet tall. When we run out of plants and energy, we return home soaked, sunburnt, and muddy, hoping that the little reeds will have sufficient time to grow strong enough to hold up against some of the world’s fiercest winds and storm surges.

Restoring the wetlands is one tactic that can help millions of people miles inland cope with continuous cycles of flooding. Approximately 25% of the Louisiana wetlands, an area equivalent to the state of Delaware, have been destroyed by past storms, a century of Mississippi River manipulation, climate change, and the activities of the oil and gas industries. The loss of the wetlands has made flooding worse as the natural barrier protecting land from sea shrinks. The general rule of thumb when it comes to storm-surge mitigation is that every 2.7 miles of wetlands can reduce storm surge by a foot.

The disappearing coast is an existential risk, and the state has approved a $50 billion, 50-year comprehensive master plan that includes many strategies to restore the wetlands, including massive dredging projects and diversions. In the two decades that it took to develop the plan and search for funding, the wetlands continued to erode. In the interim, small volunteer groups have tried to move ahead by restoring the wetlands plant by plant. 

This approach to wetland restoration in Louisiana mirrors the relationship between formal government efforts and communities in the face of vast hazards in the region. Problems that should be the responsibility of government, or are of such a scale that federal funding is required, are often kicked down the road by elected officials. Disasters, though, do not run on political timelines. People who live in these communities increasingly find government help is absent or insufficient, and so it is volunteers who step in to help locals undo a century of ecological damage.

For my students, there is plenty to be learned while planting bulrushes—about the conditions that contribute to disasters, about the strengths and weaknesses of the system we use to manage them, and about what it takes, tangibly, to hold Louisiana’s soil together.

Blue tarps layered over each other

The Gulf Coast has faced an exhausting run of hurricanes: Ivan, Katrina, Rita, Wilma, Gustav, Ike, Isaac, Harvey, Irma, Michael, Barry, Sally, Laura, Delta, Zeta, Ida, Ian, and Idalia. Floods have been named for the holidays they disrupted—the Tax Day and Memorial Day floods in Texas, for example—but many more have gone unnamed. Some, like the 2016 Louisiana flood, cause massive damage all at once; others, like street flooding in New Orleans, cause chronic damage over time.

People who live in these communities increasingly find government help is absent or insufficient, and so it is volunteers who step in to help locals undo a century of ecological damage.

And the Gulf has battled many other disasters. Katrina caused, at the time, the second-largest oil spill in US history, a record far surpassed just five years later by the BP oil disaster. Chemical plant explosions regularly rock communities across Texas and Louisiana. In Mississippi, Jackson has gone without reliable drinking water, while the threat of saltwater intrusion up the Mississippi River in Louisiana looms. Parts of the region have experienced weeks without power, from the Texas-Louisiana freeze to Hurricane Ida. Tornadoes have become even more frequent across the land as Tornado Alley makes a shift east. Wildfires in Louisiana and Florida have filled the air with smoke. COVID-19 preyed on areas with high poverty rates and poor access to health care. And all the while, the Gulf slowly rises.

On paper, these 150 presidential disaster declarations, plus others that did not meet the federal threshold, are treated as individual disasters—but in reality, their impacts compound. In southeast Texas, homeowners became trapped in a disaster cycle when they could not rebuild before the next flood came. In Lake Charles, Louisiana, the blue tarps are layered over each other. In Florida, hurricane debris became fuel for wildfires.

Writers have long described the trials and calamities of the Gulf—from Europe’s brutal colonization of Native tribes to the area’s central role in slavery—as the origins of an assemblage of discrimination that residents today continue to navigate and endure. These legacies have paved the way for the future to arrive faster in the Gulf than in other parts of the country. The Gulf is an epicenter where both the causes and consequences of climate change collide with a history of policies that have entrenched social vulnerability.  

To the extent that the US government has attempted to address climate change, it has focused largely on the reduction of CO2 emissions rather than the increasingly critical problem of adaptation. This choice has strained the emergency management system—which is already falling short. To better understand the current and future ability of the emergency management system to meet the needs of the Gulf Coast, it is instructive to start with an understanding of the history of how this system was created.

Inventing emergency management

In 1965, across the wetlands we are now replanting, Hurricane Betsy arrived as a Category 4 storm. In 1969, Hurricane Camille made landfall as a Category 5 just to the east. Although hurricanes had always affected the area, these storms were different, not only in terms of scale of destruction but also because the United States now had a civil defense system. Created in the wake of World War II to prepare Americans for nuclear attack, this system was beginning to be repurposed for other disasters. Betsy and Camille (along with the 1964 Alaska earthquake) were the first big tests of whether activating a preexisting system to respond to a disaster would work.

The Gulf is an epicenter where both the causes and consequences of climate change collide with a history of policies that have entrenched social vulnerability.  

At the time, the public’s attitudes about the role of government in times of crisis were shifting toward an expectation that the government could and should respond immediately when disaster struck. Although much of the new field’s attention was focused on how to respond to a nuclear attack, these other disasters helped spur conversations about the need for a national all-hazards emergency management system. And some envisioned a well-tuned system that could help communities before, during, and after disasters.

Despite consensus across presidential administrations that having a way for the federal government to help in moments of crisis was important and necessary, the mechanism remained elusive; funding and organization frequently shifted between civilian and military. There was even a point during the Kennedy administration when the location and authority of civil defense changed monthly. In 1979, at the behest of the National Governors Association, President Carter signed an executive order establishing the Federal Emergency Management Agency (FEMA), and the transformation of civil defense into emergency management as we know it today began.

From the beginning, the emergency management system was based on two fundamental pillars. The first was that emergency management should take an all-hazards approach. This meant that agencies would plan for any hazard that could happen, from hurricanes to terrorist attacks. The second was that emergency management must be comprehensive. This meant emergency management agencies should address all four phases of the disaster cycle: mitigation, preparedness, response, and recovery—each one in concert with the others. Neglecting just one phase would undermine the overall effort.

Throughout the 1980s and 1990s, the federal government increased its investments in disaster mitigation and developed national disaster recovery policies. Across the country, state and local emergency management agencies followed FEMA’s lead as they chased mitigation grants from the new federal agency. Simultaneously, researchers in other disciplines expanded the work of disaster sociologists on human behavior. They began to untangle the complexities of managing disasters and started emergency management degree programs at universities.

Across the Gulf Coast, the transition to relying on a formal emergency management system led to the creation of local agencies that hired new staff (mostly white male veterans, building on the field’s origins). In many counties and parishes, however, the responsibility of interfacing with FEMA often fell to the fire or police chief who, on paper, took responsibility for emergency management.

Emergency management agencies should address all four phases of the disaster cycle: mitigation, preparedness, response, and recovery—each one in concert with the others.

Through tornadoes, floods, and hurricanes, the emergency management system learned on the job. Following Hurricane Andrew in 1992, FEMA faced extensive criticism for a response that brought too little aid too slowly. The Clinton administration took note and appointed former Arkansas state emergency manager James Lee Witt as FEMA director—the first with actual emergency management experience, as opposed to a military background. The decision ushered in a period at FEMA referred to now as the “golden years.” During this time, the agency was able to build relationships with local governments and administered a popular mitigation grant program, Project Impact, which enabled local communities to fund mitigation projects. By the end of the decade, emergency management agencies in Florida earned the reputation of being some of the most effective at emergency management in the country.

Even during FEMA’s golden years, emergency management was a work in progress. But by 2000, just 20 years after FEMA had been created, the US emergency management system in the Gulf and the rest of the country looked to be on a trajectory toward a future that evaluated risk more realistically, prioritized mitigating those risks, and had an emerging academic discipline to support that work. Then a disaster far from the Gulf threw the emergency management system off course.

A catastrophe bigger and more complex than any before

After 9/11, the Bush administration and Congress created the Department of Homeland Security (DHS). The FBI, National Security Agency, and other politically powerful agencies protected themselves from being swallowed into the behemoth, but FEMA was unsuccessful in lobbying to maintain its status as an independent cabinet-level agency. It became one of 22 agencies moved under DHS and, in the process, lost authority, status, funding, key personnel with expertise, and vision.

State and local governments scrambled as the federal government issued new training requirements that forced every emergency management and first-responder agency in the country to be retrained on a new system. The mandate was unfunded and difficult to implement. Furthermore, Congress shifted federal grants away from all-hazards preparedness and toward terrorism preparedness. The Gulf Coast, more than anywhere else in the country, bore the brunt of the consequences of these post-9/11 changes.

Amid this chaos and confusion came a catastrophe bigger and more complex than any before: Hurricane Katrina and the federal levee failures. Even before Hurricane Katrina made landfall, the impacts of these federal changes were felt. In Louisiana, the final phases of the now infamous Hurricane Pam exercise that effectively predicted the impacts of a storm like Katrina were cut from the budget, which meant that the problems identified were never addressed in the region’s response plans. Although much of the criticism of Katrina and the levee failures was rightly placed on the incompetence of FEMA administrator Michael Brown, the Department of Homeland Security, and the Bush administration in general, the failures within the overall emergency management system were even bigger than the public understood at the time.

On top of the failed response, it quickly became clear that the government did not have a plan for recovery. In its absence, local residents, alongside volunteers from around the world, moved ahead, rebuilding homes nail by nail. An entire ecosystem of nonprofit organizations dedicated to rebuilding was created. Two million people are estimated to have volunteered their time for the Katrina response and recovery. While some of those organizations remain in the Gulf today, most have either disbanded or moved on to the next disaster.

Modern-day emergency management

Today the region has a web of emergency management agencies that quietly orchestrate the programs, projects, and plans that keep life on the Gulf possible, knitting together all levels of government, the private sector, nonprofit organizations, and the public. But this system is barely keeping up with the relentless disasters.

In many ways, the bones of the emergency management system are good. Government agencies at the local level (e.g., public works, planning offices, first responder agencies) and their state and federal counterparts (e.g., the Environmental Protection Agency, Department of Housing and Urban Development, Centers for Disease Control and Prevention) make up the core of the system. They create mechanisms to coordinate the many stakeholders and a shared language to ease communication. The private sector fills in through its role in restoring utilities and fulfilling contracts for everything from bringing in food and water to removing debris to rebuilding structures. Because the Gulf’s oil, gas, and chemical operations themselves present risks to the public, close collaboration between industry and emergency managers is particularly important. The needs that go unmet by the public and private sectors are left to nonprofits and volunteers. In theory, it is easy to see how the many parts fit together to take a comprehensive, all-hazards approach that involves the whole community.

Emergency management agencies are at the center of getting this complex system to function, but as the country faces more disasters, more quickly, the ability of these agencies to do so is in trouble. The problems across the emergency management system range widely—from bungled responses to the deprioritization of mitigation, protection, and recovery—but what underlies them all is a lack of capacity.

Building local capacity

Most funding for emergency management comes when agencies’ communities receive a presidential disaster declaration. This is extremely important funding that helps survivors and communities recover (a long and difficult process). The steady stream of declared disasters in the Gulf has meant that emergency management budgets look big on paper, but recovery is only one part of the disaster life cycle; there is much less being spent on preparedness efforts, including building the capacity of the emergency management system. 

This underfunding of preparedness creates a context where local emergency management agencies not only struggle to respond to disasters in a community but also lack funds and personnel to reduce its vulnerability. Research has found that for every $1 the federal government spends on mitigation projects, around $6 is saved in response and recovery.

Emergency management agencies are at the center of getting this complex system to function, but as the country faces more disasters, more quickly, the ability of these agencies to do so is in trouble.

For some emergency managers in the Gulf, this cycle of disaster and response has been going on for years. When Edward McCrane Jr. first arrived at Sarasota County Emergency Management during Florida’s response to Hurricane Wilma in 2005, he discovered that the staff was worn out by two years of nonstop hurricane seasons. As other disasters came throughout his 18-year tenure, he tried to implement better strategies to divvy up the work, but it remained a persistent difficulty. 

McCrane explained to me the challenge of understaffing in the field: an agency with just one emergency manager is expected to meet the same set of regulatory requirements as an agency of 30. One person working only 20 hours a week simply cannot navigate the Army Corps of Engineers planning process for building a levee while also building relationships across the community to encourage preparedness, leading responses to disasters, and being ready to manage the rebuilding of a town. It is not that emergency managers should not be doing all these tasks, but rather that they need the staffing and resources to do comprehensive emergency management well. McCrane noted that his agency was better off (with some half-dozen staffers) than the many others that must make do with a part-time emergency manager or even a volunteer emergency manager. 

I spoke with several Gulf Coast emergency managers who, although they are working hard to meet the needs of their communities, say they lack the resources, people, expertise, and authority to do so fully. Without increasing their capacity, there is little chance that emergency management agencies will have the ability to do mitigation, preparedness, response, and recovery effectively.

Sandra Tapfumaneyi, who took over in Sarasota when McCrane retired last year, said she would like to expand the work of the agency to emerging hazards and broader pre-disaster recovery planning. Cybersecurity is a growing concern because of increasing ransomware attacks on critical infrastructure—such as hospitals and water treatment and industrial facilities—as well as misinformation campaigns during disasters. Emergency managers know these tasks are important, but they often fall to the bottom of the list because there are no legal requirements to address them. As Tapfumaneyi put it, she needs more staff to be able to “tackle some of the ‘extra.’”

One consequence of this feeling of constantly falling short on staffing and to-do lists is significant amounts of burnout among emergency managers. Kesley Richardson, who works at the intersection of emergency management and public health, estimated that in the last emergency management agency he worked in, the average turnover rate was six months. Emergency management agencies cannot effectively meet the needs of communities in crisis when they themselves are in crisis.

Burnout is not only a problem for local agencies; it also affects the overall emergency management system, which counts on calling in emergency managers from across the country to pitch in when disaster strikes. If disasters are rare, this process works well, but the system has become strained. For example, during the 2017 hurricane season, the US emergency management system had to respond to Hurricanes Harvey, Irma, and Maria in the span of just two months. By the time Hurricane Maria reached Puerto Rico, many of FEMA’s resources were already deployed to Texas and Florida.

Without increasing their capacity, there is little chance that emergency management agencies will have the ability to do mitigation, preparedness, response, and recovery effectively.

In 2023, the Government Accountability Office reported that FEMA had a staffing gap in 2022 of 35%—6,200 people. These staffing issues are at once a consequence and an accelerator of the problems caused by uneven and unpredictable agency budgets. A relatively small amount of money ($355 million in 2023) is divvied up annually among the states for capacity-building and preparedness efforts through FEMA’s Emergency Management Performance Grant (EMPG) program. It is left up to the states to decide how that money is spent. Some states keep the EMPG funding for the state agency while others distribute the funding at the local level, with the result that county governments generally contribute very little funding to the day-to-day work of emergency management agencies. Many—particularly those in rural areas—are just scraping by.

One way to address the growing response needs is to build the capacity of all local agencies so they are more self-sufficient as well as better equipped to send help to other parts of the country. Local emergency management agencies are the roots in each community that facilitate the rest of the emergency management system, but they have very little power to increase their own funding. FEMA and Congress could both direct additional funds into local capacity-building. Governors, state legislatures, and city and county governments also need to increase their investment in emergency management. Doing so could expand the resources directly available to residents before, during, and after a disaster. There could be state-funded household recovery programs, for example, for those who do not meet the requirements of federal assistance. Such investments would both strengthen local agencies and enable better emergency management across the region.

An emerging discipline

In addition to increasing local capacities, an effort should be made to root the disaster system in empirical research. Today’s practice is selectively learning from recent failures. Since the early days of emergency management, agencies have written an after-action report following disasters to outline what went well and what should be changed for the next disaster. If an approach was altered, it was usually based on these reports. But of course, no two disasters are the same. There’s a danger in basing emergency management responses on a single past event. A better way to determine future actions would be for researchers to synthesize empirical research findings across many disasters.

For example, there are important distinctions among what we colloquially call “disasters.” Current emergency management practice recognizes that emergency response and disaster response are distinct from one another, but it does not recognize the distinction between a disaster and a catastrophe. The 2023 tornadoes outside New Orleans were emergencies handled using local resources. By contrast, the BP oil spill was a disaster—requiring federal resources. And Katrina along with the levee failure was a catastrophe, overwhelming even federal resources.

Researchers have demonstrated that the way people respond to an emergency is not the same as the way they should respond to a disaster, or to a catastrophe. This distinction needs to underlie the research that is done post-disaster and inform the generalizability of findings. It is not appropriate to apply the findings from one type of event to another. Adopting the hazard scale in emergency management policy would enable better use of resources to prepare for, respond to, and manage such crises.

The link between many common emergency management practices and outcomes is simply underexplored. If a community has an existing shelter plan, does that lead to better outcomes when a shelter is opened? Or has the turn toward framing disaster work around the vague language of “resilience” led to more effective outcomes? 

There is reason to think that some of the traditional approaches taken in emergency management are not effective, or at least not as effective as they could be if they were done holistically. For example, the public is advised to prepare by assembling a “kit” that may include flashlights, canned goods, and other supplies. These items, although based on common sense, are not empirically grounded. It is not clear that these are the items most needed to survive a given disaster. In fact, evidence suggests that there are many other factors such as social networks, technological integration, and adaptive capacity that are equally or even more important.

A better way to determine future actions would be for researchers to synthesize empirical research findings across many disasters.

Further, the current approach to individual and household preparedness falls short of incorporating the ability to prepare for both response and recovery. Researchers Trevor Johnson and Jessica Jensen point out that the “kit” approach does not include items that can be used to repair a damaged home, nor does it help people apply for government aid or otherwise navigate post-disaster logistics. By contrast, a more holistic view of preparedness engages with the many factors that influence individuals’ and households’ ability to survive and recover from disasters. This is the type of research that emergency management scholars can produce that, if brought into policy and practice, could lead to more effective outcomes.

It is abundantly clear to emergency management researchers and many practitioners that this work must be done—but it’s difficult to accomplish. Funding aside, there are few people to do the research. Although there are many disaster researchers, there are only around 60 people in the United States who have a PhD in emergency management, and some of those programs are being cut. Support for the development of additional emergency management doctoral programs is needed for the discipline to be able to meet the challenges of practice.

The National Science Foundation is well positioned to support development of holistic emergency management research and facilitate its movement into practice. Research grants for basic emergency management research should also include funding to hire dedicated science communicators to help translate the findings into policy and inform the wider community. This support, paired with increased funding for state and local agencies, would enable emergency managers to address the full and evolving range of tasks involved in comprehensive emergency management.

Defining the future

Building community resilience to climate and other hazards requires both collective action and attention to root-building—much like planting grasses in Louisiana’s coastal wetlands. In the same way that California bulrushes can withstand a storm surge when planted together, so too can well-resourced local emergency management agencies help the region weather the effects of a changing climate. 

Members of the US emergency management system are among those who should be leading this endeavor. Increasing the capacity of emergency management agencies throughout the Gulf would enable emergency managers to work across mitigation, preparedness, response, and recovery and so reduce impacts on their communities. Moreover, expanded capacity would strengthen the whole region as it faces compounding disasters. And, among the bulrushes, it’s also possible to see how empirical research on mitigation, combined with deliberate coordination of both government and volunteer resources, could help to build a Gulf less defined by disasters and more free to define its future.

How Space Art Shaped National Identity

Space exploration became firmly interwoven with American culture through influential speeches by President John F. Kennedy at a time of heightened awareness of the “space race” between the United States and the Soviet Union. However, space art also played a significant role in shaping American perspectives on space—helping to bridge the gaps between scientific, sociopolitical, and cultural viewpoints on exploration. More directly, space art inspired the nation to dream bigger and reach for the stars. By depicting the fantastic landscapes of other worlds, artists helped to sell the idea of space travel to the American public, instilling curiosity about the mysteries of the universe that ultimately made space exploration a part of the national identity.

Nearly a century before the moonshot, French-born artist and astronomer Étienne Léopold Trouvelot created stunning astronomical drawings, many using powerful telescopes at the Harvard Observatory and the US Naval Observatory. Trouvelot’s detailed images of Saturn, Jupiter, and other planets amplified the colors and features of these celestial bodies, turning them into objects of wonder and beauty. His 1882 book, The Trouvelot Astronomical Drawings Manual, and accompanying chromolithograph portfolio sets, made the science of astronomy accessible to a wider audience and helped to usher in public interest in amateur astronomy.

By the turn of the twentieth century, American literature, music, and entertainment had already incorporated themes of traveling to the moon. In 1903, Luna Park opened at New York’s Coney Island, featuring a ride called “A Trip to the Moon.” Visitors boarded a large wooden sailing vessel named Luna with red wings that flapped as the ride made an imaginary voyage to the moon. Interestingly, the idea of traveling to the moon was already a part of the American consciousness even before the Wright brothers’ first successful powered airplane flight in December of 1903. This fascination with space exploration continued to grow throughout the twentieth century.

In the late 1940s, artist Chesley Bonestell collaborated with scientists to create futuristic space illustrations of convincingly lifelike tableaus that made space travel seem almost familiar. In the 1950s, Bonestell illustrated several covers and articles for Collier’s magazine, which ran alongside a series of evocative articles written by expert scientists, including Wernher von Braun—a pioneer of rocket and space technology—that provided factual, detailed descriptions of spaceships and launch vehicles. Bonestell’s art, in this authoritative context, removed space travel from the realm of science fiction by showing that it was taken seriously by the country’s top scientists.

As NASA began sending astronauts into space in the 1960s, artists came to play a critical role in documenting the space program and its aspirations. Bruce Stevenson’s 1962 painting of Alan Shepard, the first American in space, inspired NASA administrator James Webb to start the NASA Art Program. In a 1963 NASA press release, Webb said, “Important events can be interpreted by artists to give a unique insight into significant aspects of our history-making advance into space. An artistic record of this nation’s program of space exploration will have great value for future generations.”

NASA employee Jim Dean and curator Hereward Lester Cooke from the National Gallery of Art sought out artists to define the nation’s early human space program. They initially chose Realism to avoid criticism from conservative audiences and congressmen. This choice of genre also helped to humanize NASA’s astronauts. For example, in 1969, artist Paul Calle captured intimate moments of Neil Armstrong, Edwin “Buzz” Aldrin, and Michael Collins eating breakfast before blasting off to the Moon on Apollo 11.

Prominent American artists like Norman Rockwell and Robert Rauschenberg also played significant roles in promoting space exploration. Rockwell went to NASA as a contract artist for Look magazine, where his Americana paintings then reached an audience of approximately 6 million subscribers. In 1969, Rauschenberg created the Stoned Moon lithograph series from his experience observing the Apollo 11 launch in Florida. In the lithograph “Sky Garden, 69” he posits the coexistence of nature and human-made technology, amplifying the environmental concerns surrounding space exploration. NASA’s art collection became a book called Eyewitness to Space and was exhibited at the National Gallery of Art in Washington, DC, and elsewhere through the Smithsonian Institution Traveling Exhibition Service.

Outside of the NASA Art Program, other artists were also fascinated by space. African American artist Alma Thomas, who is now considered a force in the movement known as the Washington Color School, painted abstractions inspired by moments witnessed on her television screen. Blast Off uses a series of color blocks in the shape of a flame to convey the velocity and intense heat at lift-off. Glimpse of the Earth was inspired by the image commonly referred to as “Blue Marble,” from a photograph captured during Apollo 17, the last human mission to the moon, in 1972. Thomas reinterpreted this famous photograph as equally illuminated with multiple colors—a more perfect world viewed from 28,000 miles away.

Space art continues to shape the nation’s vision of its future on the celestial frontier. In November 2022, the Artemis 1 rocket successfully launched to the moon, marking the first step toward an eventual long-term human presence on the lunar surface, a goal that has been in development for more than a decade. In anticipation of future space missions, NASA commissioned retro-inspired posters of fantasy missions—such as a trip to Jupiter to observe the planet’s auroras from above the polar regions, “studying them in a way never before possible.” In these images, space tourism is transformed from a futuristic concept into a reality, just as companies like Virgin Galactic, Blue Origin, and SpaceX are making significant strides in bringing civilians into space.

An Innovation Economy in Every Backyard

Grace J. Wang’s timely essay, “Revisiting the Connection Between Innovation, Education, and Regional Economic Growth” (Issues, Winter 2024), warrants further attention given the foundational impact of a vibrant innovation ecosystem—ideas, technologies, and human capital—on the nation’s $29 trillion economy. She aptly notes that regional innovation growth requires “a deliberate blend of ideas, talent, placemaking, partnerships, and investment.”

To that end, I would like to amplify Wang’s message by drawing attention to the efforts of three groups: the ongoing work of the Brookings Institution, the current focus of the US Council on Competitiveness, and the catalytic role of the National Academies Government-University-Industry Research Roundtable (GUIRR) in advancing the scientific and innovation enterprise.

First, Brookings has placed extensive emphasis on regional innovation, focusing on topics such as America’s advanced industries, clusters and competitiveness, urban research universities, and regional universities and local economies. Recently, Mark Muro at Brookings collaborated with Robert Atkinson at the Information Technology and Innovation Foundation to produce The Case for Growth Centers: How to Spread Tech Innovation Across America. The report identified 35 place-based metropolitan locations that are utilizing the right ingredients—population; growing employment; university spending on R&D in science, technology, engineering, and mathematics per capita; patents; STEM doctoral degree production; and innovation sector job share—to realize innovation growth centers driven by targeted, peer-reviewed federal R&D investments.

The US Council on Competitiveness has also focused on place-based innovation. In 2019, the council launched the National Commission on Innovation and Competitiveness Frontiers, which involves a call to action described in the report Competing in the Next Economy: The New Age of Innovation. The council also formed four working groups, including one called The Future of Place-Based Innovation: Broadening and Deepening the Innovation Ecosystem. From these and other efforts, the council has proposed new recommendations that call for “establishing regional and national strategies to coordinate and support specialized regional innovation hubs, investing in expansion and retention of the local talent base, promoting inclusive growth and innovation in regional hubs, and strengthening local innovation ecosystems by enhancing digital infrastructure and local financing.”

Finally, I want to emphasize the important role GUIRR plays in advancing innovation and the national science and technology agenda. Through the roundtable, leaders from federal science agencies, universities, and industry proactively collaborate to frame issues and conduct activities that advance the national enterprise. GUIRR workshops and reports have also historically included elements to advance the innovation enterprise, including regional innovation.

To end with a personal anecdote, I’ve witnessed the success that results from such a nexus, especially from one that was recently highlighted by Brookings: the automotive advanced manufacturing industry in eastern Tennessee. In my former position as chief research administrator at the University of Tennessee, I was deeply involved in that regional innovation ecosystem, along with other participants at Oak Ridge National Laboratory and in the automotive industry, allowing me to experience firsthand just how impactful these ingredients can be when combined and maximized.

Moreover, as GUIRR celebrates 40 years of impact this year, I am confident it will continue to serve as a strong proponent of the nation’s R&D and innovation enterprise while continually refining and advancing the deep and critical collaboration among government, universities, and industry, as laid out in Wang’s article and amplified by Brookings and the US Council on Competitiveness.

President, The University of Texas at San Antonio

Council Member, National Academies Government-University-Industry Research Roundtable

National Commissioner, US Council on Competitiveness

As Grace J. Wang notes in her article, history has shown the transformative power of innovation clusters—the physical concentration of local resources, people brimming with creative ideas, and support from universities, the federal government, industry, investors, and state and local organizations.

In January 2024, the National Science Foundation made a groundbreaking announcement: the first Regional Innovation Engines awards, constituting the broadest and most significant investment in place-based science and technology research and development since the Morrill Land Grant Act over 160 years ago. Authorized in the bipartisan CHIPS and Science Act of 2022, the program’s initial two-year, $150 million investment will support 10 NSF Engines spanning 18 states, bringing together multisector coalitions to put these regions on the map as global leaders in topics of national, societal, and geostrategic importance. Subject to future appropriations and progress made, the teams will be eligible for $1.6 billion from NSF over the next decade.

NSF Engines have already unlocked another $350 million in matching commitments from state and local governments, other federal agencies, philanthropy, and private industry, enabling them to catalyze breakthrough technologies in areas as diverse as semiconductors, biotechnology, and advanced manufacturing while stimulating regional job growth and economic development. Places such as El Paso, Texas, and Greensboro, North Carolina, will see lasting impacts as they are transformed into inclusive, thriving hubs of innovation capable of evolving and sustaining themselves for decades to come.

The NSF Engines program is led by NSF’s Directorate for Technology, Innovation, and Partnerships (TIP), which builds upon decades of NSF investments in foundational research to grow innovation and translation capacity. TIP recently invested another $20 million in 50 institutions of higher education—including historically Black colleges and universities, minority-serving institutions, and community colleges—to help them build new partnerships, secure future external funding, and tap into their regional innovation ecosystems. Similarly, NSF invested $100 million in 18 universities to expand their research translation capacity, build upon academic research with the potential for technology transfer and societal and economic impacts, and bolster technology transfer expertise to support entrepreneurial faculty and students.

NSF also works to meet people where they are. The Experiential Learning for Emerging and Novel Technologies (ExLENT) program opens access to quality education and hands-on experiences for people at all career stages nationwide, leading to a new generation of scientists, engineers, technicians, practitioners, entrepreneurs, and educators ready to pursue technological innovation in their own communities. NSF’s initial $20 million investment in 27 ExLENT teams is allowing individuals from diverse backgrounds and experiences to gain on-the-job training in technology fields critical to the nation’s long-term competitiveness, paving the way for good-quality, well-paying jobs.

NSF director Sethuraman Panchanathan has stated that we must create opportunities for everyone and harness innovation anywhere. These federal actions collectively acknowledge that American ingenuity starts locally and is stronger when there are more pathways for workers, startups, and aspiring entrepreneurs to participate in and shape the innovation economy in their own backyard.

Assistant Director for Technology, Innovation and Partnerships

National Science Foundation

Grace J. Wang does an excellent job of capturing the evolution of science and engineering research, technological innovation, and economic growth. She also connects these changes to science, technology, engineering, and mathematics education on the one hand and employment shifts on the other. And she implores us to seriously consider societal impacts in the process of research, translation, and innovation.

I believe developments over the past decade have made these issues far more urgent. Here, I will focus on three aspects of innovation: technological direction, geographic distribution, and societal impacts.

Can innovation be directed? A common belief in the scientific research community is that discovery and innovation are unpredictable. This supports the idea of letting hundreds of flowers bloom—fostered by broad support for all fields of science and engineering. Increasingly, however, the complexity and urgency of societal grand challenges are making a case for mission-oriented innovation. As Mariana Mazzucato pointed out in a report titled Mission-Oriented Research & Innovation in the European Union: “By harnessing the directionality of innovation, we also harness the power of research and innovation to achieve wider social and policy aims as well as economic goals. Therefore, we can have innovation-led growth that is also more sustainable and equitable.”

Can innovation be spread geographically? Technological innovations and their economic benefits have been far from uniformly distributed. Indeed, while some regions have prospered, many have been left behind, if not regressed. Scholars have offered several ways to address this distressing and polarizing situation. With modesty, I point to a 2021 workshop on regional innovation ecosystems, which Jim Kurose, Cheryl Martin, Susan Martinis, and I organized (and Grace Wang participated in). Funded by the National Science Foundation, the workshop led to the report National Networks of Research Institutes, which helped spur development of NSF’s Regional Innovation Engines program; the program recently made awards to 10 innovation clusters distributed across the nation, which could receive up to $1.6 billion over the next decade. Much, much more, of course, remains to be done.

Can the negative societal impacts of innovation be minimized, and the positive impacts maximized? As an example of the downside, consider some of the profound negative impacts of smartphones, social media, and mobile internet technologies. As Jaron Lanier, a technology pioneer, pointed out: “I think the short version is that a lot of idealistic people were unwilling to consider the dark side of what they were doing, and the dark side developed in a way that was unchecked and unfettered and unconsidered, and it eventually took over.” At a minimum, everyone in the science and engineering research community should become more knowledgeable about the fundamental economic, sociological, political, and institutional processes that govern the real-world implementation, diffusion, and adoption of technological innovations. We should also ensure that our STEM education programs expose undergraduate and graduate students to these processes and systems, their dynamics, and their driving forces.

Fundamentally, I believe that we need to get better at anticipatory technology ethics, especially for emerging technologies. The central question all researchers must attempt to answer is: What will the possible positive and negative consequences be if their technology becomes pervasive and is adopted at large scale? Admittedly, due to inherent uncertainties in all aspects of the socio-technological ecosystem, this is not an easy question. But that is not reason enough not to try.

Vice Chancellor for Research

University of California, Irvine

Technology innovation can be a major force behind regional economic growth, but as Grace J. Wang notes, it takes intentional coordination for research and development-based regional change to happen. Over the past year, as parties coalesced across regions to leverage large-scale, federally funded innovation and economic growth programs, UIDP, an organization devoted to strengthening university-industry partnerships, has held listening sessions to better understand the challenges these regional coalitions face.

In conversations with invested collaborators in diverse regions—from Atlanta, New York, and Washington, DC, to New Haven, Connecticut, and Olathe, Kansas—we’ve learned that universities can easily fulfill the academic research aspects of these projects. Creating the organizational glue that engages and keeps academic, industry, local and state government, and nonprofit partners collaborating as a whole is more challenging. One solution successful communities use is creating a new, impartial governing body; others rely on an impartial community connector as neutral convener.

But other program requirements remain a black box—specifically, recruiting and retaining talent and developing short- and long-term metrics. At least for National Science Foundation Regional Innovation Engines awardees, it is hoped that replicable approaches to address these issues will be developed in coordination with that effort’s MIT-led Builder Platform.

Data specific to a region’s innovation strengths and gaps can lend incredible insight into the ecosystem-building process. Every community has assets that uniquely contribute to regional development; a comprehensive, objective assessment can identify and determine their value. Companies such as Elsevier and Wellspring use proprietary data to tell a story about a community’s R&D strengths, revealing connections between partners and identifying key innovators who may not otherwise have high visibility within a region.

We often hear about California’s Silicon Valley and North Carolina’s Research Triangle as models for robust innovation ecosystems. Importantly, both those examples emphasized placemaking early in their development.

Innovation often has its genesis in face-to-face interactions. High-value research parks and innovation districts, along with co-located facilities, offer services beyond incubators and lab space. The exemplars create intentional opportunities for innovators to interact—what UIDP and others call engineered serendipity. Research has tracked the value of chance meetings—a conversation by the copy machine or a chat in a café—for sparking innovation and fruitful collaboration.

The changing landscape of research and innovation is having a profound impact on the academy. Researchers who have traditionally focused on basic research are now being asked to expand into use-inspired areas that solve societal problems more directly, and this is where government and private funders are making more of their investments.

Finally, Wang noted the difficulty in making technology transfer offices financially self-sustainable, and NSF’s recently launched program Accelerating Research Translation (ART) seeks to address this challenge. But it may be time to reevaluate the role of these offices. Today’s increasing emphasis on research translation is an opportune time to reassess the transactional nature of university-based commercialization and licensing and return to a role that places greater emphasis on faculty support and service rather than revenue generation. Placing these activities within the context of long-term strategic partnerships could generate greater return on investment for all.

President and CEO

UIDP

Taking Aristotle to the Moon and Beyond

In 2009, journalist Tom Wolfe, author of the space-age classic The Right Stuff, wrote an opinion piece for the New York Times entitled “One Giant Leap to Nowhere.” Commenting on the Space Shuttle program, Wolfe recapped the first four decades of the space race and quipped, “NASA never understood the need for a philosopher corps.” According to Wolfe, NASA would never recover its lost vitality and sense of purpose because it had no philosophy of space exploration.

I increasingly suspect he was on to something. For space exploration—whether robotic or human, expeditionary or remote, commercial or government—to pursue its full potential, contribute to the general welfare of the United States, and provide benefits for all humanity, there must be a deep, rigorous engagement with the concept from everyone and for everyone. In other words, to best explore space, society needs to have a communal conversation on exploration’s value, impact, and meaning.

We can learn from the past. In 1969, Apollo 11 accomplished exactly what President Kennedy called for in his 1962 speech at Rice University, when he challenged NASA to send a human into the heavens to walk upon the surface of another world and return to tell the tale. “We choose to go to the moon in this decade and do other things,” he said, “not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win.” But when the first human landed on the moon, to great fanfare, that success created a paradox: going to the moon eliminated the reason for going to the moon. Three of the nine missions planned after Apollo 11 were canceled. Indeed, in the five decades since Apollo, no earthling has ventured beyond low Earth orbit.

The lack of a consistent, enduring approach to contemplating human activity in space has, I would argue, cast a pall on NASA’s deep space human exploration ambitions. The 2003 Columbia disaster prompted decisionmakers to reassess NASA’s human spaceflight aims, leading to the Bush administration’s decision to resume human expeditions beyond low Earth orbit. Since then, the agency has enjoyed relatively persistent, if modest, political support for an open-ended campaign of human deep space exploration. However, that support has manifested itself in different ways across, and even within, four administrations. Most recently, the Artemis program—formally launched by President Trump in 2017—set an ambitious goal to return humans to the moon in 2024. But that moon landing has already been delayed until at least 2026. And, tellingly, the Artemis Base Camp, initially proposed as an integral part of returning to the moon, has been caught in the budget squeeze. Work may be delayed well into the 2030s.

This hazy mandate to send humans to the moon and then Mars—without identifying a specific purpose for such an endeavor—leaves NASA with the substantial practical challenge of trying to sort out the complex ambitions, myriad options, and limited budgets of human expeditions into deep space. Still, predictable delays and budgetary shortfalls present an opportunity for NASA to revisit its reasons for sending humans to walk, once again, on the soil of alien worlds. If NASA’s planning is to ever really get ahead of its immediate mission ambitions and develop a sense of strategic coherency, now is the time to make that happen.

Telic goals vs. an infinite universe

As a space policy professional and, more recently, a student of the history and philosophy of space exploration, comparing the end of the Apollo program with the beginning of Artemis strikes me less as a matter of technology or budget and more as a matter of telos, or purpose. Aristotle identifies telos as a “final cause,” the end state toward which something’s existence ultimately leads. President Kennedy’s speech at Rice established a clear teleological foundation for Apollo—both in the explicit challenge of putting an astronaut on the moon and returning him safely to Earth before 1970, and in the implicit goal of beating the Soviets.

Aristotle’s telos forces consideration of an end, or, as he put it, “that for the sake of which everything is done.” Open-ended activities, though—like exploring the universe—can be described as atelic: they have no specific endpoint. Even if a country is the first to reach the moon, there is no point at which any nation can declare exploration of the universe complete. The former is a telic activity, the latter atelic. Apollo was launched on a firmly telic basis but lacked a sufficiently strong rationale to keep going.

Telic activities have a particular modern appeal; they lend themselves to bold proclamations, a multitude of program management tools, and regular progress reports. Concrete goals work for space exploration because they fuel a sense of direction and progress, and, most importantly, narrative. Narratives have a beginning, middle, and end. We start at the beginning; the telic goal defines the end. All that remains is the middle part: figuring out ways that available means can achieve those ends. Space exploration needs signposts and metrics to feed narratives of technological advancement, forward progress, and futurity. NASA excels at all of this. The catch, as NASA discovered, is that a telic goal can be completed, exhausting the mandate that set everything in motion and bringing the narrative to a close.

Atelic efforts, lacking discrete, concrete ends, are different. Without clear goals against which progress can be measured, atelic efforts are essentially everlasting. They emphasize process, not destination. If the atelic pursuit involves doing something that’s intrinsically good, it can resemble a virtuous activity. And, where telic goals invite debate about the particular, pragmatic value of reaching an end goal, the atelic emphasis on enduring value changes the character of that debate. Thus, applying an atelic approach to space exploration could give voice to the transcendent character of the endeavor, liberating the concept of mission value from the constraining strictures of cost, engineering, and schedule—or even from complete agreement on ultimate objectives. Atelic rationales could make room for the same kind of thinking that put a golden phonograph record, The Sounds of Earth, on each Voyager spacecraft, destined to drift forever across the interstellar night.

Within the space community, decisionmakers are constantly grappling with questions of worth and value. Is spending money on space exploration worth it? In what way? To whom? With limited space exploration resources, should a country work toward specific, concrete goals, or broader, more enduring ones?

In space exploration, pragmatic, telic objectives are sometimes at odds with atelic, virtuous pursuits. Should astronauts investigating rock formations on the moon be focused on finding commercially viable mineral deposits, or should they be looking to learn more about how the moon was formed? Another atelic defense of space exploration might posit that sending people out into the cosmos to experience life beyond this world is good in itself. Also atelic: Elon Musk’s statement that he is working “to extend the light of consciousness to the stars.” Encouraging activities on other worlds could have multiple indirect benefits without any practical tangible impact.

Since its creation in 1958, NASA has periodically tried to grapple with deeper questions around the value and meaning of space exploration. By law, the agency’s goals are superordinate to the conduct of science. Title 51 of the US Code—which incorporates the original National Aeronautics and Space Act that created NASA—lists NASA’s purpose, authorities, and responsibilities. NASA exists to contribute to the “general welfare of the United States” by conducting aerospace and space activities that will meet both scientific and non-scientific objectives, such as economic competitiveness and international cooperation. Founding documents emphasize peaceful scientific activity led by a civilian agency for the betterment of all humanity. At NASA, science has a seat at the table, perhaps even a preeminent one—but not the only one.

With limited space exploration resources, should a country work toward specific, concrete goals, or broader, more enduring ones?

Title 51 doesn’t provide clear guidance on how NASA is to reconcile its different prerogatives, so the agency needs to find new ways to think about its endeavors that move beyond familiar quantitative measures like cost and schedule—especially for long-term planning. What is really needed are answers to the fundamental questions of purpose and telos posed by both the Apollo and Artemis programs: Why should humans aspire to tread upon the face of a heavenly body in the first place? If the objectives are telic, then at what ends should those efforts be aimed? If the purposes are atelic, what are they?

The 1965 volume The Railroad and the Space Program, edited by historian Bruce Mazlish, is one of NASA’s most significant early forays into pursuing these deeper questions. A similar attempt to understand space exploration through a larger conceptual frame has driven other efforts at the agency, including the recent report on the Artemis program’s ethical, legal, and societal impacts.

The challenge for the future is understanding how human passions and inclinations can inform and engage space exploration without succumbing to the “terrestrial privilege” of “armchair astronaut” commentary that is often long on wild speculation but short on concrete understanding of the engineering, budgetary, and political challenges facing NASA. How can we, in exploring space, discover and create value and meaning? How can we yoke space exploration to our finest impulses in a truly self-sustaining and beneficial way?

To build a moon base or not? 

Artemis provides a good opportunity to think about how a deeper engagement on space values, impact, and meaning might unfold. For example, in current plans, the goal is to build a base at the lunar south pole and use robots to carry out surface exploration elsewhere on the moon. However, there has been some quiet speculation that NASA might be better served by indefinitely delaying (or canceling) a permanent base in favor of conducting human-led scientific investigations at multiple locations around the moon. Another option is a mobile base—a robotic lunar RV, stuffed with lab equipment and living facilities—that could be telerobotically driven anywhere on the moon to greet astronauts wherever they land. Underlying all of these options is the question of whether NASA should turn its attention from establishing a permanent outpost toward a more science-focused approach built on human-led sorties.

Of course, the concept of telos is just one of many tools in the philosophical toolkit. Considering the Artemis effort from a broader philosophical standpoint can reveal widely divergent visions of what space exploration should be—and perhaps offer guidance in the choices ahead. For example, insider discussions about a permanent base versus a more peripatetic approach point to larger questions that are as philosophical as they are practical: Why go to the expense and danger of sending humans into space at all, rather than working with robots? Is there an inherent value to human presence in space? And if so, what is it? Is the scientific benefit commensurate with the added cost and risk? Are the benefits of human presence enhanced by continuous permanent residence?

In the case of building a base at the lunar south pole, many pragmatic, telic arguments are available—not least of which is the simple political value of having a discrete objective and creating a concrete psychological anchor for subsequent lunar activity. In my opinion, although base building provides an attractive telic goal with some hints of future pragmatic value, it ultimately does not present a strong enough atelic argument on its own and risks recreating the “goal attained” trap of Apollo. 

Artemis provides a good opportunity to think about how a deeper engagement on space values, impact, and meaning might unfold.

But, chosen carefully, some telic objectives could mature into enduring atelic efforts. For example, a goal-oriented presence could potentially be framed under an overarching atelic framework of expanding knowledge or advancing exploration. Alternatively, a series of atelic activities can transform, after a few unexpected breakthroughs or discoveries, into a post hoc telic narrative, as if the goals had been clear all along. Or perhaps an atelic argument will surface on its own. Maintaining a persistent presence on the moon would create more open-ended opportunities, such as permitting NASA and its partners to more substantively weigh in on the values and standards to which humans should adhere as they reach out into the cosmos.

Norms offer a particularly interesting way to contemplate how telic and atelic aims consolidate support for space exploration. NASA will not be alone on the moon; several nations are joining the effort while rival China has announced its own plans. Many of the norms that the international community has embraced explicitly set aside older, more familiar frameworks (such as sovereignty) that might otherwise guide our behavior. For instance, the UN Outer Space Treaty states that outer space is “not subject to national appropriation by claim of sovereignty, by means of use or occupation, or by any other means.” What that means in practice is still undefined. Can I build a structure right in the middle of someone else’s landing zone? And then are they obligated to land elsewhere, or do I have to move? Either way, it looks like appropriation by occupation. Or, if two countries have their eye on the same location, who prevails? Does it matter if one is pursuing commercial use and the other scientific? Lunar surface activities could kick up a sizeable amount of dust that could interfere with other operations, but who sets those limits? Outer space is an alien environment that will expose and defy our unspoken assumptions and priorities. Philosophy gives us ways to frame and discuss them.

Fusing the telic strengths of base building and the atelic strengths of science itself could also be productive. The general pursuit of universal knowledge and truth, frequently associated with scientific investigation, can be described as valuable in its own right. Building bases that can sustain a longer, more resilient pursuit of scientific knowledge could be a more enduring approach than pursuing either path—pure base or pure science—alone.

For Artemis to succeed where Apollo failed—providing for its successors—decisionmakers must think carefully about value and meaning in all areas of the mission. One of the ongoing discussions within NASA is about what building, operating, and owning a surface lunar habitat might entail. Commercial space advocates have argued that the private sector can provide exploration infrastructure more cost-effectively than the government—a practical advantage. But in the case of an Artemis base camp, turning to the private sector for a lunar surface habitat would present political and symbolic liabilities to the mission—an atelic threat. Artemis is sending Americans to live on the moon on behalf of their country and their world; ethical considerations (or even political logic) mean that they should be sent for virtuous reasons, rather than in pursuit of profit. Sure, a commercial habitat might (in theory) be more cost-effective, but at what cost? And will those savings be worth jeopardizing the symbolic impact of Artemis?

Outer space is an alien environment that will expose and defy our unspoken assumptions and priorities. Philosophy gives us ways to frame and discuss them.

If it is to survive, Artemis cannot afford to appear as a way to turn scientific expeditions into expensive time shares in some rocket baron’s celestial hotel. In any lunar base, ownership will feed into symbolic logic and rationale. As a base grows beyond the initial habitat and the symbolic requirements of NASA ownership are satisfied, a diversity of participants—including commercial ventures and international partners—becomes a way to broaden the sense of ownership and demonstrate the virtue of diverse approaches to transforming the moon into a human world.

Clearly, when making these sorts of decisions around building a lunar base, NASA must make choices that escape the bounds of quantitative, engineering, or cost-benefit analyses. Although it is one of the world’s preeminent engineering organizations, NASA is not institutionally well equipped by culture, precedent, or inclination to incorporate considerations that fall beyond the telic, utilitarian, and practical aspects of completing a mission. Yet, NASA’s core constituency is the American public, and to better serve that public, the agency needs a way to engage questions of values and visions and offer more straightforward and durable narratives of space exploration.

Philosophy for clearer public purpose

NASA needs to embrace philosophy so that it can better explain what it is doing and why to the public and itself. This is particularly important because, as a federal agency, NASA derives its overall purpose and direction from the public through elected officials. But even when Congress and the White House set the overall agenda (and budget!), the agency still needs an internal logic guiding its decisions.

Throughout NASA’s research and exploration portfolio, a wide range of societal impacts, ethical considerations, and inspirational elements come into play. Decisions between human and robotic expeditions require understanding their differences and how to harmonize them. And what should the agency’s position be, for instance, on developing technologies that will ultimately be used by the private sector? Absent clearer, systematic thinking about such issues, NASA is compelling its scientists and engineers to act as philosophers on the spot whenever they favor a robotic or human mission, authorize a commercial contract, or make myriad seemingly routine decisions.

Although this ad hoc approach may seem like an organic way to deal with the problem of purpose, it is a missed opportunity. Leaving all decisions about societal values to engineering program managers on a case-by-case basis means NASA doesn’t develop the ability to think more systematically about values, vision, and norms. And these are the core ingredients in shaping the guiding logic and narrative needed for a coherent strategy of space exploration. 

NASA needs to embrace philosophy so that it can better explain what it is doing and why to the public and itself.

Without a real way to consider what it does, NASA falls back on institutional interests and bureaucratic inertia. In other words, if the agency fails to deliberately engage philosophical debates about values and visions, any vision imposed on NASA from outside could become erratic, meaningless, or even subject to intellectual fads. The agency risks foundering as administrations and mandates change over time. It could get caught in the kind of pointless ideological food fights that would rob it of its broad, bipartisan appeal. Without a stronger sense of self, NASA risks getting dragged into someone else’s ideological fantasy and souring the public on space exploration. Instead, NASA should cultivate a strong self-awareness about vision and mission.

And that self-awareness should be broad. One of the most persistent difficulties with thinking about space exploration is the immense amount of terrestrial bias that humans automatically bring to the table. Our cultures, norms, and institutions are grounded in the geographical and biological reality of where we live. Simply porting over terrestrial solutions means bringing along terrestrial assumptions, a potentially fatal mistake in the hostile and unforgiving domain of space exploration.

Terrestrial bias pops up in many small design decisions on spacecraft, including the occasional inclusion of drawers, which can jam without the aid of gravity to keep their contents in place. A broader philosophical framework can help explorers create a culture appropriate to the reality of living and working beyond Earth. The discussion around in situ propellants offers another example of how shedding terrestrial bias can open new ways of understanding the meaning and value of presence. Historically, plans to explore places like Mars assumed that astronauts would need to carry all the propellant for a return trip with them. By contrast, in situ resource utilization (ISRU) calls for sending robotic equipment ahead of a human landing to process carbon dioxide from the Martian atmosphere and manufacture the necessary propellant on site. The ISRU approach shows the importance of finding different ways to think about the value of Mars itself—reimagining it as a site of both scientific and industrial production—through rigorous philosophical engagement with space exploration.

Becoming interplanetary

Through exploration, a culture invests places with meaning, value, and context. The humanities of space exploration (including philosophy) will be much more than a series of ethics discussions or a set of telic and atelic goals. They will require a new consideration of the universe beyond humanity’s tiny terrestrial oasis, along with a refined sense of our particular human and nationalistic baggage. 

A broader philosophical framework can help explorers create a culture appropriate to the reality of living and working beyond Earth.

Leaving the world of living things to live and work in the vast, abiotic heavens is necessarily a matter of profound uncertainty and difficulty. Space exploration is a field awash in enthusiasm and ideas. Less than a century into its expansion beyond Earth, humanity is still comparatively ignorant about the rest of the universe. We have established precious little of the meaning and structure that will guide the ways we collectively interact with worlds beyond. As a result, much of the speculation about the promises and perils of space exploration—found in the popular and even some of the academic press—is essentially science-fictional. Many of the scenarios that excite popular imaginations and fears today are light-years from fruition. Large, economically and technologically self-sufficient space settlements, for example, are decades or even centuries away, not years. But in developing a serious space philosophy, NASA could help us learn to think like interplanetary people over that much longer time frame.

In the closing paragraphs of his 2009 essay “One Giant Leap to Nowhere,” Tom Wolfe argued that what NASA lacked was the power of clarity and vision. What NASA needs to succeed and endure is purpose, a sense of objective, and a guiding logic to animate its strategic thinking. Congress and the White House can give NASA its goals and the resources to reach them. But first, NASA must be able to provide better ways to address the deep questions of space exploration: Why? To what end? And for what purpose? The smallest step on the moon—or anywhere in the heavens—starts with a giant, collective leap of the mind.