In the Winter 2024 Issues, the essays collectively titled “An AI Society” offer valuable insight into how artificial intelligence can benefit society—and also caution about potential harms. As many other observers have pointed out, AI is a tool, like so many that have come before, and humans use tools to increase their productivity. Here, I want to concentrate on generative AI, as do many of the essays. Generative AI is a special kind of tool designed to improve human productivity, but like all tools it has limitations. Growth, innovation, and progress in AI are inevitable, and the essays provide an opportunity to invite professionals in the social sciences and humanities to collaborate with computer scientists and AI developers to better understand and address the limitations of AI tools.
The rise and overall awareness of generative AI has been nothing short of remarkable. The generative AI-powered ChatGPT took only five days to reach 1 million users. Compare that with Instagram, which took about 2.5 months to reach that mark, or Netflix, which took about 3.5 years. Additionally, ChatGPT took only about two months to reach 100 million users, while Facebook took about 4.5 years and Twitter just under 5.5 years to hit that mark.
Why has the uptake of generative AI been so explosive? Certainly one reason is that it helps productivity. There is of course plenty of anecdotal evidence to this effect, but there is a growing body of empirical evidence as well. To cite a few examples, in a study involving professional writing skills, people who used ChatGPT decreased writing time by 40% and increased writing quality by 18%. In a study of nearly 5,200 customer service representatives, generative AI increased productivity by 14% while also improving customer sentiment and employee retention. And in a study of software developers, those who were paired with a generative AI developer tool completed a coding task 55.8% faster than those who were not. With that said, we are also beginning to understand the kinds of tasks and people that benefit most from generative AI and those that don’t benefit or may even experience a loss of productivity. Knowing when and why it doesn’t work is as important as knowing when and why it does.
Unfortunately, one of the downsides of today’s class of generative AI tools is that they are prone to what are called “hallucinations”—they output information that is not always correct. The large language model technology upon which the systems are based is good at producing fluent and coherent text, but not necessarily factual text. While it is hard to know how frequently these hallucinations occur, one estimate puts the figure at between 3% and 27%. Indeed, currently there seems to be an inherent trade-off between creativity and accuracy.
So we have a situation today where generative AI tools are extremely popular and demonstrably effective. At the same time, they are far from perfect, with many problems identified. Just as we drive cars and use the internet, there are risks, but we use these tools anyway because we decide the benefits outweigh the risks. Apparently people are making a similar judgment in deciding to use generative AI tools. With that said, it is critically important that users be well informed about the potential risks of these tools. It is also critical that policymakers—with public input—work to ensure that AI safety and user protection are given the utmost priority.
The essays on artificial intelligence provide interesting and informative insights into this emerging technology. All new technologies bring both positive and negative results—what I have called the yin and yang of new technologies. AI will be no exception. Advocates for a new technology usually emphasize its advantages and dismiss consideration of possible adverse effects. It is only later, when the technology has been allowed to operate widely, that actual positive and negative effects become apparent. As Emmanuel Didier points out in his essay, “Humanity is better at producing new technological tools than foreseeing their future consequences.” The more disruptive the new technology, the greater will be its effects of both kinds.
With AI, the concerns go beyond bias and machine learning run amok, the criticisms currently levied against the technology. AI’s influences can go far beyond what we envision at this time. For example, users of AI who rely on it to produce outputs reduce their opportunities to grow creative abilities and develop social skills and other functional capabilities that we normally associate with well-adjusted human adults. A graphic example of what I mean can be seen in a recent entry in the comic strip Zits, in which a teenager named Jeremy is talking with his friend. He says, “If using a chatbot to do homework is cheating … but AI technology is something we should learn to use … how do we know what’s right or wrong?” And his buddy responds, “Let’s ask the chatbot!” By relying on the AI program to answer their ethical quandary, they lose the opportunity to think through the issue at hand and develop their own ethos. It is not hard to imagine similar experiences for AI users in the real world who are otherwise expected to grow in wisdom and social abilities.
It will probably not be the use of AI in individual circumstances that becomes problematic, but the overreliance on AI that is almost bound to develop. Similarly, social media are not, by themselves, a bad thing. But social media have now overtaken a whole generation of users and led to personal antisocial and asocial behaviors. The potential for similar negative outcomes when AI use becomes widespread is very strong.
Back when genetic modification was a new and potentially disruptive technology, it was foreseen as possibly dangerous to society and to the environment. In response, policymakers and concerned scientists put safeguards in place to prohibit the unfettered release of gene-edited organisms into the environment, as well as the editing of human germ cells that transfer genetic traits from one generation to the next. Most of these restrictions are still in effect. AI could possibly be just as disruptive as genetic modification, but there are no similar safeguards in place to allow us time to better understand the extent of AI influences. And it is not very likely that the do-nothing Congress we have now would be able to handle an issue as complex as this.
Arthur T. Johnson
Professor Emeritus
Fischell Department of Bioengineering
University of Maryland
The Bondage of Data Tyranny
In “The Limits of Data” (Issues, Winter 2024), C. Thi Nguyen identifies key unspoken assumptions that pervade modern life. He skillfully illustrates the problems associated with reducing all phenomena to data and ignoring those realities that cannot be captured by data, especially when it comes to human beings. He identifies examples of how the focus on quantification frequently strips data of context and introduces bias in the name of objectivity. Here, I offer some thoughts that complement the essay’s essential points while approaching them from slightly different perspectives.
While forcing people into groups to enable better data collection may lead to unwanted outcomes, some social categorization is necessary. Society needs legal thresholds to enable the equal treatment of citizens under the law. Sure, there are responsible 15-year-old geniuses and immature 45-year-old fools, but society has to offer some reasonable, but ultimately arbitrary, dividing line in allowing people to vote, or drive, or drink, or serve in the army. The need to codify legal standards for society remains an imperative, but, as Nguyen argues, those standards need not be strictly quantitative.
The universal drive for quantification and for reducing phenomena to data stems from the architecture of the digital databases that process those data. Storing and analyzing the data demand that all information inputs be in a format that ultimately translates to 1s and 0s. This assumption itself, that all information is reducible to 1s and 0s, contains within it the conclusion that concepts, and by extension human thinking, can be reduced to binary terms. An attitude emerges that information that cannot be reduced to 1s and 0s is not worthy of attention. Holistic notions such as art, human emotion, and the soul must be either reduced to strict mathematical patterns or treated as a collection of examples from the internet or other databases.
A further motivation for the universal embrace of data and the fixation with quantification lies deep in the roots of Anglo-Saxon, and particularly American, culture. Early in the eighteenth century, the ideas of the British philosopher John Locke initiated a tradition that placed far greater value on practical facts that can be sensed (i.e., measured) rather than spiritual beliefs or cultural traditions that are the products of human reflection. By the end of the century, America’s founding fathers, including Benjamin Franklin and Thomas Jefferson, followed Locke’s tradition by emphasizing practicality and measurement. The advent of mass production and consumption—capitalism—only further sharpened the focus on the practical and obtainable. Entering the twentieth century, the great British physicist Lord Kelvin summed up his commitment to empiricism by declaring: “To measure is to know.”
Society leverages the power of current data processing technologies but is subject to their limits. An enduring fixation with data stems from modern beliefs about what type of knowledge is worthwhile. Freeing society from the bias and bondage of data tyranny will require responding to these deeply embedded technological and behavioral factors that keep society limited by contemporary data structures.
Iddo Wernick
Senior Research Associate, Program for the Human Environment
The Rockefeller University
Bioliteracy, Bitter Greens, and the Bioeconomy
The success of biotechnology innovations is predicated not only on how well the technology itself works, but also on how society perceives it, as Christopher Gillespie eloquently highlights in “What Do Bitter Greens Mean to the Public?” (Issues, Winter 2024), paying particular attention to the importance of ensuring that diverse perspectives inform regulatory decisions.
To this end, the author calls on the Biden administration to establish a bioeconomy initiative coordination office (BICO) to coordinate between regulatory agencies and facilitate the collection and interpretation of public acceptance data. This would be a much-needed improvement to the current regulatory system, which is fragmented and opaque for nonexperts. For maximum efficiency, care should be taken to avoid redundancy between BICO and other proposals for interagency coordination. For example, in its interim report, the National Security Commission on Emerging Biotechnology formulated two relevant Farm Bill proposals: the Biotechnology Oversight Coordination Act and the Agriculture Biotechnology Coordination Act.
In addition to making regulations more responsive to public values, as Gillespie urges, I believe that increasing the general public’s bioliteracy is critical. This could involve improving K–12 science education and updating it to include contemporary topics such as gene editing, as well as amending civics curriculums to better explain the modern functions of regulatory agencies. Greater bioliteracy could help the public make more informed judgments about complex topics. Its value can be seen in what befell genetic use restriction technology (GURT), commonly referred to as terminator technology. GURTs offered solutions to challenges such as the efficient production of hybrid seeds and the prevention of pollen contamination from genetically modified plants. However, activists early on seized on the intellectual property protection aspect of GURT to turn public opinion against it, resulting in a long-standing moratorium on its commercialization. More informed public discourse could have paved a path toward leveraging the technology’s benefits while avoiding potential drawbacks.
Gillespie began his essay by examining how some communities and their cultural values were missing from conversations during the development of a gene-edited mustard green. The biotech company Pairwise modified the vegetable to be less bitter—but bitterness, the author notes, is a feature, not a flaw, of a food that is culturally significant to his family.
This example resonated keenly with me. I have attended a company presentation on this very same de-bittered mustard green. Like Gillespie, I do not oppose the innovation itself. Indeed, I’m excited by how rapidly gene-edited food products have made it into the market, and by the general lack of public freakout over them. But like Gillespie, I was bemused by this product, though for a different reason. According to the company representative, Pairwise’s decision to focus on de-bittering mustard greens as its first product was informed by survey data indicating that American consumers wanted more diversity of choice in their leafy greens. My immediate thought was: just step inside an Asian grocery store, and you’ll find a panoply of leafy greens, many of which are not bitter.
Genetic engineering has opened the doors to new plant varieties with a dazzling array of traits—but developing a single product still takes extensive time and money. Going forward, it would be heartening to see companies focus more on traits such as nutrition, shelf stability, and climate resilience than on reinventing things that nature (plus millennia of human agriculture) has already made.
Vivian Zhong
PhD Candidate, Stanford University
Policy Entrepreneurship Fellow, Federation of American Scientists
Christopher Gillespie notes that inclusive public engagement is needed to best advance innovation in agricultural biotechnology. As an immigrant daughter of a smallholder farmer at the receiving end of products stemming from biotechnology, I agree.
Growing up, I witnessed firsthand the challenges and opportunities that smallholder farmers face. So I am excited by the prospect that innovations in agricultural biotechnology can bring positive change for farming families like mine. At the same time, since farming practices have been passed down in my family for generations, I directly feel the importance of cultural traditions. Thus, the author’s emphasis on the importance of obtaining community input during the early development process resonates deeply.
Such public consultation, however, often gets overlooked—to common detriment. In the author’s example of gene-edited mustard greens, the company behind the innovation could have greatly benefited from a targeted stakeholder engagement process, soliciting input from the very communities whose lives would be impacted. Such a collaborative effort can not only enhance the relevance of an innovation but also address cultural concerns. I believe that many agricultural biotechnology companies are already doing public engagement, but how it is being done makes a difference.
In this regard, while the participatory technology assessment methods that Gillespie describes represent an effective way to gather input from members of the public whose opinions are systemically overlooked, it is important to recognize that this approach may pose certain challenges. Companies might encounter roadblocks in getting communities to open up or welcome their innovation. This resistance could be due to historical reasons, past experiences, or a perceived lack of transparency. Public engagement programs should be created and facilitated through a decentralized approach, where a company chooses a member of a community to lead and engage in ways that resonate with the community’s values. Gillespie calls this person a “third party or external grantee.” This individual should ideally adopt the value-based communication approach of grassroots engagement, where stories are exchanged and both the company and the community connect on shared values and strategize ways forward to benefit equally from the innovation.
Another step that the author proposes—establishing a bioeconomy initiative coordination office within the White House Office of Science and Technology Policy, focusing on improved public engagement—would also be a step in the right direction. But here again, it is crucial that this office adopt a value-based inclusive and decentralized approach to public engagement.
Though challenges remain, I look forward to a future filled with advancements in agricultural biotechnology and their attendant benefits in areas such as improved crop nutrition, flavor, and yield, as well as in pest control and climate resilience. And I return to my belief that fostering a transparent dialogue among innovators, regulators, and communities is key to building and maintaining the trust needed to ensure this progress for all concerned.
Modesta N. Abugu
PhD Candidate, Department of Horticultural Science
North Carolina State University
She is an AgBioFEWS Fellow of the National Science Foundation and a Global Leadership Fellow of the Alliance for Science at the Boyce Thompson Institute
“Ghosts” Making the World a Better Place
In “Bring on the Policy Entrepreneurs” (Issues, Winter 2024), Erica Goldman proposes that “every graduate student in the hard sciences, social sciences, health, and engineering should be able to learn some of the basic tools and tactics of policy entrepreneurship as a way of contributing their knowledge to a democratic society.” I wholeheartedly support that vision.
When I produced my doctoral dissertation on policy entrepreneurs in the 1990s, only a handful of scholars, most notably the political scientist John Kingdon, mentioned these actors. I described them as “ghost like” in the policy system. Today, researchers from across the social sciences are studying policy entrepreneurs and many new contributions are being published each year. Consequently, we can now discern regularities in what works to increase the likelihood that would-be policy entrepreneurs will meet with success. I summarized these regularities in an article in the journal Policy Design and Practice titled “So You Want to be a Policy Entrepreneur?”
When weighing the prospects of investing time to build the skills of policy entrepreneurship, many professionals in scientific, technological, and health fields might worry about the opportunity costs involved. If they work on these skills, what will they be giving up? It’s legitimate to worry about trade-offs. And, certainly, none of us want highly trained professionals migrating away from their core business to go bare knuckle in the capricious world of political influence.
But to a greater extent than has been acknowledged so far, building skills to influence policymaking can be consistent with becoming a more effective professional across a range of fields. The same skills it takes to be a policy entrepreneur are those that can make you a higher performer in your core work.
My studies of policy entrepreneurship show collaboration is a foundational skill for anyone wanting to have policy influence. Policy entrepreneurs do not have to become political advisers, lobbyists, or heads of think tanks. But they do need to be highly adept at participating in diverse teams. They need to find effective ways to connect and work with others who have different knowledge and skills and who come from different backgrounds than their own. Thinking along these lines, it doesn’t take much reflection to see that core skills attributed to policy entrepreneurs are of enormous value for all ambitious professionals, no matter what they do or where they work.
We can all improve our productivity—and that of others—by improving our teamwork skills. Likewise, it’s well established that strategic networking is crucial for acquiring valuable inside information. Skills in framing problems, resolving conflicts, making effective arguments, and shaping narratives are essential for ambitious people in every professional setting. And these are precisely the skills that, over and over, we see are foundational to the success of policy entrepreneurs.
So, yes, let’s bring on the policy entrepreneurs in the hard sciences, social sciences, health, and engineering. They’ll have a shot at making the world a better place through policy change. Just as crucially, they’ll also build the skills they need to become leaders in their chosen professional domains.
Michael Mintrom
Professor of Public Policy
Monash University
Melbourne, Victoria, Australia
Erica Goldman makes the important case that we need to better enable scientists and technologists to seek to impact policy. She asserts that by providing targeted training, creating a community of practice, and raising awareness, experts can become better at translating their ideas into policy action. We should build an academic field around policy entrepreneurship as a logical next step to support this effort.
One key reason why people don’t pursue policy entrepreneurship is, as Goldman suggests, “they often have to pick up their skills on the job, through informal networks, or by serendipitously meeting someone who shows them the ropes.” This is in part because these skills are not regularly taught in the classroom. The academic field of policy analysis relies on a client-based model, which assumes that the student already has or will obtain sufficient connections or professional experience to work for policy clients. But how do you get a policy client without a degree or existing policy network?
Many experts in science, technology, engineering, and mathematics who have tremendous professional experience—precisely the people we should want to be informing policy—do not have the skills to take on client-based policy work. Take a Silicon Valley engineer who wants to change artificial intelligence policy, or a biochemist who wants to reform the pharmaceutical industry. Most such individuals will not enroll in a master’s degree program or move to Washington, DC, to build a policy network. As Goldman emphasizes, we instead need “a practical roadmap or curriculum” to “empower more people from diverse backgrounds and expertise to influence the policy conversation.”
What if we developed instead a field designed specifically to teach subject matter experts how to impact policy from the outside, how to help them get a role that will give them leverage from within, or how to reach both goals? At the Aspen Tech Policy Hub, we are working with partners such as the Federation of American Scientists to kick-start the development of this field. We focus on teaching the practical skills required to impact policy—such as how to identify key stakeholders, how to develop a policy campaign that speaks to those stakeholders, and how to communicate ideas to generalists. By investing in the field of policy entrepreneurship, we will make it more likely that the next generation of scientists and technologists have a stronger voice at the policy table.
Betsy Cooper
Director, Tech Policy Hub
The Aspen Institute
To Fix Health Misinformation, Think Beyond Fact Checking
When tackling the problem of misinformation, people often think first of content and its accuracy. But countering misinformation by fact-checking every erroneous or misleading claim traps organizations in an endless game of whack-a-mole. A more effective approach may be to start by considering connections and communities. That is particularly important for public health, where different people are vulnerable in different ways.
On this episode, Issues editor Monya Baker talks with global health professionals Tina Purnat and Elisabeth Wilhelm about how public health workers, civil society organizations, and others can understand and meet communities’ information needs. Purnat led the World Health Organization’s team that strategized responses to misinformation during the coronavirus pandemic. She is also a coeditor of the book Managing Infodemics in the 21st Century. Wilhelm has worked in health communications at the US Centers for Disease Control and Prevention, UNICEF, and USAID.
Resources
Visit Tina Purnat and Elisabeth Wilhelm’s websites to learn more about their work and find health misinformation resources.
Check out Community Stories Guide to explore how public health professionals can use stories to understand communities’ information needs and combat misinformation.
How is an infodemic manager like a unicorn? Visit the WHO Infodemic Manager Training website to find training resources created by Purnat and Wilhelm, and learn about the skills needed to become an infodemiologist.
Transcript
Monya Baker: Welcome to the Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academy of Sciences and by Arizona State University.
How many of these have you heard? “Put on a jacket or you’ll catch a cold.” “Don’t crack your joints or you’ll get arthritis.” “Reading in low light will ruin your eyes.” Health misinformation has long been a problem, but the rise of social media and the COVID-19 pandemic has escalated the speed and scale at which misinformation can spread and the harm that it can do. Countering this through fact-checking feels like an endless game of whack-a-mole. As soon as one thing gets debunked, five more appear. Is there a better way to defuse misinformation?
My name is Monya Baker, senior editor at Issues. On this episode, I’m joined by global health professionals, Tina Purnat and Elisabeth Wilhelm. We’ll discuss how to counter misinformation by building trust and by working with communities to understand their information needs and then deliver an effective response. Tina Purnat led the team at the World Health Organization that strategized responses to misinformation during the coronavirus pandemic. And Elisabeth Wilhelm has worked in health communications at the US Centers for Disease Control, UNICEF and USAID. Tina, Lis, welcome!
Tina Purnat: Hi.
Elisabeth Wilhelm: Thanks.
Baker: Could each of you tell me what you were doing during the pandemic and what did you see in terms of misinformation?
Wilhelm: So, during the pandemic, I was working at CDC and Tina was working at WHO. We’re going to talk a little bit about our experiences, but those don’t represent our former employers or our current ones. They’re just from our own personal experiences and our own personal war stories.
So, early on, I was sent as a responder to Indonesia to support the COVID-19 response in February of 2020. And at the time, there were officially no cases in Indonesia, but several colleagues in several different agencies were quite worried about this. And so, they asked for support. I saw huge challenges regarding COVID there, specifically about misinformation, lack of information, too much information. And all of this really affected the government’s ability to respond and build trust with a really anxious public because so little information was available.
And at the end of March, I had already ended up in quarantine and I was sent to my hotel because I was in a meeting with too many high-level officials in a very poorly ventilated room. At the time, I reconnected with Tina because she had decided to set up a mass consultation from her dining room table through WHO to really understand and unpack this new phenomenon of misinformation. At the time, it was being called the infodemic, and it was that information overload that was causing this anxiety and panic. And that was paralyzing not just for the public, but for the government and public health institutions that were trying to respond to it.
Purnat: I mean, we both saw the writing on the wall, how big of a problem this was going to become globally. So, in February 2020, I was actually pulled from my day job at WHO. I was working in artificial intelligence and digital health, and I was pulled into surge support for the WHO emergency response team. And initially, the focus of my work was on how to quickly and effectively understand the different concerns and questions and narratives people were sharing online about COVID more broadly. Really, it looked at how the digitized, chaotic information environment was impacting people’s health.
So, I collaborated with Lis and many, many other people from practically all over the world, both on the science of infodemiology and building new public health tools, training more people, and also building collaborative networks. That later became known as infodemic management in health emergencies.
Baker: Just to sum up, Lis was in Indonesia before any cases of COVID had been reported publicly. And you, Tina, were called to manage a lot of things from your kitchen table as WHO tried to ramp up a response. What surprised both of you in terms of misinformation?
Wilhelm: Well, I could say that in Indonesia, it was really clear that everyone was caught flatfooted. But this was, of course, I think the story all over the world of how fast misinformation grew and spread and where people’s questions and concerns were not getting answered, and then trying to understand who felt like they were responsible for trying to address this misinformation or who was in a position to do something about it.
I learned it’s not just government policymakers who play a role in addressing this problem, but it’s also journalists. It’s working with community-based organizations. It’s working with doctors, with nurses, with other health workers and with digital tech experts. And actually, a lot of the lessons that we learned in Indonesia I would bring back to the US to apply in the US context. And there are a lot of global lessons learned on addressing misinformation that we were able to bring back home.
And it’s just part of, I think, the larger story that misinformation is a complex phenomenon. The information environment is increasingly complex. No country is unaffected by it. And health systems are just starting to understand and wrestle with how to deal with it, recognizing that there isn’t one silver bullet. There’s really no vaccine against misinformation, although people would like there to be. There isn’t one simple answer. And I think that became increasingly clear during the pandemic.
And a lot of it has to do with trust. You have to build trust and do that in the middle of a pandemic. And it’s really hard to do that when you’re trying to address misinformation where people have laid their trust in others and not necessarily those that are in front of a bank of cameras and are an official spokesperson speaking to the public every day during a press conference. And so, that to me was a big revelation.
Baker: And Tina, I think I heard this phrase from you first, that instead of taking this very content-focused approach to misinformation, that a more effective way would be a public health approach to information. What does that mean?
Purnat: One of the principles in public health, for example, is doing no harm. Another principle is really focusing on prevention instead of only mitigation or just treating disease, but actually preventing it. And I think actually what we’ve learned really most during the pandemic is the need to really understand how the information environment works, how misinformation actually takes hold, how it spreads, and what actually drives the creation and spread of it.
So, if you want to be really proactive, really what we’ve learned is that you need to be paying attention to what are actually people’s questions and concerns, or what is the health information that they cannot find because that basically are the needs that they’re trying to address. If we meet them, if they find the information, the right information at the right time from the right person, then there’s much less opportunity or a chance that they would actually turn to a less credible source. So, we need to really be thinking much further upstream in this evolution of, well, what does actually create rumors and misinformation. And not only basically play whack-a-mole chasing different posts.
Baker: How do you go about figuring out what a community’s information needs are?
Wilhelm: Ask them. Just don’t assume that a survey is really going to fully encapsulate what people’s information needs are. The best way is to ask them directly. And there are ways of engaging with communities, understanding their needs, and then designing better health services to meet those needs. And that really is a community-centered approach that I hope becomes far more normed than it has been. It’s the whole idea of not for us without us.
And so, recognizing that blasting messages at communities that we think are going to be important or relevant to their context and that they’re more likely to follow, that’s the way of doing public health from 50 years ago. And we got to change how we understand and work with communities and involve them in the entire process in the business of getting people healthy.
Public health is about the fact that your individual decisions can have population level impacts. I like to think of it in this way that everybody should wash their hands after they use the bathroom, but there are policies that also encourage that in places where people eat food. When you go to a restaurant and you go to the bathroom, there’s a big sign on the side of the door that says, “Employees must wash their hands.” So, while there might be also social norms and healthcare providers recommending that people wash their hands after using the bathroom, there also are policies and regulations in place that encourage that and enforce that so that everybody stays healthy and you can get a burger without getting food poisoning.
One of the projects I worked on at Brown really tried to understand people’s experiences on a health topic through stories. We tell each other’s stories. We understand the world through stories. Stories are incredibly motivating and powerful, and they’re usually emotionally based. They’re not fact-based necessarily. My story is my experience. But if I share it with you, you might be convinced of a certain thing because I’ve had this experience. If you can look at stories like that in aggregate, you can start identifying, well, are there common experiences that people in this community have and what can that tell us about how they’re being bombarded by the information environment or the common kinds of misinformation they’re seeing or the concerns they have? Or what are some of the social norms here that might be helpful or harmful for people protecting their health? And what can we do to better design services to meet people’s needs? It’s not just understanding how people are affected by misinformation, but it’s the totality of the information environment and when they want to do the healthy thing, is it easy to do?
Purnat: Misinformation is often spread by people successfully when their values align with what they’re saying, that narrative. So, if a person values autonomy and their own control over their health, then they’re much more likely to discuss and share misinformation or health information that is underpinned by protecting people’s freedoms and rights. Or if people have historically had bad experiences with their physicians or their health service, then they might discuss and share health information and misinformation that offers alternative treatments or remedies that don’t require a visit to the doctor’s office.
That’s literally where you could say vulnerabilities also come in. And this is where the challenge of addressing health misinformation lies, because it requires solutions that go beyond only communicating; you actually need to understand and address the underlying reasons and context and situations that people are in that lead them to share or believe in specific health information narratives.
So, in public health, we’re often organized around a particular disease, a specific health topic, et cetera, but that’s not how people or their communities actually experience things day-to-day. So, when we plan on meeting their information and service needs, we have to look at the big picture and then work with all the relevant services and organizations that may meet the community where they’re at.
Baker: I wonder if you could give examples of situations where a community’s information needs were met well and situations where community needs were not met well?
Purnat: What’s happening right now in the US, it’s the H5N1 bird flu outbreak in cows. Just yesterday, I did a short search on what people are searching for on Google related to the bird flu. And there’s plenty of questions that people have from their day-to-day life that are not being answered yet by any big credible source of health information. Like the first questions people have when Googling it is about the symptoms of the H5N1 infection. But then the next concern is how is this affecting their pets? And then there’s various questions about food safety related to consuming milk and eggs and beef, and also questions in relation to the risk of infection to farmers also via handling animal manure.
And these are all information voids that the Googling public and affected workers have, but it’s likely just the tip of the iceberg. And the challenging part here is that it’s not only the public that isn’t getting the information, it’s also the public health and other trusted messengers don’t know what’s going on either. They’re complaining about slow and incomplete access to data and lack of communication from animal and public health agencies. So, this is a very common situation in outbreaks. And, I don’t know, Lis, can you think of any examples of successful?
Wilhelm: I really struggled to think of examples. And I don’t think there’s a single health topic where absolutely everyone’s information needs were met, because if that were true, then we would have 100% coverage of all the things your healthcare provider recommends for you. I mean, the example I gave, that there are 30,000 books on pregnancy and childbirth on Amazon yet more keep getting published, points to the fact that even though you would think that in the year 2024 every single question that could be asked about pregnancy and childbirth has probably been asked, apparently there’s still demand for more information. And that’s just books.
I mean, the most trusted sources of information on health, regardless of the topic, is almost always going to be your healthcare provider. And so, it’s that relationship that people have with their healthcare providers that’s also really critically important, if you’re lucky enough to have a primary healthcare provider.
I think the other side of the coin here is doctors and nurses and midwives and all kinds of health professionals, including pharmacists, who became increasingly important during the pandemic because they started vaccinating people for more than just the flu. These are people who are having direct one-on-one conversations with individuals who have questions and concerns. What are we doing to ensure that they’re getting the training they need to have those effective conversations on health topics, but also to recognize that their patients are having all kinds of stuff show up on their Facebook and social media feeds? How do they address questions and concerns and misinformation that their patients are seeing on their screens, and how do we get health workers to recognize that that’s also part of their job? The information environment is starting to affect how doctors and nurses and other healthcare providers provide care.
And I don’t think even medical education has really caught up to the fact that the majority of people get their health information through a small screen. And that is also going to mediate how they understand and take that information on board, and that also might affect their health behavior. How many people do you know whom you regularly see for a checkup or to discuss a medical topic who are digital natives or understand how to send out a tweet? We’re working in a space that’s increasingly digital, but sometimes the people who are in charge of our public health efforts and our healthcare systems are not digital natives.
Baker: Yeah. Lis, you had said sometimes in public health, we are our own worst enemies. And I wonder if each of you could tell me, what are the one or two things that you’ve seen that just frustrate you?
Purnat: There’s a long list actually.
Wilhelm: I want to take out a banjo and sing a song and tell you a story. I think the biggest challenge in public health is that science translation piece: what does the research tell us, how do we talk to the general public about it, how do we talk to patients about it and make sure that it’s understood. And sometimes things break down in that translation process.
There is a bible for people who are communicators, who do risk communication, who do crisis and emergency communication. And there are seven principles in this bible of how you’re supposed to communicate to the public. The first three are be first, be right, be credible. The problem is that if you spend all of your efforts trying to ascertain whether or not you’re right, you might not be first. You end up being second, third, fourth, or fifth.
And the problem is that we know from psychology and science that in emergencies and during outbreaks and crises, people’s brains operate differently. And the way they work differently is that we tend to seek out new sources of information. We’re really bad at exploring complex information. And we tend to believe the first thing we hear, which means if we’re not the first thing you heard, but the second, third or fourth or fifth, it’s really, really difficult to dislodge the first thing that you heard. So, that to me is shooting ourselves in the foot.
It’s really difficult to work as a communicator when you’re trying to balance a lack of evidence and science and being able to speak from a place of evidence. And when you’re trying to talk to an anxious public that has questions that we don’t have great answers to yet. And that is a problem that we’re experiencing every single time that there’s a new outbreak or a new disease or a new health threat, where we are racing against time to catch up.
Unfortunately, the internet will always move faster than that. And those questions and concerns will mushroom and turn into misinformation extremely quickly before someone with credibility could step in front of those cameras and deliver those remarks at a press conference. And at the end of the day, who actually listens to that press conference and who believes what is said by that spokesperson?
Purnat: I mean, just to build on what Lis said, one thing that I think we’re not yet appreciating is that this swirl of information and the conversations and reactions impacts our ability to promote public health. Literally, it cuts across individual people and communities, but it also impacts the health system and even health workers themselves, and we’re not really fully appreciating that this is a systemic challenge.
So, think about the teen vaping epidemic that basically seems to have caught everyone by surprise. It’s been propagated by very, very effective lifestyle-based social marketing campaigns and attractive design of the vapes that specifically spoke to teens. And while we were working to understand the epidemiological picture and really putting in an effort to get reliable evidence around the teen vaping problem, the marketing that was targeting the teens continued, unaddressed, for many years.
Baker: One thing I have heard is that too often when planning a response, people focus on this—I think it’s Lis who called them magic messages. Tell me about that and why it’s not going to be the most effective thing.
Wilhelm: So, maybe to put it this way, when was the last time that you had a conflict or disagreement with someone and you were searching for the right words and you found the right words and you said your magic words and it solved the problem immediately? This doesn’t happen in real life. If messages were in fact magical and you just had to find them and identify them, the entire marketing industry would be out of a job and everyone would follow their healthcare provider’s advice on getting adequate exercise and protein in their diets, right?
That’s just not how humans work. We’re not empty brains walking around waiting for messages to be filled in our brains that we then follow. We come with our own basket case of experiences, of biases, of our own literacies or lack thereof, our own perspectives on the world, our culture, our religious beliefs, our values. Those all color how we interact with the world and how we seek and get health services.
And so, there’s no magic messages that’s going to cut through that. People are different. Every community is different, and we have to recognize that in that diversity, trying to identify what people’s information needs are is going to look very different from place to place and from topic to topic, which goes back to if you want to understand what a person’s thinking or feeling, ask them. Just don’t make assumptions because that’s how poorly designed messages are developed and those can be actually harmful.
Purnat: And actually, this links also to how the media environment in general that we live in has changed. The days when people sat around the living room and listened to the nightly newscast, that’s like from a hundred years ago. Nowadays, we don’t receive information on health or other topics from one single source that we trust. We’re more like information omnivores. We consume information from different sources online and offline. We trust some more than others. So, when you attempt to blast out health messages into the world like a radio signal, and then you’re hoping that people are tuning in, that’s destined to fail.
But the problem there is that there’s also no longer one organization or person that has a monopoly on speaking about credible health information. And that challenges how we need to be dealing with or interacting with information environments. We wouldn’t recommend that you hire a beauty influencer to talk about vaccine safety. And that’s just because they may be credible to their audience because of their beauty know-how, but probably won’t really move the needle in terms of public health outcomes. But we could work with beauty influencers probably about things that relate to social media because they’re experts in that.
Baker: So, not just the message, also the messengers?
Wilhelm: It’s the medium. It’s the message. It’s the messengers. It’s everything. I mean, think about it. For example, when you get an alert on your phone saying that a tornado watch has just become a tornado warning and that you should go seek shelter or shelter in place, you’re getting the right information at the right time at the right place because geographically, the phone knows where you’re located and that overlaps with where the event is occurring. But also when we think about magic messages and we think about trust, we assume that people trust the messenger. What if people don’t trust the Weather Channel or the National Weather Service that provides those alerts to their phone?
And if we kind of extrapolate that to other areas of health, people’s trust in their doctor and the CDC and the National Pediatric Association might all be very different. We know that these are credible sources of information, but if these are not trusted, people will seek information from other alternative sources that better align with their values and their information needs. And that’s the real issue.
It’s not about how we need to improve trust in these big institutions. It’s just recognizing that people have varying levels of trust in different groups of people, different voices, different messengers, different platforms, and so on, and recognizing that people get information and work with trusted information from different spaces.
Baker: And Tina, you’ve been thinking about how it’s not just information that needs to be supplied, that it’s not just messages that need to be supplied. It’s important to also know how the services will be delivered or make sure that services are being delivered.
Purnat: In ways that actually meet the needs, yes. So, for example, during the pandemic, when the vaccine rollout started happening, many different countries used digital portals, digital tools that people could use to schedule their vaccine shot. But some communities either didn’t have internet access, didn’t have devices they could use to schedule an appointment, or they were just too far from locations that were providing the vaccine. That meant that actually, even though on paper the arrangement and the logistics sounded really well thought out, well, some people missed out because they weren’t able to actually take advantage of what the health system was asking them to do and offering.
Baker: Right. So, the message was delivered, but the services not really, not so much?
Purnat: And probably generated some frustration, which led to erosion of trust and frustration with the health authorities.
Wilhelm: A colleague of ours would say, “You want to make a health service fun, easy and accessible.” And so, just recognizing that if you want people to do something, you want to make it as easy as possible for them to do it. And so, the example that Tina gave is a really great one, where there’s a mismatch.
Or early in the pandemic, you are instructing people who might have family members that may have been exposed to the COVID virus, that they should isolate at home, that they should take these precautions so they don’t transmit the virus to other family members. But how exactly is that supposed to work if you are living in a multigenerational household in a slum somewhere where you don’t have access to running water? So, the public health guidance might be very nice, but completely incomprehensible and completely unactionable by the average person that’s living in that type of community.
And so, we also have to recognize you don’t want to set people up to fail. If you’re talking to the general public about what they should do, you really need to be specific as to, “Well, what do I do if I have an elderly person that has accessibility issues or somebody who’s immunocompromised in my family,” or “What do I do if a family member has recovered from COVID? Are they eligible to receive the COVID vaccine?” I mean, these are common questions that people were asking, and the guidance wasn’t always really clear as to what people were supposed to do in those situations.
Baker: You said that just improving communications is not going to make everything better. So, what else could people be doing systematically?
Wilhelm: My pet peeve really is this rush to jump to solutions, which I think can actually do more damage in the long run and which tend to be coercive in nature, like content takedowns, versus the harder and necessary work of building trust and improving the breadth and depth of how healthcare workers and health systems engage with communities and with patients. There’s no magic button you can push, just like there’s no magic message that increases trust. And there’s no magic button that you can push that can defeat all the underlying reasons why someone might believe misinformation instead of what you’re telling them.
People believing misinformation, and communities acting on misinformation, represent a failure—not of that individual or that community—but of a government and a health system that is not worthy of trust. If people believe misinformation instead of their healthcare provider, that tells me that something has gone horribly wrong and it isn’t on the individual.
We need to understand that this is a systemic public health problem. And we as public health professionals are on the hook to address these complex problems, just like we’ve addressed other complex societal problems such as drunk driving or smoking cessation, where it requires a lot of levers at a lot of different levels.
Baker: I’ve really enjoyed learning more about this. I guess I’ll just ask each of you for one thing that you think could be done or that must be understood to move from sort of a less effective narrow approach to a more effective, broader approach.
Wilhelm: You know, the power of the internet is in your hands. As a consumer, as an individual, what you say and what you do and how you interact with people in your online communities and your offline communities can be extremely powerful. And so, take advantage of that power. Have conversations with family members and friends when they have questions or concerns. Point people in the direction of credible information. Engage with people. Do so respectfully. Not everything has to be a shouting match on the internet.
And that can go a long way to creating a much healthier information environment where people feel like they can voice their questions and concerns without being shouted down or talked over or dismissed just because they have legitimate concerns. And so, if we can bring some of that into our online and offline interactions every day, I think that would make things a little bit healthier.
Purnat: We do need public health leadership that understands the critical and integral role that the digital information environment has in health. And we need to be able to deal with how technology might be misdirecting people to the wrong health advice. All too often, different health authorities still treat their websites like digital magazines. But in reality, they need to publish health information in ways that get picked up and disseminated automatically online and used by people.
So, one thing that we need to recognize in public health is that this isn’t just the domain of one or two functions or offices in a CDC or a National Institute of Public Health or a Ministry of Health or a health department. This is actually something that is challenging every role within the health system. And that means patient-facing and community-facing roles, as well as researchers and analysts and even policy advisors.
And that means we need to recognize that we need to invest in updating our tools and the way that we understand commercial, information, and socioeconomic determinants of health, and that needs to trickle into and be integrated both into our tools and the way that we support our health workforce, as well as into how it informs policy. It’s a bit of a tough nut to crack, but we can mobilize and use the expertise of practically every person that works in public health and beyond, actually.
Wilhelm: This is a global problem. It affects every country, from Afghanistan to the US to Greece to Zimbabwe. Everybody has the same issues trying to understand and address this complex information environment. And so, we can all learn from one another and recognize that this is a truly global new public health problem that we need better strategies to address. So, I think we should pay attention to this increasingly small planet that we live on: what happens in other countries affects what happens in ours, especially when it comes to how information is shared and amplified online.
Baker: I’d like to end by asking you about the Infodemic Manager training program that you worked on with the World Health Organization. You have called it a unicorn factory. Why do infodemic managers call themselves unicorns?
The perfect infodemic manager is someone who has public health experience and understands how the internet works, understands digital health, understands communication and social and behavioral science. They understand public health, epidemiology, outbreak response, emergency management. And there are very few humans on the planet who have all these skill sets.
Wilhelm: It’s the idea that the perfect infodemic manager is someone who has public health experience and understands how the internet works, understands digital health, understands communication and social and behavioral science. They understand public health, epidemiology, outbreak response, emergency management. And there are very few humans on the planet who have all these skill sets in one body.
And so, when we developed this training, we invited a very large group of humans from many different backgrounds to come together to learn some of these skills. And so, the joke became that the trainings were unicorn factories, where people went in with their existing skills, picked up a few new ones, and came out the other end with a little bit more sparkle and a little bit more ability to address health misinformation. And this took on a life of its own. These people decided to call themselves unicorns. They’re out there in the world, and you’ll see them with little unicorn buttons and stickers. And it’s kind of cool.
Purnat: And they were extremely committed and found this so valuable that we had people who still wanted to participate while their country was experiencing massive flooding and monsoons or, for example, while dealing with family tragedy. And this was a testament to the fact that people who worked in the communities, who worked in the COVID-19 response, were recognizing, when they talked to people from other countries, that they were seeing the same challenges. They were not alone in experiencing this. It was not specific to their country. And it was a big revelation to everyone that we can help each other a lot by talking to each other, supporting each other, sharing what we’re experiencing and what we’re doing, and trying together to address these issues.
Throughout this process, over the course of several years, we’ve trained people from 132 countries. And it’s a small moment of joy in what was otherwise a very difficult, complex, and horrifying outbreak response, because many of the people being trained were doing this at all hours of the night, in all parts of the world, on crappy internet connections, sitting together to try and solve this problem and learn together for four weeks in their off hours while, in their day jobs, responding to their country’s COVID outbreak.
Wilhelm: So, you would have the Canadian nurse talking to the polio worker in Afghanistan, talking to the behavioral scientist in Australia, talking to the journalist in Argentina, who all were taking the training and saying, “Let’s compare notes,” and then realizing how similar the challenges were that they were facing, while also finding it a great way to come up with new solutions to some of those problems together.
Baker: Tina, Lis, thank you for this wonderful conversation. I hope it has inspired more people to become unicorns. Find out more about how to counter health misinformation by visiting our show notes.
Please subscribe to the Ongoing Transformation wherever you get your podcasts. And thanks to our podcast producer, Kimberly Quach, and our audio engineer, Shannon Lynch. My name is Monya Baker, Senior Editor at Issues in Science and Technology. Thank you for listening.
Missing Links for an Advanced Workforce
Recent investments in the US advanced manufacturing industry have generated national demand for a skilled workforce. However, meeting this demand for workers—particularly technicians—is inhibited by a skills gap. In the sector of microelectronics manufacturing, it is critical that we not only pursue effective technician education but also minimize barriers that hinder quality of education and program completion. For example, there are limited accessible avenues for students to gain hands-on industry experience. Educational programs also face difficulties coordinating curricula with local workforce needs. In “The Technologist” (Issues, Winter 2024), John Liu and William Bonvillian suggest an educational pathway targeting these challenges. Their proposals align with our efforts at the Micro Nano Technology Education Center (MNT-EC) to effectively train microelectronics industry technicians.
As the authors highlight, we must strengthen the connective tissue across the workforce education system. MNT-EC was founded with the understanding that there is strength in community bonds. We facilitate partnerships between students, educators, and industry groups to offer support, mentoring, and connections to grow the technician workforce. As part of our community of practice, we partner with over 40 community colleges in a coordinated national approach to advance microelectronic technician education. Our programs include an internship connector, which directs students toward hands-on laboratory education; a mentorship program supporting grant-seeking educators; and an undergraduate research program that backs students in two-year technical education programs.
In the sector of microelectronics manufacturing, it is critical that we not only pursue effective technician education but also minimize barriers that hinder quality of education and program completion.
These programs highlight community colleges’ critical partnership role within the advanced manufacturing ecosystem. As Liu and Bonvillian note, community colleges have unique attributes: connections to their local region, diverse student bodies, and workforce orientations. Ivy Tech Community College, one of MNT-EC’s partners, is featured in the article as an institution utilizing its strengths to educate new technologists. Ivy Tech, as well as other MNT-EC partners, understands that modern manufacturing technicians must develop innovative systems thinking alongside strong technical skills. To implement these goals, Ivy Tech participates in a partnership initiative funded by the Silicon Crossroads Microelectronics Commons Hub. Ivy Tech works with Purdue University and Synopsys to develop a pathway that provides community college technician graduates with a one-year program at Purdue, followed by employment at Synopsys. This program embodies the “technologist” education, bridging technical education content taught at community colleges with engineering content at Purdue.
As we collectively develop this educational pathway for producing technologists, I offer two critical questions for consideration. First, how can we recruit and retain the dedicated technicians who will evolve into technologists? MNT-EC has undertaken strategic outreach to boost awareness of the advanced manufacturing industry. However, recruitment and retention remain a national challenge. Second, how can we ensure adequate and sustained funding to support community colleges in this partnership? Investing in the nation’s manufacturing workforce by building effective educational programs that support future technologists capable of meeting industry needs will take a team and take funding.
Jared M. Ashcroft
Principal Investigator, Micro Nano Technology Education Center
Professor, Pasadena City College
Justine Gluck
Reports & Communications, MNT-EC
Jennifer P. Hipp
Communications & Outreach, MNT-EC
Anyone concerned about the state of US manufacturing should read with care John Liu and William B. Bonvillian’s essay. They propose a new occupational category that they maintain can both create opportunities for workers and position the United States to lead in advanced manufacturing.
Their newly coined job, “technologist,” requires “workers with a technician’s practical know-how and an engineer’s comprehension of processes and systems.” This effectively recognizes that without an intimate connection between innovation (where the United States leads) and manufacturing (where it lags), the lead will dissipate, as recent history has demonstrated. In this context, the authors lament the US underinvestment in workforce education and particularly the low funding for community colleges, which can serve as critical cogs in training skilled workers.
Indeed, the availability of a skilled workforce ready to support twenty-first century production is the most significant and immediate problem the United States faces in trying to restore its overall manufacturing capability. And semiconductors are on the front line in the struggle. A report released in December 2023 by the Commerce Department’s Bureau of Industry and Security, Assessment of the Status of the Microelectronics Industrial Base in the United States, which summarizes industry responses to a survey, found that respondents “consistently identified workforce-related challenges as the most crucial to their business,” most frequently citing workforce-related issues (e.g., labor availability, labor cost, and labor quality) as important to expansion or construction decisions.
Other data support this perception. A July 2023 report from the Semiconductor Industry Association, Chipping Away: Assessing and Addressing the Labor Market Gap Facing the U.S. Semiconductor Industry, projects that by 2030 the semiconductor industry’s workforce will grow to 460,000 jobs from 345,000 jobs, with 67,000 jobs at risk of going unfilled at current degree completion rates. And this problem is economywide: by the end of 2030, an estimated 3.85 million additional jobs requiring proficiency in technical fields will be created—with 1.4 million jobs at risk of going unfilled.
The availability of a skilled workforce ready to support twenty-first century production is the most significant and immediate problem the United States faces in trying to restore its overall manufacturing capability.
The US CHIPS and Science Act, passed in 2022, appropriated over $52 billion in grants, plus tens of billions more in tax credits and loan authorization, through new programs at the Department of Defense, the National Institute of Standards and Technology (NIST), and the National Science Foundation. Central to these new initiatives is workforce development. For example, all new CHIPS programs must include commitments to provide workforce training. In addition, NIST’s National Semiconductor Technology Center proposes establishing a Workforce Center of Excellence, a national “hub” to convene, coordinate, and set standards for the highly decentralized and fragmented workforce delivery system.
To rapidly scale up regionally structured programs to meet the demand, it is wise to examine existing initiatives that have demonstrated success and can serve as replicable models. Two examples with a national footprint are:
NIST’s Manufacturing Extension Partnership has built a sustained business model in all states by helping firms reconfigure their operations through lean manufacturing practices, including shop floor reorganization. And the market for this service is not just tiny machine shops, but also enterprises with up to 500 employees, which represent over 95% of all manufacturing entities and employ 50% of all workers.
The National Institute for Innovation and Technology, a nonprofit sponsored by the Department of Labor, has developed an innovative Registered Apprenticeship Program in collaboration with industry. Several leading semiconductor companies are using the system to attract unprecedented numbers of motivated workers.
Liu and Bonvillian have described a creative approach to the major impediment to restoring US manufacturing. Rapid national scale-up is essential to success.
Phillip Singerman
Senior Advisor
American Manufacturing Communities Collaborative
Former NIST Associate Director for Innovation and Industry Services
Effective recruitment and training programs are often billed as the key to creating the deep and capable talent pool needed by the nation’s industrial base. The task of creating them, however, has proven Sisyphean for educators. Pathways nationwide are afflicted with the same trio of problems: lagging enrollment; high attrition; and disappointing problem solving, creative thinking, and critical reasoning skills in graduates.
In response to these anemic results, the government has increased funding for manufacturing programs, hoping educators can produce the desired talent through improved outreach and instruction. Looking at the causes of the key problems, however, reveals that even the best programs, such as the one at the Massachusetts Institute of Technology that John Liu and William B. Bonvillian describe, are limited in their potential to solve them.
Recruitment is primarily hamstrung by the sector’s low wages (particularly at the entry level for workers with less than a bachelor’s degree). In many markets, entry-level technician compensation is on par with that offered by burger chains and big box stores. Technologist salaries ring in higher, but many promising candidates (especially high schoolers) opt for a bachelor’s degree instead, because the return on investment is often better. Until that math changes, technician/technologist pathways will never outmatch the competition from other sectors or four-year degrees, each of which pays more, provides a more attractive job structure, or both.
Furthermore, educators cannot easily teach skills such as aptitude for innovation and technical agility in class: students master theory in school and practical application on the job. As a former Apple engineer explained to me, it is not until entering the workforce that people are routinely exposed to the conditions that develop diversity of thought: open-ended problems that require workers to engage with an infinite solution space to arrive at an answer. While approaches like project-based learning can help students acquire a foundation prior to graduation, companies must accept that the bulk of the learning that drives creativity and problem solving will take place on the factory floor, not in the classroom.
It is not until entering the workforce that people are routinely exposed to the conditions that develop diversity of thought: open-ended problems that require workers to engage with an infinite solution space to arrive at an answer.
This means that to address the nation’s manufacturing workforce shortcomings, we must turn to industry, not education. Compensation needs to be raised to reflect the complexity and effort demanded by manufacturing jobs when compared with other positions that pay similar wages. Companies also need to embrace their role as a critical learning environment. Translating classroom-based knowledge into real-world skill takes time and effort by both students and industry. Many European countries with strong manufacturing economies run multiyear apprenticeship programs in recognition of this fact. To date, the United States has resisted the investment and cooperation required to create a strong national apprenticeship program. Unless and until that changes, we should not expect our recent graduates to have the experience and skill of their European counterparts.
In sum, programs such as the one at MIT should be replicated in every manufacturing market across the nation. But in the absence of competitive compensation and scaled apprenticeships, educators cannot create a labor pool with the quantity of candidates or technical chops to shore up the country’s industrial sector.
Emily McGrath
Senior Fellow and Director of Workforce Policy
The Century Foundation
John Liu and William B. Bonvillian make a compelling case for bridging the gap between engineers and technicians to support the US government’s efforts for reshoring and reindustrialization. They call for new training programs to produce people with a skill level between technician and engineer—or “technologists,” in their coinage. But before creating new programs, we should examine how the authors’ vision might fit within the nation’s existing educational system.
It is surprising that Liu and Bonvillian don’t explain how their new field differs from one that already bridges the technician-engineer gap: engineering technology. Engineering technology programs offer degrees at the associate’s, bachelor’s, master’s, and even PhD levels. And the programs graduate substantial numbers of students. According to the US Department of Education, more than 50,000 associate’s and 18,000 bachelor’s degrees in engineering technology were awarded in 2021–22. The number of bachelor’s degrees represents about 15% of all engineering degrees awarded during that period. The field also has a strong institutional foothold. Programs are accredited by the Accreditation Board for Engineering and Technology and the field has an established Classification of Instructional Programs code (15.00).
Engineering and engineering technology programs have roots that go back to the late nineteenth century. They were not completely distinct from one another until the 1950s, when engineering schools adopted many of the curricular recommendations made in a 1955 report by the American Society for Engineering Education, commonly known as the Grinter Report, and made engineering education more “scientifically oriented.” Engineering technology programs tend to require less advanced mathematics and science but much more applied and implementation work with real-world equipment.
Engineering technology programs tend to require less advanced mathematics and science but much more applied and implementation work with real-world equipment.
A more recent report from the National Academies, Engineering Technology Education in the United States, published in 2017, describes the state of the field, its evolution, and the need to elevate its branding and visibility among students, workers, educators, and employers. The report describes graduates of engineering technology programs as technologists, the same job title Liu and Bonvillian use for their new type of worker who possesses skills that combine what they term “a technician’s practical know-how and an engineer’s comprehension of processes and systems.”
The preface of the National Academies report provides a warning to those taking a “build it and they will come” approach. It states that engineering technology, despite its importance, is “unfamiliar to most Americans and goes unmentioned in most policy discussions about the US technical workforce.” Liu and Bonvillian are advocating that a new, apparently similar, field be created. How do they ensure it won’t suffer the same fate?
The market gap that the authors identify, along with the lack of awareness about engineering technology, point to a deeper problem in the US workforce development system: employers are no longer viewed as being responsible for taking the lead role in guiding and investing in workforce development. Employers are the ones that can specify skills needs, and they profit from properly trained workers, yet we have come to expect too little from them. Until we shift the policy conversation by asking employers to do more, creating programs that develop technologists will fail to live up to Liu and Bonvillian’s hopeful vision.
Ron Hira
Associate Professor
Department of Political Science
Howard University
John Liu and William Bonvillian put forth a thoughtful proposal that US manufacturing needs a new occupational category called “technologist,” a mid-level position sitting between technician and engineer. To produce more of this new breed, the authors encourage community colleges to deliver technologist education, particularly by adopting the curriculum framework used in an online program in manufacturing run by the Massachusetts Institute of Technology. And in a bit of good news, the US Defense Department has started funding its adaptation for technologist education.
But more is needed. In scaling up technologist programs across community colleges, Liu and Bonvillian propose focusing first on new students, followed by programs for incumbent workers. I might suggest the inverse strategy to center job quality in the creation of technologist jobs. In this regard, the authors state something critically important: “to incentivize and enable workers to pursue educational advances in manufacturing, companies need to offer high-wage jobs to employees.” Here, the United States might take some lessons from Germany, where manufacturers pay their employees 60% more than US companies do, have a robust apprenticeship system, and generally prioritize investments in human capital over capital equipment purchases.
For too long, US workforce policy has primarily prioritized employer needs. It’s time to put workers back at the heart of workforce policy, as my colleague Mary Alice McCarthy recently argued in a coauthored article in DC Journal. Efforts by community colleges can be important here. By partnering with employers, labor unions, and manufacturing intermediaries such as the federal Manufacturing Extension Partnerships to upskill incumbent technicians to become technologists, community colleges can expand upward mobility for workers who are part of the 40 million “some college, no degree” population and set the stage for discussing competitive wages and job quality with employers. Plus, they can ensure that these bold new programs are aligned with employers’ needs—especially critical for emerging jobs.
Community colleges can expand upward mobility for workers who are part of the 40 million “some college, no degree” population and set the stage for discussing competitive wages and job quality with employers.
Indeed, the million-plus workers already employed across 56,000 companies within the US industrial base represent an opportunity to recruit program enrollees and provide mobility in a critical sector of manufacturing that arguably ought to be at the forefront of technologist-enabled digital transformation. Then, with the technologist role cemented in manufacturing—with fair pay—community colleges can turn to recruiting new students for the new occupation.
Policymakers should also consider ways to promote competitive pay and job quality as they fund and promote technologist education. Renewing worker power in manufacturing is one such avenue. Here, labor unions can prove useful. The politics of unions have changed. An August 2023 Gallup poll found that 67% of respondents approved of labor unions on the heels of a summer when both President Biden and former President Trump made history by joining picket lines during the United Auto Workers strike.
The time is right for manufacturing technologists. New federal funding, such as through the National Science Foundation’s Enabling Partnerships to Increase Innovation Capacity program and the Experiential Learning for Emerging and Novel Technologies program, is optimally suited to boost technologist program creation at community colleges. But even with such added support, ensuring that technologist jobs are quality jobs ought to be an imperative for employers who will benefit by bringing the authors’ sensible and needed vision to fruition.
Shalin Jyotishi
Senior Advisor on Education, Labor, and the Future of Work
Grace J. Wang’s timely essay, “Revisiting the Connection Between Innovation, Education, and Regional Economic Growth” (Issues, Winter 2024), warrants further attention given the foundational impact of a vibrant innovation ecosystem—ideas, technologies, and human capital—on the nation’s $29 trillion economy. She aptly notes that regional innovation growth requires “a deliberate blend of ideas, talent, placemaking, partnerships, and investment.”
To that end, I would like to amplify Wang’s message by drawing attention to the efforts of three groups: the ongoing work of the Brookings Institution, the current focus of the US Council on Competitiveness, and the catalytic role of the National Academies Government-University-Industry Research Roundtable (GUIRR) in advancing the scientific and innovation enterprise.
First, Brookings has placed extensive emphasis on regional innovation, focusing on topics such as America’s advanced industries, clusters and competitiveness, urban research universities, and regional universities and local economies. Recently, Mark Muro at Brookings collaborated with Robert Atkinson at the Information Technology and Innovation Foundation to produce The Case for Growth Centers: How to Spread Tech Innovation Across America. The report identified 35 place-based metropolitan locations that possess the right ingredients—population; growing employment; university spending per capita on R&D in science, technology, engineering, and mathematics; patents; STEM doctoral degree production; and innovation sector job share—to become innovation growth centers driven by targeted, peer-reviewed federal R&D investments.
The US Council on Competitiveness has also focused on place-based innovation. In 2019, the council launched the National Commission on Innovation and Competitiveness Frontiers, which involves a call to action described in the report Competing in the Next Economy: The New Age of Innovation. The council also formed four working groups, including one called The Future of Place-Based Innovation: Broadening and Deepening the Innovation Ecosystem. From these and other efforts, the council has proposed new recommendations that call for “establishing regional and national strategies to coordinate and support specialized regional innovation hubs, investing in expansion and retention of the local talent base, promoting inclusive growth and innovation in regional hubs, and strengthening local innovation ecosystems by enhancing digital infrastructure and local financing.”
Finally, I want to emphasize the important role GUIRR plays in advancing innovation and the national science and technology agenda. Through the roundtable, leaders from federal science agencies, universities, and industry proactively collaborate to frame issues and conduct activities that advance the national enterprise. GUIRR workshops and reports have also historically included elements to advance the innovation enterprise, including regional innovation.
Leaders from federal science agencies, universities, and industry proactively collaborate to frame issues and conduct activities that advance the national enterprise.
To end with a personal anecdote, I’ve witnessed the success that results from such a nexus, especially from one that was recently highlighted by Brookings: the automotive advanced manufacturing industry in eastern Tennessee. In my former position as chief research administrator at the University of Tennessee, I was deeply involved in that regional innovation ecosystem, along with other participants at Oak Ridge National Laboratory and in the automotive industry, allowing me to experience firsthand just how impactful these ingredients can be when combined and maximized.
Moreover, as GUIRR celebrates 40 years of impact this year, I know it will continue to serve as a strong proponent of the nation’s R&D and innovation enterprise while continually refining and advancing the deep and critical collaboration between government, universities, and industry, as laid out in Wang’s article and amplified by Brookings and the US Council on Competitiveness.
Taylor Eighmy
President, The University of Texas at San Antonio
Council Member, National Academies Government-University-Industry Research Roundtable
National Commissioner, US Council on Competitiveness
As Grace J. Wang notes in her article, history has shown the transformative power of innovation clusters—the physical concentration of local resources, people brimming with creative ideas, and support from universities, the federal government, industry, investors, and state and local organizations.
In January 2024, the National Science Foundation made a groundbreaking announcement: the first Regional Innovation Engines awards, constituting the broadest and most significant investment in place-based science and technology research and development since the Morrill Land Grant Act over 160 years ago. Authorized in the bipartisan CHIPS and Science Act of 2022, the program’s initial two-year, $150 million investment will support 10 NSF Engines spanning 18 states, bringing together multisector coalitions to put these regions on the map as global leaders in topics of national, societal, and geostrategic importance. Subject to future appropriations and progress made, the teams will be eligible for $1.6 billion from NSF over the next decade.
NSF Engines have already unlocked another $350 million in matching commitments from state and local governments, other federal agencies, philanthropy, and private industry, enabling them to catalyze breakthrough technologies in areas as diverse as semiconductors, biotechnology, and advanced manufacturing while stimulating regional job growth and economic development. Places such as El Paso, Texas, and Greensboro, North Carolina, will see lasting impacts as they are transformed into inclusive, thriving hubs of innovation capable of evolving and sustaining themselves for decades to come.
Places such as El Paso, Texas, and Greensboro, North Carolina, will see lasting impacts as they are transformed into inclusive, thriving hubs of innovation capable of evolving and sustaining themselves for decades to come.
The NSF Engines program is led by NSF’s Directorate for Technology, Innovation, and Partnerships (TIP), which builds upon decades of NSF investments in foundational research to grow innovation and translation capacity. TIP recently invested another $20 million in 50 institutions of higher education—including historically Black colleges and universities, minority-serving institutions, and community colleges—to help them build new partnerships, secure future external funding, and tap into their regional innovation ecosystems. Similarly, NSF invested $100 million in 18 universities to expand their research translation capacity, build upon academic research with the potential for technology transfer and societal and economic impacts, and bolster technology transfer expertise to support entrepreneurial faculty and students.
NSF also works to meet people where they are. The Experiential Learning for Emerging and Novel Technologies (ExLENT) program opens access to quality education and hands-on experiences for people at all career stages nationwide, leading to a new generation of scientists, engineers, technicians, practitioners, entrepreneurs, and educators ready to pursue technological innovation in their own communities. NSF’s initial $20 million investment in 27 ExLENT teams is allowing individuals from diverse backgrounds and experiences to gain on-the-job training in technology fields critical to the nation’s long-term competitiveness, paving the way for good-quality, well-paying jobs.
NSF director Sethuraman Panchanathan has stated that we must create opportunities for everyone and harness innovation anywhere. These federal actions collectively acknowledge that American ingenuity starts locally and is stronger when there are more pathways for workers, startups, and aspiring entrepreneurs to participate in and shape the innovation economy in their own backyard.
Erwin Gianchandani
Assistant Director for Technology, Innovation and Partnerships
National Science Foundation
Grace J. Wang does an excellent job of capturing the evolution of science and engineering research, technological innovation, and economic growth. She also connects these changes to science, technology, engineering, and mathematics education on the one hand and employment shifts on the other. And she implores us to seriously consider societal impacts in the process of research, translation, and innovation.
I believe developments over the past decade have made these issues far more urgent. Here, I will focus on three aspects of innovation: technological direction, geographic distribution, and societal impacts.
Can innovation be directed? Common belief in the scientific research community is that discovery and innovation are unpredictable. This supports the idea of letting hundreds of flowers bloom—fostered by broad support for all fields of science and engineering. Increasingly, however, the complexity and urgency of societal grand challenges are leading to a case for mission-oriented innovation. As Mariana Mazzucato pointed out in a report titled Mission-Oriented Research & Innovation in the European Union: “By harnessing the directionality of innovation, we also harness the power of research and innovation to achieve wider social and policy aims as well as economic goals. Therefore, we can have innovation-led growth that is also more sustainable and equitable.”
Increasingly, the complexity and urgency of societal grand challenges are leading to a case for mission-oriented innovation.
Can innovation be spread geographically? Technological innovations and their economic benefits have been far from uniformly distributed. Indeed, while some regions have prospered, many have been left behind, if not regressed. Scholars have offered several ways to address this distressing and polarizing situation. With modesty, I point to a 2021 workshop on regional innovation ecosystems, which Jim Kurose, Cheryl Martin, Susan Martinis, and I organized (and Grace Wang participated in). Funded by the National Science Foundation, the workshop led to the report National Networks of Research Institutes, which helped spur development of NSF’s Regional Innovation Engines program, which recently made its first awards to 10 innovation clusters distributed across the nation, with up to $1.6 billion available over the next decade. Much, much more, of course, remains to be done.
Can the negative societal impacts of innovation be minimized, and the positive impacts maximized? As an example of the downside, consider some of the profound negative impacts of smartphones, social media, and mobile internet technologies. As Jaron Lanier, a technology pioneer, pointed out: “I think the short version is that a lot of idealistic people were unwilling to consider the dark side of what they were doing, and the dark side developed in a way that was unchecked and unfettered and unconsidered, and it eventually took over.” At a minimum, everyone in the science and engineering research community should become more knowledgeable about the fundamental economic, sociological, political, and institutional processes that govern the real-world implementation, diffusion, and adoption of technological innovations. We should also ensure that our STEM education programs expose undergraduate and graduate students to these processes and systems, their dynamics, and their driving forces.
Fundamentally, I believe that we need to get better at anticipatory technology ethics, especially for emerging technologies. The central question all researchers must attempt to answer is: what will the possible positive and negative consequences be if their technology becomes pervasive and is adopted at large scale? Admittedly, due to inherent uncertainties in all aspects of the socio-technological ecosystem, this is not an easy question. But that is not reason enough not to try.
Pramod P. Khargonekar
Vice Chancellor for Research
University of California, Irvine
Technology innovation can be a major force behind regional economic growth, but as Grace J. Wang notes, it takes intentional coordination for research and development-based regional change to happen. Over the past year, as parties coalesced across regions to leverage large-scale, federally funded innovation and economic growth programs, UIDP, an organization devoted to strengthening university-industry partnerships, has held listening sessions to better understand the challenges these regional coalitions face.
In conversations with invested collaborators in diverse regions—from Atlanta, New York, and Washington, DC, to New Haven, Connecticut, and Olathe, Kansas—we’ve learned that universities can easily fulfill the academic research aspects of these projects. Creating the organizational glue that engages and keeps academic, industry, local and state government, and nonprofit partners collaborating as a whole is more challenging. One solution successful communities use is creating a new, impartial governing body; others rely on an impartial community connector as neutral convener.
But other program requirements remain a black box—specifically, recruiting and retaining talent and developing short- and long-term metrics. At least for National Science Foundation Regional Innovation Engines awardees, it is hoped that replicable approaches to address these issues will be developed in coordination with that effort’s MIT-led Builder Platform.
Creating the organizational glue that engages and keeps academic, industry, local and state government, and nonprofit partners collaborating as a whole is more challenging.
Data specific to a region’s innovation strengths and gaps can lend incredible insight into the ecosystem-building process. Every community has assets that uniquely contribute to regional development; a comprehensive, objective assessment can identify and determine their value. Companies such as Elsevier and Wellspring use proprietary data to tell a story about a community’s R&D strengths, revealing connections between partners and identifying key innovators who may not otherwise have high visibility within a region.
We often hear about California’s Silicon Valley and North Carolina’s Research Triangle as models for robust innovation ecosystems. Importantly, both those examples emphasized placemaking early in their development.
Innovation often has its genesis in face-to-face interactions. High-value research parks and innovation districts, along with co-located facilities, offer services beyond incubators and lab space. The exemplars create intentional opportunities for innovators to interact—what UIDP and others call engineered serendipity. Research has tracked the value of chance meetings—a conversation by the copy machine or a chat in a café—for sparking innovation and fruitful collaboration.
The changing landscape of research and innovation is having a profound impact on the academy, where researchers have traditionally focused on basic research and are now being asked to expand into use-inspired areas to solve societal problems more directly; this is where government and private funders are making more investments.
Finally, Wang noted the difficulty in making technology transfer offices financially self-sustainable, and NSF’s recently launched program Accelerating Research Translation (ART) seeks to address this challenge. But it may be time to reevaluate the role of these offices. Today’s increasing emphasis on research translation is an opportune time to reassess the transactional nature of university-based commercialization and licensing and return to a role that places greater emphasis on faculty support and service rather than revenue generation. Placing these activities within the context of long-term strategic partnerships could generate greater return on investment for all.
Anthony M. Boccanfuso
President and CEO
UIDP
Harvesting Insights From Crop Data
In “When Farmland Becomes the Front Line, Satellite Data and Analysis Can Fight Hunger” (Issues, Winter 2024), Inbal Becker-Reshef and Mary Mitkish outline how a standing facility using the latest satellite and machine learning technology could help to monitor the impacts of unexpected events on food supply around the world. They do an excellent job describing the current dearth of public real-time information and, through the example of Ukraine, demonstrating the potential power of such a monitoring system. I want to highlight three points the authors did not emphasize.
First, a standing facility of the type they describe would be incredibly low-cost relative to the benefit. A robust facility could likely be established for $10–20 million per year. This assumes that it would be based on a combination of public satellite data and commercial data accessed through larger government contracts that are now common. Given the potential national security benefits of having accurate information on production shortfalls around the world, the cost of the facility is extremely small, well below 0.1% of the national security spending of most developed countries.
Second, the benefits of the facility will likely grow quickly, because the number of unexpected events each year is very likely to increase. One well-understood reason is that climate changes are making severe events such as droughts, heat waves, and flooding more common. Less appreciated is the continued drag that climate trends are having on global productivity, which puts upward pressure on prices of food staples. The impact of geopolitical events such as the Ukraine invasion then occur on top of an already stressed food system, magnifying the impact of the event on global food markets and social stability. The ability to quickly assess and respond to shocks around the world should be viewed as an essential part of climate adaptation, even if every individual shock is not traceable to climate change. Again, even the facility’s upper-end price tag is small relative to the overall adaptation needs, which are estimated at over $200 billion for developing countries alone.
Third, a common refrain is that the private sector (e.g., food companies, commodity traders) and national security outfits are already monitoring the global food supply in real time. My experience is that they are not doing it with the sophistication and scope that a public facility would have. But even if they could, having estimates in the public domain is critical to achieving the public benefit. This is why the US Department of Agriculture regularly releases both its domestic and foreign production assessments.
The era of Earth observations arguably began roughly 50 years ago with the launch of the original Landsat satellite in 1972. That same year, the United States was caught by surprise by a large shortfall in Russian wheat production, a surprise that recurred five years later. By the end of the decade, the quest to monitor food supply was a key motivation for further investment in Earth observations. We are now awash in satellite observations of Earth’s surface, yet we have still not realized the vision of real-time, public insight into food supply around the world. The facility that Becker-Reshef and Mitkish propose would help to finally realize that vision, and it has never been more needed than now.
David Lobell
Professor, Department of Earth System Science
Director, Center on Food Security and the Environment
Stanford University
Member, National Academy of Sciences
Given the current global food situation, the importance of the work that Inbal Becker-Reshef and Mary Mitkish describe cannot be emphasized enough. In 2024, some 309 million people are estimated to be acutely food insecure in the 72 countries with World Food Program operations and where data are available. Though lower than the 2023 estimate of 333 million, this marks a massive increase from pre-pandemic levels. The number of acutely hungry people in the world has more than doubled in the last five years.
Conflict is one of the key drivers of food insecurity. State-based armed conflicts have increased sharply over the past decade, from 33 conflicts in 2012 to 55 conflicts in 2022. Seven out of 10 people who are acutely food insecure currently live in fragile or conflict-affected settings. Food production in these settings is usually disrupted, making it difficult to understand how much food they are likely to produce. While Becker-Reshef and Mitkish focus on “crop production data aggregated from local to global levels,” having local-level data is critical for any groups trying to provide humanitarian aid. It is this close link between conflict and food insecurity that makes satellite-based techniques for estimating the extent of croplands and their production so vital.
This underpins the important potential of the facility the authors propose for monitoring the impacts of unexpected events on food supply around the world. Data collected by the facility could lead to a faster and more comprehensive assessment of crop production shortfalls in complex emergencies. Importantly, the facility should take a consensual, collaborative approach involving a variety of stakeholder institutions, such as the World Food Program, that not only have direct operational interest in the facility’s results, but also frequently possess critical ancillary datasets that can help analysts better understand the situation.
While satellite data is an indispensable component of modern agricultural assessments, estimation of cropland area (particularly by type) still faces considerable challenges, especially regarding smallholder farming systems that underpin the livelihoods of the most vulnerable rural populations. The preponderance of small fields with poorly defined boundaries, wide use of mixed cropping with local varieties, and shifting agricultural patterns make analyzing food production in these areas notoriously difficult. Research into approaches that can overcome these limitations will take on ever greater importance in helping the proposed facility’s output have the widest possible application.
In order to maximize the impact of the proposed facility and turn the evidence from rapid satellite-based assessments into actionable recommendations for humanitarians, close integration of its results with other streams of evidence and analysis is vital. Crop production alone does not determine whether people go hungry. Other important factors that can influence local food availability include a country’s stocks of basic foodstuffs or the availability of foreign exchange reserves to allow importation of food from international markets. And even when food is available, lack of access to food, for either economic or physical reasons, or inability to properly utilize it can push people into food insecurity. By combining evidence on a country’s capacity to handle production shortfalls with data on the various other factors that influence food security, rapid assessment of crop production can realize its full potential.
Friederike Greb
Head, Market and Economic Analysis Unit
Rogerio Bonifacio
Head, Climate and Earth Observation Unit
World Food Program
Rome, Italy
Inbal Becker-Reshef and Mary Mitkish use Ukraine to reveal an often-overlooked impact of warfare on the environment. But it is important to remember that soil, particularly the topsoil of productive farmlands, can be lost or diminished in other equally devastating ways.
Globally, there are about 18,000 distinct types of soil. Soils have their own taxonomy, and the different soil types are sorted into one of 12 orders, with no two being the same. In the case of Ukraine, it has an agricultural belt that serves as a “breadbasket” for wheat and other crops. This region sustains its productivity in large part because of its particular soil base, called chernozem, which is rich in humus, contains high percentages of phosphorus and ammonia, and has a high moisture storage capacity—all factors that promote crop productivity.
Even as the world has so many types of soil, the pressures on soil are remarkably consistent across the globe. Among the major sources of pressure, urbanization is devouring farmland, as such areas are typically flat and easy to build upon, making them widely marketable. Soil is lost to erosion, which can be gradual and almost unrecognized, or sudden, as following a natural disaster. And soil is lost or degraded through salinization and desertification.
So rather than waiting for a war to inflict damage on soils and flash warning signs about soil health, are there not things that can be done now? As Becker-Reshef and Mitkish mention, “severe climate-related events and armed conflicts are expected to increase.” And while managing such food disruptions is key to ensuring food security, forward-looking policies and enforcement to protect the planet’s foundation for agriculture would seem to be an important part of food security planning.
In the United States, farmland is being lost at an alarming rate; one reported study found that 11 million acres were lost or paved over between 2001 and 2016. Based on those calculations, it is estimated that another 18.4 million acres could be lost between 2016 and 2040. As for topsoil, researchers agree that it can take from 200 to 1,000 years to form an additional inch of depth, which means that topsoil is disappearing faster than it can be replenished.
While the authors clearly show the loss of cultivated acreage from warfare, to fully capture the story would require equivalent projections for agricultural land lost to urbanization and to erosion or runoff. This would then paint a fuller picture as to how one vital resource, that of topsoil, is faring during this time of farmland reduction, coupled with greater expectations for what each acre can produce.
Joel I. Cohen
Visiting Scholar, Nicholas School of the Environment
Duke University
Forks in the Road to Sustainable Chemistry
In “A Road Map for Sustainable Chemistry” (Issues, Winter 2024), Joel Tickner and Ben Dunham convincingly argue that coordinated government action involving all federal funding agencies is needed for realizing the goal of a sustainable chemical industry that eliminates adverse impacts on the environment and human health. But any road map should be examined to make sure it heads us in the right direction.
At the outset, it is important to clear up misinterpretations about the definition of sustainable chemistry stated in the Sustainable Chemistry Report the authors examine. They opine that the definition is “too permissive in failing to exclude activities that create risks to human health and environment.” On the contrary, the definition is quite clear in including only processes and products that “do not adversely impact human health and the environment” across the overall life cycle. Further, the report’s conclusions align with the United Nations Sustainable Development Goals, against which the progress and impacts of sustainable chemistry and technologies are often assessed.
The nation’s planned transition in the energy sector toward net-zero emissions of carbon dioxide, spurred by the passage of several congressional acts during the Biden administration, is likely to cause major shifts in many industry sectors. While the exact nature of these shifts and their ramifications are difficult to predict, it is nevertheless vital to consider them in road-mapping efforts aimed at an effective transition to a sustainable chemical industry. Although some of these shifts could be detrimental to one industry sector, they could give rise to entirely new and sustainable industry sectors.
As an example, as consumers increasingly switch to electric cars, the government-subsidized bioethanol industry will face challenges as demand for ethanol as a fuel additive for combustion-engine vehicles erodes. But bioethanol may be repurposed as a renewable chemical feedstock to make a variety of platform chemicals worth significantly more than the ethanol is as a fuel. Agricultural leftovers such as corn stover and corn cobs can also be harnessed as alternate feedstocks to make renewable chemicals and materials, further boosting ethanol biorefinery economics. Such biorefineries can spur thriving agro-based economies.
Although some of these shifts could be detrimental to one industry sector, they could give rise to entirely new and sustainable industry sectors.
Another major development in decarbonizing the energy sector involves the government’s recent investments in hydrogen hubs. The hydrogen produced from carbon-free energy sources is expected to decarbonize fertilizer production, now a significant source of carbon emissions. The hydrogen can also find other outlets, including its reaction with carbon dioxide captured and sequestered in removal operations to produce green methanol as either a fuel or a platform chemical. Carbon-free oxygen, a byproduct of electrolytic hydrogen production in these hubs, can be a valuable reagent for processing biogenic feedstocks to make renewable chemicals.
Another untapped and copious source of chemical feedstock is end-of-use plastics. For example, technologies are being developed to convert used polyolefin plastics into a hydrocarbon crude that can be processed as a chemical feedstock in conventional refineries. In other words, the capital assets in existing petroleum refineries may be repurposed to process recycled carbon sources into chemical feedstocks, thereby converting them into circular refineries. There could well be other paradigm-shifting possibilities for a sustainable chemical industry that could emerge from a carefully coordinated road-mapping strategy that involves essential stakeholders across the chemical value chain.
Bala Subramaniam
Dan F. Servey Distinguished Professor, Department of Chemical and Petroleum Engineering
Director, Center for Environmentally Beneficial Catalysis
University of Kansas
Joel Tickner and Ben Dunham describe the current opportunity “to better coordinate federal and private sector investments in sustainable chemistry research and development, commercialization, and scaling” through the forthcoming federal strategic plan to advance sustainable chemistry. They highlight the unfortunate separation in many federal efforts between “decarbonization” of the chemical industry (reducing and eliminating the sector’s massive contribution to climate change) and “detoxification” (ending the harm to people and the environment caused by the industry’s reliance on toxic chemistries).
As Tickner and Dunham note, transformative change is urgently needed, and will not result from voluntary industry measures or greenwashing efforts. So-called chemical recycling (which is simply a fancy name for incineration of plastic waste, with all the toxic emissions and climate harm that implies), and other false solutions (such as carbon capture and sequestration) that don’t change the underlying toxic chemistry and production models of the US chemical industry will fail to deliver real change and a sustainable industry that isn’t poisoning people and the planet.
Transformative change is urgently needed, and will not result from voluntary industry measures or greenwashing efforts.
The 125-plus diverse organizations that have endorsed the Louisville Charter would agree with Tickner and Dunham. As the Charter states: “Fundamental reform is possible. We can protect children, workers, communities, and the environment. We can shift market and government actions to phase out fossil fuels and the most dangerous chemicals. We can spur the economy by developing safer alternatives. By investing in safer chemicals, we will protect peoples’ health and create healthy, sustainable jobs.”
Among other essential policy directions to advance sustainable chemistry and transform the chemical industry so that it is no longer a source of harm, the Charter calls for:
preventing disproportionate and cumulative impacts that harm environmental justice communities;
addressing the significant impacts of chemical production and use on climate change;
acting quickly on early warnings of harm;
taking urgent action to stop the harms occurring now, and to protect and restore impacted communities;
ensuring that the public and workers have full rights to know, participate, and decide;
ending subsidies for toxic, polluting industries, and replacing them with incentives for safe and sustainable production; and
building an equitable and health-based economy.
Federal leadership on sustainable chemistry that advances the vision and policy recommendations of the Louisville Charter would be a welcome addition to ongoing efforts for chemical industry transformation.
Steve Taylor
Program Director
Coming Clean
Joel Tickner and Ben Dunham offer welcome and long-overdue support for sustainable chemistry, but the article only scratches the surface of the societal concerns we should have about toxic exposures: to fossil fuel emissions, to plastics and other products derived from petrochemicals, and to toxic molds or algal blooms. Their proposals continue to rely on the current classical dose-response approach to regulating chemical exposures. But contemporary governmental standards and industrial policies built on this model are inadequate for protecting us from a variety of compounds that can disrupt the endocrine system or act epigenetically to modify specific genes or gene-associated proteins. And critically, present practices ignore a mechanism of toxicity called toxicant-induced loss of tolerance (TILT), which Claudia Miller and I first described a quarter-century ago.
TILT involves the alteration, likely epigenetically, of the immune system’s “first responders”—mast cells. Mast cells evolved 500 million years ago to protect the internal milieu from the external chemical environment. In contrast, our exposures to fossil fuels date only to the Industrial Revolution, a mere 300 years ago. Once mast cells have been altered and sensitized by substances foreign to our bodies, tiny quantities (parts per billion or less) of formerly tolerated chemicals, foods, and drugs can trigger their degranulation, resulting in multisystem symptoms. TILT and mast cell sensitization offer an expanded understanding of toxicity occurring at far lower levels than those arrived at by customary dose-response estimates (usually in the parts per million range). Evidence is emerging that TILT modifications of mast cells explain seemingly unrelated health conditions such as autism, attention deficit hyperactivity disorder (ADHD), chronic fatigue syndrome, and long COVID, as well as chronic symptoms resulting from exposure to toxic molds, burn pits, breast implants, volatile organic compounds (VOCs) in indoor air, and pesticides.
Most concerning is evidence from a recent peer-reviewed study suggesting transgenerational transmission of epigenetic alterations in parents’ mast cells, which may lead to previously unexplained conditions such as autism and ADHD in their children and future generations. The two-stage TILT mechanism is illustrated in the figure below, drawn from the study cited. We cannot hope to make chemistry sustainable until we recognize the results of this and other recent studies, including by our group, that go beyond classical dose-response models of harm and acknowledge the complexity of multistep causation.
Nicholas A. Ashford
Professor of Technology and Policy
Director, Technology and Law Program
Massachusetts Institute of Technology
What the Energy Transition Means for Jobs
In “When the Energy Transition Comes to Town” (Issues, Fall 2023), Jillian E. Miles, Christophe Combemale, and Valerie J. Karplus highlight critical challenges to transitioning US fossil fuel workers to green jobs. Improved data on workers’ skills, engagement with fossil fuel communities, and increasingly sophisticated models for labor outcomes are each critical steps to inform prudent policy. However, while policymakers and researchers focus on workers’ skills, the larger issue is that fossil fuel communities will not experience green job growth without significant policy intervention.
A recent article I coauthored in Nature Communications looked at data from the US Bureau of Labor Statistics and the US Census Bureau to track the skills of fossil fuel workers and how they have moved between industries and states historically. The study found that fossil fuel workers’ skills are actually well-matched to green industry jobs, and that skill matching has been an important factor in their past career mobility. However, the bigger problem is that fossil fuel workers are historically unwilling to relocate to the regions where green jobs will emerge over the next decade. Policy interventions, such as the Inflation Reduction Act, could help by incentivizing job growth in fossil fuel communities, but success requires that policy be informed by the people who live in those communities.
Even with this large-scale federal data, it’s still unclear what the precise demands of emerging green jobs will be. For example, will emerging green jobs be stable long-term career opportunities? Or will they be temporary jobs that emerge to support an initial wave of green infrastructure but then fade once the infrastructure is established? We need better models of skills and green industry labor demands to distinguish between these two possibilities.
It’s also hard to capture the diversity of workers in “fossil fuel” occupations. The blanket term encompasses coal mining, natural gas extraction, and offshore drilling, each of which varies in the skills and spatial mobility required of workers. Coal miners typically live near the mine, while offshore drilling workers are on-site for weeks at a time before returning to homes anywhere in the country.
Federal data may represent the entire US economy, but new alternative data offer more nuanced insights into real-time employer demands and workers’ career trajectories. Recent studies of technology and the future of work utilize job postings and workers’ resumes from online job platforms, such as Indeed and LinkedIn. Job postings enable employers to list preferred skills as they shift to reflect economic dynamics—even conveying shifting skill demands for the same job title over time. While federal labor data represent a population at a given time, resumes enable the tracking of individuals over their careers detailing things such as career mobility between industries, spatial mobility between labor markets, and seniority/tenure at each job. Although these data sources may fail to represent the whole population of fossil fuel workers, they have the potential to complement traditional federal data so that we can pinpoint the exact workers and communities that need policy interventions.
Morgan R. Frank
Assistant Professor
Department of Informatics and Networked Systems
University of Pittsburgh
A Focus on Diffusion Capacity
In “No, We Don’t Need Another ARPA” (Issues, Fall 2023), John Paschkewitz and Dan Patt argue that the current US innovation ecosystem does not lack for use-inspired research organizations and should instead focus its attention on diffusing innovations with potential to amplify the nation’s competitive advantage. Diffusion encompasses multiple concepts, including broad consumption of innovation; diverse career trajectories for innovators; multidisciplinary collaboration among researchers; improved technology transition; and modular technology “stacks” that enable components to be invented, developed, and used interoperably to diversify supply chains and reduce barriers to entry for new solutions.
Arguably, Advanced Research Projects Agencies (ARPAs) are uniquely equipped to enable many aspects of diffusion. They currently play an important role in promoting multidisciplinary collaborations and in creating new paths for technology transition by investing at the seam between curiosity-driven research and product development. They could be the unique organizations that establish the needed strategic frameworks for modular technology stacks, both by helping define the frameworks and by investing in building and maintaining them.
Perhaps a gap, however, is that ARPAs were initially confined to investing in technologies that aid the mission of the community they support. The target “use” for the use-inspired research was military or intelligence missions, and any broader dual-use impact was secondary. But today the United States faces unique challenges in techno-economic security and must act quickly to fortify its global leadership in critical emerging technologies (CETs), including semiconductors, quantum, advanced telecommunications, artificial intelligence, and biotechnology. We need ARPA-like entities to advance CETs independent of a particular federal mission.
The CHIPS and Science Act addresses this issue in a fragmented way. The new ARPAs being established in health and transportation have some of these attributes, but lack direct alignment with CETs. In semiconductors, the National Semiconductor Technology Center could tackle this role. In quantum, the National Quantum Initiative has the needed cross-agency infrastructure and during its second five-year authorization seeks to expand to more applied research. The Public Wireless Supply Chain Innovation Fund is advancing 5G communications by investing in Open Radio Access Network technology that allows interoperation between cellular network equipment provided by different vendors. However, both artificial intelligence and biotechnology remain homeless. Much attention is focused on the National Institute of Standards and Technology to lead these areas, but it lacks the essential funding and extramural research infrastructure of an ARPA.
The CHIPS and Science Act also created the Directorate for Technology, Innovation, and Partnerships (TIP) at the National Science Foundation, with the explicit mission of investing in CETs through its Regional Innovation Engines program, among others. Additionally, the act established the Tech Hubs program within the Economic Development Administration. Both the Engines and Tech Hubs programs lean heavily into the notion of place-based innovation, where regions of the nation will select their best technology area and build the ecosystem of universities, start-ups, incubators, accelerators, venture investors, and state economic development agencies. While this structure may address aspects of diffusion, it lacks the efficiency of a more directed, use-inspired ARPA.
Arguably the missing piece of the puzzle is an ARPA for critical emerging technologies that can undertake the strategic planning necessary to more deliberately advance US techno-economic needs. Other nations have applied research agencies that strategically execute the functions that the United States distributes across the Economic Development Administration, the TIP directorate, various ARPAs, and state-level economic development and technology agencies. This could be a new agency within the Department of Commerce; a new function executed by TIP within its existing mission; or a shift within the existing ARPAs to ensure that their mission includes investing in CETs, not only because they are dual-use technologies that advance their parent department’s mission but also to advance US techno-economic competitiveness.
Charles Clancy
Chief Technology Officer, MITRE
Senior Vice President and General Manager of MITRE Labs
Cofounder of five venture-backed start-ups in cybersecurity, telecommunications, space, and artificial intelligence
John Paschkewitz and Dan Patt make a strong argument that the biggest bottleneck in the US innovation ecosystem is in technology “diffusion capacity” rather than new ideas out of labs; that there are several promising approaches to solving this problem; and that the nation should implement these solutions. The implicit argument is that another ARPA isn’t needed because the model was created in the world of the 1950s and ’60s where diffusion was all but guaranteed by America’s strong manufacturing ecosystem, and as a result is not well-suited to address modern diffusion bottlenecks.
In my view, however, the need to face problems that the ARPA model wasn’t originally designed for doesn’t necessarily mean that we don’t need another ARPA, for three reasons:
1. While it’s not as common as it could be, there are examples of ARPAs doing great diffusion work. The authors highlight the Semiconductor Manufacturing Technology consortium as a positive example of what we should be doing—but SEMATECH was in fact spearheaded by DARPA, the progenitor of the ARPA model.
2. New ARPAs can modify the model to help diffusion in their specific domains. ARPA-E in the energy sector has “tech-to-market advisors” who work alongside program directors to strategize how technology will get out into the world. DARPA has created a transition team.
3. At the core, the powerful thing about ARPAs is that they give program managers the freedom and power to take whatever actions they need to accomplish the mission of creating radically new technologies and getting them out into the world. There is no inherent reason that program managers can’t focus more on manufacturability, partnerships with large organizations, tight coordination to build systems, and other actions that can enable diffusion in today’s evolving world.
Still, it may be true that we don’t need another government ARPA. Over time, the way that DARPA and its cousins do things has been increasingly codified: they are under constant scrutiny from legislators, they can write only specific kinds of contracts, they must follow set procedures regarding solicitations and applications, and they may show a bias toward established organizations such as universities or prime contractors as performers. These bureaucratic restrictions will make it hard for government ARPAs to make the creative “institutional moves” necessary to address current and future ecosystem problems.
Government ARPAs run into a fundamental tension: taxpayers in a democracy want the government to spend money responsibly. However, creating new technology and getting it out into the world often requires acting in ways that, at the time, seem a bit irrational. There is no reason an ARPA necessarily needs to be run by the government. Private ARPAs such as Actuate and Speculative Technologies may offer a way for the ARPA model to address technology diffusion problems of the twenty-first century.
Ben Reinhardt
CEO, Speculative Technologies
John Paschkewitz and Dan Patt make some fantastic points about America’s innovation ecosystem. I might suggest, however, a different framing for the article. It could instead have been called “Tough Tech is… Tough; Let’s Make it Easier.” As the authors note, America’s lab-to-market continuum in fields such as biotech, medical devices, and software isn’t perfect. But it is far from broken. In fact, it is the envy of the rest of the world.
Still, it is undeniably true that bringing innovations in materials science, climate, and information technology hardware from the lab to the market is deeply challenging. These innovations are often extremely capital intensive; they take many years to bring to market; venture-backable entrepreneurs with relevant experience are scarce; many innovations are components of a larger system, not stand-alone products; massive investments are required for manufacturing and scale-up; and margins are often thin for commercialized products. For these and various other reasons, many great innovations fail to reach the market.
The solutions that Paschkewitz and Patt suggest are excellent—in particular, ensuring that fundamental research is happening in modular components and developing alternative financing arrangements such as “capital stacks” for late-stage development. However, I don’t believe they are the only options, nor are they sufficient on their own to close the gap.
More support and reengineered processes are needed across the entire technology commercialization continuum: from funding for research labs, to support for tech transfer, to securing intellectual property rights, to accessing industry data sets and prototyping equipment for validating the commercial viability of products, to entrepreneurial education and incentives for scientists, to streamlined start-up deal term negotiations, to expanding market-pull mechanisms, and more. This will require concerted efforts across federal agencies focused on commercializing the nation’s amazing scientific innovations. Modularity and capital are part of the solution, but not all of it.
The good news is that we are at the start of a breathtaking experiment centered on investing beyond (but not in lieu of) the curiosity-driven research that has been the country’s mainstay for more than 70 years. The federal government has launched a variety of bold efforts to re-envision how its agencies promote innovation and commercialization that will generate good jobs, tax revenues, and exports across the country (not just in the existing start-up hubs). Notable efforts include the National Science Foundation’s new Directorate for Technology, Innovation and Partnerships and its Regional Innovation Engines program, the National Institutes of Health’s ARPA-H, the Commerce Department’s National Semiconductor Technology Center and its Tech Hubs program, and the Department of the Treasury’s State Small Business Credit Initiative. Foundations are doing their part as well, including Schmidt Futures (where I am an Innovation Fellow working on some of these topics), the Lemelson Foundation, the Walmart Family Foundation, and many more.
As a final note, let me propose that the authors may have an outdated view of the role that US research universities play in this puzzle. Over the past decade, there has been a near-total reinvention of support for innovation and entrepreneurship. At Columbia alone, we offer proof-of-concept funding for promising projects; dozens of entrepreneurship classes; coaching and mentorship from serial entrepreneurs, industry executives, and venture capitalists; matching programs for venture-backable entrepreneurs; support for entrepreneurs wanting to apply to federal assistance programs; connections to venture capitalists for emerging start-ups; access for start-ups to core facilities; and so much more. Such efforts here and elsewhere hopefully will lead to even more successes in years to come.
Orin Herskowitz
Senior Vice President for Applied Innovation and Industry Partnerships, Columbia University
Executive Director, Columbia Technology Ventures
Embracing Intelligible Failure
In “How I Learned to Stop Worrying and Love Intelligible Failure” (Issues, Fall 2023), Adam Russell asks the important and provocative questions: With the growth of “ARPA-everything,” what makes the model succeed, and when and why doesn’t it? What is the secret of success for a new ARPA? Is it the mission? Is it the money? Is it the people? Is it the sponsorship? Or is it just dumb luck and then a virtuous cycle of building on early success?
I have had the privilege of a six-year term at the Department of Defense Advanced Research Projects Agency (DARPA), the forerunner of these new efforts, along with a couple of years helping to launch the Department of Homeland Security’s HSARPA and then 15 years at the Bill & Melinda Gates Foundation running and partnering with international development focused innovation programs. In the ARPA world, I have joined ongoing success, contributed to failure, and then helped launch new successful ARPA-like organizations in the international development domain.
During my time at the Gates Foundation, we frequently asked and explored with partners the question, What does it take for an organization to be truly good at identifying and nurturing new innovation? To answer, it is necessary to separate the process of finding, funding, and managing new innovations through proof-of-concept from the equally challenging task of taking a partially proven innovative new concept or product through development and implementation to achieve impact at scale. I tend to believe that Russell’s “aliens” (described in his Prediction 6 about “Alienabling”) are required for the early innovation management tasks, but I also believe that they are seldom well suited to the tasks of development and scaling. Experts are good at avoiding mistakes, but it is a different challenge to take a risk that is likely to fail and is in your own field of expertise, where you “should have known better” and where failure might be seen as a more direct reflection of your skills.
Adding my own predictions to the author’s, here are some other things that it takes for an organization to be good at innovation. Some are obvious, such as having sufficient human capital and financial resources, along with operational flexibility. Others are more nuanced, including:
An appetite for risk and a tolerance for failure.
Patience. Having a willingness to bet on long timelines (and possibly the ability to celebrate success that was not intended and that you do not directly benefit from).
Being involved with a network that provides a deeper understanding of the problems that need solving and are worth solving, along with an understanding of the landscape of potential solutions.
Recognition as a trusted brand that attracts new talent, is valued as a partner in creating unusual new collaborations, and is known for careful handling of confidential information.
Engaged and effective problem-solving in managing projects, and especially nimble oversight in managing the managers at an ARPA (whether that be congressional and administrative oversight in government or donor and board oversight in philanthropy).
Parent organization engagement downstream in “making markets,” or adding a “prize element” for success (and to accelerate impact).
To a large degree, these organizational attributes align well with many of Russell’s predictions. But I will make one more prediction that is perhaps less welcome. A bit like Tolstoy’s observation in Anna Karenina about happy and unhappy families, there are many ways for a new ARPA to fail, but “happy ARPAs” likely share—and need—all of the attributes listed above.
Steven Buchsbaum
Principal, Bermuda Associates
Adam Russell is correct: studying the operations of groups built on the Advanced Research Projects Agency model, applying the lessons learned, and enshrining intelligible failure paradigms could absolutely improve outcomes and ensure that ARPAs stay on track. But not all of the author’s predictions require study to know that they need to be addressed directly. For example, efforts from entrenched external interests to steer ARPA agencies can corrode culture and, ultimately, impact. We encountered this when my colleague Geoff Ling and I proposed the creation of the health-focused ARPA-H. Disease advocacy groups and many universities refused to support creation of the agency unless language was inserted to steer it toward their interests. Indeed, the Biden administration has been actively pushing ARPA-H to invest heavily in cancer projects rather than keeping its hands off. Congress is likely to fall into the same trap.
But there is a larger point as well: if you take a fifty-thousand-foot view of the research enterprise, you can easily see that the principle Russell is espousing—that we should study how ARPAs operate—should also be more aggressively applied to all agencies funding research and development.
There is another element of risk that was out of scope for Russell’s article, and that rarely gets discussed: commercialization. DARPA, developed to serve the Department of Defense, and IARPA, developed to serve the government’s Intelligence agencies, have built-in federal customers—yet they still encounter commercialization challenges. Newer ARPAs such as ARPA-H and the energy-focused ARPA-E are in a more difficult position because they do not necessarily have a means to ensure that the technologies they are supporting can make it to market. Again, this is also true for all R&D agencies and is the elephant in the room for most technology developers and funders.
While there have been more recent efforts to boost translation and commercialization of technologies developed with federal funding—through, for example, the National Science Foundation’s Directorate for Technology, Innovation, and Partnerships—there is a real need to measure and de-risk commercialization across the R&D enterprise in a more concerted and outcomes-focused manner. Frankly, one of the wisest investments the government could make with its R&D dollars would be dedicating some of them toward commercialization of small and mid-cap companies that are developing products that would benefit society but are still too risky to attract private capital investors.
The government is well-positioned to shoulder risk through the entire innovation cycle, from R&D through commercialization. Otherwise, nascent technological advances are liable to die before making it across the infamous “valley of death.” Federal support would ensure that the innovation enterprise is not subject to the economy or whims of private capital. The challenge is that R&D agencies are not staffed with people who understand business risk, and thus initiatives such as the Small Business Innovation Research program are often managed by people with no private-sector experience and are so cumbersome and limiting that many companies simply do not bother applying for funding. There are myriad reasons why this is the case, but it is definitely worth establishing an entity designed to understand and take calculated commercialization risk … intelligibly.
Michael Stebbins
President
Science Advisors
As Adam Russell insightfully suggests, the success of the Advanced Research Projects Agency model hinges not only on technical prowess but also on a less tangible element: the ability to fail. No technological challenge worth taking will be guaranteed to work. As Russell points out, having too high a success rate should indicate that the particular agency is not orienting itself toward ambitious “ARPA-hard problems.”
But failing is inherently fraught when spending taxpayer dollars. Politicians have been quick to publicly kneecap science funding agencies for high-profile failures. It is notable that two of the most successful agencies in this mold have come from the national security community: the original Defense Advanced Research Projects Agency (DARPA) and the Intelligence Advanced Research Projects Activity (IARPA). The Pentagon is famously tightlipped about its failures, which provides some shelter from the political winds for an ambitious, risk-taking research and development enterprise. Fewer critics will readily pounce on a “shrimp on a treadmill” story when four-star generals say it is an important area of research for national security.
There are reasons to be concerned about the political sustainability of frequent failure in ARPAs, especially as they move from a vehicle for defense-adjacent research into “normal” R&D areas such as health care, energy, agriculture, and infrastructure. Traditional federal funders already live in fear of selecting the next “Solyndra.” And although Tesla was a success story from the same federal loan portfolio, the US political system has a way of making the failures loom larger than the successes. I’ve personally heard federal funders cite the political maelstrom following the failed Solyndra solar panel company as a reason to be more conservative in their grantmaking and program selection. And it is difficult to put the breakthroughs we neglected to fund on posterboard—missed opportunities don’t motivate political crusades.
As a society and a political system, we need to develop a better set of antibodies to the opportunism that leaps on each failure and thereby smothers success. We need the political will to fail. Finding stories of success will help, yes, but at a deeper level we need to valorize stories of intelligible failure. One idea might be to launch a prestigious award for program managers who took a high-upside bet that nonetheless failed, and give them a public platform to discuss why the opportunity was worth taking a shot on and what they learned from the process.
None of this is to say that federal science and technology funders should be immune from critique. But that criticism should be grounded in precisely the kind of empiricism and desire for iterative improvement that Russell’s article embodies. In the effort to avoid critique, we can sometimes risk turning the ARPA model into a cargo cult phenomenon, copied and pasted wholesale without thoughtful consideration of the appropriateness of each piece. It was a refreshing change of pace, then, to see that Russell, when starting up the health-oriented ARPA-H, added several new questions, centered on technological diffusion and misuse, to the famous Heilmeier Catechism questions that a proposed ARPA project must satisfy to be funded. Giving the ARPA model the room to change, grow, and fail is perhaps the most important lesson of all.
Caleb Watney
Cofounder and co-CEO
Institute for Progress
A key obsession for many scientists and policymakers is how to fund more “high-risk” research—the kind for which the Defense Advanced Research Projects Agency (DARPA) is justifiably famous. There are no fewer than four lines of high-risk research awards at the National Institutes of Health, for example, and many agencies have launched their own version of an ARPA for [fill-in-the-blank].
Despite all of this interest in high-risk research, it is puzzling that “there is no consensus on what constitutes risk in science nor how it should be measured,” to quote Pierre Azoulay, an MIT professor who studies innovation and entrepreneurship. Similarly, the economics scholars Chiara Franzoni and Paula Stephan have reported in a paper for the National Bureau of Economic Research that the discussion about high-risk research “often occurs in the absence of well-defined and developed concepts of what risk and uncertainty mean in science.” As a result, meta-scientists who study this issue often use proxies that are not necessarily measures of risk at all (e.g., rates of “disruption” in citation patterns).
I suggest looking to terminology that investors use to disaggregate various forms of risk:
Execution risk is the risk that a given team won’t be able to complete a project due to incompetence, lack of skill, infighting, or any number of reasons for dysfunctionality. ARPA or not, no science funding agency should try to fund research with high execution risk.
Market risk is the risk that even if a project works, the rest of the market (or in this case, other scientists) won’t think that it is worthwhile or useful. Notably, market risk isn’t a static and unchanging attribute of a given line of research. The curious genome sequences found in a tiny marine organism, reported in a 1993 paper and later named CRISPR, had a lot of market risk at the time (hardly anyone cared about the result when first published), but the market risk of this type of research wildly changed as CRISPR’s potential as a precise gene-editing tool became known. In other words, the reward to CRISPR research went up and the market risk went down (the opposite of what one would expect if risk and reward are positively correlated).
Technical risk is the risk that a project is not technically possible at the time. For example, in 1940, a proposal to decipher the structure of DNA would have had a high degree of technical risk. What makes the ARPA model distinct, I would argue, is selecting research programs that could be highly rewarding (and therefore have little market risk) and are at the frontier of a difficult problem (and therefore have substantial technical risk, but not so much as to be impossible).
Adam Russell’s thoughtful and inventive article points us in the right direction by arguing that, above all, we need to make research failures more intelligible. (I expect to see this and some of his other terms on future SAT questions!) After all, one of the key problems with any attempt to fund high-risk research is that when a research project “fails” (as many do), we often don’t know or even have the vocabulary to discuss whether it was because of poor execution, technical challenges, or any other source of risk. Nor, as Russell points out, do we ask peer reviewers and program managers to estimate the probability of failure, although we could easily do so (including disaggregated by various types of risk). As Russell says, ARPAs (any funding agency, for that matter) could improve only if they put more effort into actually enabling the right kind of risk-taking while learning from intelligible failures. More metascience could point the way forward here.
Stuart Buck
Executive Director, Good Science Project
Former Vice President of Research, Arnold Ventures
Adam Russell discusses the challenge of setting up the nascent Advanced Research Projects Agency for Health (ARPA-H), meant to transform health innovation. Being charged with building an organization that builds the future would make anyone gulp. Undeterred, Russell drank from a firehose of opinion on what makes an ARPA tick, and distilled from it the concept of intelligible failure.
As Russell points out, ARPA programs fail—a lot. In fact, failure is expected, and demonstrates that the agency is being sufficiently ambitious in its goals. ARPA-H leadership has explicitly stated that it intends to pursue projects “that cannot otherwise be pursued within the health funding ecosystem due to the nature of the technical risk”—in other words, projects with revolutionary or unconventional approaches that other agencies may avoid as too likely to fail. Failure is not usually a winning strategy. But paired with this willingness to fail, Russell says, is the mindset that “a technical failure is different from a mistake.”
By building feedback loops, an agency can ultimately turn technical failures into insight about which approaches truly work. We absolutely agree that intelligible technical failure is crucial to any ARPA’s success, and find Russell’s description of it brilliantly apt. However, we believe Russell could have added one more note about failure: aside from technical failure, ARPAs face other types of failure as they pursue cutting-edge technology. Failures stemming from unanticipated accidents, misuse, or misperception also demand attention.
The history of DARPA technologies demonstrates the “dual use” nature of transformative innovation, which can unlock new useful applications as well as unintentional harmful consequences. DARPA introduced Agent Orange as a defoliation compound during the Vietnam War, despite warnings of its health harms. These are types of failures we believe any modern ARPA would wish to avoid. Harmful accidents and misuses are best proactively anticipated and avoided, rather than attempting to learn from them only after the disaster has occurred.
In fact, we believe the most ambitious technologies often prove to be the safest: we should aim to create the health equivalent of the safe and comfortable passenger jet, not simply a spartan aircraft prone to failure. To do this, ARPAs should pursue both intelligible technical failure and catastrophobia: an anticipation of, and commitment to avoiding, accidental and misuse failures of their technologies.
With regard to ARPA-H in particular, the agency has signaled its awareness of misuse and misperception risks of its technologies, and has solicited outside input into structures, strategies, and approaches to mitigating these risks. We hope consideration of accidental risks will also be included. With health technologies in particular, useful applications can be a mere step away from harmful outcomes. Technicians developing x-ray technology initially used their bare hands to calibrate the machines, resulting in cancers requiring amputation. Now, a modern hospital is incomplete without radiographic imaging tools. ARPA-H should lead the world in both transformative innovation and pioneering safety.
Jassi Pannu
Resident Physician, Stanford University School of Medicine
Jacob Swett
Executive Director, Blueprint Biosecurity
FLOE: A Climate of Risk
STEPHEN TALASNIK, Glacial Mapping 2023; Digitally printed vinyl wall print, 10’ x 14’ (h x w)
Imagination can be a fundamental tool for driving change. Through creative narratives, we can individually and collectively imagine a better future and, potentially, take actions to move toward it. For instance, science fiction writers have, at times, seemed to predict new technologies or situations in society—raising the question of whether narratives can create empathy around an issue and help us imagine and work toward a desirable outcome.
Philadelphia-born artist Stephen Talasnik takes this question of narratives seriously. He is a sculptor and installation artist whose exhibition, FLOE: A Climate of Risk, is on display at the Museum for Art in Wood in Philadelphia, Pennsylvania, from November 3, 2023, through February 18, 2024. Talasnik’s work is informed by time, place, and the complex relationship between ideas that form a kind of “functional fiction.” Through FLOE, Talasnik tells the story of a fictitious shipwreck that was carried to Philadelphia by the glacier in which it was buried. As global temperatures warmed, the glacier melted and surrendered the ship’s remains, which were discovered by mischievous local children. The archaeological remains and reconstructions are presented in this exhibition, alongside a sculptural representation of the ice floe that carried the ship to its final resting place. Talasnik uses architectural designs to create intricate wood structures from treated basswood. By building a large wooden model to represent the glacier, the artist evokes a shadowy memory of the iceberg and reminds visitors of the sublime power of nature and its constant, often destructive, search for equilibrium.
STEPHEN TALASNIK, A Climate of Risk – Debris Field (detail)
“FLOE emerged from the imagination of Stephen Talasnik, an artist known worldwide for his hand-built structures installed in natural settings,” writes Jennifer-Navva Milliken, executive director and chief curator at the Museum for Art in Wood. “The exhibition is based on a story created by the artist but touches on the realities of climate change, a problem that exposes the vulnerability of the world’s most defenseless populations, including the impoverished, houseless, and stateless. Science helps us understand the impact through data, but the impact to humanity is harder to quantify. Stephen’s work, through his complex storytelling and organic, fragmented sculptures, helps us understand this loss on the human scale.”
For more information about the exhibition and a mobile visitors’ guide, visit the Museum for Art in Wood website.
STEPHEN TALASNIK, Glacier, 2023; Pine stick infrastructure with bamboo flat reed, 12 ft tall with a footprint of 500 sq ft (approx.)
STEPHEN TALASNIK, Leaning Globe, 1998 – 2023; Painted basswood with metallic pigment, 28 x 40 x 22 inches (h x w x d)
STEPHEN TALASNIK, Tunneling, 2007 – 2008; Wood in resin, 4 x 8 x 12 inches
STEPHEN TALASNIK, House of Bones, 2015-2023; Wood and mica; 32 x 24 x 6 inches
AI-Assisted Biodesign
AMY KARLE, BioAI Mycelium Grown Into the Form of Insulators, 2023
Amy Karle is a contemporary artist who uses artificial intelligence as both a medium and a subject in her work. Karle has been deeply engaged with AI, artificial neural networking, machine learning, and generative design since 2015. She poses critical questions about AI, illuminates future visions, and encourages us to actively shape the future we desire.
AMY KARLE, AI Bioforms for Carbon Capture, 2023
AMY KARLE, Cell Forms (AI-assisted design), 2023
AMY KARLE, AI Coral Bioforms, 2023
One of Karle’s projects focuses on how AI can help design and grow biomaterials and biosubstrates, including guiding the growth of mycelium-based materials. Her approach uses AI to identify, design, and develop diverse bioengineered and bioinspired structures and forms and to refine and improve the structure of biomaterials for greater functionality and sustainability. Another project is inspired by the seductive form of corals. Karle’s speculative biomimetic corals leverage AI-assisted biodesign in conjunction with what she terms “computational ecology” to capture, transport, store, and use carbon dioxide. Her goal with this series is to help mitigate carbon dioxide emissions from industrial sources such as power plants and refineries and to clean up highly polluted areas.
AMY KARLE, BioAI-Formed Mycelium, 2023
AMY KARLE, AI Coral Bioforms, 2023
AMY KARLE, Cell Forms (AI-assisted design), 2023
AMY KARLE, BioAI-Formed Mycelium, 2023
Rethinking Engineering Education
We applaud Idalis Villanueva Alarcón’s essay, “How to Build Engineers for Life” (Issues, Fall 2023). As the leaders of an organization that has for 36 years sought to inspire, support, and sustain the next generation of professionals in science, technology, engineering, mathematics, and medicine (STEMM), we support her desire to improve the content and delivery of engineering education. One of us (Fortenberry) has previously commented in Issues (September 13, 2021) on the challenges in this regard.
We agree with her observation that education should place an emphasis on learning how to learn in order to support lifelong learning and an individual’s ability for continual adaptation and reinvention. We believe that in an increasingly technological society there is a need for professionals trained in STEMM to work in a variety of fields. Therefore, there is a need for a significant increase in the number of STEMM professionals graduating from certificate programs, community colleges, baccalaureate programs, and graduate programs. Thus, we agree that basic engineering courses should be pumps and not filters in the production of these future STEMM professionals.
We strongly support the author’s call for “stackable” certificates leading to degrees. The same holds for further increasing the trend toward pushing experiential learning activities (including laboratories, design-build contests, internships, and co-ops) earlier in engineering curricula.
Over the past 40 years, a number of organizations and individuals have worked to greatly improve engineering education. Various industrial leaders, the nongovernmental accrediting group ABET, the National Academies of Sciences, Engineering, and Medicine, the National Science Foundation, and the National Aeronautics and Space Administration, among others, have helped engineering move to a focus on competencies, recognize the urgency of interdisciplinary approaches, and emphasize the utility of situating problem-solving in systems thinking. But much work remains to be done.
Most particularly, significant work remains in engaging underserved populations. And these efforts must begin in the earliest years. The author begins her essay with her own story of being inspired to engineering by her father. We need to reach students whose caregivers and relatives have not had that opportunity. We need to provide exposure and reinforcement through early and sustained hands-on opportunities. We need to ensure that underserved students have opportunities for rigorous STEMM instruction in pre-college education. We need to remove financial barriers to attending high-quality collegiate STEMM programs. And for the precious 5–7% of high school graduates who enter collegiate STEMM majors, we must retain more of them in engineering through baccalaureate graduation than the roughly 50% national average achieved today. We need to ensure that once they enter a STEMM profession, supports are in place for retention and professional advancement. The nation’s current legal environment has caused great concern about our ability to target high-potential individuals from underserved communities for programmatic, financial, professional, and social support activities. We must develop creative solutions that allow us to continue and expand our efforts.
Great Minds in STEM is focused on contributing to the attainment of the needed changes and looks forward to collaborating with others in this effort.
Juan Rivera
Chair of Board of Directors, Great Minds in STEM
Retired Director of Mission 1 Advanced Technologies and Applications Space Systems, Aerospace Systems Sector, Northrop Grumman Corporation
Norman Fortenberry
Chief Executive Officer, Great Minds in STEM
In her essay, Idalis Villanueva Alarcón outlines ways to improve engineering students’ educational experience and outcomes. As leaders of the American Society for Engineering Education, we endorse her suggestions. ASEE is already actively working to strengthen engineering education with many of the strategies the author describes.
As Villanueva explains, a system in which “weeder courses” are used to remove “defective products” from the educational pipeline is both outdated and counterproductive in today’s world. We can do better, and we must. To help improve this system, ASEE is conducting the Weaving In, Not Weeding Out project, sponsored by the National Science Foundation, under the leadership of ASEE’s immediate past president, Jenna Carpenter, and in collaboration with the National Academy of Engineering (NAE). This project is focused on identifying and sharing best practices known to support student success, in order to replace outdated approaches.
Villanueva emphasizes that “barriers are integrated into engineering culture and coursework and grounded in assumptions about how engineering education is supposed to work, who is supposed to take part, and how engineers should behave.” This sentiment is well aligned with ASEE’s Mindset Project, developed in collaboration with NAE and sponsored by the National Science Foundation. Two leaders of this initiative, Sheryl Sorby and Gary Bertoline, reviewed its goals in the Fall 2021 Issues article “Stuck in 1955, Engineering Education Needs a Revolution.”
The project has five primary objectives:
Teach problem solving rather than specific tools
End the “pipeline mindset”
Recognize the humanity of engineering faculty
Emphasize instruction
Make graduate education more fair, accessible, and pragmatic
In addition, the ASEE Faculty Teaching Excellence Task Force, under the leadership of University of Akron’s Donald Visco, has developed a framework to guide professional development in engineering and engineering technology instruction. Conceptualized by educators for educators and also funded by NSF, the framework will enable ASEE recognition for levels of teaching excellence.
We believe these projects are helping transform engineering education for the future, making the field more inclusive, flexible, supportive, and multidisciplinary. Such changes will help bring about Villanueva’s vision, and they will benefit not only engineering students and the profession but also the nation and world.
Jacqueline El-Sayed
Executive Director,
American Society for Engineering Education
Doug Tougaw
2023–2024 President,
American Society for Engineering Education
The compelling insights in Idalis Villanueva Alarcón’s essay deeply resonate with my own convictions about the essence of engineering education and workforce development. She masterfully articulates a vision where engineering transcends its traditional academic confines to embrace an enduring voyage of learning and personal growth. This vision aligns with my philosophy that engineering is a lifelong journey, one that is continually enriched by a diversity of experiences and cultural insights.
The narrative the author shares emphasizes the importance of informal learning, which often takes place outside the classroom and is equally crucial in shaping the engineering mindset. It is a call to action for educational systems to integrate a broader spectrum of knowledge sources, thus embracing the wealth of experiences that individuals bring to the table. This inclusive approach to education is essential for cultivating a dynamic workforce that is innovative, versatile, and responsive to the complex challenges of our time. I propose a call to action for all involved in the engineering education ecosystem to embrace and champion the cultural and experiential wealth that defines our society.
Fostering lifelong learning in engineering must be a collective endeavor that spans the entire arc of an engineer’s career, necessitating a unified effort from every learning partner who influences their journey—from educators instilling the foundations of science and mathematics to mentors guiding seasoned professionals. This collaborative call to action is to actively dismantle the barriers to inclusivity, ensuring that our educational and work cultures not only value but celebrate the diverse “funds of knowledge” each individual brings. By creating platforms where every voice is heard and every experience is valued, we can nurture an engineering profession marked by continual exploration, mutual respect, and a commitment to societal betterment—a profession that is as culturally adept and empathetic as it is technically proficient.
Also central to this partnership is the role of the student as an active participant in their learning journey. Students must be encouraged to take ownership of their continuous development, understanding that the field of engineering is one of perpetual evolution. This empowerment is fostered by learning partners at all life stages instilling in students and professionals the belief that their growth extends beyond formal education and work to include the myriad learning opportunities that life offers.
Inclusive leadership practices and models are the scaffolding that supports this philosophy. Leaders across the spectrum of an engineer’s life—from educators in primary schools to mentors in professional settings—are tasked with creating environments that foster inclusivity and encourage the exchange of ideas. Such leadership is not confined to policymaking; it is embodied in the day-to-day interactions that inspire students and professionals to push the boundaries of their understanding and capabilities.
Finally, we must advocate for frameworks and models that drive systemic change through collaborative leadership. The engineering journey is a tapestry woven from the threads of diverse experiences, continuous learning, and inclusive leadership. Let us, as educators and leaders, learning partners at all levels and stages, commit to empowering engineers to embark on this journey with the confidence and support they need to succeed.
What steps are we willing to take today to ensure that inclusivity and lifelong learning become the enduring legacy we leave for future engineers? Let us pledge to create a future where every engineer is a constant learner, fully equipped to contribute to a world that is richly diverse, constantly evolving, and increasingly interconnected.
Denise R. Simmons
Associate Dean for Workforce Development
Herbert Wertheim College of Engineering
University of Florida
Idalis Villanueva Alarcón calls deserved attention to new initiatives to enhance engineering education, while also reminding us of a failure of the profession to keep up with the changes it keeps causing. Engineering is the dynamic core of the technological changes and innovations that are mass producing a paradoxical societal fallout: glamorous prosperity and psychopolitical disorder. It’s driving us into an engineered world that is, in aggregate, wealthy and powerful beyond the ability to measure or imagine, yet in which a gap between those who call it home and those who struggle to do so ever widens.
Villanueva’s call for the construction of a broader engineering curriculum and lifelong learning is certainly desirable; it is also something we’ve heard many times, with only marginal results. It’s also unclear how much curriculum reform might contribute to the deeper political challenges deriving from the gap between the rich and powerful and those who have been uprooted from destroyed communities. For many people, creative destruction is much more destruction than creation.
Should we nevertheless ask why such a salutary ideal has gotten so little traction? It’s complex, and not all the causes are clear, but it’s hard not to suspect that just as there is a hidden curriculum in the universities that undermines the ideal, there is another in the capitalist economy to which engineering is so largely in thrall. And what are the hidden curricular consequences of not requiring a bachelor’s degree before enrollment in an engineering school, as schools of law and medicine do? If engineering were made a truly professional degree, some of Villanueva’s proposals might not even be necessary.
Carl Mitcham
Professor Emeritus of Humanities, Arts, and Social Sciences
Colorado School of Mines
Idalis Villanueva Alarcón aptly describes the dichotomy within the US engineering education system between the driving need for innovation and an antiquated and disconnected educational process for “producing” engineers. Engineers walk into their fields knowing that what they learn will be obsolete in a matter of years, yet the curricula remain the same. This dissonance, the author notes, stifles passion and perhaps, critically, the very thing that industry and academia are purportedly seeking—innovation and creative problem-solving. This “hidden curriculum” is one of the insidious tools that dehumanize engineering, casting it as no place for those who want to innovate, to help others, and to be connected to a sustainable environment. Enrollments continue to decline nationally—are any of us surprised? Engineering is out of step with the values of US students and the needs of industry.
Parallel to this discussion are data from the latest Business Enterprise Research and Development Survey showing that US businesses spent over $602 billion on research and development in 2021. This was a key driver for many engineering colleges and universities to expand “new” partnerships that were more responsive to developmental and applied research. While many of these businesses were small and medium-size, the majority of the spending came from large corporations with more than 1,000 employees.
Underlying Villanueva’s discussion are classic questions in engineering education: Are we developing innovative thinkers who can problem-solve in engineering? Conversely, are we producing widgets who are paying their tuition, getting their paper, interviewing, getting hired, and logging into a terminal? Assembly lines are not typically for innovative development; they are the hallmarks of product development. No one believes that working with students is a form of assembly line production, yet why does it feel like it is? As access to information increases outside academia, new skills, sources of expertise, and experience arise for students, faculty, and industry to tap. If the fossilization of curricula and behaviors within the academy persists, then other avenues of accessing engineering education will evolve. These may be divergent pathways driven by factors surrounding industry and workforce development.
Villanueva suggests considering a more holistic and integrated approach that seeks to actively engage students’ families and social circles. No one is a stand-alone operation. Engineering needs to account for all of the variables impacting students. I wholeheartedly agree, and would add that by leveraging social capital and clarifying the schema for pathways for students (especially first-generation students), working engineers, educators, and other near peers can help connect the budding engineers to a network of potential support when the courses become challenging or the resources are not obvious. Not only would we begin to build capacity within underrepresented populations, but we also would enable the next-generation workforce to realize their dreams and help provide a community with some basic tools to mentor and support the ones they cherish and want to see succeed.
Monica Castañeda-Kessel
Research Program Manager
Oregon State University
Making Graduate Fellowships More Inclusive
In “Fifty Years of Strategies for Equal Access to Graduate Fellowships” (Issues, Fall 2023), Gisèle Muller-Parker and Jason Bourke suggest that examining the National Science Foundation’s efforts to increase the representation of racially minoritized groups in science, technology, engineering, and mathematics “may offer useful lessons” to administrators at colleges and universities seeking to “broaden access and participation” in the aftermath of the US Supreme Court’s 2023 decision limiting the use of race as a primary factor in student admissions.
Perhaps the most important takeaway from the authors’ analysis—one that also aligns with the court’s decision—is that there are no shortcuts to achieving inclusion. Despite its rejection of race as a category in the admissions process, the court’s decision does not bar universities from considering race on an individualized basis. Chief Justice John Roberts maintained that colleges can, for instance, constitutionally consider a student’s racial identity and race-based experience, be it “discrimination, inspiration or otherwise,” if aligned with a student’s unique abilities and skills, such as “courage, determination” or “leadership”—all of which “must be tied to that student’s unique ability to contribute to the university.” This individualized approach to race implies a more qualitatively focused application and review process.
The NSF experience, as Muller-Parker and Bourke show, also underscores the significance of qualitative application and review processes for achieving more inclusive outcomes. Although fellowship awards to racially minoritized groups declined starting in 1999, when the foundation ended its initial race-targeted fellowships, the awards picked up and even surpassed previous levels of inclusion as the foundation shifted from numeric criteria to a holistic qualitative evaluation and review—for instance, by eliminating summary scores and GRE results and placing more importance on reference letters.
Importantly, the individualized approach to race will place additional burdens on students of color to effectively make their case for how race has uniquely qualified them and made them eligible for admission, and on administrators to reconceptualize, reimagine, and reorganize the admissions process as a whole. Students, particularly from underserved high schools, will need even more institutional help and clearer instructions when writing their college essays, to know how to tie race and their racial experience to their academic eligibility.
In the context of college admissions, enhancing equal access in race-neutral ways will require significant changes in reconceptualizing applicants—as people rather than numbers or categories—and in connecting student access more closely to student participation. This will require significant resources and organizational change: admissions offices’ access goals would need to be closely integrated with the participation goals of other offices such as student life, residence life, and career services, as well as with academic units; and universities would need to regularly conduct campus climate surveys, assessing not just the number of diverse students in the student body but also the quality of their experiences and the ways in which their inclusion enhances the quality of education the university provides.
These holistic measures are easier said than done, especially for smaller teaching-centered or decentralized colleges and universities, and a measurable commitment to diversity will likely remain even patchier across higher education than it is today, given the numerous countervailing forces (political, social, financial) that differentially impact public and private institutions and vary significantly from state to state. However, as Justice Sotomayor wrote in closing her dissenting opinion, “Although the court has stripped almost all uses of race in college admissions…universities can and should continue to use all available tools to meet society’s needs for diversity in education.” The NSF’s story provides some hope that this can be achieved if administrators are able and willing to reimagine (and not just obliterate) racial inclusion as a crucial goal for academic excellence.
Gwendoline Alphonso
Professor of Politics
Cochair, College of Arts and Sciences, Diversity, Equity and Inclusion Committee
Fairfield University
Gisèle Muller-Parker and Jason Bourke’s discussion of what we might learn from the forced closure of the National Science Foundation’s Minority Graduate Fellowship Program and subsequent work to redesign the foundation’s Graduate Research Fellowship Program (GRFP) succinctly illustrates the hard work required to construct programs that identify and equitably promote talent development. As the authors point out, GRFP, established in 1952, has awarded fellowships to more than 70,000 students, paving the way for at least 40 of those fellows to become Nobel laureates and more than 400 to become members of the National Academy of Sciences.
The program provides a $37,000 annual stipend for three years and a $12,000 cost of education allowance with no postgraduate service requirement. It is a phenomenal fellowship, yet the program’s history demonstrates how criteria, processes, and structures can make opportunities disproportionally unavailable to talented persons based on their gender, racial identities, socioeconomic status, and where they were born and lived.
This is the great challenge that education, workforce preparation, and talent development leaders must confront: how to parse concepts of talent and opportunity such that we are able to equitably leverage the whole capacity of the nation. This work must be undertaken now for America to meet its growing workforce demands in science, technology, engineering, mathematics, and medicine—the STEMM fields. This is the only way we will be able to rise to the grandest challenges threatening the world, such as climate change, food and housing instability, and intractable medical conditions.
By and large, institutions of higher education are shamefully underperforming in meeting those challenges. Here, I point to the too-often overlooked and underfunded regional colleges and universities that were barely affected by the US Supreme Court’s recent decision to end the use of race-conscious admissions policies. Most regional institutions, by the nature of their missions and the students they serve, have never used race as a factor in enrollment, and yet they still serve more students from minoritized backgrounds than their Research-1 peers, as demonstrated by research from the Brookings Institution. Higher education leaders must undertake the difficult work of examining the ways in which historical and contemporary bias has created exclusionary structures, processes, and policies that have helped reproduce social inequality instead of increasing access and opportunity for all parts of the nation.
The American Association for the Advancement of Science’s SEA Change initiative cultivates exactly that capacity among an institution’s leaders, enabling them to make data-driven, law-attentive, and people-focused change to meet their institutional goals. Finally, I must note one correction to the authors’ otherwise fantastic article: the Supreme Court’s pivotal decision in Students for Fair Admissions v. Harvard and Students for Fair Admissions v. University of North Carolina did not totally eliminate race and ethnicity as a factor in college admissions. Rather, the decision removed the opportunity for institutions to use race as a “bare consideration” and instead reinforced that a prospective student’s development of specific knowledge, skills, and character traits as they relate to race, along with the student’s other lived experiences, can and should be used in the admissions process.
Travis T. York
Director, Inclusive STEMM Ecosystems for Equity & Diversity
American Association for the Advancement of Science
The US Supreme Court’s 2023 rulings on race and admissions have required universities to closely review their policies and practices for admitting students. While the rulings focused on undergraduate admissions, graduate institutions face distinct challenges as they work to comply with the new legal standards. Notably, graduate education tends to be highly decentralized, representing a variety of program cultures and admissions processes. This variety may lead to uncertainty about legally sound practice and, in some cases, a tendency to overcorrect or to default to standards of academic merit deemed “safe” because they have gone uncontested.
Gisèle Muller-Parker and Jason Bourke propose that examining the history of the National Science Foundation’s Graduate Research Fellowship Program (GRFP) can provide valuable information for university leaders and faculty in science, technology, engineering, and mathematics working to reevaluate graduate admissions. The authors demonstrate the potential impact of admission practices often associated with the type of holistic review that NSF currently uses for selecting its fellows: among them, reducing emphasis on quantitative measures, notably GRE scores and undergraduate GPA, and giving careful consideration to personal experiences and traits associated with success. In 2014, for example, the GRFP replaced a requirement for a “Previous Research” statement, which privileged students with access to traditional research opportunities, with an essay that “allows applicants flexibility in the types of evidence they provide about their backgrounds, scientific ability, and future potential.”
These changes made a real difference in the participation of underrepresented students in the GRFP and made it possible for students from a broader range of educational institutions to have a shot at this prestigious fellowship.
Critics of these changes may say that standards were lowered. But the education community at large must unequivocally challenge this view. There is no compelling evidence to support the idea that traditional criteria for admitting students are the best. Scientists must be prepared to study the customs of their field, examining assumptions (“Are experiences in well-known laboratories the only way to prepare undergraduates for research?”) and asking new questions (“To what extent does a diversity of perspectives and problem-solving strategies affect programs and research?”).
As we look to the future, collecting evidence on the effects of new practices, we will need to give special consideration to the following issues:
First, in introducing new forms of qualitative materials, we must not let bias in the back door. Letters and personal statements need careful consideration, both in their construction and in their evaluation.
Second, we must clearly articulate the ways that diversity and inclusion relate to program goals. The evaluation of personal and academic characteristics is more meaningful, and legally sound, when these criteria are transparent to all.
Finally, we must think beyond the admissions process. In what ways can institutions make diversity, equity, and inclusion integral to their cultures and to the social practices supporting good science?
As the history of the GRFP shows, equity-minded approaches to graduate education bring us closer to finding and supporting what the National Science Board calls the “Missing Millions” in STEM. We must question what we know about academic merit and rigorously test the impact of new practices—on individual students, on program environments, and on the health and integrity of science.
Julia D. Kent
Vice President, Best Practices and Strategic Initiatives
Council of Graduate Schools
Native Voices in STEM
Circular Tables, 2022, digital photograph, 11 x 14 inches.
“Many of the research meetings I have participated in take place at long rectangular tables where the power and primary conversation participants are at one end. I don’t experience this hierarchical power differential in talking circles. Talking circles are democratic and inclusive. There is still a circle at the rectangular table, just a circle that does not include everyone at the table. I find this to be representative of experiences I have had in my STEM discipline, in which it was difficult to find a place in a community or team or in which I did not feel valued or included.”
Native Voices in STEM: An Exhibition of Photographs and Interviews is a collection of photographs and texts created by Native scientists and funded by the National Science Foundation. It grew from a mixed-methods study conducted by researchers from TERC, the University of Georgia, and the American Indian Science and Engineering Society (AISES). According to the exhibition creators, the artworks speak to the photographers’ experiences of “Two-Eyed Seeing,” or the tensions and advantages from braiding together traditional Native and Western ways of knowing. The exhibition was shown at the 2022 AISES National Conference.
Getting the Most From New ARPAs
The Fall 2023 Issues included three articles—“No, We Don’t Need Another ARPA” by John Paschkewitz and Dan Patt, “Building a Culture of Risk-Taking” by Jennifer E. Gerbi, and “How I Learned to Stop Worrying and Love Intelligible Failure” by Adam Russell—discussing several interesting dimensions of new civilian organizations modeled on the Advanced Research Projects Agency at the Department of Defense. One dimension that could use further elucidation starts with the observation that ARPAs are meant to deliver innovative technology to be utilized by some end customer. The stated mission of the original DARPA is to bridge between “fundamental discoveries and their military use.” The mission of ARPA-H, the newest proposed formulation, is to “deliver … health solutions,” presumably to the US population.
When an ARPA is extraordinarily successful, it delivers an entirely new capability that can be adopted by its end customer. For example, DARPA delivered precursor technology (and prototype demonstrations) for stealth aircraft and GPS. Both were very successfully adopted.
Such adoption requires that the new capability coexist or operate within the existing processes, systems, and perhaps even culture of the customer. Understanding the very real constraints on adoption is best achieved when the ARPA organization has accurate insight into specific, high-priority needs, as well as the operations or lifestyle, of the customer. This requires more than expertise in the relevant technology.
DARPA uses several mechanisms to attain that insight: technology-savvy military officers take assignments in DARPA, then return to their military branch; military departments partner, via co-funding, on projects; and often the military evaluates a DARPA prototype to determine effectiveness. These relations with the end customer are facilitated because DARPA is housed in the same department as its military customer, the Department of Defense.
The health and energy ARPAs face a challenge: attaining comparable insight into their end customers. The Department of Health and Human Services does not deliver health solutions to the US population; the medical-industrial complex does. The Department of Energy does not deliver electric power or electrical appliances; the energy utilities and private industry do. ARPA-H and ARPA-E are organizationally removed from those end customers, both businesses (for profit or not) and the citizen consumer.
Technology advancement enables. But critical to innovating an adoptable solution is identification of the right problem, together with a clear understanding of the real-world constraints that will determine adoptability of the solution. Because civilian ARPAs are removed from many end customers, ARPAs would seem to need management processes and organizational structures that increase the probability of producing an adoptable solution from among the many alternative solutions that technology enables.
Anita Jones
Former Director of Defense Research and Engineering
Department of Defense
University Professor Emerita
University of Virginia
Connecting STEM with Social Justice
The United States faces a significant and stubbornly unyielding racialized persistence gap in science, technology, engineering, and mathematics. Nilanjana Dasgupta sums up one needed solution in the title of her article: “To Make Science and Engineering More Diverse, Make Research Socially Relevant” (Issues, Fall 2023).
Among the students who enter college intending to study STEM, persons excluded because of ethnicity or race (PEERs)—a group that includes students identifying as Black, Indigenous, and Latine—have a twofold greater likelihood of leaving these disciplines than do non-PEERs. While we know what the reasons for the racialized gap are not—lack of interest or preparation—we largely don’t know how to close the gap effectively. We know that engaging undergraduates in mentored, authentic scientific research raises their self-efficacy and feeling of belonging. However, effective research experiences are difficult to scale because they require significant investments in mentoring and research infrastructure capacity.
Another intervention is much less expensive and much more scalable. Utility-value interventions (UVIs) provide a remarkably long-lasting positive effect on students. In this approach, over an academic term students in an introductory science course spend a modest amount of class time reflecting and writing about how the scientific topic just introduced is personally related to them and their communities. The UVIs benefit all students, resulting in little or no difference in STEM persistence between PEERs and non-PEERs.
Can we do more? Rather than occasionally interrupting class to allow students to connect a science concept with real-world social needs, can we change the way we present the concept? The UVI inspires a vision of a new STEM curriculum comprising reimagined courses. We might call the result Socially Responsive STEM, or SR-STEM. SR-STEM would be more than distribution or general education requirements, and more than learning science in the context of a liberal arts education. Instead, the overhaul would be the creation of new courses that seamlessly integrate basic science concepts with society and social justice. The courses would encourage students to think critically about the interplay between STEM and non-STEM disciplines such as history, literature, religion, and economics, and explore how STEM affects society.
Here are a few examples from the life sciences; I think similar approaches can be developed for other STEM disciplines. When learning about evolution, students would investigate and discuss the evidence used to create the false polygenesis theory of human races. In genetics, students would evaluate the evidence of epigenetic effects resulting from the environment and poverty. In immunology, students would explore the sociology and politics of vaccine avoidance. The mechanisms of natural phenomena would be discussed from different perspectives, including Indigenous ways of knowing about nature.
Implementing SR-STEM will require a complete overhaul of the learning infrastructure, including instructor preparation, textbooks, Advanced Placement courses, GRE and other standardized exams, and accreditation (e.g., ACS and ABET) criteria. The stories of discoveries we tell in class will change, from the “founding (mostly white and dead) fathers” to contemporary heroes of many identities and from all backgrounds.
It is time to begin a movement in which academic departments, professional societies, and funding organizations build Socially Responsive STEM education so that the connection of STEM to society and social justice is simply what we do.
David J. Asai
Former Senior Director for Science Education
Howard Hughes Medical Institute
To maximize the impact of science, technology, engineering, and mathematics in society, we need to do more than attract a diverse, socially concerned cohort of students to pursue and persist through our academic programs. We need to combine the technical training of these students with social skill building.
To advance sustainability, justice, and resilience goals in the real world (not just through arguments made in consulting reports and journal papers), students need to learn how to earn the respect and trust of communities. In addition to understanding workplace culture, norms, and expectations, and cultivating negotiation skills, they need to know how to research the history, interests, racial, cultural, and equity-related identities, and power imbalances in communities before beginning their work. They need to appreciate the community’s interconnected and, at times, conflicting needs and aspirations. And they need to learn how to communicate and collaborate effectively, to build allies and coalitions, to follow through, and to neither overpromise nor prematurely design the “solution” before fully understanding the problem. They must do all this while staying within the project budget, schedule, and scope—and maintaining high quality in their work.
One of the problems is that many STEM faculty lack these skills themselves. Some may consider the social good implications only after a project has been completed. Others may be so used to a journal paper as the culmination of research that they forget to relay and interpret their technical findings to the groups who could benefit most from them. Though I agree that an increasing number of faculty appear to be motivated by equity and multidisciplinarity in research, translation of research findings into real-world recommendations is much less common. If it happens at all, it frequently oversimplifies key logistical, institutional, cultural, legal, or regulatory factors that made the problem challenging in the first place. Both outcomes greatly limit the social value of STEM research. While faculty in many fields now use problem-based learning to tackle real-world problems in teaching, we are also notorious for attempting to address a generational problem in one semester, then shifting our attention to something else. We request that community members enrich our classrooms by sharing their lived experiences and perspectives with our students without giving much back in return.
Such practices must end if we, as STEM faculty, are to retain our credibility both in the community and with our students, and if we wish to see our graduates embraced by the communities they seek to serve. The formative years of today’s students have unfolded against a backdrop of bad news. If they chose STEM because of a belief that science has answers to these maddening challenges, these students need real evidence that their professional actions will yield tangible and positive outcomes. Just like members of the systematically disadvantaged and marginalized communities they seek to support, these students can easily spot hypocrisy, pretense, greenwashing, and superficiality.
As a socially engaged STEM researcher and teacher, I have learned that I must be prepared to follow through with what I have started—as long as it takes. I prep my students for the complex social dynamics they will encounter, without coddling or micromanaging them. I require that they begin our projects with an overview of the work’s potential practical significance, and that our research methods answer questions that are codeveloped with external partners, who themselves are financially compensated for their time whenever possible. By modeling these best practices, I try to give my students (regardless of their cultural or racial backgrounds) competency not just in STEM, but in applying their work in real contexts.
Franco Montalto
Professor, Department of Civil, Architectural, and Environmental Engineering
Drexel University
Nilanjana Dasgupta’s article inspired reflection on our approach at the Burroughs Wellcome Fund (BWF) to promoting diversity in science nationwide along with supporting science, technology, engineering, and mathematics education specifically in North Carolina. These and other program efforts have reinforced our belief in the power of collaboration and partnership to create change.
For nearly 30 years, BWF has supported organizations across North Carolina that provide hands-on, inquiry-based activities for students outside the traditional classroom day. These programs offer a wide range of STEM experiences for students. Some of the students “tinker,” which we consider a worthwhile way to experience the nuts-and-bolts of research, and others explore more socially relevant experiences. An early example comes from a nonprofit in the city of Jacksonville, located near the state’s eastern coast, where the city converted an old wastewater treatment plant into an environmental education center in which students researched requirements for reintroducing sturgeon and shellfish into the local bay. More than 1,000 students spent their Saturdays learning about environmental science and its application to improve the quality of water in the local watershed. The students engaged their families and communities in a dialogue about environmental awareness, civic responsibility, and local issues of substantial scientific and economic interest.
For our efforts in fostering diversity in science, we have focused primarily on early-career scientists. Our Postdoctoral Diversity Enrichment Program provides professional development support for underrepresented minority postdoctoral fellows. The program places emphasis on a strong mentoring strategy and provides opportunities for the fellows to engage with a growing network of scholars.
Recently, BWF has become active in the Civic Science movement led by the Rita Allen Foundation, which describes civic science as “broad engagement with science and evidence [that] helps to inform solutions to society’s most pressing problems.” This movement is very much in its early stages, but it holds immense possibility to connect STEM to social justice. We have supported fellows in science communication, diversity in science, and the interface of arts and science.
Another of our investments in this space is through the Our Future Is Science initiative, hosted by the Aspen Institute’s Science and Society program. The initiative aims to equip young people to become leaders and innovators in pushing science toward improving the larger society. The program’s goals include sparking curiosity and passion about the connection between science and social justice among youth and young adults who identify as Black, Indigenous, or People of Color, as well as those who have low income or reside in rural communities. Another goal is to accelerate students’ participation in the sciences to equip them to link their interests to tangible educational and career STEM opportunities that may ultimately impact their communities.
This is an area ripe for exploration, and I was pleased to read the author’s amplification of this message. At the Burroughs Wellcome Fund, we welcome the opportunity to collaborate on connecting STEM and social justice work to ignite societal change. As a philanthropic organization, we strive to holistically connect the dots of STEM education, diversity in science, and scientific research.
Louis J. Muglia
President and CEO
Burroughs Wellcome Fund
As someone who works on advancing diversity, equity, and inclusion in science, technology, engineering, and mathematics higher education, I welcome Nilanjana Dasgupta’s pointed recommendation to better connect STEM research with social justice. Gone are the days of the academy being reserved for wealthy, white men to socialize and explore the unknown, largely for their own benefit. Instead, today’s academy should be rooted in addressing the challenges that the whole of society faces, whether that be how to sustain food systems, build more durable infrastructure, or identify cures for heretofore intractable diseases.
Approaching STEM research with social justice in mind is the right thing to do both morally and socially. And our educational environments will be better for it, attracting more diverse and bright minds to science. As Dasgupta demonstrates, research shows that when course content is made relevant to students’ lives, students show increases in interest, motivation, and success—and all these findings are particularly pronounced for students of color.
Despite focused attention on increasing diversity, equity, and inclusion over the past several decades, Black, Indigenous, and Latine students continue to remain underrepresented in STEM disciplines, especially in graduate education and the careers that require such advanced training. In 2020, only 24% of master’s and 16% of doctoral degrees in science and engineering went to Black, Indigenous, and Latine graduates, despite these groups collectively accounting for roughly 37% of the US population aged 18 through 34. Efforts to increase representation have also faced significant setbacks due to the recent Supreme Court ruling on the consideration of race in admissions. However, Dasgupta’s suggestion may be one way we continue to further the nation’s goal of diversifying STEM fields in legally sustainable ways, by centering individuals’ commitments to social justice rather than, say, explicitly considering race or ethnicity in admissions processes.
Moreover, while Dasgupta does well to provide examples of how we might transform STEM education for students, the underlying premise of her article—that connecting STEM to social justice is an underutilized tool—is relevant to several other aspects of academia as well.
For instance, what if universities centered faculty hiring efforts on scholars who are addressing social issues and seeking to make the world a more equitable place, rather than relying on the otherwise standard approach of hiring graduates from prestigious institutions who publish in top-tier journals? The University of California, San Diego, may serve as one such example, having hired 20 STEM faculty over the past three years whose research uses social justice frameworks, including bridging Black studies and STEM. These efforts promote diverse thought and advance institutional missions to serve society.
Science philanthropy is also well poised to prioritize social justice research. At Sloan, we have a portfolio of work that examines critical and under-explored questions related to issues of energy insecurity, distributional equity, and just energy system transitions in the United States. These efforts recognize that many historically marginalized racial and ethnic communities, as well as economically vulnerable communities, are often unable to participate in the societal transition toward low-carbon energy systems due to a variety of financial, social, and technological challenges.
In short, situating STEM in social justice should be the default, not the occasional endeavor.
Tyler Hallmark
Program Associate
Alfred P. Sloan Foundation
Building the Quantum Workforce
In “Inviting Millions Into the Era of Quantum Technologies” (Issues, Fall 2023), Sean Dudley and Marisa Brazil convincingly argue that the lack of a qualified workforce is holding back this field from reaching its promising potential. We at IBM Quantum agree. Without intervention, the nation risks developing useful quantum computing alongside a scarcity of practitioners who are capable of using quantum computers. An IBM Institute for Business Value study found that inadequate skills are the top barrier to enterprises adopting quantum computing. The study identified a small subset of quantum-ready organizations that are talent nurturers with a greater understanding of the quantum skills gap, and that are nearly three times more effective than their peers at workforce development.
Quantum-ready organizations are nearly five times more effective at developing internal quantum skills, nearly twice as effective at attracting talented workers in science, technology, engineering, and mathematics, and nearly three times more effective at running internship programs. At IBM Quantum, we have directly trained more than 400 interns at all levels of higher education and have seen over 8 million learner interactions with Qiskit, including a series of online seminars on using the open-source Qiskit tool kit for useful quantum computing. However, quantum-ready organizations represent only a small fraction of the organizations and industries that need to prepare for the growth of their quantum workforce.
As we enter the era of quantum utility, meaning the ability for quantum computers to solve problems at a scale beyond brute-force classical simulation, we need a focused workforce capable of discovering the problems quantum computing is best-suited to solve. As we move even further toward the age of quantum-centric supercomputing, we will need a larger workforce capable of orchestrating quantum and classical computational resources in order to address domain-specific problems.
Looking to academia, we need more quantum-ready institutions that are effective not only at teaching advanced mathematics, quantum physics, and quantum algorithms, but also at teaching domain-specific skills such as machine learning, chemistry, materials, or optimization, along with how to utilize quantum computing as a tool for scientific discovery.
Critically, it is imperative to invest in talent early on. The data on physics PhDs granted by race and ethnicity in the United States paint a stark picture. Industry cannot wait until students have graduated and are knocking on company doors to begin developing a talent pipeline. IBM Quantum has made a significant investment in the IBM-HBCU Quantum Center through which we collaborate with more than two dozen historically Black colleges and universities to prepare talent for the quantum future.
Academia needs to become more effective in supporting quantum research (including cultivating student contributions) and partnering with industry, in connecting students into internships and career opportunities, and in attracting students into the field of quantum. Quoting Charles Tahan, director of the National Quantum Coordination Office within the White House Office of Science and Technology Policy: “We need to get quantum computing test beds that students can learn in at a thousand schools, not 20 schools.”
Rensselaer Polytechnic Institute and IBM broke ground on the first IBM Quantum System One on a university campus in October 2023. This presents the RPI community with an unprecedented opportunity to learn and conduct research on a system powered by a utility-scale 127-qubit processor capable of tackling problems beyond the capabilities of classical computers. And as lead organizer of the Quantum Collaborative, Arizona State University—using IBM and other industry quantum computing resources—is working with other academic institutions to provide training and educational pathways from high schools and community colleges through undergraduate and graduate studies in the field of quantum.
Our hope is that these actions will prove to be only part of a broader effort to build the quantum workforce that science, industry, and the nation will need in years to come.
Bradley Holt
Program Director, Global Skills Development
IBM Quantum
Sean Dudley and Marisa Brazil advocate for mounting a national workforce development effort to address the growing talent gap in the field. This effort, they argue, should include educating and training a range of learners, including K–12 students, community college students, and workers outside of science and technology fields, such as marketers and designers. As the field will require developers, advocates, and regulators—as well as users—with varying levels of quantum knowledge, the authors’ comprehensive and inclusive approach to building a competitive quantum workforce is refreshing and justified.
At Qubit by Qubit, an initiative founded by The Coding School and one of the largest quantum education programs, we have spent the past four years training over 25,000 K–12 and college students, educators, and members of the workforce in quantum information science and technology (QIST). In collaboration with school districts, community colleges and universities, and companies, we have found great excitement among all these stakeholders for QIST education. However, as Dudley and Brazil note, there is an urgent need for policymakers and funders to act now to turn this collective excitement into action.
The authors posit that the development of a robust quantum workforce will help position the United States as a leader of Quantum 2.0, the next iteration of the quantum revolution. Our work suggests that investing in quantum education will not only benefit the field of QIST, but will result in a much stronger workforce at large. With the interdisciplinary nature of QIST, learners gain exposure and skills in mathematics, computer science, physics, and engineering, among other fields. Thus, even learners who choose not to pursue a career in quantum will have a broad set of highly sought skills that they can apply to another field offering a rewarding future.
With the complexity of quantum technologies, there are a number of challenges in building a diverse quantum workforce. Dudley and Brazil highlight several of these, including the concentration of training programs in highly resourced institutions, and the need to move beyond the current focus on physics and adopt a more interdisciplinary approach. There are several additional challenges that need to be considered and addressed if millions of Americans are to become quantum-literate, including:
Funding efforts have been focused on supporting pilot educational programs instead of scaling already successful programs, meaning that educational opportunities are not widely accessible.
Many educational programs are one-offs that leave students without clear next steps. Because of the complexity of the subject area, learning pathways need to be established for learners to continue developing critical skills.
Diversity, inclusion, and equity efforts have been minimal and will require concerted work between industry, academia, and government.
Historically, the United States has begun conversations around workforce development for emerging and deep technologies too late, and thus has failed to ensure the workforce at large is equipped with the necessary technical knowledge and skills to move these fields forward quickly. We have the opportunity to get it right this time and ensure that the United States is leading the development of responsible quantum technologies.
Kiera Peltz
Executive Director, Qubit by Qubit
Founder and CEO, The Coding School
To create an exceptional quantum workforce and give all Americans a chance to discover the beauty of quantum information science and technology, to contribute meaningfully to the nation’s economic and national security, and to create much-needed bridges with other like-minded nations across the world as a counterbalance to the balkanization of science, we have to change how we are teaching quantum. Even today, five years after the National Quantum Initiative Act became law, the word “entanglement”—the key to the way quantum particles interact that makes quantum computing possible—does not appear in physics courses at many US universities. And there are perhaps only 10 to 20 schools offering quantum engineering education at any level, from undergraduate to graduate. Imagine the howls if this were the case with computer science.
The imminence of quantum technologies has motivated physicists—at least in some places—to reinvent their teaching, listening to and working with their engineering, computer science, materials science, chemistry, and mathematics colleagues to create a new kind of course. In 2020, these early experiments in retooling led to a convening of 500 quantum scientists and engineers to debate undergraduate quantum education. Building on success stories such as the quantum concepts course at Virginia Tech, we laid out a plan, published in IEEE Transactions on Education in 2022, to bridge the gap between the excitement around quantum computing generated in high school and the kind of advanced graduate research in quantum information that is really so astounding. The good news is that, as Virginia Tech showed, quantum information can be taught with pictures and a little algebra to first-year college students. The same is true at the community college level, which means the massive cohort of diverse engineers who start their careers there have a shot at inventing tomorrow’s quantum technologies.
However, there are significant missing pieces. For one, there are almost no community college opportunities to learn quantum anything because such efforts are not funded at any significant level. For another, although we know how to teach the most speculative area of quantum information, namely quantum computing, to engineers, and even to new students, we really don’t know how to do that for quantum sensing, which allows us to do position, navigation, and timing without resorting to our fragile GPS system, and to measure new space-time scales in the brain without MRI, to name two of many applications. It is the most advanced area of quantum information, with successful field tests and products on the market now, yet we are currently implementing quantum engineering courses focused on a quantum computing outcome that may be a decade or more away.
How can we address the dearth of quantum engineers? First, universities and industry can play a major role by working together—and several such collective efforts are showing the way. Arizona State University’s Quantum Collaborative is one such example. The Quantum consortium in Colorado, New Mexico, and Wyoming recently received a preliminary grant from the US Economic Development Administration to help advance both quantum development and education programs, including at community colleges, in their regions. Such efforts should be funded and expanded, and the lessons they provide should be promulgated nationwide. Second, we need to teach engineers what actually works. This means incorporating quantum sensing from the outset in all budding quantum engineering education systems, building on already deployed technologies. And third, we need to recognize that much of the nation’s quantum physics education is badly out of date and start modernizing it, just as we are now modernizing engineering and computer science education with quantum content.
Lincoln D. Carr
Quantum Engineering Program and Department of Physics
Colorado School of Mines
Preparing a skilled workforce for emerging technologies can be challenging. Training moves at the scale of years while technology development can proceed much faster or slower, creating timing issues. Thus, Sean Dudley and Marisa Brazil deserve credit for addressing the difficult topic of preparing a future quantum workforce.
At the heart of these discussions are the current efforts to move beyond Quantum 1.0 technologies that make use of quantum mechanical properties (e.g., lasers, semiconductors, and magnetic resonance imaging) to Quantum 2.0 technologies that more actively manipulate quantum states and effects (e.g., quantum computers and quantum sensors). With this focus on ramping up a skilled workforce, it is useful to pause and look at the underlying assumption that the quantum workforce requires active management.
In their analysis, Dudley and Brazil cite a report by McKinsey & Company, a global management consulting firm, which found that three quantum technology jobs exist for every qualified candidate. While this seems like a major talent shortage, the statistic is less concerning when presented in absolute numbers. Because the field is still small, the difference is less than 600 workers. And the shortage exists only when considering graduates with explicit Quantum 2.0 degrees as qualified potential employees.
McKinsey recommended closing this gap by upskilling graduates in related disciplines. Considering that 600 workers is about 33% of physics PhDs, 2% of electrical engineers, or 1% of mechanical engineers graduated annually in the United States, this seems a reasonable solution. However, employers tend to be rather conservative in their hiring and often ignore otherwise capable applicants who haven’t already demonstrated proficiency in desired skills. Thus, hiring “close-enough” candidates tends to occur only when employers feel substantial pressure to fill positions. Based on anecdotal quantum computing discussions, this probably isn’t happening yet, which suggests employers can still afford to be selective. As Ron Hira notes in “Is There Really a STEM Workforce Shortage?” (Issues, Summer 2022), shortages are best measured by wage growth. And if such price signals exist, one should expect that students and workers will respond accordingly.
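To make the scale of this comparison concrete, here is a minimal back-of-the-envelope sketch (in Python, purely for illustration). The roughly 600-worker shortfall and the 33%, 2%, and 1% shares are the figures cited above; the annual graduation cohorts the script prints are back-calculated from those shares and are approximations, not official statistics.

```python
# Back-calculate the annual US graduation cohorts implied by the shares cited
# above. These are illustrative approximations derived from the letter's
# numbers, not official workforce statistics.

shortfall = 600  # estimated quantum-talent gap, per the McKinsey figure cited

# Share of each annual graduating cohort that ~600 workers would represent
share_of_cohort = {
    "physics PhDs": 0.33,
    "electrical engineering graduates": 0.02,
    "mechanical engineering graduates": 0.01,
}

for field, share in share_of_cohort.items():
    implied_cohort = shortfall / share
    print(f"{field}: ~{implied_cohort:,.0f} per year implied")

# Approximate output:
#   physics PhDs: ~1,818 per year implied
#   electrical engineering graduates: ~30,000 per year implied
#   mechanical engineering graduates: ~60,000 per year implied
```

Seen this way, the gap amounts to a small fraction of the engineers and physicists the country already graduates each year, which is what makes upskilling a plausible remedy.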
If the current quantum workforce shortage is uncertain, the future is even more uncertain. The exact size of the needed future quantum workforce depends on how Quantum 2.0 technologies develop. For example, semiconductors and MRI machines are both mature Quantum 1.0 technologies. The global semiconductor industry is a more than $500 billion business (measured in US dollars), while the global MRI business is about 100 times smaller. If Quantum 2.0 technologies follow the specialized, lab-oriented MRI model, then the workforce requirements could be more modest than many projections. More likely is a mix of market potential where technologies such as quantum sensors, which have many applications and are closer to commercialization, have a larger near-term market while quantum computers remain a complex niche technology for many years. The details are difficult to predict but will dictate workforce needs.
When we assume that rapid expansion of the quantum workforce is essential for preventing an innovation bottleneck, we are left with the common call to actively expand diversity and training opportunities outside of elite institutions—a great idea, but maybe the right answer to the wrong question. And misreading technological trends is not without consequences. Overproducing STEM workers benefits industry and academia, but not necessarily the workers themselves. If we prematurely attempt to put quantum computer labs in every high school and college, we may be setting up less-privileged students to pursue jobs that may not develop, equipped with skills that may not be easily transferred to other fields.
Daniel J. Rozell
Research Professor
Department of Technology and Society
Stony Brook University
An Evolving Need for Trusted Information
In “Informing Decisionmakers in Real Time” (Issues, Fall 2023), Robert Groves, Mary T. Bassett, Emily P. Backes, and Malvern Chiweshe describe how scientific organizations, funders, and researchers came together to provide vital insights in a time of global need. Their actions during the COVID-19 pandemic created new ways for researchers to coordinate with one another and better ways to communicate critical scientific insights to key end users. Collectively, these actions accelerated translations of basic research to life-saving applications.
Examples such as the Societal Experts Action Network (SEAN) that the authors highlight reveal the benefits of a new approach. While at the National Science Foundation, we pitched the initial idea for this project and the name to the National Academies of Sciences, Engineering, and Medicine (NASEM). We were inspired by NASEM’s new research-to-action workflows in biomedicine and saw opportunities for thinking more strategically about how social science could help policymakers and first responders use many kinds of research more effectively.
SEAN’s operational premise is that by building communication channels where end users can describe their situations precisely, researchers can better tailor their translations to the situations. Like NASEM, we did not want to sacrifice rigor in the process. Quality control was essential. Therefore, we designed SEAN to align translations with key properties of the underlying research designs, data, and analysis. The incredible SEAN leadership team that NASEM assembled implemented this plan. They committed to careful inferences about the extent to which key attributes of individual research findings, or collections of research findings, did or did not generalize to end users’ situations. They also committed to conducting real-time evaluations of their effectiveness. With this level of commitment to rigor, to research quality filters, and to evaluations, SEAN produced translations that were rigorous and usable.
There is significant benefit to supporting approaches such as this going forward. To see why, consider that many current academic ecosystems reward the creation of research, its publication in journals, and, in some fields, connections to patents. These are all worthy activities. However, societies sometimes face critical challenges where interdisciplinary collaboration, a commitment to rigor and precision, and an advanced understanding of how key decisionmakers use scientific content are collectively the difference between life and death. Ecosystems that treat journal publications and patents as the final products of research processes will have limited impact in these circumstances. What Groves and coauthors show is the value of designing ecosystems that produce externally meaningful outcomes.
Scientific organizations can do more to place modern science’s methods of measurement and inference squarely in the service of people who can save lives. With structures such as SEAN that more deeply connect researchers to end users, we can incentivize stronger cultures of responsiveness and accountability to thousands of end users. Moreover, when organizations network these quality-control structures, and then motivate researchers to collaborate and share information effectively, socially significant outcomes are easier to stack (we can more easily build on each other’s insights) and scale (we can learn more about which practices generalize across circumstances).
To better serve people across the world, and to respect the public’s sizeable investments in federally funded scientific research, we should seize opportunities to increase the impact and social value of the research that we conduct. New research-to-action workflows offer these opportunities and deserve serious attention in years to come.
Daniel Goroff
Alfred P. Sloan Foundation
Arthur Lupia
University of Michigan
As Robert Groves, Mary T. Bassett, Emily P. Backes, and Malvern Chiweshe describe in their article, the COVID-19 pandemic highlighted the value and importance of connecting social science to on-the-ground decisionmaking and solution-building processes, which require bridging societal sectors, academic fields, communities, and levels of governance. That the National Academies of Sciences, Engineering, and Medicine and public and private funders—including at the local level—created and continue to support the Societal Experts Action Network (SEAN) is encouraging. Still, the authors acknowledge that there is much work needed to normalize and sustain support for ongoing research-practice partnerships of this kind.
In academia, for example, the pandemic provided a rallying point that encouraged cross-sector collaborations, in part by forcing a change to business-as-usual practices and incentivizing social scientists to work on projects perceived to offer limited gains in academic systems, such as tenure processes. Without large-scale reconfiguration of resources and rewards, as the pandemic crisis triggered to some extent, partnerships such as those undertaken by SEAN face numerous barriers. Building trust, fostering shared goals, and implementing new operational practices across diverse participants can be slow and expensive. Fitting these efforts into existing funding is also challenging, as long-term returns may be difficult to measure or articulate. In a post-COVID world, what incentives will remain for researchers and others to pursue necessary work like SEAN’s, spanning boundaries across sectors?
One answer comes from a broader ecosystem of efforts in “civic science,” of which we see SEAN as a part. Proponents of civic science argue that boundary-spanning work is needed in times of crisis as well as peace. In this light, we see a culture shift in which philanthropies, policymakers, community leaders, journalists, educators, and academics recognize that research-practice partnerships must be made routine rather than being exceptional. This culture shift has facilitated our own work as researchers and filmmakers as we explore how research informing filmmaking, and vice versa, might foster pro-democratic outcomes across diverse audiences. For example, how can science films enable holistic science literacy that supports deliberation about science-related issues among conflicted groups?
At first glance, our work may seem distant from SEAN’s policy focus. However, we view communication and storytelling (in non-fiction films particularly) as creating “publics,” or people who realize they share a stake in an issue, often despite some conflicting beliefs, and who enable new possibilities in policy and society. In this way and many others, our work aligns with a growing constellation of participants in the Civic Science Fellows program and a larger collective of collaborators who are bridging sectors and groups to address key challenges in science and society.
As the political philosopher Peter Levine has said, boundary-spanning work enables us to better respond to the civic questions asking “What should we do?” that run through science and broader society. SEAN illustrates how answering such questions cannot be done well—at the level of quality and legitimacy needed—in silos. We therefore strongly support multisector collaborations like those that SEAN and the Civic Science Fellows program model. We also underscore the opportunity and need for sustained cultural and institutional progress across the ecosystem of connections between science and civic society, to reward diverse actors for investing in these efforts despite their scope and uncertainties.
Emily L. Howell
Researcher
Science Communication Lab
Nicole M. Krause
Associate
Morgridge Institute for Research
Ian Cheney
Documentary film director and producer
Wicked Delicate Films
Elliot Kirschner
Executive Producer
Science Communication Lab
Sarah S. Goodwin
Executive Director
Science Communication Lab
I read Robert Groves, Mary T. Bassett, Emily P. Backes, and Malvern Chiweshe's essay with great interest. It is hard to remember the early days of COVID-19, when everyone was desperate for answers and questions popped up daily about what to do and what was right. As a former elected county official and former chair of a local board of health, I valued the welcome I received when I was appointed to the Societal Experts Action Network (SEAN) that the authors highlight. I believe that as a nonacademic, I was able to bring a pragmatic, on-the-ground perspective to the investigations and recommendations.
At the time, local leaders were dealing with a pressing need for scientific information while politics were becoming fraught with dissension and public trust in science was eroding. Under that pressure, the speed at which SEAN operated is hard to overstate—light speed compared with what I viewed as the usual pace of large organizations such as its parent, the National Academies of Sciences, Engineering, and Medicine. SEAN's efforts were nimble and focused, allowing us to collaborate while addressing massive amounts of data.
Now, the key to addressing the evolving need for trusted and reliable information, responsive to the modern world's speed, will be supporting and replicating the work of SEAN. The relationships formed across jurisdictions and institutions will continue to be imperative not only for ensuring academic rigor but also for understanding how to build the bridges of trust needed to support the value of science, to meet the need for resilience, and to provide the wherewithal to progress in the face of constant change.
Linda Langston
President, Langston Strategies Group
Former member of the Linn County, Iowa, Board of Supervisors
Former President, National Association of Counties
Rebuilding Public Trust in Science
As Kevin Finneran noted in “Science Policy in the Spotlight” (Issues, Fall 2023), “In the mid-1950s, 88% of Americans held a favorable attitude toward science.” But the story was even better back then. When the American National Election Study began asking about trust in government in 1958, about three-quarters of people said they trusted the federal government to do the right thing almost always or most of the time (now under one-third and dropping, especially among Generation Z and millennials). Increasing public trust in science is important, but transforming new knowledge into societal impacts at scale will require much more. It will require meaningful public engagement and trust-building across the entire innovation cycle, from research and development to scale-up, commercialization, and successful adoption and use. Public trust in this system can break down at any point—as the COVID-19 pandemic made painfully clear, robbing the world of at least 20 million years of human life.
For over a decade, I had the opportunity to support dozens of focus groups and national surveys exploring public perceptions of scientific developments in areas such as nanotechnology, synthetic biology, cellular agriculture, and gene editing. Each of these exercises provided new insights and an appreciation for the often-maligned public mind. As the physicist Richard Feynman once noted, believing that “the average person is unintelligent” is a very dangerous idea.
The exercises found that, when confronted with the emergence of novel technologies, people were remarkably consistent in their concerns and demands. For instance, there was little support for halting scientific and technological progress, with some noting, “Continue to go forward, but please be careful.” Being careful was often framed around three recurring themes.
First, there was a desire for increased transparency, from both government and businesses. Second, people often asked for more pre-market research and risk assessment. In other words, don’t test new technologies on us—but unfortunately this now seems the default business model for social media and generative artificial intelligence. People voiced valid concerns that long-term risks would be overlooked in the rush to move products into the marketplace, and there was confusion about who exactly was responsible for such assessments, if anybody. Finally, many echoed the need for independent, third-party verification of both the risks and the benefits of new technologies, driven by suspicions of industry self-regulation and decreased trust in government oversight.
Taken as a whole, these public concerns sound reasonable, but remain a heavy lift. There is, unfortunately, very little “public” in the nation’s public policies, and we have entered an era where distrust is the default mode. Given this state of affairs, one should welcome the recent recommendations proposed to the White House by the President’s Council of Advisors on Science and Technology: to “develop public policies that are informed by scientific understanding and community values [creating] a dialogue … with the American people.” The question is whether these efforts go far enough and can occur fast enough to bend the trust curve back before the next pandemic, climate-related catastrophe, financial meltdown, geopolitical crisis, or arrival of artificial general intelligence.
David Rejeski
Visiting Scholar
Environmental Law Institute
Coping in an Era of Disentangled Research
In “An Age of Disentangled Research?” (Issues, Fall 2023), Igor Martins and Sylvia Schwaag Serger raise interesting questions about the changing nature of international cooperation in science and about the engagement of Chinese scientists with researchers in other countries. The authors rightly call attention to the rapid expansion of cooperation as measured in particular by bibliometric analyses. But as they point out, we may be seeing “signs of a potential new era of research in which global science is divided into geopolitical blocs of comparable economic, scientific, and innovative strength.”
While bibliometric data can give us indicators of such a trend, we have to look deeper to fully understand what is happening. Clearly, significant geopolitical forces are at work, generating heightened concerns for national security and, by extension, information security pertaining to scientific research. The fact that many areas of cutting-edge science also have direct implications for economic competitiveness and military capabilities further reinforces the security concerns generated by geopolitical competition and raises barriers to cooperation.
Competition and discord in international scientific activities are certainly not new. Yet forms of cooperation remain, continuing to give science a sense of community and common purpose. That cooperative behavior is often quite subtle and indirect, as a result of multiple modalities of contact and communication. Direct international cooperation among scientists, relations among national and international scientific organizations, the international roles of universities, and the various ways that numerous corporations engage scientists and research centers around the world illustrate the plethora of modes and platforms.
From the point of view of political authorities, devising policies for this mix of modalities is no small challenge. Concerns about maintaining national security often lead to government intrusions into the professional interactions of the scientific community. There are no finer examples of this than the security policy initiatives being implemented in the United States and China, the results of which appear in the bibliometric data presented by the authors. At the same time, we might ask whether scientific communication continues in a variety of other forms, raising hopes that political realities will change. In addition, what should we make of the development of new sites for international cooperation such as the King Abdullah University of Science and Technology in Saudi Arabia and Singapore's emergence as an important international center of research? Further examination of such questions is warranted as we try to understand the trends suggested by Martins and Schwaag Serger.
In addition, there is more to be learned about the underlying norms and motivations that constitute the “cultures” of science, in China and elsewhere. Research integrity, evaluation practices, research ethics, and science-state relations, among other issues, all involve the norms of science and pertain to its governance. In today’s world, that governance clearly involves a fusion of the policies of governments with the cultures of science. As with geopolitical tensions, matters of governance also hold the potential for producing the bifurcated world of international scientific cooperation the authors suggest. At the same time, we are not without evidence that norms diffuse, supporting cooperative behavior.
We are thus at an interesting moment in our efforts to understand international research cooperation. While signs of “disentanglement” are before us, we are also faced with complex patterns of personal and institutional interactions. It is tempting to discuss this moment in terms of the familiar “convergence-divergence” distinction, but such a binary formulation does not do justice to enduring “community” interests among scientists globally, even as government policies and intellectual traditions may make some forms of cooperation difficult.
Richard “Pete” Suttmeier
Professor Emeritus, Political Science
University of Oregon
In Australia, the quality and impact of research is built upon uncommonly high levels of international collaboration. Compared with the global average of almost 25% cited by Igor Martins and Sylvia Schwaag Serger, over 60% of Australian research now involves international collaboration. So the questions the authors raise are essential for the future of Australian universities, research, and innovation.
While there are some early signs of “disentanglement” in Australian research—such as the recent mapping of a decline in collaboration with Chinese partners in projects funded by the Australian Research Council—the overall picture is still one of increasing international engagement. In 2022, Australian researchers coauthored more papers with Chinese colleagues than with American colleagues (but only just). This is the first time in Australian history that our major partner for collaborative research has been a country other than a Western military ally. But the fastest growth in Australia’s international research collaboration over the past decade was actually with India, not China.
At the same time, the connection between research and national and economic security is being drawn more clearly. At a major symposium at the Australian Academy of Science in Canberra in November 2023, Australia’s chief defense scientist talked about a “paradigm shift,” where the definition of excellent science was changing from “working with the best in the world” to “working with the best in the world who share our values.”
Navigating these shifts in global knowledge production, collaboration, and innovation is going to require new strategies and an improved evidence base to inform the decisions of individual researchers, institutions, and governments in real time. Martins and Schwaag Serger are asking critical questions and bringing better data to the table to help us answer them.
As a country with a relatively small population (producing 4% of the world’s published research), Australia has succeeded over recent decades by being an open and multicultural trading nation, with high levels of international engagement, particularly in our Indo-Pacific region.
Increasing geostrategic competition is creating new risks for international research collaboration, and we need to manage these. In Australia in the past few years, universities and government agencies have established a joint task force for collaboration in addressing foreign interference, and there is also increased screening and government review of academic collaborations. But to balance the increased focus on the downsides of international research, we also need better evidence and analysis of the upsides—the benefits that accrue to Australia from being connected to the global cutting edge. While managing risk, we should also be alert to the risk of missing out.
Paul Harris
Executive Director, Innovative Research Universities
Canberra, Australia
The commentary in Igor Martins and Sylvia Schwaag Serger's article is closely in tune with recent reports published by the Policy Institute at King's College London. Most recently, in Stumbling Bear; Soaring Dragon and The China Question Revisited, we drew attention to the extraordinary rising research profile of China, which has disrupted the G7's dominance of the global science network. This is a reality that scientists in other countries cannot ignore, not least because it is only by working with colleagues at the laboratory bench that we develop a proper understanding of the aims, methods, and outcomes of their work. If China is now producing as many highly cited research papers as the United States and the European Union, then attempting to know that work only by reading about it is blind folly.
These considerations need to be set in a context of international collaboration, rising over the past four decades as travel got cheaper and communications improved. In 1980, less than 10% of articles and reviews published in the United Kingdom had an international coauthor; that now approaches 70% and is greatest among the leading research-intensive universities. A similar pattern occurs across the European Union. The United States is somewhat less international, having the challenge of a continent to span domestically. However, a strong, interconnected global network underpins the vast majority of highly cited papers that signal change and innovation. How could climate science, epidemiology, and health management work without such links?
The spread across disciplines is lumpy. Much of the trans-Atlantic research trade is biomedical and molecular biology. The bulk of engagement with China has been in technology and the physical sciences. That is unsurprising since this is where China had historical strength and where Western researchers were more open to new collaborations. Collaboration in social sciences and in humanities is sparse because many priority topics are regional or local. But collaboration is growing in almost every discipline and is shifting from bilateral to multilateral. Constraining this to certain subjects and politically correct partners would be a disaster for global knowledge horizons.
Jonathan Adams
Visiting Professor at the Policy Institute, King’s College London
Chief Scientist at the Institute for Scientific Information, Clarivate
Janet Ilieva
Founder and Director of Education Insight
Jo Johnson
Visiting Professor at the Policy Institute, King’s College London
Former UK Minister of State for Universities, Science, Research and Innovation
Lessons from Ukraine for Civil Engineering
The resilience of Ukraine’s infrastructure in the face of both conventional and cyber warfare, as well as attacks on the knowledge systems that underpin its operations, is no doubt rooted in the country’s history. Ukraine has been living with the prospect of warfare and chaos for over a century. This “normal” appears to have produced an agile and flexible infrastructure system that every day shows impressive capacity to adapt.
In “What Ukraine Can Teach the World About Resilience and Civil Engineering,” Daniel Armanios, Jonas Skovrup Christensen, and Andriy Tymoshenko leverage concepts from sociology to explain how the country is building agility and flexibility into its infrastructure system. They identify key tenets that provide resilience: a shared threat that unites and motivates, informal supply networks, decentralized management, learning from recent crises (namely COVID-19), and modular and distributed systems. Resilience naturally requires coupled social, ecological, and technological systems assessment, recognizing that sustained and expedited adaptation is predicated on complex dynamics that occur within and across these systems. As such, there is much to learn from sociology, but also other disciplines as we unpack what’s at the foundation of these tenets.
Agile and flexible infrastructure systems ultimately produce a repertoire of responses as large as or greater than the variety of conditions produced in their environments. This is known as requisite complexity. Thriving under a shared threat is rooted in the notion that systems can do a lot of innovation at the edge of chaos (complexity theory), if resources including knowledge are available and there is flexibility to reorganize as stability wanes. The informal networks Ukraine has used to source resources exist because formal networks are likely unavailable or unreliable. We often ignore ad hoc networks in stable situations, and even during periods of chaos such as extreme weather events, because the organization is viewed as unable to fail—and so it too often falls back on its siloed and rigid structures, which deal ineffectively with prevailing conditions.
Ukraine didn't have this luxury. Management and leadership science describe how informal networks are more adept at finding balance than are rigid and siloed organizations. Relatedly, the proposition of decentralized management is akin to imbuing those closest to the chaos, who are better attuned to the specifics of what is unfolding, with greater decisionmaking authority—a notion connected to the concept of near decomposability (complexity science). This decentralized model works well during periods of instability, but can lead to inefficiencies during stable times. During rebuilding, you may not want decentralization as you try to efficiently use limited resources.
Lastly, modularity and distributed systems are often touted as resilience solutions, and indeed they can have benefits under the right circumstances. Network science teaches us that decentralization shifts a system from one big producer supplying many consumers (vulnerable to attack) to many small producers supplying many consumers (more resilient). Distributed systems link decentralized and modular assets together so that greater cognition and functionality are achieved. But caution should be used in moving toward purely decentralized systems for resilience, as there are situations where resilience is more closely realized with centralized configurations.
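To make the topology point concrete, here is a minimal sketch (my own illustration, not drawn from the essay or from any Ukrainian grid data; the network parameters are hypothetical) comparing how a centralized hub-and-spoke network and a more distributed network tolerate the loss of their most-connected node:

```python
# Illustrative only: why "many small producers" degrade more gracefully
# than "one big producer" when the most-connected node is knocked out.
import networkx as nx

def surviving_fraction(graph):
    """Remove the highest-degree node and return the share of nodes
    still reachable in the largest remaining connected component."""
    g = graph.copy()
    hub = max(g.degree, key=lambda pair: pair[1])[0]
    g.remove_node(hub)
    largest = max(nx.connected_components(g), key=len)
    return len(largest) / graph.number_of_nodes()

# One big producer feeding 50 consumers (hub-and-spoke).
centralized = nx.star_graph(50)
# Many small, interlinked producers and consumers (hypothetical parameters).
distributed = nx.connected_watts_strogatz_graph(51, k=4, p=0.2, seed=1)

print(surviving_fraction(centralized))  # ~0.02: the network collapses
print(surviving_fraction(distributed))  # ~0.98: the network mostly survives
```

Even this toy comparison supports the authors' caution: the distributed network is far more robust to a targeted attack, but nothing in the sketch captures the coordination costs or stable-times inefficiencies that can make centralized configurations preferable.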
Fundamentally, as the authors note, Ukraine is showing us how to build and operate infrastructure in rapidly changing and chaotic environments. But it is also important to recognize that infrastructure in regions not facing warfare is likely to experience shifts between chaotic (e.g., extreme weather events, cyberattacks, failure due to aging) and stable conditions. This cycling necessitates being able to pivot infrastructure organizations and their technologies between chaos and non-chaos innovation. The capabilities produced from these innovation sets become the cornerstone for agile and flexible infrastructure to respond at pace and scale to known challenges and perhaps, most importantly, to surprise.
Mikhail Chester
Professor of Civil, Environmental, and Sustainable Engineering
Arizona State University
Coauthor, with Braden Allenby, of The Rightful Place of Science: Infrastructure and the Anthropocene
In their essay, Daniel Armanios, Jonas Skovrup Christensen, and Andriy Tymoshenko provide insightful analysis of the Ukraine conflict and how the Ukrainian people are able to manage the crisis. Their recounting reminds me of an expression frequently used in the US Marines: improvise, adapt, and overcome. Having lived and worked for many years in Ukraine, and having returned for multiple visits since the Russian invasion, I am convinced that while the conflict will be long, Ukraine will succeed in the end. The five propositions the authors lay out as the key to success are spot on.
Ukraine’s common goal of bringing its people together (authors’ Proposition 1), along with the Slavic culture and a particular legacy of the Soviet system, combine to form the fundamental core of why the Ukrainian people not only survive but often flourish during times of crisis. Slavic people are, by my observation, tougher and more resilient than the average. Some will call it “grit,” some may call it “stoic”—but make no mistake, a country that has experienced countless invasions, conflicts, famines, and other hardships imbues its people with a special character. It is this character that serves as the cornerstone of their attitude and in the end their response. Unified hard people can endure hard things.
A point to remember is that Ukraine, like most of the former Soviet Union, benefits from a legacy infrastructure based on redundancy and simplicity. This is complementary to the authors’ Proposition 5 (a modular, distributed, and renewable energy infrastructure is more resilient in time of crisis). It was Vladimir Lenin who said, “Communism equals Soviet power plus the electrification of the whole country.” As a consequence, the humblest village in Ukraine has some form of electricity, and given each system’s robust yet simple connection, it is easily repaired when broken. Combine this with distributed generation (be it gensets or wind, solar, or some other type of renewable energy) and you have built-in redundancy.
During Soviet times, everyone needed to develop a “work-around” to source what they sometimes needed or wanted. Waiting for the Soviet state to supply something could take forever, if it ever happened at all. As a consequence, there were microentrepreneurs everywhere who could source, build, or repair just about everything, either for themselves or their neighbors. This system continues to flourish in Ukraine, and the nationalistic sentiment pervading the country makes it easier to recover from infrastructure damages. As the authors point out in Proposition 3, decentralized management allows for a more agile response.
The “lessons learned” from the ongoing conflict, as the authors describe, include, perhaps most importantly, that learning from previous incidents can help develop a viable incident response plan. Such planning, however, should be realistic and focus on the “probable” and not so much on the “possible,” since every situation and plan is resource-constrained to some degree. The weak link in any society is the civilian infrastructure, and failure to ensure redundancy and rapid restoration is not an option. Ukraine is showing the world how it can be accomplished.
Steve Walsh
Supervisory Board Member
Ukrhydroenergo
Ground Truths Are Human Constructions
Artificial intelligence algorithms are human-made, cultural constructs, something I saw first-hand as a scholar and technician embedded with AI teams for 30 months. Among the many concrete practices and materials these algorithms need in order to come into existence are sets of numerical values that enable machine learning. These referential repositories are often called “ground truths,” and when computer scientists construct or use these datasets to design new algorithms and attest to their efficiency, the process is called “ground-truthing.”
Understanding how ground-truthing works can reveal inherent limitations of algorithms—how they enable the spread of false information, pass biased judgments, or otherwise erode society’s agency—and this could also catalyze more thoughtful regulation. As long as ground-truthing remains clouded and abstract, society will struggle to prevent algorithms from causing harm and to optimize algorithms for the greater good.
Ground-truth datasets define AI algorithms’ fundamental goal of reliably predicting and generating a specific output—say, an image with requested specifications that resembles other input, such as web-crawled images. In other words, ground-truth datasets are deliberately constructed. As such, they, along with their resultant algorithms, are limited and arbitrary and bear the sociocultural fingerprints of the teams that made them.
Ground-truth datasets fall into at least two subsets: input data (what the algorithm should process) and output targets (what the algorithm should produce). In supervised machine learning, computer scientists build new algorithms using one part of the output targets annotated by human labelers, then evaluate the resulting algorithms on the remaining part. In the unsupervised (or “self-supervised”) machine learning that underpins most generative AI, output targets are used only to evaluate new algorithms.
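To make that supervised workflow concrete, here is a minimal sketch (my own illustration using a generic scikit-learn example, not anything drawn from the author's fieldwork): the human-annotated output targets are split so that one portion builds the model and the held-out portion attests to its accuracy.

```python
# Illustrative only: a toy ground-truthing workflow for supervised learning.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Input data (digit images) and output targets (human-assigned labels)
# together form the "ground truth" repository.
inputs, targets = load_digits(return_X_y=True)

# One part of the annotated targets is used to build the algorithm...
train_x, eval_x, train_y, eval_y = train_test_split(
    inputs, targets, test_size=0.25, random_state=0
)
model = LogisticRegression(max_iter=2000).fit(train_x, train_y)

# ...and the remaining part attests to its performance against that ground truth.
print(accuracy_score(eval_y, model.predict(eval_x)))
```

The essay's point holds here in miniature: the reported accuracy is only as meaningful as the labels themselves, which were chosen and annotated by people.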
Most production-grade generative AI systems are assemblages of algorithms built from both supervised and self-supervised machine learning. For example, an AI image generator depends on self-supervised diffusion algorithms (which create a new set of data based on a given set) and supervised noise reduction algorithms. In other words, generative AI is thoroughly dependent on ground truths and their socioculturally oriented nature, even if it is often presented—and rightly so—as a significant application of self-supervised learning.
Why does that matter? Much of AI punditry asserts that we live in a post-classification, post-socially constructed world in which computers have free access to “raw data,” which they refine into actionable truth. Yet data are never raw, and consequently actionable truth is never totally objective.
Algorithms do not create so much as retrieve what has already been supplied and defined—albeit repurposed and with varying levels of human intervention. This observation rebuts certain promises around AI and may sound like a disadvantage, but I believe it could instead be an opportunity for social scientists to begin new collaborations with computer scientists. This could take the form of a professional social activity in which people work together to describe the ground-truthing processes that underpin new algorithms, and so help make them more accountable and worthy.
AI Lacks Ethics Checks for Human Experimentation
Following Nazi medical experiments in World War II and outrage over the US Public Health Service’s four-decade-long Tuskegee syphilis study, bioethicists laid out frameworks, such as the 1947 Nuremberg Code and the 1979 Belmont Report, to regulate medical experimentation on human subjects. Today social media—and, increasingly, generative artificial intelligence—are constantly experimenting on human subjects, but without institutional checks to prevent harm.
In fact, over the last two decades, individuals have become so used to being part of large-scale testing that society has essentially been configured to produce human laboratories for AI. Examples include experiments with biometric and payment systems in refugee camps (designed to investigate use cases for blockchain applications), urban living labs where families are offered rent-free housing in exchange for serving as human subjects in a permanent marketing and branding experiment, and a mobile money research and development program where mobile providers offer their African consumers to firms looking to test new biometric and fintech applications. Originally put forward as a simpler way to test applications, the convention of software as “continual beta” rather than more discrete releases has enabled business models that depend on the creation of laboratory populations whose use of the software is observed in real time.
This experimentation on human populations has become normalized, and forms of AI experimentation are touted as a route to economic development. The Digital Europe Programme launched AI testing and experimentation facilities in 2023 to support what the program calls “regulatory sandboxes,” where populations will interact with AI deployments in order to produce information for regulators on harms and benefits. The goal is to allow some forms of real-world testing for smaller tech companies “without undue pressure from industry giants.” It is unclear, however, what can pressure the giants and what constitutes a meaningful sandbox for generative AI; given that it is already being incorporated into the base layers of applications we would be hard-pressed to avoid, the boundaries between the sandbox and the world are unclear.
Generative AI is an extreme case of unregulated experimentation-as-innovation, with no formal mechanism for considering potential harms. These experiments are already producing unforeseen ruptures in professional practice and knowledge: students are using ChatGPT to cheat on exams, and lawyers are filing AI-drafted briefs with fabricated case citations. Generative AI also undermines the public’s grip on the notion of “ground truth” by hallucinating false information in subtle and unpredictable ways.
These two breakdowns constitute an abrupt removal of what philosopher Regina Rini has termed “the epistemic backstop”—that is, the benchmark for considering something real. Generative AI subverts information-seeking practices that professional domains such as law, policy, and medicine rely on; it also corrupts the ability to draw on common truth in public debates. Ironically, that disruption is being classed as success by the developers of such systems, which underscores that this is not an experiment we are conducting but one that is being conducted upon us.
This is problematic from a governance point of view because much of current regulation places the responsibility for AI safety on individuals, whereas in reality they are the subjects of an experiment being conducted across society. The challenge this creates for researchers is to identify the kinds of rupture generative AI can cause and at what scales, and then translate the problem into a regulatory one. Then authorities can formalize and impose accountability, rather than creating diffuse and ill-defined forms of responsibility for individuals. Getting this right will guide how the technology develops and set the risks AI will pose in the medium and longer term.
Much like what happened with biomedical experimentation in the twentieth century, the work of defining boundaries for AI experimentation goes beyond “AI safety” to AI legitimacy, and this is the next frontier of conceptual social scientific work. Sectors, disciplines, and regulatory authorities must work to update the definition of experimentation so that it includes digitally enabled and data-driven forms of testing. It can no longer be assumed that experimentation is a bounded activity with impacts only on a single, visible group of people. Experimentation at scale is frequently invisible to its subjects, but this does not render it any less problematic or absolve regulators from creating ways of scrutinizing and controlling it.
Generative AI Is a Crisis for Copyright Law
Generative artificial intelligence is driving copyright into a crisis. More than a dozen copyright cases about AI were filed in the United States last year—several times the total filed from 2020 to 2022 combined. In early 2023, the US Copyright Office launched the most comprehensive review of the entire copyright system in 50 years, with a focus on generative AI. Simply put, the widespread use of AI is poised to force a substantial reworking of how, where, and to whom copyright should apply.
Starting with the 1710 British statute, “An Act for the Encouragement of Learning,” Anglo-American copyright law has provided a framework around creative production and ownership. Copyright is even embedded in the US Constitution as a tool “to promote the Progress of Science and useful Arts.” Now generative AI is destabilizing the foundational concepts of copyright law as it was originally conceived.
Typical copyright lawsuits focus on a single work and a single unauthorized copy, or “output,” to determine if infringement has occurred. When it comes to the capture of online data to train AI systems, the sheer scale and scope of these datasets overwhelms traditional analysis. The LAION-5B dataset, used to train the AI image generator Stable Diffusion, contains more than 5 billion images and text captions harvested from the internet, while CommonPool (a collection of datasets released by the nonprofit LAION in April 2023 to democratize machine learning) offers 12.8 billion images and captions. Generative AI systems have used datasets like these to produce billions of outputs.
For many artists and designers, this feels like an existential threat. Their work is being used to train AI systems, which can then create images and texts that replicate their artistic style. But to date, no court has considered AI training to be copyright infringement: following the Google Books case in 2015, which assessed scanning books to create a searchable index, US courts are likely to find that training AI systems on copyrighted works is acceptable under the fair use exemption, which allows for limited use of copyrighted works without permission in some cases when the use serves the public interest. It is also permitted in the European Union under the text and data mining exception of EU digital copyright law.
Copyright law has also struggled with authorship by AI systems. Anglo-American law presumes that a work has an “author” somewhere. To encourage human creativity, some authors need the economic incentive of a time-limited monopoly on making, selling, and showing their work. But algorithms don't need incentives. So, according to the US Copyright Office, they aren't entitled to copyright. The same reasoning has applied in other cases involving nonhuman authors, including one in which a macaque took selfies using a nature photographer's camera. Generative AI is the latest in a line of nonhumans deemed unfit to hold copyright.
Nor are human prompters likely to have copyrights in AI-generated work. The algorithms and neural net architectures behind generative AI produce outputs that are inherently unpredictable, and any human prompter has less control over a creation than the model does.
Where does this leave us? For the moment, in limbo. The billions of works produced by generative AI are unowned and can be used anywhere, by anyone, for any purpose. Whether a ChatGPT novella or a Stable Diffusion artwork, output now exists as unclaimable content in the commercial workings of copyright itself. This is a radical moment in creative production: a stream of works without any legally recognizable author.
There is an equivalent crisis in proving copyright infringement. Historically, this has been easy, but when a generative AI system produces infringing content, be it an image of Mickey Mouse or Pikachu, courts will struggle with the question of who is initiating the copying. The AI researchers who gathered the training dataset? The company that trained the model? The user who prompted the model? It’s unclear where agency and accountability lie, so how can courts order an appropriate remedy?
Copyright law was developed by eighteenth-century capitalists to intertwine art with commerce. In the twenty-first century, it is being used by technology companies to allow them to exploit all the works of human creativity that are digitized and online. But the destabilization around generative AI is also an opportunity for a more radical reassessment of the social, legal, and cultural frameworks underpinning creative production.
What expectations of consent, credit, or compensation should human creators have going forward, when their online work is routinely incorporated into training sets? What happens when humans make works using generative AI that cannot have copyright protection? And how does our understanding of the value of human creativity change when it is increasingly mediated by technology, be it the pen, paintbrush, Photoshop, or DALL-E?
It may be time to develop concepts of intellectual property with a stronger focus on equity and creativity as opposed to economic incentives for media corporations. We are seeing early prototypes emerge from the recent collective bargaining agreements for writers, actors, and directors, many of whom lack copyrights but are nonetheless at the creative core of filmmaking. The lessons we learn from them could set a powerful precedent for how to pluralize intellectual property. Making a better world will require a deeper philosophical engagement with what it is to create, who has a say in how creations can be used, and who should profit.
Science Lessons from an Old Coin
In “What a Coin From 1792 Reveals About America’s Scientific Enterprise” (Issues, Fall 2023), Michael M. Crow, Nicole K. Mayberry, and Derrick M. Anderson make an adroit analogy between the origins of the Birch Cent and the two sides of the nation’s research endeavors, namely democracy and science. The noise and seeming dysfunction in the way science is adjudicated and revealed is, they say, a feature and not a bug.
I agree. I have written extensively about how scientists should embrace their humanity. That means we express emotions when we are ignored by policymakers, we have strong convictions and therefore are subject to motivated reasoning, and we make both intentional and inadvertent errors. Efforts to curb this humanity have all failed. We are not going to silence those who are passionate about science—nor should we. Why would someone study climate change unless they are passionate about the fact that it’s an existential crisis? We want and need that passion to drive effort and creativity. Does this make scientists outspoken and subject to—at least initially—looking for evidence that supports their passion? Of course. And does that same humanity mean that errors can appear in scientific papers that were missed by the authors, editors, and reviewers? Also yes.
There's a solution to this that also embraces the messy and glorious vision presented by Crow et al. And that is not to quell scientists' passion and humanity, but rather to better explain and demonstrate that science operates within a system that ultimately corrects for human frailty. This requires better explaining the fact that scientists are competitive—another human trait—and that this competitiveness leads to arguments about data and papers that converge on the right answer, even when motivated reasoning may have been there to start with. It also requires courageous and forthright correction of the scientific record when errors have been made for any reason. Science is seriously falling short on this right now. The correction and retraction of scientific papers have become far too contentious—often publicly so—and stigma is associated with these actions. This stigma arises from the perception that all errors are due to deliberate misconduct, even when journals are explicit that correction of the record does not imply fraud.
This must change. The public must experience—and perceive—that science is honorably self-correcting. That will require hard changes in scientists’ attitude and execution when concerns are raised about published papers. But fixing this is going to be a lot easier than lowering the noise level. And as the authors point out, that noise is a feature, not a bug, and therefore should be celebrated.
H. Holden Thorp
Editor-in-Chief of Science
Professor of Chemistry and Medicine
George Washington University
In their engaging article, Michael M. Crow, Nicole K. Mayberry, and Derrick M. Anderson rightly point to the centrality of science in US history—and to how much “centrality” has meant entanglement in controversy, not clarity of purpose.
The motto on the Birch Cent, “Liberty, Parent of Science and Industry,” brings out the importance of freedom of inquiry. This is not readily separable from freedom of interpretation and even freedom to disregard. The authors quote the slogan “follow the science,” which attempts to counter recent waves of distrust and denial. But while science may inform policy, it doesn't dictate it. Liberty also signals the importance of political debate over whether and how to follow science.
In 1792, science was largely a small-scale craft enterprise. Over time, universities, corporations, government agencies, and markets all became crucial. A science and technology system developed, greatly increasing support for science but also shaping which possible advances in knowledge were pursued. Potential usefulness was privileged, as were certain sectors, such as defense and health, and potential for profit. Different levels of risk and “disutility” were tolerated. The patent system developed not only to share useful knowledge but, as Crow and his coauthors emphasize, to secure private property rights. All this complicated and limited the liberty of scientific inquiry.
Comparing the United States to the United Kingdom, the authors sensibly emphasize the contrast of egalitarian to aristocratic norms. But the United States was not purely egalitarian. The Constitution is full of protections for inequality and protections from too much equality. Conversely, UK science was not all aristocratic nor entirely top-down and managed. Though the Royal Society secured formal recognition under King Charles II, its origins lay in the midst of (and were influenced by) the English Civil War. Bottom-up self-organization among scientists was important. Most were elite, but not all statist. And the same went for a range of other self-organized groups, such as Birmingham's Lunar Men, who shared a common interest in experiment and invention. These groups joined in creating “invisible colleges” that contributed to state power but were not controlled by it. Even more basically, perhaps, the authors' contrast of egalitarian to aristocratic norms implies a contrast of common men to elites that obscures the rising industrial middle class. It was no accident that the Lunar Men were in the English Midlands.
Crow and his coauthors correctly stress that neither scientific knowledge nor technological innovation has simply progressed along a linear path. In both the United States and the United Kingdom, science and technology developed in a dialectical relationship between centralization and decentralization, large-scale and small, elite domination and democratic opportunities. Universities, scientific societies, and indeed business corporations all cut both ways. They were upscaling and centralizing compared with autonomous, local craft workshops. They worked partly for honor and partly for profit. But they also formed intermediate associations in relation to the state and brought dynamism to local communities and regions. Universities joined science productively to critical and humanistic inquiry. Liberty remained the parent of science and industry because institutional supports remained diverse, allowing for creativity, debate, and exploration of different possible futures. There are lessons here for today.
Craig Calhoun
University Professor of Social Sciences
Arizona State University
Securing Semiconductor Supply Chains
Global supply chains, particularly in technologies of strategic value, are undergoing a remarkable reevaluation as geopolitical events weigh on the minds of decisionmakers across government and industry. The rise of an aggressive and revisionist China, a devastating global pandemic, and the rapid churn of technological advancement are among the factors prompting a dramatic rethinking of the value of lean, globally distributed supply chains.
These complex supply networks evolved over several decades of relative geopolitical stability to capture the efficiency gains of specialization and trade on a global scale. Yet in today’s world, efficiency must be recast in terms of reliable and resilient supply chains better adapted to geopolitical uncertainties rather than purely on the basis of lowest cost.
Indeed, nations worldwide have belatedly discovered a crippling lack of redundancy in supply chains necessary to produce and distribute products essential to their economies and welfare, including such diverse goods as vaccines and medical supplies, semiconductors and other electronic components, and the wide variety of technologies reliant on semiconductors. A drive to “rewire” these networks must balance the manifest advantages of globally connected innovation and production with the need for improved national and regional resiliency. This would include more investment in traditional technologies—for example, a more robust regional electrical grid in Texas, whose failure contributed to the supply disruption of automotive chips that Abigail Berger, Hassan Khan, Andrew Schrank, and Erica R. H. Fuchs describe in “A New Policy Toolbox for Semiconductor Supply Chains” (Issues, Summer 2023).
Of course, given its globalized operations, the semiconductor industry is at the forefront of these challenges. In particular, there is a need to distribute risks of single-point failures, such as those found in the global concentration of semiconductor manufacturing in East Asia. Taiwan and South Korea, which together account for roughly half of global semiconductor fabrication capacity, sit astride major geopolitical and geological fault lines, with the dangers of the latter often underestimated.
Recent investments to renew semiconductor manufacturing capacity in the United States are a key element of this rewiring. Through the CHIPS for America Act of 2021, lawmakers have authorized $52 billion to support restoring US capacity in advanced chip manufacturing, with $39 billion in subsidies for the construction of fabrication plants, or “fabs,” backed by substantial tax credits, and roughly $12 billion for related advanced chip research and development initiatives.
Berger and her colleagues argue cogently that it may also be possible to design greater resiliency directly into semiconductor chips. In some cases, greater standardization in chip architecture may allow some chips to be built at multiple fabs, reducing “foundry lock-in.” Such gains will depend on trusted networks among multiple firms as well as governments of US allies and strategic partners—although sorting the practical realities of commercial and national competition in a rapidly innovating industry that marches to the cadence of Moore’s Law will be challenging. The authors rightly point out that focusing on distinct market segments with similar use cases may offer win-win opportunities, but these, too, will require incentives to drive cooperation.
It is clear that global supply chains need a greater level of resiliency, not least through greater geographic dispersion of production across the supply chain. But whether that resiliency is generated by greater standardization, stronger trusted relationships, or the redistribution of assets, the continued national economic security of the United States and its allies depends on a comprehensive, cooperative, and steady implementation of this rewiring. The authors propose a novel approach that should be pursued, but the broader rewiring will not happen quickly or easily. We still need to move forward with ongoing incentives for industry, more cooperative relationships, and major new investments in talent. We are not done. We need to think of semiconductors as we think of nuclear energy: an enterprise requiring sustained and substantial commitments of funds and policy attention.
Sujai Shivakumar
Senior Fellow and Director, Renewing American Innovation
Center for Strategic and International Studies
Time for an Engineering Biennale
Guru Madhavan’s “Creative Intolerance” (Issues, Summer 2023) is exceptionally rich in compelling metaphors and potent messages. I am enthusiastic about the idea of an engineering biennale to showcase innovations and provoke discussions about specific projects and the methods of engineering.
The Venice Biennale Architettura 2023 that the author highlights, which impressed me with its scale and diversity, provides an excellent model for an engineering biennale. Could the US Department of Energy Solar Decathlon be scaled up? Could the Design Museum in London play a role? Maybe multiple design showcases—such as those at the University of British Columbia; the Jacobs Institute for Design Innovation at the University of California, Berkeley; or the MIT Design Lab—could grow into something larger?
Such support for design could expand the role of the National Academies of Sciences, Engineering, and Medicine by building bridges with an increasing number of researchers in this field, ultimately leading to a National Academy of Design.