Training More Biosafety Officers

The United States has long claimed that there is a need to focus on the safety and security of biological research and engineering, but we are only beginning to see that call turn into high-level action on funding and support for biosafety and biosecurity governance. The CHIPS and Science Act, for example, calls for the White House Office of Science and Technology Policy to support “research and other activities related to the safety and security implications of engineering biology,” and for the office’s interagency committee to develop and update every five years a strategic plan for “applied biorisk management.” The committee is further charged with evaluating “existing biosecurity governance policies, guidance, and directives for the purposes of creating an adaptable, evidence-based framework to respond to emerging biosecurity challenges created by advances in engineering biology.”

To carry out this mouthful of assignments, more people need to be trained in biosafety and biosecurity. But what does good training look like? Moreover, what forms of knowledge should be incorporated into an adaptable, evidence-based framework?

In “The Making of a Biosafety Officer” (Issues, Spring 2023), David Gillum shows the power and importance of tacit knowledge—“picked up here and there, both situationally and systemically”—in the practice of biosafety governance, while at the same time stressing the need to formalize biosafety education and training. This is due, in part, to the lack of places where people can receive formal training in biosafety. But it is also a recognition of, as Gillum puts it, the type of knowledge biosafety needs—knowledge “at the junction between rules, human behavior, facilities, and microbes.”

The present lack of formalized biosafety education and training presents an opportunity to re-create what it means to be a biosafety officer as well as to redefine what biosafety and biosecurity are within a broader research infrastructure and societal context. This opening, in turn, should be pursued in tandem with agenda-setting for research on the social aspects of biosafety and biosecurity. It is increasingly unrealistic to base a biosafety system primarily on lists of known concerns and standardized practices for laboratory management. Instead, adaptive frameworks are needed that are responsive to the role that tacit knowledge plays in ensuring biosafety practices and are aligned with current advances in bioengineering and the organizational and social dynamics within which it is done.


Biosafety and biosecurity expertise today means attending to the formal requirements of policies and regulations while also generating new knowledge about the gaps in those requirements and developing a keen sense of the workings of a particular institution. The challenge for both training and agenda-setting is how to endorse, disseminate, and assimilate the tacit knowledge generated by biosafety officers’ real-life experiences. For students and policymakers alike, a textbook introduction to biosafety’s methodological standards, fundamental concepts, and specific items of concern will surely come about as biosafety research becomes more codified. But even as some aspects of tacit knowledge become more explicit, routinized, and standardized, the emergence of new and ever-valuable tacit knowledge will always remain a key part of biosafety expertise and experience.

Gillum’s vivid examples of real-life experiences involving anthrax exposures, the organizational peculiarities of information technology infrastructures, and the rollout of regulations governing select agents demonstrate that, at a basic level, biosafety officers and those with whom they work need to be attuned to adaptability, uncertainty, and contingency in specific situations. Cultivating this required mode of attunement among future biosafety professionals means embracing the fact that biosafety, like science itself, is a constantly evolving social practice, embedded within particular institutional and political frameworks. It follows that formal biosafety educational programs must not reduce what counts as “biosafety basics” to technical know-how alone, but ought to prioritize situational awareness and adaptability as part of their pedagogy. Biosafety and biosecurity research such as that envisioned in the CHIPS and Science Act will advance the training and work of the next generation of biosafety professionals only if it recognizes this key facet of biosafety.

Biosecurity Postdoctoral Fellow in the Center for International Security & Cooperation

Stanford University

Senior Research Fellow in the Program on Science, Technology, and Society

Harvard Kennedy School

David Gillum illustrates the importance of codifying and transferring knowledge that biosafety professionals learn on the job. It is certainly true that not every biosafety incident can be anticipated, and that biosafety professionals must be prepared to draw on their knowledge, experience, and professional judgment to handle situations as they arise. But it is also true that as empirical evidence of laboratory hazards and their appropriate mitigations accumulates, means should be developed by which this evidence is analyzed, aggregated, and shared.

There will always be lessons that can only be learned the hard way—but they shouldn’t be learned the hard way more than once. There is a strong argument for codifying and institutionalizing these biosafety “lessons learned” through means such as formalized training or certification. Not only will that improve the practice of biosafety, but it will also help convince researchers—a population particularly sensitive to the need for empirical evidence and logical reasoning as the basis for action—that the concerns raised by biosafety professionals need to be taken seriously.


This challenge would be significant enough if the only potential hazards from research in the life sciences flowed from accidents—human error or system malfunction—or from incomplete understanding of the consequences of research activities. But the problem is worse than that. Biosecurity, as contrasted with biosafety, deals with threats posed by those who would deliberately apply methods, materials, or knowledge from life science research for harm. Unfortunately, when it comes to those who might pose deliberate biological threats, we cannot exclude researchers or even biosafety professionals themselves. As a result, the case for codifying and sharing potential biosecurity failures and vulnerabilities is much more fraught than it is for biosafety: the audience might include the very individuals who are the source of the problem—people who might utilize the scenarios that are being shared, or who might even modify their plans once they learn how others seek to thwart them. In setting up a registry or database by which lessons learned could be compiled and shared, one confronts the paradox of creating the Journal of Results Too Dangerous to Publish. Dealing with such so-called information hazards is one factor differentiating biosafety from biosecurity. Often, however, we call upon the same experts to deal with both.

Personal relationships do not immunize against such insider threats, as we learn every time the capture of a spy prompts expressions of shock from coworkers or friends who could not imagine that the person they knew was secretly living a vastly different life. However, informal networks of trust and personal relationships are likely a better basis on which to share sensitive biosecurity information than relying on mutual membership in the same profession or professional society. So while there is little downside to learning how to better institutionalize, codify, and share the tacit knowledge and experience with biosafety that Gillum describes so well, it will always be more difficult to do so in a biosecurity context.

Contributing Scholar

Johns Hopkins Center for Health Security

Enhancing Trust in Science

In “Enhancing Trust in Science and Democracy in an Age of Misinformation” (Issues, Spring 2023), Marcia McNutt and Michael M. Crow encourage the scientific community to “embrace its vital role in producing and disseminating knowledge in democratic societies.” We fully agree with this recommendation. To maximize success in this endeavor, we believe the public dialogue on trust in science must become less coarse so that we can better identify which elements of science can be trusted: science as a process, particular studies, specific actors or entities, or still finer distinctions.

At the foundation of trust in science is trust in the scientific method, without which no other trust can be merited, warranted, or justified. The scientific community must strive to ensure that the scientific process is understood and accepted before we can hope to merit trust at more refined levels. Although trust in the premise that following the scientific method will lead to logical and evidence-based conclusions is essential, blanket trust in any component of the scientific method would be counterproductive. Instead, trust in science at all levels should be justified through rigor, reproducibility, robustness, and transparency. Scientific integrity is an essential precursor to trust.

At the study level, for example, trust might be partially warranted through documentation of careful study execution, valid measurement, and sound experimental design. At the journal level, trust might be partially justified by enforcing preregistration or data and code sharing. In the case of large scientific or regulatory bodies, these institutions must merit trust by defining and communicating both the evidence on which they base their recommendations and the standards of evidence they are using.


Recognizing that trust can be merited at one point of the scientific process (e.g., a study and its findings have been reported accurately) without being merited at another (e.g., the findings represent the issue in question) is essential to understanding how to develop specific recommendations for conveying trustworthiness at each point. Therefore, efforts to improve trust in science should include the development of specific and actionable advice for increasing trust in science as a process of learning; individual scientific experiments; certain individual scientists; large, organized institutions of science; the scientific community as a whole; particular findings and interpretations; and scientific reporting.

However, as McNutt and Crow note, “It may be unrealistic to expect that scientists … probe the mysteries of, say, how nano particles behave, as well as communicate what their research means.” Hence, a major challenge facing the scientific community is developing detailed methods to help scientists better communicate with, and warrant the trust of, the general public. The current dialogue surrounding trust must therefore identify both specific trust points and clear actions that can be taken at each point to indicate, and possibly increase, the extent to which trust is merited.

We believe the scientific community will rise to meet this challenge, offering techniques that signal the degree of credibility merited by key elements and steps in the scientific process and earning the public trust.

Dean

Distinguished Professor

Provost Professor

Indiana University School of Public Health, Bloomington

Associate Professor of Biostatistics, Department of Epidemiology and Biostatistics

Indiana University School of Public Health, Bloomington

In times of great crisis, a country needs inspiring leaders and courageous ideas. Marcia McNutt and Michael M. Crow offer examples of both. Recognizing the urgency of our moment, they propose several innovative strategies for increasing access to research-grade knowledge.

Their attention to increasing the effectiveness of science communication is important. While efforts to improve science communication can strengthen trust in science, positive outcomes are not assured. One challenge is that many people and organizations see science communication as a way to draw more attention to their people and ideas. While good can come from pursuing attention, such pursuits can also amplify the challenges posed by misinformation and disinformation. These inadvertent outcomes occur when the pursuit of attention comes at the expense of the characteristics that make science credible in the first place.

Consider, for example, what major media companies know: sensationalism draws viewers and readers. For them, sensationalism works best when a presentation builds from a phenomenon that people recognize as true and then exaggerates it to fuel interest in “what happens next” (e.g., the plot of most superhero movies or the framework for many cable news programs).


In science, several communication practices are akin to sensationalism. Science communicators who suppress null results or engage in p-hacking (running many analyses and reporting only those that cross the threshold of statistical significance) can gain attention by increasing the probability of getting published in a scientific journal. Similarly, science communicators who exaggerate the generalizability of a finding or suppress information about methodological limitations may receive greater media coverage. Practices such as these can generate attention while producing misleading outcomes that reduce the public’s understanding of science.
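To make the mechanism concrete, here is a minimal simulation (an illustrative sketch with arbitrary parameters, not drawn from the letter or from any particular study): when a study with no real effect measures many independent outcomes and reports whichever one clears the conventional 5% significance threshold, most such studies will still appear to “find” something.

```python
import random

random.seed(1)

def null_study(n_outcomes=20, n_per_group=30):
    """One study with NO real effect: two identical groups, many outcomes.
    Returns True if any outcome looks 'significant' at the nominal 5% level."""
    for _ in range(n_outcomes):
        a = [random.gauss(0, 1) for _ in range(n_per_group)]
        b = [random.gauss(0, 1) for _ in range(n_per_group)]
        mean_diff = sum(a) / n_per_group - sum(b) / n_per_group
        se = (2 / n_per_group) ** 0.5      # std. error of the difference (unit variance)
        if abs(mean_diff / se) > 1.96:     # two-sided z-test at the 5% threshold
            return True                    # a "publishable" result found by chance
    return False

trials = 2000
hits = sum(null_study() for _ in range(trials))
print(f"Apparent 'finding' in {hits / trials:.0%} of effect-free studies")
# Expected: roughly 1 - 0.95**20, i.e., about 64% of null studies yield
# at least one nominally significant result when 20 outcomes are tested.
```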

A better way forward is to see the main goal of science communication as a form of service that increases accurate understanding. Adopting this orientation means that a communicator’s primary motivation is something other than gaining attention, influence, or prestige. Instead, the communicator’s goal is to treat the audience with so much reverence and respect that she or he will do everything possible to produce the clearest possible understanding of the topic.

Of course, many scientific topics are complex. A service-oriented approach to communication requires taking the time to learn about how people respond to different presentations of a phenomenon—and measuring which presentations produce the most accurate understandings. Fortunately, the emerging field of the science of science communication makes this type of activity increasingly easy to conduct.

Among the many brilliant elements of the McNutt-Crow essay are the ways their respective organizations have embraced service-oriented ideas. Each has broken with long-standing traditions about how science is communicated. Arizona State, through its revolutionary transformation into a highly accessible national university, and the National Academies of Sciences, Engineering, and Medicine through their innovations in responsibly communicating science, offer exemplars of how trust in science can be built. These inspiring leaders and their courageous ideas recognize the urgency of our moment and offer strong frameworks from which to build.

Gerald R. Ford Distinguished University Professor

Associate Vice President for Research, Large Scale Strategies

Executive Director, Bold Challenges

University of Michigan

Some years ago, I conducted a content analysis of five of the leading American history textbooks sold in the United States. The premise of the study was that most young people get more information about the history of science and pathbreaking discoveries in their history courses than in the typical secondary school course in chemistry or physics. I wanted to compare the extent and depth of coverage of great science compared with the coverage of politics, historical events, and the arts, among other topics.

The results were somewhat surprising. First, there was almost no coverage of science at all in these texts. Second, the only topic that received more than cursory attention was the development of the atomic bomb. Third, in comparative terms, the singer Madonna received more coverage in these texts than did Watson and Crick’s discovery of the structure of DNA. In short, science was all but invisible.

When I asked authors why they did not include more about science, their answers were straightforward. As one put it: “Science doesn’t sell, according to my publisher,” and “Frankly, I don’t know enough about science myself to write with confidence about it.”

This brings me to Marcia McNutt and Michael M. Crow’s important essay on producing greater public trust in science as well as some higher level of scientific and technological literacy. Trust is a hard thing to regain once it is lost. McNutt and Crow suggest significant ways to improve public trust in science. I would expand a bit further on their playbook.

Probably 30% of the American population know little to nothing about science and have no desire to be educated about it or the discoveries that have changed their lives. They are lost. But a majority are believers in science and technology. As universities become multidisciplinary and, increasingly, institutions without borders, we must harness the abilities and knowledge that exist within these houses of intellect—and expertise beyond academic walls—to make the case for science as the greatest driver of American productivity and improved health care that we have.

When you survey people about science, you are apt to get more negative responses to very general questions than if you ask them to assess specific products and discoveries by scientists. The group Research!America consistently finds that the vast majority of US citizens approve of spending more federal money on scientific research. They applaud the development of the laser, of the gene-editing tool CRISPR, of computer chips, and of the vaccines derived from RNA research.


As McNutt and Crow suggest, it is now time to create a truly multidisciplinary effort to transfer knowledge from the laboratory to the public. A few scientists have the gift for translating their work in ways that lead to accurate and deeper public understanding of their scientific research and discoveries. But the vast majority do not. That can’t be their job. Here is where we need the talent and expertise of humanists, historians, ethicists, artists, and leading technology experts outside of the academy, as well as the producers of stories, films, and devices that ought to be used for learning. As part of this movement toward interdisciplinarity, new academic foci on science and technology ought to be fostered inside our universities.

The congressional hearings centered on the events of January 6 offer an excellent example of collaboration between legislators and Hollywood producers. The product was a coherent story that could be easily understood. We should teach scientists to communicate well with the communicators, and to help make complex ideas both accurate and understandable to the public. There are many scientists who can do this—and a few who can tell their own stories. This suggests the importance of training excellent science reporters and interlocutors who can evaluate scientific results and translate those discoveries into interesting reading for the attentive public. These science writers need additional training in assessing the quality of research so that they don’t publish stories based on weak science that leads to misinformation—such as the tiny, flawed studies, presented to the public as fact, that led to false beliefs about the causes of autism or the effects of dietary cholesterol on heart disease.

We should be looking especially toward educating the young. The future of science and technology lies with their enthusiasms and beliefs. That enthusiasm for learning about women’s and minority health, about global climate change, and about finding cures and preventions for disease lies ultimately with their knowledge and action. The total-immersion learning at Arizona State University is an excellent prototype of what is possible. Now those educational, total-immersion models—so easily understood by the young—should be developed and used in all the nation’s secondary schools. We can bypass the politically influenced textbook industry by working directly with secondary schools and even more directly with young people, who can use new technology better than their elders.

We have witnessed a growth in autobiographies by scientists, especially by women and members of minority groups. More scientists should tell their stories to the public. We also need gifted authors, such as Walter Isaacson or, before him, Walter Sullivan and Stephen Jay Gould, telling the stories of extraordinary scientists and their discoveries. Finally, we should be more willing to advertise ourselves. We have an important story to tell and work to be done. We should unabashedly tell those stories through organized efforts by the National Academies (such as their biographies of women scientists), research universities, and well-trained expositors of science. Through these mechanisms we can build a much improved public understanding of science and technology and the trust that understanding will bring.

John Mitchell Mason Professor of the University

Provost and Dean of Faculties (1989–2003)

Columbia University

CHIPS and Science Opens a Door for Society

In August 2022, President Biden signed the CHIPS and Science Act into law, a bill my colleagues and I passed to ensure US leadership in semiconductor development and innovation across a multitude of sectors. The law secured historic authorizations for American manufacturing, for supporting our workforce in science, technology, engineering, and mathematics (STEM), and for bolstering the nation’s research competitiveness in emerging technologies. A year later, Congress must find the political will to fund the science component of the act, while ensuring these investments are socially and ethically responsible for all Americans.

In recent decades, emerging technologies have been quickly perfected and have rapidly proliferated, transforming our economy and society. Powerful forces are now overwhelmingly at our fingertips, either through mass production or the digital superhighway brought on by fiber optics. What we know today about various materials and energy uses differs dramatically from when we were first harnessing the capabilities of plastics, tool and die making, and the combustion engine. Are we capable of learning from a past when we could not see as clearly into the future as we can today? How can we create a structure to adjust or more ethically adapt to changing environments and weigh social standards for implementing new technology?

Today, we see that many emerging technologies will continue to have profound impacts on the lives of American citizens. Technologies such as artificial intelligence and synthetic biology hold tremendous promise, but they also carry tremendous risks. AI and quantum cryptography, for example, will drastically influence the privacy of the average internet user. These are known risks that we can take steps to mitigate, including through legislation such as a bill I authored, the Privacy Enhancing Technology Research Act. There is also a universe of unknown risks. But even in those cases we have tools and expertise to think through what those risks might be and how to assign value to them.

The ethical and societal considerations in CHIPS and Science were designed to empower scientists and engineers to consider the ethical, social, safety, and security implications of their research throughout its lifecycle, potentially mitigating any harms before they happen. And where researchers lack the tools or knowledge to consider these risks on their own, they might turn to professional ethicists or consensus guidelines within their disciplines for help.


Incorporating these considerations into our federal agencies’ research design and review processes is consistent with the American approach of scientific self-governance. The enacted scientific legislation plays to the strengths of our policymaking in that we entrust researchers to use their intellectual autonomy to create technological solutions for the potential ethical and societal challenges of their work and give them the freedom to pursue new research directions altogether.

In prioritizing STEM diversity in our law, our intent was not only to ensure representation in the fields developing and applying the globe-shaping technologies of the future, but also to put value on the notion that American science can be more culturally just and equitable. This occurs when diverse voices are in the research lab and at the commercialization table.

Seeing the CHIPS and Science Act fully funded remains one of my top priorities. New and emerging technologies, such as AI, quantum computing, and engineering biology, have a vast potential to re-shore American manufacturing, create sustainable supply chains, and bring powerful benefits to all Americans everywhere. However, these societal and ethical benefits cannot be realized if we are not also intentional in considering the societal context for these investments. If we do not lead with our values, other countries whose values we may not share will step in to fill the void. It is time for us to revitalize federal support for all kinds of research and development—including social and ethical initiatives—that have long made the United States a beacon of excellence in science and innovation.

Michigan, 11th District

Ranking Member of the Committee on Science, Space, and Technology’s Subcommittee on Research and Technology

As David H. Guston intimates, the CHIPS and Science Act presents a new opportunity for the National Science Foundation to take another important step in fostering the social aspects of science. The act can also champion existing and emerging efforts focused on understanding how social science can deeply inform and shape the entire scientific enterprise.


By describing the historical arc of the evolving place of social science at NSF, Guston illustrates how the unbalanced, syncopated, and often arrhythmic dance between NSF and social science did not necessarily benefit either. I am optimistic about what specific sections of the CHIPS and Science Act directly require, tacitly imply, and conceptually allude to. The history of NSF is replete with examples of scientific research that fundamentally altered the way humans interact, communicate, and live in a shared world. Contemporary issues—in such diverse areas as rising climate variability and the place of artificial intelligence in our everyday interactions—demand a substantive increase in the support for social science. This research is critically necessary to understand the social impacts of our changing environment and technological systems, and how to design and develop solutions and pathways that equitably center humanity. As the world keeps showing us, we are on the cusp of a new moment. This new moment needs to be driven by social science and social scientists in concert with natural and physical scientists. I use the term concert, with its reference to artistic and sonic creative collaborations, deliberately to evoke a different framework of collaborative and interdisciplinary effort. Part of the solution is to always remember that science is a human endeavor.


In thinking about the place of social science in the next evolution of interdisciplinary research, I believe the cupcake metaphor is instructive. As a child of the 1970s, I remember the cupcake was a birthday celebration staple. I really liked the cake part but was greatly indifferent to the frosting or sprinkles. If I had to choose, I would always select the cupcake with sprinkles for one reason: they were easy to knock off. In the production of science, social scientists can often feel like sprinkles on a cupcake: not essential. Social science is not the egg, the flour, or the sugar. Sprinkles are neither in the batter, nor do they see the oven. Sprinkles are a late addition. No matter the stylistic or aesthetic impact, they never alter the substance of the “cake” in the cupcake. Certain provisions of the CHIPS and Science Act hope to chart a pathway for scientific research that makes social science a key component of the scientific batter, baking social scientific knowledge, skill, and expertise into twenty-first-century scientific “cupcakes.”

Professor in Communication Studies and the Medill School of Journalism

Northwestern University

Former Division Director, Social and Economic Sciences

National Science Foundation

David H. Guston expertly describes how provisions written into the ambitious CHIPS and Science Act could make ethical and societal considerations a primary factor in the National Science Foundation’s grantmaking priorities, thereby transforming science and innovation policy for generations to come.

Of particular interest, Guston makes reference to public interest technology (PIT), a growing movement of practitioners in academia, civil society, government, and the private sector to build practices to design, deploy, and govern technology to advance the public interest. Here, I extend his analysis by applying core concepts from PIT that have been articulated and operationalized by the Public Interest Technology University Network (PIT-UN), a 64-member network of universities and colleges that I oversee as director of public interest technology for New America. (Guston is a founding member of PIT-UN and has led several efforts to establish and institutionalize PIT at Arizona State University and in the academic community more broadly.)

As Guston describes, the CHIPS and Science Act “expand[s] congressional expectations of more integrated, upstream attention to ethical and societal considerations” in NSF’s process for awarding funds. This is undoubtedly a step in the right direction. However, operationalizing the concept of “ethical and societal considerations” requires that we get specific about who researchers must include in their process of articulating foreseeable risks and building partnerships to “mitigate risk and amplify societal benefit.”


Public interest technology asserts that the needs and concerns of people most vulnerable to technological harm must be integrated into the process of designing, deploying, and governing technology. While existing methods to assess ethical and societal considerations of technology such as impact evaluations or user-centered design can be beneficial, they often fail to adequately incorporate the needs and concerns of marginalized and underserved communities that have been systematically shut out of stakeholder conversations. Without a clear understanding of how specific communities have been excluded from technology throughout US history—and a shared analysis of how those communities are continually exploited or made vulnerable to the negative impacts of technology—we run the risk of not only repeating the injustices of the past, but also embedding biases and harmful assumptions into emerging technologies. Frameworks and insights from interdisciplinary PIT scholars such as Ruha Benjamin, Cathy O’Neil, Meredith Broussard, and Afua Bruce that map relationships between technology and power structures must inform NSF’s policymaking if the funds made available through the CHIPS and Science Act are to effectively address ethical and societal considerations.

Furthermore, a robust operationalization of these considerations will require a continual push to develop and extend community partnerships in a way that expands our notion of the public. Who should be included in the definition of “the public”? Does it include under-resourced small businesses and nonprofits? People who are vulnerable to tech abuse? People living on the front lines of climate change? In advancing this broader understanding of the public, strategic partnerships beyond the academy become essential, including cooperation with research and advocacy organizations focused on the ethical issues within emerging technologies and artificial intelligence, such as the Distributed Artificial Intelligence Research Institute, the Algorithmic Justice League, the Center for AI and Digital Policy, the Electronic Frontier Foundation, and the OECD AI Policy Observatory, among others.

Guston points to participatory technology assessments undertaken through the NSF-funded Center for Nanotechnology in Society at Arizona State University as an example of how to engage the public in understanding and mitigating technological risks. Universities and other NSF-funded institutions must invest more in these kinds of community partnerships to regularly challenge and update our understanding of “the public,” to ensure that technological outputs are truly reflective of the voices, perspectives, and needs of the public as a whole, not only those of policymakers, academics, philanthropists, and technology executives.

Director of Public Interest Technology at New America and the Public Interest Technology University Network


These comments draw in part from recommendations crafted by PIT-UN scholars in response to NSF’s request for information on “Developing a Roadmap for the Directorate for Technology, Innovation, and Partnerships.”

The most important word in David H. Guston’s article addressing the societal considerations of the CHIPS and Science Act occurs in the first sentence: “promised.” For scholars and practitioners of science and technology policy, the law has created genuine excitement. This is a dynamic moment, where new practices are being envisioned and new institutions are being established to link scientific research more strongly and directly with societal outcomes.

Many factors will need to converge to realize the promise that Guston describes. One crucial, yet often overlooked, set of contributors in this changing ecosystem of science and technology policy is science philanthropies. Science philanthropy has played a key role in the formation and evolution of the current research enterprise, and these funders are especially well positioned to actualize the kind of use-inspired, societally oriented scholarship that Guston emphasizes. How can science philanthropy assist in achieving these goals? I see three fruitful areas of investigation.

The first is experimenting with alternative approaches to funding. Increasingly, funders from both philanthropy and government are experimenting with different ways of financing scientific research to respond rapidly to scientific and societal needs. Some foundations have explored randomizing grant awards to address the inherent biases of peer review. New institutional arrangements, called Focused Research Organizations, have been established outside of universities to undertake applied, use-inspired research aimed at solving critical challenges related to health and climate change. There is the capacity for science philanthropies to do even more. For instance, participatory grantmaking is emerging as a complementary approach to allocating funds, in which the expected community beneficiaries of a program have a direct say in which awards are made. While this approach has yet to be directly applied to science funding, such alternative decisionmaking processes offer opportunities to place societal implications front and center.


The second is making connections and filling knowledge gaps across disciplines and sectors. Interdisciplinary research is notoriously difficult to fund through conventional federal grantmaking programs. Science philanthropies, because of the wide latitude they have in designing and structuring their programs, are uniquely situated to sponsor such scholarship. As an example, the Energy and Environment program that I oversee at the Alfred P. Sloan Foundation is focused on advancing interdisciplinary social science and bringing together different perspectives and methodologies to ask and answer central questions about energy system decarbonization. The program supports interdisciplinary research topics such as examining the societal dimensions of carbon dioxide removal technologies, a project in which Guston is directly involved; highlighting the factors that are vital in accelerating the electrification of the energy system; and concentrating on the local, place-based challenges of realizing a just, equitable energy transition. Additional investments from science philanthropies can expand and extend interdisciplinary scholarship across all domains of inquiry.

The third is learning through iteration and evaluation. Guston traces the historical context of how societal concerns have always been present in federal science funding, even if their role has been obscured or marginalized. Science philanthropies can play a pivotal role in resourcing efforts to better understand the historical origins and subsequent evolution of the field of science and technology policy. For this reason, the Sloan Foundation recently funded a series of historically oriented research projects that will illuminate important developments related to the practices and institutions of the scientific enterprise. Further, science philanthropies should do more to encourage retrospective evaluation and impact assessment to inform how society is served by publicly and privately funded research. To that end, over the past three years I have helped to lead the Measurement, Evaluation, and Learning Special Interest Group of the Science Philanthropy Alliance, a forum for alliance members to come together and learn from one another about different approaches and perspectives on program monitoring and evaluation. As Guston writes, there is much promise in the CHIPS and Science Act. Science philanthropies will be essential partners to achieve its full potential.

Program Director

Alfred P. Sloan Foundation

We agree with David Guston’s assertion that the CHIPS and Science Act of 2022, which established the National Science Foundation’s Directorate for Technology, Innovation, and Partnerships, presents a significant opportunity to increase the public benefits from—and minimize the adverse effects of—US research investments.

We also agree that the TIP directorate’s focus on public engagement in research is promising for amplifying scientific impact. Our experiences leading the Transforming Evidence Funders Network, a global group of funders interested in increasing the societal impact of research, are consistent with a recent NSF report, which states that engaged research “conducted via meaningful collaboration among scientist and nonscientist actors explicitly recognizes that scientific expertise alone is not always sufficient to pose effective research questions, enable new discoveries, and rapidly translate scientific discoveries to address society’s grand challenges.”

We have also found that engaged research could be an essential strategy for identifying, anticipating, and integrating into science the “ethical and societal considerations” mentioned in the CHIPS and Science Act. The NSF-funded centers for nanotechnology in society provide an illustrative example through their development and normalization of participatory technology assessment. As one center notes on its website, the centers use engagement and other tactics to build capacity for collaboration among researchers and the public, allowing the groups to work together to “guide the path of nanotechnology knowledge and innovation toward more socially desirable outcomes and away from undesirable ones.”


But to ensure that engaged research can deliver on the potential these collaborative methods hold, we argue for an expansion of funding for rigorous studies that address questions about when engagement and other strategies are effective for improving the relevance and use of research for societal needs—and who benefits (and who doesn’t) from these strategies. Such studies will increase our understanding of the conditions that enable engagement and other tactics to deliver their intended impacts. For example, scholarship shows that allowing sufficient time for relationship-building between researchers and decisionmakers is important for unlocking the potential of engaged research. Findings from such studies could, and should, shape future research investments aimed at improving societal outcomes.

Efforts to expand understanding in this area—Guston calls these efforts “socio-technical integration research”—include studies on the use of research evidence, science and technology studies, decision science, and implementation science, among several other areas. But so far, this body of research has been relatively siloed and has inconsistently informed research investments. The CHIPS and Science Act may help spur research investments in this important area with its requirement that NSF “make awards to improve our understanding of the impacts of federally funded research on society, the economy, and the workforce.” And the NSF’s TIP directorate provides a helpful precedent for funding studies that develop an understanding of when, and under what conditions, research drives change in decision-making and when (and for whom) research improves outcomes. But much must still be done to meet the need.

The CHIPS and Science Act and the TIP directorate present an important opportunity to scale research efforts that better reflect societal and ethical considerations. To support progress in this area, we have begun coordinating grantmakers in the Transforming Evidence Funders Network to build evidence about the essential elements of success for work at the intersection of science and society. We invite funders to connect with us to make use of the opportunity presented by these shifts in federal science funding and to join us as we build knowledge about how to maximize the societal benefits—and minimize the adverse effects—of research investments.

Project Director

The Pew Charitable Trusts Evidence Project

Principal Associate

The Pew Charitable Trusts Evidence Project

Adding Humanity to Anatomy Lessons

In “When Our Medical Students Learn Anatomy, They See a Person, Not a Specimen” (Issues, Spring 2023), Guo-Fang Tseng provides a wake-up call to treat anatomy as a humanistic as well as a scientific discipline. This is not new, as a move in a humanistic direction has been evident for some years and across a variety of countries and cultures. However, the Silent Mentor Program that Tseng describes goes considerably further than is generally found elsewhere, with far more involvement of family members at every stage.

The Silent Mentor Program is conducted within a Buddhist culture. Should this be normalized and viewed as the ideal practice for those in different societies with varying religious or cultural perspectives? Among the arguments in favor: the practices have led to major increases in body donations within these communities, and they have enhanced the humanity and empathy of clinicians.

To gain further insight, my colleague Mike R. King and I conducted a study to explore why, in most academic settings in the Western world, cadavers in the dissecting rooms of anatomy departments are routinely stripped of their identity. This has meant that medical and other health science students have been provided with limited, if any, information on the identities or medical histories of those they are dissecting. The study, published in Anatomical Sciences Education in 2017, identified four ways that the cadavers were treated: total anonymization; nonidentification, low information; nonidentification, moderate information; and identification, full information. We concluded that at the heart of the debate are the altruism of the donors and the integrity of those responsible for the donors’ bodies.


We further concluded that if potentially identifying information adds value to anatomical education, it should be provided. But other values also enter the picture, namely, the views of the donors and their families. What if the families do not wish to go down this road? This demonstrates that the direction outlined for the Silent Mentor Program depends upon full acceptance by all parties involved, with the families’ views being uppermost.

Then there are the students. It is unlikely that in a pluralist society all will want as much personal information about the body as possible. Thus, a balance must be struck between the students’ emotional or psychological reactions and the pedagogical value of the information.

The situation is more complicated in some societies, where certain ethnic or cultural groups oppose the donation of bodies on cultural grounds, so that students belonging to these groups must overcome an antipathy to the process of dissection. For them, identification of the bodies would likely be a step too far.

While the Silent Mentor Program is situated in a Buddhist society, it does not represent all Buddhist perspectives. For instance, donation programs in Sri Lanka have been the norm for many years, with Buddhist monks giving blessings for the afterlife of the deceased person in the deceased’s home prior to the cadaver being transferred to a local university anatomy department. After receipt of the cadaver, all identification marks are removed, thereby maintaining the anonymity of the deceased. The relatives have no further contact with the remains. Following dissection, Buddhist ceremonies are conducted by monks, placing the whole process of donation and dissection within a Buddhist context, with participation by students and family members. This represents a variation on the Silent Mentor Program, encouraging altruism and involving the family in some aspects of the process of teaching anatomy within their own Buddhist context. This demonstrates that more than one model may serve to achieve humanistic ends.

Department of Anatomy

University of Otago

Dunedin, New Zealand

The first systematic dissection of the human body has been attributed to the ancient Greek anatomist Herophilus, who lived from about 335 BC to 280 BC. Unfortunately, Herophilus was ultimately accused of performing vivisection on human beings. Dissection then ceased after his time and recommenced only in the mid-sixteenth century. As dissection began to play a prominent role in the learning of the human body, the growing shortage of cadavers resulted in body snatching from graves and even the commission of murders, leading to the enactment in the United Kingdom of the Anatomy Act of 1832, which also served to regulate human body donation.

Cadaver-based dissection of human bodies to learn human anatomy has now become a cornerstone of the curriculum of many medical schools. As students actively explore the body, they are able to perceive the spatial relationships of the different structures and organs, as well as appreciate anatomical variations of various body structures. More recently, alternative methods—including the use of 3D visualization technologies such as augmented reality, virtual reality, and mixed reality—have been increasingly utilized, especially during the COVID-19 pandemic, in the light of limited access to anatomy laboratories and the implementation of safe distancing measures.

In his essay, Guo-Fang Tseng elegantly highlights the humanistic approach to the teaching and learning of human anatomy through bodies willed to the Tzu Chi University’s School of Medicine, in what is called the Silent Mentor Program. What are usually termed cadavers are now accorded the status of Silent Mentors. Indeed, while these altruistic individuals may no longer be able to speak, their donated bodies are still used to impart the intricacies of human anatomy. Students are constantly reminded to treat their Silent Mentors with the utmost dignity, respect, and gratitude.


The Tzu Chi program is a unique human body donation program where students and residents not only learn how their Silent Mentors had lived while they were still in this world, but also have close interactions with their Silent Mentors’ families. At the end of the gross anatomy dissection course, students place all the organs and tissues back into their Silent Mentors’ bodies, suture the skin together, and dress their Silent Mentors in formal clothes. The students then join the family members in sending the bodies to the crematorium, followed by a gratitude ceremony where there is sharing and reflection by both the students and family.

Thus far, the Silent Mentor Program has served as a salient example to the anatomy and medical community of how the approach taken to understand the individual donor could enhance the humanity of doctors in training. Having had the privilege of attending a Tzu Chi Surgical Silent Mentor Simulation Workshop, I was able to witness firsthand the indescribably touching ceremony; it is certainly no exaggeration to say there was not a dry eye in the house. This vivid experience has remained firmly etched in my mind.

A critical reason why the Tzu Chi Silent Mentor Program is highly successful and is being emulated by other medical schools is that it has a hardworking team that truly believes in the humanistic approach to the learning of human anatomy, undergirded by unwavering support from the university administration and from Dharma Master Cheng Yen, the founder of the Tzu Chi Foundation. Guo-Fang Tseng himself leads by example, and his lifetime dedication to the program is aptly reflected in his intention to deliver his last anatomy lessons as a Silent Mentor.

Professor, Department of Anatomy

Yong Loo Lin School of Medicine

National University of Singapore

In his article (Issues, Spring 2023), Guo-Fang Tseng describes the Silent Mentor Program at Tzu Chi University’s School of Medicine, where medical students get to know the body they will dissect by meeting the deceased’s family. Tseng writes that there are notable, concrete outcomes to this approach, but he thinks the program’s effects “are much more profound: it enhances the humanity of clinicians and those they serve.”

I would like to highlight the high cost of not cultivating empathy and humanity in medical professionals. Consider pregnancy and birth, for example. As many as 33% of women report negative or traumatic birth experiences. Estimates of the prevalence of postpartum depression and post-traumatic stress disorder range from 5% to 25%. Medical spending in the year following birth is also substantially higher among women experiencing postpartum depression.


The causes of postpartum PTSD and depression are complex. However, dissatisfaction with social support, lack of control, and mistreatment by medical staff are reasons that rise to the top in studies on the issue—reasons that directly relate to lack of empathy.

In 2007, I visited a Viennese medical museum and saw an eighteenth-century wax anatomical model—a woman with her abdomen dissected and a fetus inside. At the time, I identified with the Enlightenment anatomists who made the model because, as a former biology student, I had enjoyed dissecting animals. But later that same year, I gave birth for the first time and became one of the many women who left the hospital with a healthy baby and a troubled mind. I suddenly saw myself reflected in the wax model herself, and I felt heartbreakingly linked to the centuries-long anatomical tradition of disconnecting body and mind. In the operating room, I was reduced to an anonymous body on a table, and my mind suffered for it.

I have spent years trying to understand what happened to me, and it took me a long time to realize that empathy was a critical missing component. I can’t help but wonder, what if a medical professional had looked me in the eye during or immediately following my ordeal and truly acknowledged all that had happened? I feel certain that regardless of the physiological complications I experienced, being treated as a whole human would have greatly lessened my struggle.

Tseng’s article about the Silent Mentor Program brought tears to my eyes. Empathy can and should be taught, even with a dead body. Authentic human connection during health care interactions is not a nice-to-have—it is a critical requirement that helps us make meaning from our medical experiences.

Writer

Ithaca, New York

Navigating Interdisciplinary Careers

In “Finding the ‘I’ in Interdisciplinarity” (Issues, Spring 2023), Annie Y. Patrick raises important challenges for both interdisciplinary research—an oft-cited, rarely achieved aim in contemporary scholarship—and qualitative research more broadly. Many norms of traditional inquiry implicitly encourage the separation of the researcher from the research, a condition that Patrick compellingly argues against. The received wisdom is that researchers should leave their backgrounds, traditional or otherwise, “at the door.” Hers is a necessary critique of bracketing—where researchers consider what assumptions they bring to a research endeavor and then set them aside for the purposes of conducting the research and analyzing the phenomenon—and of its implications.

As an interdisciplinary researcher myself, I know from experience that explicitly sharing points of commonality and difference within diverse teams is essential for the conduct of fulfilling research. After all, researchers are people first. What I find especially powerful in Patrick’s essay is the insistence on the human element of social science research for both the researcher and the researched. As she writes, “they were not simply informants or categories of data, but actual humans.” Why might Patrick be intimidated by the engineering faculty at Virginia Tech? She has seen patients and their families at their absolute lowest and quickly earned their trust and care. The faculty are only human, too.


Similarly, I see her work explaining the real-life challenges of the student experience to faculty as reminding them that students are human, too, and have a whole host of embodied needs and experiences outside of classroom performance. Implicitly, Patrick calls the academy to task for treating humans with impersonal language such as “informants,” and she encourages researchers to claim the backgrounds that inform our research and, hopefully, our groundwork as well.

I find Patrick’s call to action through groundwork to be a useful corrective. “When I saw something going wrong,” she writes, “my every professional instinct was to intervene.” As researchers, if we see something truly wrong and harmful taking place, shouldn’t we intervene? Her essay also reminded me of the gendered histories of both engineering and nursing. Though the two professions are historically associated with men and women, respectively, both share an emphasis on weed-out culture, and how that culture interacts with gender deserves further consideration. For these and other reasons, I appreciate this powerful and thought-provoking essay and its lessons very much.

Incoming Assistant Professor, Higher Education Studies & Leadership

Baylor University

Annie Y. Patrick makes astute observations about the challenges of interdisciplinary research. She describes “feeling out of place” and having to come to grips with unfamiliar jargon and disciplinary assumptions. These are experiences that will no doubt resonate with many researchers who work in interdisciplinary contexts. She also takes the brave step of sharing a mindset shift she went through during her PhD—one that went from viewing expertise as being about depth in a single domain and eschewing “non-scholarly” experience, to drawing on the full complement of her work and life experiences.

Patrick encourages researchers “to embrace their whole selves” in pursuit of interdisciplinarity. This kind of mindset is not commonly discussed as a critical ingredient for interdisciplinarity, but it should be. Embracing one’s whole self involves recognizing the importance of experiences beyond academia, as well as the multiple hats many of us wear—as colleagues, family members, and members of our broader communities. For interdisciplinary collaborations to really work, members of the team need to be valued for what each brings to the table. Taking time to appreciate the richness of our own experiences hopefully opens us up to appreciate those of others too, and, indeed, to find shared ground beyond our academic silos. Caring for the individuals in an interdisciplinary collaboration—not just the subject matter—is an important ingredient for interdisciplinary success, as are good doses of curiosity and humility.

For interdisciplinary collaborations to really work, members of the team need to be valued for what each brings to the table.

Patrick’s account offers us tangible and positive examples of what interdisciplinarity can bring to a project. But from her description it’s clear that challenges remain to achieving the interdisciplinary integration called for by the National Academies. For instance, it seems as if the interventions she developed at Virginia Tech were “extras” to the project she was part of, rather than central to the work of revolutionizing engineering education. Patrick emphasized the support and goodwill she received from more senior scholars, having demonstrated her work ethic and commitment to the project over four years. This kind of support is not always forthcoming. Junior scholars are often the ones who end up doing the risky work of interdisciplinarity. And they must typically do this in addition to achieving the milestones and depth seen as necessary to be experts in their own disciplinary domain.

Interdisciplinary interventions of the kinds Patrick describes—a podcast, career panels, and white papers—aren’t necessarily valued as highly as a peer-reviewed publication in a high-profile journal. Yet in practice these are likely more effective ways of bringing diverse communities together around shared concerns. For interdisciplinary research to become more embedded in academia, we need stronger support and reward systems for junior scholars embarking on this important but time-intensive and risky work.

Associate Professor

School for the Future of Innovation in Society and School of Biological and Health Systems Engineering

Arizona State University

“I was surprised to discover that becoming an effective interdisciplinary researcher also required that I embrace the value of what I call inner interdisciplinarity—my own unconventional background—and what it could bring to the team,” Annie Y. Patrick writes. Her personal reflection on the labor of academia—the conversations, the comprehensibility-making—foregrounds infrastructures for engagement by academics in our professional practice that may exceed our disciplinary training.

Those of us who study knowledge in general—and the convergence of science, technology, and society (STS) in particular—often write about people who hold a single disciplinary identity, be they electrical engineers, geophysicists, or something else entirely. Through the kinds of training scholars such as Patrick and I have experienced, we also become “disciplined” and develop certain shared ways to be in the world. These ways of being often diverge significantly from those cultivated by the engineers and scientists we study, making contrasts particularly evident when we examine their technoscientific work or seek to enter collaborations with them. But as Patrick reminds us, we may not be trained in only a single discipline. The ways of being we’ve been trained into are not simply lost when we undertake thinking and acting in new ways. Or if that happens to some people, it certainly doesn’t happen to all.

The ways of being we’ve been trained into are not simply lost when we undertake thinking and acting in new ways.

I’ve never switched disciplines—at least not to the extent that Patrick has. I have pursued training in cultural anthropology consistently since I discovered it existed during my second year of college many decades back, then while concentrating on STS, and ever since. For me, this experience was one of alignment, though my expertise and practice have developed through slow, iterative, contradictory personal and professional experiences. I lay them out in my 2023 book, ¡Alerta!, which examines a controversial technology developed in Mexico City to mitigate earthquake risk and, through that, considers how engineers and other experts are theorizing life with threatening environments. There, I make the case that these experiences have accumulated to make my life and scholarship possible in ways that are methodologically important to grapple with.

Patrick reflects on her efforts to apply insights, using her frustration to illuminate her disciplinarities. In doing so, she raises an important puzzle: What is an application? What counts as a viable answer to the perennial question “so what?” What can we conceive as meaningful implementation, and who might we see as fellow travelers in these efforts? We must understand that this, too, might be trained into us by our disciplines and schools of thought.

Patrick documents a pathway that led, eventually, to what she terms her “groundwork.” There are so very many others, though, with radically different ways of understanding that puzzle and figuring out what kinds of activities could flourish in its resolution. I think the “making and doing” movement in STS is most exciting when it opens a space for many ways of conceiving of action and gives us the tools to understand their different logics, from radical to institutional. As such, it can also be a space for exploring how we might choose to articulate our disciplinary backgrounds and commitments—for imagining and reimagining STS and scholarly life too.

Assistant Professor, Department of Engineering, Design, and Society

Associate Director, Humanitarian Engineering and Science

Colorado School of Mines

Nursing and the Power of Change

In “The Transformation of American Nursing” (Issues, Spring 2023), Dominique A. Tobbell presents a fascinating, complicated, and multidetermined case for the post-World War II development of PhD programs in nursing. These programs were built around the faith that there was a “nursing science”—akin to but foundationally different from the dominant “biomedical science”—and their architects, white women almost exclusively, used financial support from the federal government’s health scientists’ programs first to earn PhDs in related disciplines such as sociology, education, and psychology and then to translate borrowed concepts into the ideological stance and the practice of nursing.

Some initiatives were spectacular successes: the changes that coalesced around nurse Hildegard Peplau’s intellectual translation of Harry Stack Sullivan’s interpersonal theory of human relationships forever changed nursing practice into one focused intensely on what we now call (and teach and research as) patient-centered care. Others were spectacular failures: the edict from nursing’s national accreditation association that all schools had to teach nursing content and practice specifically organized around one of the models Tobbell describes was a mercifully short-lived disaster after it became apparent that classroom content had no relation to clinical experiences.

In nursing, we have evidence of the power of change driven by collaborations among clinicians at the points of intersection with patients in need of care.

Such unevenness, of course, is hardly unique to any knowledge-building enterprise. My question is why, after more than 80 years of this enterprise, people outside the narrow confines of my discipline are still puzzled when they learn of my PhD and hear the term “nursing science.” I honestly do not blame them. And I think this points to yet another source of tension that the history of PhD education in nursing elucidates: Should knowledge-building in nursing, or in any other discipline, be a “top-down” or “bottom-up” experience?

In nursing, we have evidence of the power of change driven by collaborations among clinicians at the points of intersection with patients in need of care. The nurse practitioner movement, for example, came about in the same political, social, and technological contexts, amid the added pressure of shortages of primary care practitioners. In response, physicians and nurses seeking expanded opportunities came together in collaborative, entrepreneurial dyads across the country to experiment with shared responsibilities for medical thinking, medical diagnosis, and prescribed treatments. Similarly, in coronary care units dedicated to ensuring the survival of “hearts too young to die,” the new technology of electrocardiography brought physicians and nurses together to learn how to read rhythm strips. Both groups quickly learned, again together, that it was not necessary to wait for a physician to intervene in life-threatening emergencies, as nurses could interpret arrhythmias and respond immediately with life-saving protocols. Our current health care system organizes itself around these two innovations.

The PhD in nursing, by contrast, came about as a solution to a problem that only a relatively small group of nursing educators identified. It would be a new form of knowledge generation, albeit one distanced from the bedside and imbricated with the knowledge-generating tools most valued by the biomedical establishment. It was, I would suggest, a process driven essentially by politics and prestige. And really interesting questions remain to be asked. Did the status position of nursing in clinical care and knowledge development necessitate surrendering to the stronger and more privileged epistemological position of medicine for its own validity? Will nursing’s claims that it “asks different questions” survive the collapsing of boundaries between the acute and chronic care needs of patients? And, to me most important, does the inherently interdisciplinary knowledge that we know nurses need to practice fail to translate into a knowledge agenda when it exists within an academy and a culture that knows only firm disciplinary boundaries?

Carol Ware Professor of Mental Health Nursing

Director, Barbara Bates Center for the Study of the History of Nursing

University of Pennsylvania

Stop Patching!

In “How to Keep Emerging Research Institutions from Slipping Through the Cracks” (Issues, Spring 2023), Anna M. Quider and Gerald C. Blazey raise interesting questions about how to address the misalignment between the distribution of federal research dollars and the distribution of students from diverse communities being educated in science, technology, engineering, mathematics, and medicine—the STEMM fields—across the full range of higher education institutions. If we wish to produce a diverse STEMM workforce for the twenty-first century, the authors explain, we need to recognize and consider how to address this mismatch.

Historically, institutions have usually been targeted for attention when agencies have been directed, largely by congressional action, to develop strategies and “carveouts” to affect the distribution across the full range of institutions. Quider and Blazey rightly point out the limits of such carveouts and special designations to achieve the goal of contributing to increased diversity of the STEMM community. Research support in institutions can provide research opportunities to next-generation scholars and researchers from diverse communities. Research participation has also been demonstrated to support retention of these students in STEMM as well as to promote their choice for graduate education, thus addressing the critical need for faculty diversity.

The difficulty in directing research support to a wider range of institutions cannot be overstated. Institutions that have received even small advantages in research investments over the decades will present proposals not only where the ideas are excellent, but where research infrastructure is more than likely to be superior as well, thanks to accumulated advantages. Institutions that have not enjoyed such investment may have excellent researchers with excellent proposals, but, lacking research infrastructure, they may not be as competitive as the research behemoths. Carveouts allow for a section of the playing field to be leveled, where similarly situated institutions can compete. The authors note that although a number of carveouts have been created, not all funding “cracks” have been plugged. Missing from the litany of special programs are so-called emerging research institutions, which are also taking on the critical role of contributing to the diversity of the STEMM community.

The difficulty in directing research support to a wider range of institutions cannot be overstated.

While the carveouts have been important to developing and maintaining research capacity across a larger range of institutions, they only delay more systemic reforms that are needed: they direct how a small share of total research and development funding is deployed while leaving the overwhelming majority to the same set of institutions that have always topped the list of those receiving federal R&D support.

It is easy to have conversations about spreading the wealth in a time when budgets are expanding. But even expansions, such as the doubling of the National Institutes of Health’s budget, do not necessarily lead to a different distribution of supported institutions. In a flat funding environment, what would a reordering of the strategic priorities that guide investment look like? Actions would include:

  • Ensuring widely distributed research capacity across a range of criteria.
  • Re-examining the research agenda and the process of setting it—who establishes, who benefits, and who is disadvantaged.
  • Specifically addressing the environment in which research is being done—that it be free of bias and allow all to thrive.
  • Harking back to the “Luke principle” I articulated previously in Issues, ensuring that all research investments, in whatever institutions, include attention to equity and inclusion in developing the scholars and workforce of the future as a central element of supporting excellence and addressing the diversity-innovation paradox.

While we could stand up another targeted effort to address the cracks pointed out by the authors as a stop-gap measure, it is time to re-examine the overall research support structure in light of today’s needs and realities. Stop patching!

Senior Advisor and Director of SEA Change

Former Director of Education and Human Resources Programs

American Association for the Advancement of Science

I applaud Anna M. Quider and Gerald C. Blazey for drawing attention to the critical importance of emerging research institutions (ERIs) in the nation’s research ecosystem. ERIs are often dominated by students of color from low-income families, who may not have been admitted to a major research university or could not afford such a school’s tuition and cost of living. Or they may simply have preferred to enroll in a smaller university, perhaps closer to home.

If the nation does not embrace all ERIs, the disparities between the haves and have-nots will become even greater and the nation will not fully achieve its research and diversity goals.

We have dozens of ERIs in California, and most are dominated by underrepresented minorities. The California State Universities are excellent examples of institutions that are in the same category as the authors’ home institution, Northern Illinois University, in that they do not benefit from additional federal funding simply because they are geographically located in a state that has a number of major R1 universities.

I worry that if the nation does not embrace all ERIs, the disparities between the haves and have-nots will become even greater and the nation will not fully achieve its research and diversity goals. I have firsthand knowledge of these disparities, since I graduated from an emerging research institution. However, I am also an example of the potential of these students to contribute to national research priorities.

Vice Chancellor for Research & Creative Activities

University of California, Los Angeles

Regulations for the Bioeconomy

In “Racing to Be First to Be Second” (Issues, Spring 2023), Mary E. Maxon ably describes the regulatory challenges to the emerging bioeconomy in the United States. The Biden administration has recognized explicitly the transition from a “chemical” economy to one in which inputs, processes, and products are largely the result of “biology,” and has chosen to help facilitate that transition.

The United States regulates products, not technologies. The regulatory paths these products take are defined by their intended use or “regulatory trigger” (i.e., the legal concept determining whether and how a product is regulated) regardless of manufacturing method. Intended use has generally been a good guide in determining which agency has primacy of regulatory oversight, as envisioned in the federal government’s Coordinated Framework for the Regulation of Biotechnology, first issued in 1986.

Almost 40 years on, one questions whether that is still the case. To paraphrase the Irish playwright and political activist George Bernard Shaw, regulators and the regulated communities are divided by a common purpose—the safe, effective, and yet efficient introduction of products into commerce. Some of these products have traversed the regulatory system slowly but under the aegis of one agency; others have been shuttled among agencies asking approximately the same risk questions. Duplicative regulation rarely provides additional protection; instead, it can make a hash of policy and undermine public confidence. It further imposes enormous costs on manufacturers and on the chronically under-resourced and overburdened regulatory agencies. And we have yet to find a way to estimate the direct costs and externalities of not developing the bioeconomy.

Should we continue to regulate the products of the bioeconomy the same way we regulate the products of the chemical economy?

The examples that Maxon and others cite are products first developed over 20 years ago. What fate will befall products still “on the bench” or yet to take shape in their inventors’ minds? Many participants in the field, myself included, have advocated for the creation of a “single door,” possibly placed in a proposed bioeconomy Initiative Coordination Office, through which all (or almost all) products of the bioeconomy would be directed to the appropriate lead agency. Additionally, proposals have been floated to cross-train regulators, developers, funders, and legislators, possibly via mid-career sabbaticals or fellowships, about the various facets of the bioeconomy so that all are better prepared for regulatory oversight. These two steps could provide a mechanism for charting an efficient and transparent regulatory path. They will, of course, require nontrivial effort and coordination among and within agencies known more for their siloed behaviors than their cooperative interactions.

But a larger question lingers: Should we continue to regulate the products of the bioeconomy the same way we regulate the products of the chemical economy? Emerging technologies and their products can often require reframing risk pathways: it’s not that the endpoints (risks) are all that different; rather, the nature and kind of questions that characterize those risks can be more nuanced. Fortunately, we have also developed powerful, more appropriate tools to supplant the often irrelevant assays traditionally used to evaluate risks. We have also begun to understand that products posing minimal risks may not require the same regulatory scrutiny as products not yet seen by regulatory systems; these may require different and more complex hazard characterizations. Perhaps in addition to improving administrative paths, we should put some of the nation’s best minds toward the continued development of risk and safety assessment paradigms to be used simultaneously with product development so that regulation becomes—and is seen as—part of efficient, relevant, and responsible innovation and not just an unnecessary burden or box-checking exercise.

Research Affiliate, Program on Emerging Technologies, Massachusetts Institute of Technology

Cofounder, BioPolicy Solutions LLC

Former Senior Adviser for Biotechnology, Center for Veterinary Medicine, US Food and Drug Administration

Mary E. Maxon advocates for a coordinated regulatory system as critical to building the biotechnology ecosystem of the future. She’s exactly right, but coordination is just one piece of the regulatory puzzle and could be taken a step further still.

The products that will drive the next century of paradigm-shifting economic growth defy easy definition or jurisdiction. Having witnessed the discussions that take place on products that cross boundaries of agency jurisdiction, I have heard each entity’s lawyers and regulatory experts make a clear and cogent case about why their agency has jurisdiction and why the risks of the technology are relevant to their mission to protect the public.

The problem is, they are all right in their arguments, which makes reaching consensus a challenge. Navigating their disagreements is particularly difficult when it comes to emerging biotechnologies, where the risk space is uncertain and agencies vary in their comfort level with different types of risk, whether to human health or innovation. In the federal context, this can be paralyzing; lack of consensus creates endless wheel-spinning or logjams, particularly when the parties involved do not share a common executive decisionmaker below the level of the president.

What’s needed is a third-party arbiter who has the authority to cut through disagreement to establish clear precedents and an evidence base for future decisionmaking that gives industry more certainty about regulatory pathways.

In an ideal world, this wouldn’t matter. Each regulatory agency has a vigorous regulatory process and the ability to bring in additional subject matter expertise when needed. That suggests a flexible process would be best, with a common regulatory port of entry and a fixed amount of time, as Maxon recommends, to determine a cognizant agency. Unfortunately, one person’s flexibility is another’s ambiguity, and this does not solve the issue of the regulated community of developers, who understandably want to shape their data collection around the culture and requirements of the agency with which they’ll be dealing so they can most easily navigate the regulatory process. Moreover, this will lead to inconsistency, as Maxon notes in the case of the genetically modified mosquitoes, in which agencies, based on their own cultural norms around risk assessment, will operate under very different timelines and come to different conclusions.

How do you overcome this quandary? What’s needed is a third-party arbiter who has the authority to cut through disagreement to establish clear precedents and an evidence base for future decisionmaking that gives industry more certainty about regulatory pathways. The arbiter could also serve as a pre-submission advisory group for developers and agencies. This arbiter could be a White House-based Initiative Coordination Office (ICO), as Maxon suggests, but I would argue that more heft is needed to ensure resolution. One possibility would be a small council, administered by the ICO, with representation at a senior level from the agencies and appropriate White House offices, such as the Office of Science and Technology Policy, the Domestic Policy Council, and the Office of Information and Regulatory Affairs, with clearly delegated authority from the president. When decisions are made, the resulting deliberations could be made public, to give a set of “case law” to the developer and regulatory community and assure the public of the integrity of safety assessments. This would be a very different model from the current and ineffective voluntary approach emphasizing the soft diplomacy of coordination between agencies. Congress could also consider establishing in future legislation a clear arbiter with the power to determine which agency has final decisionmaking responsibility for any individual product. As the various parties work through options, however, one thing remains certain. New paradigm-shifting biological products will continue to emerge from the US innovation ecosystem, and Maxon is correct that it is time for a parallel shift in thinking about regulation and governance.

Lewis-Burke Associates LLC

For nearly 40 years academic and industrial laboratories have been working on “industrializing” biology, usually referred to as biotechnology. As Mary E. Maxon points out, the process has been extremely successful, but it has been halting and selective. The future potential is enormous and has implications for many sectors of the US economy. To date the vision of a wide bioindustry has been hampered, in part by what can be politely called regulatory confusion. Maxon proposes an ambitious regulatory reform that would clarify and accelerate the regulatory process under the oversight of a new entity, an Initiative Coordination Office that would work with the various agencies identified in President Biden’s Executive Order launching a National Biomanufacturing and Biotechnology Initiative. Based on past experience with biotechnology regulation, this suggestion is what is often described as necessary but not sufficient.

It is amazing that the core structure for the nation’s current regulatory process is still the 1986 Coordinated Framework for the Regulation of Biotechnology. Maxon describes the weakness of that structure, but misses two important elements that must be considered in the development of any new structure. First, the Coordinated Framework places a major emphasis not on the product under review but on how the product was produced. She cites an excellent example of that problem in the case of laboratory-grown mosquitoes, where Oxitec failed and MosquitoMate succeeded based on how essentially the same product was produced.

It is amazing that the core structure for the nation’s current regulatory process is still the 1986 Coordinated Framework for the Regulation of Biotechnology.

The second weakness of the Coordinated Framework is that its promise of cooperation among the various agencies carried no strong commitment at the top management level. Each agency official responsible for coordination had very little incentive to “share their turf” with another regulator, often citing the constraints of the enabling legislation. The Coordinated Framework was endorsed unanimously at the Cabinet level, but the message never was heard in the ranks. If the proposed new Initiative Coordination Office is to have any impact, more than new rules are needed. Strong leadership and the articulation of the value and urgency of the bioeconomy to the country are essential. Regulators must realize that their job is not to block new products but to work with their customers to quickly identify any problems and move things through the pipeline smoothly. Regulating on the basis of how a product is produced is an anachronism.

The distressing element related to the continued development of the bioeconomy is not just the absence of a functional and meaningful regulatory framework. Without public confidence in the results, even approved products will not be successful in the marketplace. Over the past few years, we have seen an alarming degradation of public confidence in government guidance and in scientific information, even that produced by highly qualified experts. Reversing this trend is going to be an enormous challenge, but may be far more important than the development of a robust regulatory framework. Initiatives such as BioFutures, created and administered by the philanthropy Schmidt Futures, can play a significant role in this process, but they need to stand back and look at the whole pipeline of the biofuture transformation.

Former Chief Program Officer for Science (2004–2008)

Gordon and Betty Moore Foundation

Former Chair (1985–1988), White House Biotechnology Science Coordinating Committee

Mary E. Maxon packages nearly 30 years of biotechnology governance into a call for action that cannot be ignored, centered on aligning regulations with the times. Indeed, of all the issues that plague the future of the US bioeconomy, a regulatory structure that no longer suits its regulatory context is worthy of special consideration.

Maxon presents examples of biotechnologies that have been delayed or even lost, ultimately due to deficits in “biocoordination.” While I second Maxon’s suggestion that the Initiative Coordination Office, if established in the White House Office of Science and Technology Policy, should support agency collaboration on horizon-scanning, transparency, and guided processing for future biotechnologies, coordination needs to be central to the framework, not an accessory to it. As long as its individual regulatory elements (the Environmental Protection Agency, Department of Agriculture, and Food and Drug Administration, among others) lack the infrastructure to “share regulatory space,” the current federally established Coordinated Framework for the Regulation of Biotechnology will continue to present gaps in coordination that threaten the bioeconomy.

Of all the issues that plague the future of the US bioeconomy, a regulatory structure that no longer suits its regulatory context is worthy of special consideration.

Moreover, in considering ways to establish a regulatory framework that scales with future biotechnology, it will be essential to incorporate more public input and community reflection into the regulatory process. Maxon recommends the use of enforcement discretion as a strategy to fast-track new products that agencies consider low risk. This raises broader questions, however: Who determines safety, and who determines risk? People and communities perceive risk differently, based on their lived experiences and their perceptions of what they have to lose. The same is true for safety, which also needs a collective definition grounded in social considerations. Creating a transparent decisionmaking process for biotechnology that integrates public input starts with collectively redefining risk and safety.

To put it plainly, if the nation maintains a collaboration that is built upon poor communication, then we ought not expect coordination. While collaborative governance is found throughout the US regulatory system, advancement will require acknowledgement of the regulatory problems that result from such governance strategies. In 2012, the Administrative Conference of the United States released a report titled Improving Coordination of Related Agency Responsibilities. When addressing the concept of shared regulatory space, the report states: “Such delegations may produce redundancy, inefficiency, and gaps, but they also create underappreciated coordination challenges.” As Maxon cleverly points out, this coordination challenge is petitioning for the creation of a regulatory framework for the bioeconomy—not just biotechnology.

To build on the author’s observations, concerted and deliberate policy action is crucial for fostering a regulatory ecosystem that advances the bioeconomy—subject, of course, to public trust—and increases national competitiveness, both now and in the future.

PhD Candidate, Department of Entomology and Plant Pathology

North Carolina State University

Mary E. Maxon argues for the establishment of a bioeconomy Initiative Coordination Office that could facilitate interagency collaboration, cross-train regulators, conduct horizon-scanning, and establish a single point of contact for guiding developers of biotechnology products through the regulatory process. This may seem like an impossibly long list of activities, especially in the area of biotechnology regulation, but I believe they are achievable, given the right support from the White House and Congress.

I say this because, as part of a team of Obama administration officials, I worked with dozens of experts from across the federal government, and I saw firsthand that it is possible to address the complexity and confusion in the biotechnology regulatory system. We delivered two public-facing policy documents: the 2017 Update to the Coordinated Framework for the Regulation of Biotechnology (2017 Coordinated Framework), which represented the first time in 30 years that the government had produced a comprehensive summary of the roles and responsibilities of the Food and Drug Administration (FDA), the Environmental Protection Agency (EPA), and the Department of Agriculture with respect to regulating biotechnology products; and the National Strategy for Modernizing the Regulatory System for Biotechnology Products (2016 Strategy), which described a set of steps those agencies were planning to take to prepare for future products of biotechnology.

I saw firsthand that it is possible to address the complexity and confusion in the biotechnology regulatory system.

These documents were not cure-alls, but they represented progress. And don’t take just my word for it. The Trump administration, hardly known for cheerleading Obama-era policies, issued an Executive Order in 2019 stating that the 2017 Coordinated Framework and the 2016 Strategy “were important steps in clarifying Federal regulatory roles and responsibilities.”

To further support my contention that progress is possible (and to clarify one detail that Maxon discusses), I point to one of the policy changes that came out of the Obama-Trump Biotechnology Regulatory Modernization effort. This change specifically addresses the case studies of laboratory-grown mosquitoes that Maxon describes. As she stated, the Oxitec mosquito, which was developed with genetic engineering, and the MosquitoMate mosquito, which was infected with a bacterium called Wolbachia, were both products with very similar mosquito population control (i.e., pesticidal) claims. However, one mosquito (Oxitec) was regulated by the FDA and the other (MosquitoMate) was regulated by the EPA.

The interagency team that developed the 2016 Strategy and the 2017 Coordinated Framework recognized this inconsistency and addressed it. In the 2016 Strategy, the EPA and the FDA committed to “better align their responsibilities over genetically engineered insects with their traditional oversight roles.” In October 2017, the FDA issued a final policy clarifying that the EPA will regulate mosquito-related products intended to function as pesticides and the FDA will continue to have jurisdiction over mosquito-related products intended to prevent, treat, mitigate, or cure a disease. Since this clarification, Oxitec has received the green light from the EPA to conduct field trials of its genetically engineered mosquitoes in Florida and California.

A quarter of a century passed between the 1992 and 2017 updates to the Coordinated Framework for the Regulation of Biotechnology, during which time advances in biotechnology altered the product landscape. This mismatch between the regulatory system and technological progress made it difficult for the public to understand how the safety of some biotechnology products was evaluated, and also made it challenging for biotechnology companies to navigate the regulatory process. In the past eight years progress has been made, and there is clearly momentum now in Congress and in the White House to build on it.

Chief Business Officer, Ceres Nanosciences

Advisory Board Member, National Science Policy Network

Mary E. Maxon argues convincingly that the White House Office of Science and Technology Policy (OSTP) should establish a bioeconomy Initiative Coordination Office (ICO) as mandated in the CHIPS and Science Act of 2022. Although Maxon focuses on its key role in the biotechnology regulatory system, it is important to think broadly and strategically about the many activities that a bioeconomy ICO should lead and coordinate governmentwide. The office should work not only to support the biotechnology regulatory system but also to coordinate strategic planning of federal investments in the bioeconomy; to facilitate interagency processes to safeguard biotechnology infrastructure, tools, and capabilities; and to serve as a focal point for government engagement with industry, academia, and other stakeholders across the bioeconomy.

In addition to providing fresh eyes and new perspectives, a program of this type would present opportunities for training and cross-sectoral engagement for regulators and would improve understanding of the biotechnology regulatory system across the bioeconomy.

This broad purview is supported by the language in the CHIPS and Science Act and would also encompass many of the activities included in President Biden’s Executive Order 14081 on “Establishing a National Biomanufacturing and Biotechnology Initiative.” Indeed, a recent bipartisan letter confirms Congress’s intent that the ICO described in the legislation incorporate this broader initiative. A bioeconomy ICO would be analogous to other congressionally mandated Coordination Offices at OSTP that drive effective interagency coordination and outreach, including those for the US Global Change Research Program, the National Nanotechnology Initiative, and the Networking and Information Technology Research and Development Program.

A public-facing bioeconomy ICO will make an ideal home for the biotechnology regulatory system’s “single point of entry” for product developers, and Maxon rightly places this issue as a top priority. To establish this approach, the ICO should work closely with the principal regulatory agencies to define ground rules for this process that will support efficient decisionmaking while also reflecting and protecting each agency’s autonomy in interpreting its own statutes and responsibilities. As experience is gained, the ICO should work to address bottlenecks to decisionmaking and help distill generalizable principles and useful guidance.

Another critical role for the ICO should be to support and coordinate a project-based fellowship program that brings together individuals with a wide range of perspectives from government agencies, industry, academia, the legal profession, and other sectors to focus on issues of relevance to the biotechnology regulatory system. In addition to providing fresh eyes and new perspectives, a program of this type would present opportunities for training and cross-sectoral engagement for regulators and would improve understanding of the biotechnology regulatory system across the bioeconomy.

Executive Order 14081 and the CHIPS and Science Act have kicked off a flurry of activity within the federal government related to the bioeconomy, and the regulatory system will need additional tools to keep up. Now is the time for OSTP to establish a bioeconomy ICO as a foundation for robust and durable interagency coordination that can lead in transforming the range of possible beneficial outcomes into reality.

Principal

Science Policy Consulting LLC

How Open Should American Science Be?

In “The Precarious Balance Between Research Openness and Security” (Issues, Spring 2023), E. William Colglazier makes an important contribution to the ongoing dialogue about science security, particularly regarding the United States’ basic science relationship with China. As a former director of the Department of Energy Office of Science, I agree with his assessment that rushing to engineer and implement even more restrictive top-down controls on basic science collaboration could be counterproductive, especially without a thoughtful analysis of the impact of the actions that have already been taken to thwart nefarious Chinese behavior.

In our personal lives, we instinctively understand when a relationship is not mutually beneficial and when we are being taken advantage of, even when the rules are vague. It is true that the government of China, previously operating from a position of weakness, has pursued a coordinated and comprehensive strategy to harvest US scientific and technological progress and talent through a variety of overt and obscured means. This is frustrating and not sustainable, not least because China is no longer the techno-economic junior partner it once was. In response, the United States has taken some substantial administrative and policy actions designed primarily to shed light on relationships and conflicts of commitment in sponsored work and in government laboratories, but also to signal a meaningful change in our willingness to be taken advantage of. These are recent developments, and their effects are not yet understood.

The only effective long-term strategy in this race for global science and technology primacy is to out-invest and out-compete.

Looking again to our personal, human experience, cutting off contact and refusing to talk even in a difficult relationship is a defensive posture not consistent with competitive strength or confidence. Moreover, a reactive strategy of shutting doors and closing windows in an attempt to maintain science and technology leadership betrays a lack of understanding of the fungibility of talent in an increasingly educated world, the almost instantaneous and global flow of science and technology knowledge, and the vastly improved intrinsic science capabilities of China.

I believe that instead of defensive measures, the only effective long-term strategy in this race for global science and technology primacy is to out-invest and out-compete. Given transparent scientific relationships not motivated by easy access to resources, we also should not be afraid to work with anyone, particularly in basic research. We benefit from collaboration in part because we generally learn as much as we teach in a meaningful scientific exchange, and in part because our open and confident engagement is a fantastic advertisement for the attractiveness and effectiveness—and, in my opinion, the superiority—of our system and culture of science and technology.

A regime of distrust or punitive control may well cost US science and technology competitiveness, and the flow of indispensable new talent, more than any theft of ideas or emigration of expertise. Disengaging, and thereby blinding ourselves to a nuanced understanding of where our increasingly capable competitor stands in this global science race, may likewise hurt rather than help. Perhaps we should evaluate the effects of the new legal and policy adjustments we have already made, reconsider our end goals, and better understand the costs versus benefits before making further adjustments to the openness of the United States’ amazing engine of science and innovation.

Former Director (2019–2021)

US Department of Energy’s Office of Science

I am sympathetic to the familiar and well-reasoned arguments that E. William Colglazier makes, but I can’t shake the feeling that reading his essay is like watching a parade of antique cars on the Fourth of July.

The US scientific research community, overwhelmingly funded by the federal government and mostly resident in universities, is reeling from increased government scrutiny of its international engagements. Colglazier’s arguments and recommendations are thoughtful, responsible pushback against that scrutiny eroding the value—to the United States—of science diplomacy and international scientific engagement. This is all to the good, but hitting the right balance of openness and protections in international scientific collaboration is a sideshow to the center stage events affecting US commercial and defense technological leadership.

These main events are the struggles, both within and among nations, over the role of advanced technologies and innovation—driven in the democracies primarily by private companies—in a new world order of economic and military competition, confrontation, and collaboration (among allies). For the United States, the events center on the pluses and minuses of export controls on advanced commercial products used as sanctions; the impact of technologically advanced multinational companies on US technological sovereignty; government reviews of inbound and outbound foreign direct (private sector) investments; and legislation such as the Inflation Reduction Act, which through its buy-American provisions punishes innovative companies operating from nations that are long-standing national security allies.

We need a new playbook for commercial and defense international R&D engagement that can live alongside the traditional playbook of science diplomacy.

In the closing sections of his essay, Colglazier argues for leadership from the National Academies and professional societies for more personal cross-border engagement among researchers and government security and research officials. This is a good idea and may help protect the cross-border scientific research enterprise from the worst excesses of government scrutiny and oversight. But the voices that most need to be heard to navigate the current challenges are from the private sector, published more often in the Financial Times and the Wall Street Journal than in more narrowly targeted journals such as Science or even Issues in Science and Technology.

Take, for example, the recent interview with the CEO of Nvidia published in the Financial Times. In commenting on the recent US prohibition on domestic companies from selling artificial intelligence computer chips to China, he pointed out that “If [China] can’t buy from … the United States, they’ll just build it themselves.” This reveals a fundamental underlying characteristic of the new world order in which commercial and defense R&D and innovation capability is already widely distributed around the world. A simple, seemingly reasonable action to protect US “technological leadership”—drawn from the antique car/Cold War era of US technological dominance—could easily have the exact opposite effect of that intended. I’d argue that we need a new playbook for commercial and defense international R&D engagement that can live alongside the traditional playbook of science diplomacy. The Biden administration is moving in that direction, by relying heavily on the National Security Council to coordinate the activities of groups such as the National Science Foundation and the National Institutes of Health with the Departments of Commerce and Defense. In responding to current technological challenges in international economics and geopolitics, balancing openness and protection in government-supported international scientific research (and the cross-border activities of universities) is part of the show, but it is not the main event. That role falls to the cross-border activities and collaborations of companies, albeit enabled or impeded by a wide variety of regulation by governments.

The Applied Research Consortia (ARC) Project

E. William Colglazier offers a critical assessment at a very important time. Almost a decade of scientific exchange between the United States and Russia has been curtailed following Russia’s invasion of Ukraine. Over the past half decade or so, the same has been happening with China and several countries in the Middle East. Even US collaborations with friendly allies have become increasingly difficult when risks are perceived differently. Data that US research organizations might normally share freely or develop commonly with collaborators might now be blocked if the parties don’t share the same point of view. In this context, I would like to add a few thoughts to the author’s excellent description of international collaborations.

First, it is imperative to understand and accept the arguments from the proponents of more research security as well as the defenders of unquestioned openness. They are both valid and need to be listened to. But a word I would add to the conversation on how to move forward is “trust.” There must be trust that the research enterprise and principal investigators want to protect what is important to the United States, especially when we see a potential collaborator doing the opposite. Today, the consensus that international collaborations provide benefits is questioned. At the same time, the science community has lost at least some of this trust—otherwise we would not be having these conversations.

There must be trust that the research enterprise and principal investigators want to protect what is important to the United States, especially when we see a potential collaborator doing the opposite.

The dialogue around protection and trust must engage those at the forefront, in addition to occurring within expert panels and small group discussions. Principal investigators must be provided with opportunities to gain enough information to help them understand any potential risks going forward and get trained in how to deal with them. Or they or their institutions may decide not to pursue a project further. In this matter, there are ideas being explored at the National Science Foundation and elsewhere to provide such platforms for information exchange—and we should all wholeheartedly support those efforts. If home organizations prescribe how to manage the risk, they should take responsibility for the outcome as well—good or bad. As always, authority and responsibility have to line up, independent of what system of control is chosen. Since the research enterprise, the government, and companies and groups in the private sector all benefit from international collaborations, they should also share the risk.

Lawmakers, science funders, and managers of the US research enterprise must understand the opportunity cost of not collaborating, or the nation will be overwhelmed by surprises, underwhelmed by progress, and forced to scramble. Every time I attend a conference in Europe, I learn about progress in emerging technologies happening in countries we have curtailed scientific exchange with. After a few years of learning only secondhand, even in the small slice of science and technology I’m engaged in, the situation is increasingly scary. There are more and more things we don’t know. Not seeing means not knowing. I share this experience with many colleagues, and it underlines the urgency of restarting international collaborations in both directions, albeit with controls applied.

Colglazier concludes that there is “no need to fundamentally change a strategy that has benefited our country so greatly.” Almost 80 years of success supports this statement, as do I. But in every collaboration it takes two to tango. If one side changes the rules of engagement, the answer shouldn’t be to not collaborate, but to establish a security culture that allows a measured approach.

Science Fellow, Hoover Institution

Stanford University

E. William Colglazier rightly points out that scientific cooperation was viewed, 40 years ago, as a low-risk path to strengthen the US-China relationship. The shift in risk assessment from low to high over the decades resulted from China’s successful commitment to building a world-leading science and technology sector. However, the solution to the challenge that China now poses for the United States is vastly more complicated than one crafted for dealing with the Soviet Union in the early 1980s.

US views on China have shifted rapidly. Imputing nefarious motivations to China, casting its researchers and students as part of a “whole nation” enterprise set on taking advantage of naïve American benefaction, differs markedly from the position espoused just a few years before. In 2010, President Obama noted that US cooperation with China was beneficial to the United States. By 2018, cooperation was viewed with suspicion, and China’s policy initiatives were met with accusations of fomenting everything from intellectual property theft to industrial espionage. The swift change in rhetoric, from China as a partner to an adversary, suggests political purposes rather than any change in the benefits of scientific cooperation. Chinese nationals and those working with them began to be prosecuted. With this change in underlying political atmospherics, cooperation between the two nations began to drop even as US cooperation with Europe was sustained.

The swift change in rhetoric, from China as a partner to an adversary, suggests political purposes rather than any change in the benefits of scientific cooperation.

As with the US relationship with the former Soviet Union, the current views on China—reminiscent of the “Red Scare” and its xenophobia—were and are internal to the United States. These views are depriving US research and development of the potential benefits of cooperation. Unlike the conditions of global research at the time of the 1982 Corson report, which Colglazier cites, when the United States dominated world science, China is now fully capable of finding alternatives to working with us. Perhaps it was possible during the Cold War to “contain” the knowledge sector, but in the globalized world of the 2020s, where as much as one-third of all published research is multinational in origin, cutting off China serves mainly to redirect it to working with other scientifically advanced nations.

There is an unstated sense of betrayal in Western nations that scientific cooperation has not resulted in China’s political liberalization. The Enlightenment view posits an inextricable link between science and democracy. “Freedom is the first-born daughter of science,” said Thomas Jefferson, declaring that an enlightened citizenry participates in an ordered governance. In 1978–79, many US scientists and policymakers thought that if we opened our country to Chinese students and scholars, as President Jimmy Carter offered to China’s then president Deng Xiaoping, they would return home with new values more aligned with ours. Behind the science and technology agreements and the welcoming of more than 5.2 million students was the unspoken assumption that the United States would gift China with science, that science would enhance prosperity, and that from this would spring a more open, more market-led, and more liberal China. That this did not occur may cause some observers to reevaluate the relationship between science and government. However, to respond by betraying a core US value of openness does more damage to US science and technology than it does to China. It also does tangible damage to the bilateral relationship, making it much more costly than any sense of security that may ensue.

Professor, John Glenn College of Public Affairs

The Ohio State University

Professor, Kenan-Flagler Business School

University of North Carolina at Chapel Hill

Science and innovation have always flourished in times and places where openness prevails. This was true for the ancient Greeks and during the Renaissance, and it remains true today. The United States’ leadership in science and technology is linked to its joint status as a global economic hub and the home of a free and open society.

Science is the “seed corn” of many useful and valuable technologies. These technologies are commercially valuable and support national security, defense, and the nation’s economic prosperity. To protect these vital interests, key technologies are often controlled by governments and companies through security restrictions that limit who can access key knowledge or who can participate in the design, production, trade, sale, or use of these technologies.

Today, shifting economic and geopolitical tensions are again upsetting the balance that has served the United States so well since the end of World War II.

While these restrictions protect against the misuse of technologies, they also adversely affect the open environment that fostered their development in the first place. To maximize the benefits, this tension must be dynamically balanced, responding to the actions and behaviors of adversaries and marketplace competitors.

In his essay, E. William Colglazier takes a fresh look at the interplay between international scientific collaboration and research security restrictions. At a time when the balance is rapidly tilting away from openness and toward more restrictions, the author leverages his deep experience and expertise to remind us that the maximum benefit to the country is in the optimal balance, not in the maximum amount of protection. Colglazier effectively uses the history of US science diplomacy in past periods of heightened geopolitical tension, including the Cold War and the opening with China in the 1970s, to clearly illustrate the benefits of open scientific exchange and engagement, even at times of great tension.

Through examples, Colglazier reminds us of the benefits of robust global scientific engagement, from the Montreal Protocol implemented in 1989 to protect Earth’s ozone layer to contemporary efforts in the global response to greenhouse gas emissions. Most dramatically, he tells how US and Soviet scientists made significant contributions to the nuclear arms control efforts in the 1990s through their informal, nongovernmental “Track 2” engagement in the 1980s.

Today, shifting economic and geopolitical tensions are again upsetting the balance that has served the United States so well since the end of World War II. The forces causing this new imbalance were explored in the recent National Academies study Protecting US Technological Advantage, which I co-chaired. Our report found that responding to these pressures with restrictions alone will only diminish our country’s “openness” advantage. In his essay, Colglazier concludes as we did: The United States does not need to throw away its principles of openness but rather to wisely reoptimize its controls to find that balance point of maximum advantage.

Chancellor

University of Pittsburgh

E. William Colglazier provides a clear analysis of the growing tensions between open basic research and America’s concerns about its national security and competitiveness. He correctly notes that open research is critical (not optional) to innovation, and that America’s attractiveness to foreign-born talent has been our competitive advantage. He also notes that “politics remains, however, a more powerful force than science.” Thus finding a “balance” for America’s research institutions is precarious indeed.

I participated in one of the National Academies’ Roundtable discussions in November 2022. I was impressed with the skill and effort that both the leaders of academic research institutions and the national security personnel demonstrated when engaging in deep conversations consistent with the recommendations that Colglazier offers. To bridge the wide gap between these communities’ “open” and “security” cultures, people on both sides will need to strive to learn about all the players and understand their concerns. As in research, institutions don’t collaborate—people do. But only after they have built a basis of mutual understanding and respect.

As Colglazier points out, these concerns are not new. They have existed since the Cold War and are revived anew in each generation. Yet America has continued to lead the world in innovation. Our research institutions know how to protect, in secure facilities, that which is clearly identified as needing protection. They also know how to conduct critical basic research with colleagues across the planet.

Don’t ignore the need to prepare our students to understand, compete with, and collaborate with those from places where America will be competing for talent and innovation.

What Colglazier correctly fears is asking an assortment of federal agencies to independently define what needs to be protected. This risks a drive to the broadest—but varying and vague—“areas” needing protection. It also risks chilling needed basic research for fear that what was proper before may later become problematic. The solution here can already be found in National Security Decision Directive 189, issued in 1985: to the greatest extent possible, fundamental research should be unrestricted, and where restrictions are needed, they should be imposed by “classification.” The directive adds that each agency is responsible for determining whether classification is appropriate prior to the award of a research grant, contract, or cooperative agreement, and, if so, for controlling the research results through standard classification procedures. This clarity eliminates the “gray zone” risk to both interests.

If I were to add a single additional thought, it would be: don’t ignore the need to prepare our students to understand, compete with, and collaborate with those from places where America will be competing for talent and innovation. We need to find ways in which common global issues (such as climate change) can be the subject of joint student collaborations. We might use technology to create “walled gardens”—bounded areas for collaborative research that does not touch the areas of military sensitivity—in which bright students at global institutions can work and learn together to analyze shared global problems. I might also note, by way of example and challenge, that Tsinghua University, a national public research institution in Beijing, China, now hosts a competition for analysis—in English—of issues focused on the United Nations’ Sustainable Development Goals.

Former Chair, Sam Nunn School of International Affairs

Georgia Institute of Technology

Asking the Hard Questions

As an executive at the most innovative university in the United States and a graduate of what I call “a liberal arts college masquerading as an engineering school,” I find it refreshing when scholar-leaders in science, technology, engineering, and mathematics—the STEM fields—speak both passionately and eloquently about the arts and humanities. Thus, I found the interview with Freeman A. Hrabowski III (Issues, Spring 2023) particularly rewarding.

Although West Point launched the United States’ first school of engineering in 1802, my alma mater, the US Air Force Academy, is perhaps the most technologically forward-thinking of all the military service academies. But as Hrabowski reminds us, “If we are simply creating techies who can only work with the technology, we’re in big trouble.” The same can be said of our future turbocharged, technologically enhanced officer corps. They too must be deeply rooted in what makes us human, especially when generative artificial intelligence is beginning to distort our collective conceptualization of “knowledge.”

I find it refreshing when scholar-leaders in science, technology, engineering, and mathematics—the STEM fields—speak both passionately and eloquently about the arts and humanities.

Raised in the Deep South during the throes of the Civil Rights movement, Hrabowski draws a direct line from the sense of agency he gained while participating in Dr. Martin Luther King Jr.’s Children’s March in Alabama (an act that landed him in jail) to not only advocating for more Black PhDs in STEM but actually producing more of them. Hrabowski accomplished this heady task by completing what he identifies as among the most difficult tasks one can attempt: changing an institution’s culture—in this case, at the University of Maryland, Baltimore County. “To change the culture, we must be empowered to look in the mirror and to be honest with ourselves,” he reflects, if you’ll pardon the pun. Looking in the mirror, Hrabowski and his colleagues changed expectations, proclaiming and proving that underrepresented minority students can and will do math as well as their counterparts. But even after a successful 30-year run as a university president (when the average tenure is closer to six years), Hrabowski’s efforts to promote improved outcomes for students, pre-K to PhD, haven’t slowed.

With a $1.5 billion scholars program funded and named in his honor by the Howard Hughes Medical Institute, Hrabowski has taken his crusade to even higher levels. Acknowledging that despite his team’s Herculean efforts, the share of Black students among all STEM PhD graduates has moved from just 2.2% to 2.3% in the recent past, Hrabowski realizes his work is far from done. Just as important, he is quick to note that fewer than 50% of students starting college graduate within six years, regardless of race.

Reflecting on his lifelong work, Hrabowski asks, perhaps rhetorically, but perhaps not: “What is it going to take to create a professoriate that will make exceptional achievement in STEM by people of color the rule rather than the exception?” One certainty: Freeman Hrabowski won’t stop asking that and even more difficult questions, just as he has been doing for the past four decades.

Executive Vice President and Chief Operating Officer

Arizona State University

The Limits of Science Communication?

There is little doubt that scientists struggle with effectively communicating the results of their research, particularly when it conflicts with strongly held mental models, as in the case of climate change. Significant effort and resources have been put into programs that support science communication and engaging with communities, with most universities now offering courses around these topics.

However, in “Mental Models for Scientists Communicating With the Public” (Issues, Winter 2023), Kara Morgan and Baruch Fischhoff draw a distinction between simple, unilateral science communication and bilateral risk communication. The authors argue that risk communication is a prerequisite for effective science communication, and outline an engagement process for eliciting goals and mental models from target audiences. This iterative process centers on developing a series of influence diagrams to document stakeholder concerns relative to research outcomes. The process involves convening focus groups and a seemingly intensive schedule of interactive meetings between scientists and target audiences. Ultimately, Morgan and Fischhoff argue that, in general, scientists do not possess the relevant skill set to accomplish any of these activities.

I suspect this is not actually true, given the emphasis on research translation and science communication now prevalent across many graduate programs. It also appears that following the well-established process documented by the authors is likely to be effective when scientists and researchers have the opportunity to engage directly with target audiences and stakeholders. The concern is that the ability to have this kind of interactive engagement represents an atypical situation in the context of most science communication, and is not scalable to the target audience of greatest concern, the general public, or to the global risks of most concern.

The ability to have this kind of interactive engagement represents an atypical situation in the context of most science communication, and is not scalable to the target audience of greatest concern, the general public, or to the global risks of most concern.

Many independent analyses conclude that we now face significant existential and systemic environmental risks as a consequence of human economic activity. We risk exceeding planetary boundaries, that is, the biophysical capacity of the planet to support life as we have come to know it. The kinds of large-scale issues that have emerged—biodiversity loss, climate change, varying pressures on the Global South versus the North, and environmental and social inequities—would seem to require science and risk communication at a global scale, and across diverse but biased audiences, without the luxury of establishing personal relationships among researchers and stakeholders.

Understanding the mental models that shape audience perceptions is clearly important. Survey-based research reveals, for example, that in the United States there are distinct archetypal biases that inform how scientific information will be received. One biased mental model relating to climate change is based on the belief that it is not real, or that it is not caused by human activity. This stems from a number of biases, including confirmation bias, in which people seek information that confirms their preexisting beliefs, and the availability heuristic, in which people overestimate the likelihood of events based on how easily they come to mind.

The mental model that denies climate change is not supported by scientific evidence, which overwhelmingly shows that climate change not only is occurring, but is caused by human activities such as burning fossil fuels and deforestation. It is also harmful because it can lead people to resist efforts to address climate change, which will have serious consequences for the environment and ecological and human health.

I am curious what the authors would recommend for risk and science communication around these kinds of issues, which increasingly dominate public discourse, which will require integrated, systems-level solutions, and for which conventional and traditional models of risk communication are unlikely to suffice.

Department of Environmental Health

Harvard T. H. Chan School of Public Health

NEK Associates

Managing the Risks of International Collaboration

Over a short time span, international academic cooperation has gone from being regarded as unambiguously positive and widely promoted by governments and research funders to something that is complicated, controversial, and even contested. Rising geopolitical frictions combined with competition over dominance of key technologies lie at the heart of this shift. As a result, universities and researchers who had come to take the global enterprise of science for granted are now navigating issues such as dual use regulations, export controls, screening of foreign students and scholars, and whether researchers should be required to disclose their international sources of funding. Governments are devising or considering measures to restrict or control international academic exchanges deemed a threat to national interests.

Researchers and university administrators are increasingly calling for clearer and more concrete guidance and frameworks for international collaboration. One challenge is how to balance rules for international scientific engagement with the preservation of academic freedom to choose research topics and partners. In “Navigating the Gray Zones of International Research” (Issues, Winter 2023), Tommy Shih offers some important insight here. In particular, he suggests that research funders can play an important role in developing global norms for collaboration. I agree. Such norms could, among other things, constitute a valuable step toward a framework of global governance for research that would safeguard international scientific cooperation while acknowledging national interests and protecting ethical principles. This is particularly important at a time when growing caution in academia and government risks preventing or ending valuable cross-border academic exchange and cooperation.

Universities and researchers who had come to take the global enterprise of science for granted are now navigating issues such as dual use regulations, export controls, screening of foreign students and scholars, and whether researchers should be required to disclose their international sources of funding.

Having worked in university administration, government, and research funding organizations, I have seen cases of research cooperation that clearly should not have happened because they violated ethical standards or undermined national security. Preventing such collaborations should be a priority for researchers, universities, and funders. At the same time, there is currently a growing tendency for researchers and universities to shy away from collaborative efforts that could significantly benefit science, society, and the planet because of some perceived potential risks. This concern is well captured in a report titled University Engagement with China: An MIT Approach, published in November 2022, which states: “An important aspect of this review process is to consider the risks of not undertaking proposed engagements, as well as the risks of doing so.”

As Shih correctly points out, international research cooperation is not as binary—unequivocally good or bad—as it is sometimes made out to be. Some cooperation has significant potential benefits while at the same time incurring risks. Binary guidelines are not suitable for handling such cases; rather, they require instruments for managing risks.

Rising tensions between the two largest funders and producers of scientific knowledge—the United States and China—risk turning international academic cooperation into a zero-sum game that can hurt both science and humanity’s prospects of addressing pressing challenges. Preventing unsuitable research cooperation without scaring off collaborations that are beneficial and noncontroversial is one concern for institutions and countries committed to good science and prospering societies. Another is managing collaborations that could bring significant benefits but also incur certain risks. Addressing these issues requires a combination of norms, support, and rules—and should be a priority for research performers and funders alike.

Professor, School of Economics and Management

Lund University, Sweden

Chair, Scientific Council of Formas (Swedish Research Council for Sustainable Development)

Chair, Austrian Council for Research and Technology Development

Boundary-Pushing Citizen Engagement

In “How Would You Defend the Planet from Asteroids?” (Issues, Winter 2023), Mahmud Farooque and Jason L. Kessler reflect on the Asteroid Grand Challenge (AGC), a series of public deliberation exercises organized by members of the Expert & Citizen Assessment of Science and Technology (ECAST) network and NASA in 2014. Center stage were the positive impacts that citizen deliberations had on NASA representatives and NASA decisionmaking. However, the authors lament that citizen engagement similar to the AGC has not happened at the agency again. As Kessler points out, while the value of citizen engagement is acknowledged within NASA to this day, the “interstitial tissue that enables it to happen” is lacking.

In response to this replication challenge, Farooque poses an “existential question” specifically to the ECAST network, but one that resonates more broadly for engagement scholar-practitioners: Should we continue to pursue experimental engagement from the outside or work to concentrate capacity for engagement within federal agencies? While this “outside” vs. “inside” debate remains perennial for pursuing political change, we suggest that the two strategies must work hand-in-hand. From our perspective, the AGC case study provides a road map for how to embrace the nexus of agency process (inside) and boundary-pushing engagement (outside).

First, crucial partnerships between the inside and outside enable success for citizen deliberations. Professionals such as Kessler search and advocate for opportunities and resources for citizen engagement from the inside of agencies such as NASA. Practitioners such as Farooque transport and translate questions, ideas, and perspectives from the outside that expand the immediate priorities of the agency. For example, although NASA presented only two options to focus citizen debate, Farooque explains that citizen discussions produced additional governance questions and options that broadened the impact of deliberation.

Should we continue to pursue experimental engagement from the outside or work to concentrate capacity for engagement within federal agencies? While this “outside” vs. “inside” debate remains perennial for pursuing political change, we suggest that the two strategies must work hand-in-hand.

Second, centering citizen deliberations around agency priorities yields important impacts for agency decisionmaking. In the AGC, a planetary defense officer confirmed in Farooque and Kessler’s account that an important outcome of the exercise was learning from public perspectives on planetary defense and hearing “how important it was for NASA to be doing it.” This social learning was valuable to agency decisionmaking, as experiencing this public support somewhat alleviated NASA’s decisionmaking gridlock and “pushed it over the threshold.” Citizen deliberations organized from the outside might not gain the internal audience to have such impacts on decisionmaking.

Lastly, interaction between agency representatives and citizens energizes both parties. As one participant reported, the opportunity to interact with NASA representatives “made this session special” for citizen participants. Moreover, interactions could be extended to the outside by inviting agency representatives to participate in external events. Continuous agency exposure to public perspectives could in turn build more support for engagement from the inside. The AGC’s success as institutionalized citizen engagement came from linking the spheres of agency process and boundary-pushing engagement. This inside/outside strategy poses more of a model than a dilemma, as such exercises accumulate to build the “interstitial tissue” that could support a more dynamic, continuous, boundary-crossing engagement ecosystem.

Postdoctoral Research Scholar, School for the Future of Innovation in Society

Arizona State University

Professor of Science, Policy, and Society, Genetic Engineering and Society Center

North Carolina State University

Mahmud Farooque and Jason L. Kessler’s first-person account of how scholars and policymakers worked to integrate public views into NASA’s Asteroid Grand Challenge initiative describes the twists and turns involved in deploying a relatively new social science research approach, called participatory technology assessment (pTA), to provide policy-relevant public input on how NASA should prioritize and design a planetary defense system.

The article provides many helpful takeaways. One of the most important is that even though there is much talk about the importance of involving the public in discussions about how new technological innovations could impact society, figuring out how to do this in practice remains challenging. The pTA approach—daylong events that combine informational sessions about a cutting-edge area of technology with interactive, facilitated discussions on how these technologies might be best managed—advances a new way of strengthening the link between public engagement and decisionmaking. Over the past decade, the pTA approach has been applied to numerous topic areas, and new efforts are underway as well. These include a project funded by the Sloan Foundation, led by Farooque at Arizona State University, that will apply the pTA methodology to the question of how best to manage the societal implications of carbon dioxide removal options—which seek to remove greenhouse gases from the atmosphere—as they are researched and deployed. This pTA effort heeds the call of two landmark consensus studies from the National Academies that highlight the need for more social science research on the rollout of negative emissions technologies and the ocean’s role in carbon dioxide sequestration.

Even though there is much talk about the importance of involving the public in discussions about how new technological innovations could impact society, figuring out how to do this in practice remains challenging.

More funders from philanthropy and government need to be willing to support this innovative social science approach and help to scale its application across a wider range of technological domains. As Farooque and Kessler so tellingly describe, it can be difficult for funders to make this leap. Because funders are often unfamiliar with the process, there is inevitable uncertainty upfront about the value of pTA sessions, which can lead to caution in deciding to finance these efforts. Additionally, it can be difficult for funders familiar with supporting expert-driven science to adapt their mindsets and recognize that such public deliberation activities generate invaluable insight into the strengths and drawbacks of different technology governance options.

There are ways of overcoming these barriers. First, experiencing pTA sessions firsthand is key to understanding their value. Kessler helpfully reflects on this point, noting that going into the pTA sessions, NASA “didn’t really know what would come out of it,” but that as the sessions progressed “it was clear the results could exceed even our most optimistic expectations.” Second, funders can view pTA as a methodological tool that complements more typical social science approaches, such as one-on-one interviews, focus groups, and surveys. Unlike individual interviews, the pTA approach benefits from group conversation and interaction. Unlike focus groups, pTA is structured to engage hundreds of participants over multiple dialogue sessions. Unlike surveys, time is taken to inform public participants about a technology’s development and lay out available governance options.

This is a period of experimentation for funders of science, with philanthropies and governments trying wholly new forms of allocating resources, from lotteries to grant randomization to entirely new institutional arrangements. Along with experimenting with how scientific research is supported, funders need to be similarly bold and willing to advance new approaches to social science research, which is critical to ensuring that public views are effectively brought into science policy debates.

Program Director

Alfred P. Sloan Foundation

Materially Different

In “Computers on Wheels?” (Issues, Winter 2023), Matthew Eisler makes a significant contribution to understanding the roots of the modern electric vehicle (EV) revolution. He provides many missing details of the thinking behind Tesla’s beginnings, especially the ideas for framing automobiles as consumer commodities. More importantly, he highlights the incompleteness of the “computer on wheels” analogy, an incompleteness that bedevils legacy automakers and policymakers alike.

As Eisler notes, while electric vehicles and computers are similar in some respects, they “are significantly different in terms of scale, complexity, and, importantly, lifecycle.” One such difference is the intense demand EVs place on developing and sustaining extremely complex software, not only for safety-critical battery management but for the rest of the vehicle’s systems. Tesla’s organic software-development capability is a critical reason it has been able to forge ahead of legacy automakers in terms of both features and manufacturing costs. While EV batteries account for some 35–40% of an EV’s manufacturing cost, the share of vehicle development costs attributable to software is rapidly approaching 50%.

Although the amount of software reinforces the analogy of an EV being a computer on wheels, the analogy fails to account for how EVs materially differ from their internal combustion engine counterparts. EVs represent a new class of cyber-physical system, one that dynamically interacts with and affects its environment in novel ways. For instance, EVs with their software-controlled electric motors no longer need physical linkages to steer or apply power—a joystick or another computer will do. With additional devices to sense the outside world along with requisite computing capability, EVs can more easily drive themselves sans human interaction than can combustion-powered vehicles. Tesla realized this early and made creating autonomous driving capability a priority. In developing self-driving, the company further increased its software prowess over legacy automakers.

EVs represent a new class of cyber-physical system, one that dynamically interacts with and affects its environment in novel ways.

As Eisler notes, policymakers wholeheartedly embraced EVs, first to fight pollution and later to combat climate change. However, policymakers have also embraced the potential of autonomous-driving EVs and are counting on them to limit individual vehicle ownership, thus reducing traffic congestion and ultimately reducing greenhouse gas emissions by up to 80% by 2050. Even for Tesla, creating fully self-driving vehicles has been much more difficult than it imagined, illustrating the dangers of policymakers adopting nascent technologies as a future given.

This highlights another critical problem that Eisler pinpoints as resulting from policymakers’ embrace of EVs as computers on wheels—that of scale. Transitioning to EVs at scale not only demands radical transformations in automakers’ global logistical supply chains but also creates new dependencies on systems and capabilities outside their control, from lithium mines to the electrical grid. The grid, for example, will need increased generation capacity as well as significantly improved software capability to keep local utilities from experiencing blackouts as millions of EVs charge concurrently. Policymakers are only now coming to terms with the plethora of network effects that EVs, and their related policies, create.

Eisler clearly underscores the myriad challenges EVs present. How well they will be met is an open question.

Contributing Editor, IEEE Spectrum

Author of the IEEE series “The EV Transition Explained”

In 2019, I faced the chore of replacing the family car. In visiting various dealerships, I found myself listening to lengthy descriptions of vehicle control panels, self-parking features, integration of contact lists with built-in phone systems, and navigation and mapping options. I heard nothing about safety features, engine integrity and maintenance, handling on the road, passenger comfort, or even gas mileage (I was looking at all types of engine options). I was barely even encouraged to take a test drive. It was as if, indeed, I was shopping for a computing machine on wheels.

By opening with the “computer on wheels” metaphor, Matthew Eisler provides an opportunity to think about a major technology shift—from the internal combustion engine to the electric motor—from multiple perspectives. How is a car like a computer? How does a computer operate a car? How did electric-vehicle visionaries adapt design and production techniques from the unlike business of producing computers? How are the specific requirements of the single-owner mode of transportation different from other engineering challenges? How are geopolitical crises and the loci of manufacturing of component parts implicated in car production? What might answers to these questions tell us about complex systems and technological change over time?

What might answers to these questions tell us about complex systems and technological change over time?

As Eisler deftly argues, the modern push for electric cars represented the confluence of multiple social, economic, technological, and political scenarios. EV enthusiasts looked outside Detroit for new approaches to car building. The methodology of the information technology industries offered the notion of assembling off-the-shelf component parts, but the specific safety requirements—and the related engineering complexity—of automobiles put the process on a longer trajectory. On the one hand, the near-simultaneous cancellation of General Motors’ early electric car EV1 and the bursting of the dot-com bubble discouraged investment in a new EV industry. On the other, public sentiment and resulting public policy created a regulatory environment in which a technically successful EV could flourish.

Eisler highlights additional tensions. A fundamental mismatch between battery life and engine life undermined the interest of the traditional auto industry in these newly designed vehicles and belied difficulties in production and maintenance. And the trend of globalization, with its attendant divide between design in the West and manufacturing in the East, persisted beyond policy initiatives to onshore all elements of car production in the United States. Most profoundly, taking the longer view toward the future, Eisler indicates that thinking about cars as computers on wheels fails to consider the larger sociotechnical system in which EVs are inevitably embedded: electric power networks.

Today, Americans plug mobile computing devices without wheels into outlets everywhere, expecting only to withdraw energy. Sure, some of us recharge our phones with laptops as well. But we don’t really see laptops as storage batteries for the power grid. Nor do we generally consider when to recharge phones to avoid periods of peak power demand. But the energy exchange potential of an all-EV fleet that may replace current hydrocarbon-burning cars, buses, trucks, and trains suggests a much more complex electrical future. Eisler gives just one of multiple examples, noting that EV recharging shortens the service life of local power transformers. EVs are complicating construction and maintenance of power distribution networks and are already much more than computers on wheels. A wide range of industries have difficult and interesting questions ahead about whether and how these hybrid IT/mobility devices will fit into our highly electrified future.

Non-Resident Scholar, Center for Energy Studies, Baker Institute

Rice University

Research Historian, Center for Public History

University of Houston

Author of The Grid: Biography of an American Technology (MIT Press, 2017)

Matthew Eisler makes a welcome contribution to the emerging conversation about the history of the rebirth of the electric vehicle. In particular, Eisler rightly highlights two factors that are often left out of more breathless, presentist accounts.

First, Tesla—the company synonymous with the faster, better EV that competes head-to-head with internal combustion—was not founded by Elon Musk. Although Musk subsequently negotiated a deal by which he was officially recognized as a Tesla founder, Eisler rightly focuses on the efforts of Martin Eberhard and Marc Tarpenning (and later JB Straubel) who recognized the opportunity arising from the confluence of advances in lithium-ion (Li-ion) storage batteries and drivetrain technology. Although many liken Musk to an “electric Henry Ford,” it is a poor analogy, as Eisler makes clear.

Second, Eisler rightly focuses on the social contexts of innovation in both the consumer electronics and automotive industries. The story of the Tesla founders trying to convince a reluctant Asian manufacturer to sell them high-performance Li-ion batteries for their pilot vehicle (eventually the Roadster) stands in sharp contrast with current efforts to “blitzscale” battery production for the growing electric vehicle market. The early battery makers focused on consumer electronics and therefore underestimated demand for Li-ion cells from electric vehicle manufacturers. Conversely, today’s rush to mass produce Li-ion cells everywhere may lead to overinvestment and rapid commodification rather than future riches. The Li-ion-powered electric vehicle is an industrial accident, not a carefully orchestrated transitional technology. Its history is definitely not one characterized by the seamless adjustments of efficient markets, a point further underscored by Eisler’s recognition of the role of state and federal policymakers, both in the initial rebirth of the EV and in support of Tesla.

There are three areas where I think Eisler might have missed the mark:

The Li-ion-powered electric vehicle is an industrial accident, not a carefully orchestrated transitional technology.

First, the idea of the car as a computer on wheels goes back to the dawn of the solid-state era. In my own work on the history of EVs (The Electric Vehicle and the Burden of History, Rutgers University Press, 2000), I found engineers in Santa Clara County in the mid-1960s talking about the electrification of the automobile as a result of advances in solid-state electronics. But it turned out that the incumbent auto industry responded by electrifying everything except the drivetrain. The car was an “electrical cabinet on wheels” before it became a computer. In this respect, thinking about electrification predates the birth of Silicon Valley, not to mention the dot-com era and everything that followed.

Second, many historians of technology may not wish to hear it in such stark terms, but it is very hard to imagine the EV transformation Eisler describes occurring in the absence of the important technological advances in energy storage and drivetrains. The “computer on wheels” was simply not plausible in the late 1980s. Innovation matters. Technological change creates affordances that shape downstream social and economic outcomes. Before those affordances were available, much of Eisler’s story would not have been possible.

Third, the success of the standalone electric vehicle may have blinded Eisler (and others) to some of the paths not taken. For many years, EV supporters focused less on the electric vehicle as a replacement for internal combustion and more on adjacent market opportunities. For this group, electrification might have looked like electric scooters or electric-assist bicycles, or like micro cars such as city- or neighborhood-electric vehicles, or even like electric light delivery vans and small buses. Recent events have pared away the many other ways that the electrification of the auto might have developed.

Associate Professor, Robert H. Smith School of Business

University of Maryland

Caring for People With Brain Injuries

In “The Complicated Legacy of Terry Wallis and His Brain Injury” (Issues, Winter 2023), Joseph J. Fins employs the story of one man to underscore a severe shortcoming in the US health care system. Disorders of consciousness (DoC) are conditions in which the brain is not dead but the person does not respond consistently to external stimuli. People who experience DoC may progress through coma, unresponsive wakefulness, and minimally conscious states before sleep/wake cycles are re-established and reliable responsiveness to external cues returns.

People who experience prolonged DoC following traumatic brain injury encounter multiple faults in the existing service delivery system. Despite recent evidence that three of four persons with DoC due to traumatic brain injury will become responsive by one year after injury, many families are being asked to make decisions regarding withdrawal of life support just 72 hours after injury. These families cannot be expected to know about prognosis; it is the responsibility of the health care system to provide unbiased and evidence-based data upon which these critical decisions can be made.

It is also incumbent upon the health care system to provide competent care tailored to the needs of persons with prolonged DoC. At discharge from trauma services, care by professionals who are competent to assess and treat unresponsive patients is obligatory. With the promulgation of guidelines by a joint committee of the American Academy of Neurology, the American Congress of Rehabilitation Medicine, and the National Institute on Disability, Independent Living, and Rehabilitation Research, we can no longer claim ignorance regarding the competencies needed to treat this population.

People who experience prolonged DoC following traumatic brain injury encounter multiple faults in the existing service delivery system.

Tailoring care to the needs of people with DoC also includes placement in health care settings that can optimize rehabilitative treatments while protecting against complications that limit recovery. Movement to and between a long-term care facility and specialized inpatient rehabilitation programs should not be based on criteria developed for other populations of patients. For instance, there is no medical basis for the requirement that a person with a DoC actively participate in rehabilitation therapies when passive motor and cognitive interventions are the appropriate care. Effective and humanitarian treatment requires monitoring and coordination across a number of health care settings including, in some cases, the patient’s own home. A person with a prolonged DoC deserves periodic reassessment to assure that complications are not developing and, more important, to detect when an improvement in arousal or responsiveness necessitates a change in therapeutic approach. This type of coordinated approach across settings is not a strength of the US health care system.

Other Western countries—most notably Great Britain and the Netherlands—have recognized the unique needs of persons with prolonged DoC and have designed health care pathways that optimize the opportunity for a person to attain their full potential. It should not be a matter of luck, personal means, or a relentless family that confers the opportunity to regain responsiveness after prolonged DoC. We have the knowledge to provide appropriate care to this population; it is now a matter of will.

Professor, Department of Physical Medicine & Rehabilitation

The Ohio State University

As I read Joseph J. Fins’ essay, I was trying to envision the article’s optimal target audience. As a neurologist who cares for critically ill patients with acute brain injuries from trauma, stroke, and hypoxic-ischemic brain injury after cardiac arrest, the clinical details of cases such as described are familiar. But this story is of a person, not a patient, and it reinforced the view that my particular domain, the neurological intensive care unit, has afforded me. People such as Terry Wallis and their families have a complex journey that involves direct and indirect intersections with intensive care units, skilled nursing facilities, rehabilitation facilities, hospitals, outpatient clinics, and insurance payors (governmental and private), as well as with the doctors, nurses, therapists, administrators, social workers, ethicists, and interest groups that inhabit these organizations, and with legislators who craft policies that overarch all. Mr. Wallis’s poignant story seems to be one of disconnection. I would like to think that each of these groups that needs to hear this story has good intentions, but there is a clear lack of ownership, follow-through, and “big picture” that may even incentivize leaving those with severe neurologic impairments (or more often their families) to find their own way.

Inaccurate early negative prognostication can lead to a self-fulfilling prophecy of death or poor outcome if care is limited as a result of this assessment.

Several potentially disparate aspects of cases such as Mr. Wallis’s bear discussion and emphasize the need for a holistic patient-centered view of his experience. These include prognostic assessment by medical personnel, values-based judgment of living a disabled life, and the civil rights that consciousness necessitates. It is increasingly recognized that inaccurate early negative prognostication can lead to a self-fulfilling prophecy of death or poor outcome if care is limited as a result of this assessment. Medically, prognostic uncertainty can be considered as the difference between “phenotype” (what a patient looks like on clinical examination) and “endotype” (the underlying biological mechanism for why the patient looks like this). The author’s discussion of cognitive-motor dissociation is part of this consideration, as is clinical humility in prognostication (as described by the neurologist-bioethicist James Bernat). The comment that Mr. Wallis’s treating doctors “couldn’t imagine that the life he led was worth living” is also a common paternalistic view that pushes clinicians away from patients and diminishes the value of patients’ and their families’ goals of care. And perhaps most novel for treating physicians, the idea that civil rights of patients with impaired consciousness might be compromised if desired care is not accessible and provided is compelling and difficult to rebut. It is too easy for patients with disorders of consciousness to get disconnected.

A recent study by Daniel Kondziella and colleagues, reported in the journal Brain Communications, estimated that 103,000 cases of coma occur in the United States annually. Efforts such as the Neurocritical Care Society’s Curing Coma Campaign seek to push the science toward recovery and bring a more holistic view to the care of patients across their experience. The story of Terry Wallis is not a one-off. I hope his journey can reach the audiences who need to hear it.

Professor of Neurology

University of California, San Francisco

Cochair, Curing Coma Campaign

Neurocritical Care Society

Joseph J. Fins narrates Terry Wallis’s fascinating and important case and explains the scientific and social lessons it taught. Here, I offer an additional scientific insight from the story and discuss its implications.

Neurologists and neuroscientists who studied Mr. Wallis’s case investigated the mechanism behind the unusually long delay in his improvement following a serious traumatic brain injury. They performed brain MRI studies using diffusion tensor imaging, a technology that assesses the integrity of white matter tracts, which contain nerve fibers connecting the cerebral cortex with different areas of the brain and spinal cord. These studies showed that the mechanism of his brain damage was diffuse axonal injury. This type of brain injury is produced by blunt rotational head trauma causing widespread severing of the axons of brain neurons. Additionally, observers noticed that the white matter changes evolved over time. They interpreted these findings as gradual regrowth of the severed axons, which likely accounted for Mr. Wallis’s long-delayed improvement. Presumably, it took nearly two decades for the slow axonal regrowth to adequately reconnect disconnected brain regions and restore his ability to talk.

The Terry Wallis case illustrates that improvement in neurological function after severe brain injury remains possible, even after many years.

Most types of serious global brain injury, such as those caused by trauma or lack of blood flow during cardiac arrest, primarily damage brain neurons. By contrast, diffuse axonal injury generally spares the cell bodies of neurons and damages only their axons. Diffuse axonal damage disconnects brain neurons from each other and produces severe brain dysfunction. The resulting widespread neuronal disconnection is sufficient to induce a disorder of consciousness such as the vegetative state or, as in Mr. Wallis’s case, the minimally conscious state.

Although often severe, diffuse axonal injury may have a better prognosis than a brain injury of similar magnitude that primarily damages neurons, such as that produced by absent brain circulation during cardiac arrest, as in the widely publicized case of Terri Schiavo 20 years ago. Sheared axons with intact neuronal cell bodies retain the capacity to regrow, whereas severely damaged neurons usually do not. As the article explains, the Terry Wallis case illustrates that improvement in neurological function after severe brain injury remains possible, even after many years, particularly when the mechanism is diffuse axonal injury.

Professor of Neurology, Active Emeritus

Dartmouth Geisel School of Medicine

The Social Side of Evidence-Based Policy

“To Support Evidence-Based Policymaking, Bring Researchers and Policymakers Together,” by D. Max Crowley and J. Taylor Scott (Issues, Winter 2023), captures a simple truth: getting scientific evidence used in policy is about building relationships of trust between researchers and policymakers—the social side of evidence use. While the idea may seem obvious, it challenges prevailing notions of evidence-based policymaking, which typically rest on a logic akin to “if we build it, they will come.” In fact, the idea that producing high-quality evidence ensures its use is demonstrably false. Even when evidence is timely, relevant, and accessible, and even after researchers have filed their rigorous findings in a clearinghouse, the gap between evidence production and evidence use remains wide.

But how to build such relationships of trust? More than a decade of findings from research supported by the William T. Grant Foundation demonstrates the need for an infrastructure that supports evidence use. Such an infrastructure may involve new roles for staff within policy organizations to engage with research and researchers, as well as provision of resources that build their capacity to do so. For researchers, this infrastructure may involve committing to ongoing, mutual engagement with policymakers, in contrast with the traditional role of conveying written results or presenting findings without necessarily prioritizing policymakers’ concerns. Intermediary organizations such as funders and advocacy groups can play a key role in advancing the two-way streets through which researchers and policymakers can forge closer, more productive relationships.

More than a decade of findings from research supported by the William T. Grant Foundation demonstrates the need for an infrastructure that supports evidence use.

Research-practice partnerships, which consist of sustained, formalized relationships between researchers and practitioners or policymakers, are one way to create and reinforce the infrastructure for supporting relationships that advance evidence use. Such partnerships are especially common in education, where they often bring together universities and school districts or state education agencies to collaborate on developing research agendas, communicating findings, and interpreting evidence.

Crowley and Scott have demonstrated an innovative approach to creating relationships between researchers and policymakers, one that is well suited to deliberative bodies such as the US Congress but could also apply to administrative offices. In the Research-to-Policy Collaboration model the authors describe, the Evidence-to-Impact Collaborative operates as an intermediary, or broker, that brings together researchers and congressional staff in structured relationships to create opportunities for the development of trust. These relationships are mutually beneficial: they build policymakers’ capacity to access and interpret evidence and allow researchers to learn how to interact effectively with policymakers. Thanks to their unique, doubly randomized research design (i.e., both policymakers and researchers were randomized to treatment and control groups), Crowley and Scott are able to demonstrate that the Research-to-Policy Collaboration model benefits both sides.

It is past time to move beyond the idea that the key to research use is producing high-quality, timely, relevant, and accessible evidence. These qualities are important, but as Crowley and Scott have shown, the chances of use are greatly enhanced when research findings are examined in the context of a trusting relationship between researchers and policymakers, fortified by the intermediaries who bring them together.

President

William T. Grant Foundation

Export Control as National Security Policy

In 1909, as part of the Declaration of London on the Laws of Naval War, a group of nations produced a list of items we would today consider “dual use,” but which at the time were called “conditional contraband.” The list marked the first time a large set of states had agreed to a common understanding of which goods and technologies represented a security concern.

Interestingly, the list included an item that is not on current export control lists but is very much on the minds of people engaged in security governance today: balloons. Like general aviation airplanes, box cutters, or novel genetic sequences, balloons, such as the ones floating over the United States recently, represent a type of security concern that is not really visible to, and therefore not governable by, today’s conventional export controls. But they still represent security concerns to the state.

In “Change and Continuity in US Export Control Policy” (Issues, Winter 2023), John Krige and Mario Daniels discuss how a historical gaze allows us to better understand “the context, effects, prospects, and challenges of the Biden administration’s current policy changes” on export controls. But there is a bigger conversation about export controls that we seem unable to have: When is this system of governance not the right tool for the job?

There is a bigger conversation about export controls that we seem unable to have: When is this system of governance not the right tool for the job?

Many aspects of the modern export control system took shape in the 1940s. What was once primarily a concern about the movement of goods through seaports is now about the movement of those goods, and the knowledge around them, through computer ports and laboratory doors. Krige and Daniels amply critique the central idea in much current export control policy: that security comes from preventing foreign supply. And the view that we can know what we need to be concerned about with enough time to put export controls in place—at least two years if you want international harmonization—doesn’t withstand much inspection: there are many areas where it no longer fits.

Just five years after nations produced that first international list of goods and technologies representing a security concern, the concept of conditional contraband essentially fell apart in World War I and the era of total war. While export controls may not be on a similar precipice at the moment, their limitations are becoming only more apparent. In recognizing these limitations, we open the window to thinking differently about whose security matters, what counts as a security concern, and who has responsibility for doing something about it. Krige and Daniels note the obstacles the current export control policies will likely encounter, but it is also worth noting that we can capitalize on these obstacles to have a bigger conversation about when export controls are not the right tool for the job—and what the right tool might look like.

Senior Research Fellow, Program on Science, Technology, and Society

John F. Kennedy School of Government, Harvard University

“National security” is a beguiling concept. Who does not wish to be secure among one’s people? Yet the very idea of a secured nation, as well as the instruments to achieve it, is not so much about the safety and well-being of the people in a country but the maintenance and expansion of state power, often at the cost of such safety and well-being. The normalization of national security obscures its contested origin and the violence it invokes.

As John Krige and Mario Daniels elucidate in their essay, national security as a whole-of-society response to perpetual danger grew out of institutional legacies of World War II and quickly took hold at the onset of the Cold War. Export controls have been central to this mission: to keep US adversaries technologically inferior and economically poorer, hence militarily weaker.

Since the beginning, export control regulations have faced pushback from proponents of free trade. Yet the dual objectives of a secured nation and a free market are in tension only if one believes in the fairness or at least neutrality of the capitalist market, and mistakes the purported ideals of America for reality. The so-called liberal international order, including financial systems, intellectual property regime, and trade rules, overwhelmingly favors US corporate interests and aids its geopolitical agenda. Export controls are another set of tools in service of US hegemony.

By the parochial logic of techno-nationalism, safety from dual-use technology is achieved not by restricting its harmful use but by restricting its users.

A country’s foreign policy cannot be detached from its domestic politics. During the Cold War, US policymakers wielded the threat of communism as justification to wage wars and stage coups abroad, and to suppress speech, crush unions, and obstruct racial justice at home. Export controls should be understood within this broader context: more than just directing what can or cannot move across borders, these exclusionary policies also help define the borders they enforce. Both within and beyond the territorial bounds of the United States, the interests of capital and stratification of labor follow a racialized and gendered hierarchy. Export control policies reflect and reinforce these disparities; they are exercises of necropolitics on a global scale: to dictate who may live and who must die.

By the parochial logic of techno-nationalism, safety from dual-use technology is achieved not by restricting its harmful use but by restricting its users. Guns are good as long as they are pointed at the other side. The implications of this mindset are dangerous not just for existing technology but also for the future of science, as the anticipation of war shapes the contours of inquiry. When the Biden administration issued sweeping bans on the export of high-end semiconductor technology to China, citing the potential of “AI-powered” weaponry, the military application of artificial intelligence was no longer treated as a path that can be refused with collective agency but as destiny. The lust for a robot army further distracts from the many harms automated algorithms already cause, as they perpetuate systemic bias and aggravate social inequality. The securitization of a national border around knowledge depletes the global commons and closes off the moral imagination. The public is left poorer and less safe.

Research Scholar in Law and Fellow

Yale Law School’s Paul Tsai China Center

Lessons From the Ukraine-Russia War

Ukraine’s response to Russia’s invasion is reshaping our understanding of modern warfare along with defense research and development. At the same time, it presents an opportunity for already strong allies to forge new pathways of collaboration across the public and private sectors to bring commercial technology to the future battlefield. With help from public and private organizations, the Ukrainian armed forces have quickly embraced both military and civilian technologies as a means to confront fast-changing battlefield realities.

In “What the Ukraine-Russia War Means for South Korea’s Defense R&D” (Issues, Winter 2023), Keonyeong Jeong, Yongseok Seo, and Kyungmoo Heo argue that the “siloed,” “centralized” South Korean R&D defense sector should take a page from Ukraine’s playbook and better integrate itself with the broader commercial technology sector. The authors recommend prioritizing longer-term R&D challenges rather than the immediate needs of the South Korean armed forces, focusing innovation in critical technologies on new conflict scenarios and dynamic planning over the long run.

With help from public and private organizations, the Ukrainian armed forces have quickly embraced both military and civilian technologies as a means to confront fast-changing battlefield realities.

In recent years, South Korean policymakers have increasingly recognized the defense sector as a key area for advancing the country’s security and economic interests. Propelled in part by many of the same government-led policy support mechanisms that have made the country a global leader in telecommunications, semiconductors, and robotics, South Korea has become the fastest-growing arms supplier in the world, with arms exports reaching more than $17 billion in 2022. Yet as Jeong, Seo, and Heo note, South Korea’s defense community still faces obstacles to effective adoption of nondefense technologies that have played an important role in Ukraine, such as 3D printing, artificial intelligence-based voice recognition and translation software, and commercial space remote sensing. What’s more, South Korea’s failure to develop an inclusive R&D environment has hindered innovation in the nation’s defense ecosystem. Large companies account for almost 90% of sales among defense firms, leaving little room for smaller, innovative enterprises to find success.

The United States faces many of the same challenges. A 2021 report from Georgetown University’s Center for Security and Emerging Technology argued that under the US Department of Defense’s current organizational structure, “defense innovation is disconnected from defense procurement,” which is hampering efforts to adopt novel technologies at scale. Like its South Korean counterpart, the US defense industrial base is also characterized by high levels of market concentration among top defense contractors.

Jeong, Seo, and Heo offer recommendations that closely align with recent Defense Department efforts to foster innovation and accelerate adoption of the technologies that are fast transforming the US national security landscape. In light of lessons learned in Ukraine, the South Korean and US militaries should work together to develop and adopt disruptive technologies, ultimately enabling a joint fighting force in the Asia-Pacific region capable of deterring and defeating future adversaries.

Senior Fellow

Research Analyst

Center for Security and Emerging Technology

Georgetown University

Support Caregiving Scientists

In “Fixing Academia’s Childcare Problem” (Issues, Winter 2023), Zeeshan Habeeb makes what social scientists call the “business case” for providing subsidized childcare to graduate students and postdoctoral fellows. The author notes that the poorly paid, protracted training period for establishing an independent faculty career overlaps with women’s fertility. Habeeb argues that this life course pattern plus the lack of affordable childcare on campus pushes out talented academics in science, technology, engineering, and mathematics—the STEM fields—and decreases the innovation, competitiveness, and profitability of the United States. Losing these highly trained early-career scientists is a poor return on the nation’s investment in education and academic research.

However, the business case for work-family accommodations mostly persuades those who are already sympathetic. Others are likely to critique the statistics and to maintain that their institution is different from those held up as case studies.

Let’s ask why the current arrangements—which require academics to work for low wages and without adequate family accommodations until their thirties or forties—are still so taken for granted in American universities. To understand this, we need to address four moral and emotional dimensions of academic STEM culture, as Erin A. Cech and I find in our book, Misconceiving Merit (University of Chicago Press, 2022).

First, academic science is not seen by STEM faculty as a business. Rather, it is understood to be a vocation devoted to fundamental research and largely unpolluted by profit or politics.

Let’s ask why the current arrangements—which require academics to work for low wages and without adequate family accommodations until their thirties or forties—are still so taken for granted in American universities.

Second, our research shows that across genders and family statuses, STEM faculty largely embrace the “schema of work devotion,” which mandates undivided allegiance to the scientific calling. Research faculty love their work; it is a big part of their identity. STEM faculty celebrate their independence and inspiration, charting their own course to intellectual discovery.

Third, seen through a work devotion lens, the lengthy training period is an appropriate novitiate, in which novices prove their dedication and their worthiness to be appointed as professors.

Fourth, the underbelly of work devotion is the stigma faced by caregivers, who are seen as violating their vocation. This translates into a devaluation of women and of non-gender-normative men, who often take on more of the household’s caregiving responsibilities.

I encourage disciplinary and multidisciplinary associations and federal funders to address this stigma head-on. They should demand a moral reckoning, which would redefine STEM academics as deserving of the time and resources to have or adopt children, if they choose, while maintaining their respected status in the vocation. At a later life course stage, STEM academics also deserve the time to care for elderly or fragile parents and other loved ones, while still maintaining full respect for their scientific contributions.

Academic science is understood to be a vocation. To preserve the inclusion of early-career scientists who are creative and procreative in all senses of these words, let’s stop expecting it to be a monastic one.

Professor, Department of Sociology

University of California, San Diego

Zeeshan Habeeb offers compelling reasons and concrete solutions, grounded in evidence about workforce productivity in research in science, technology, engineering, and mathematics. But US academic institutions already know how across-the-board policy changes can make a huge difference for caregivers, especially women.

Academic institutions are not lacking models and solutions for equitable childcare; rather, the institutions function because of the exploitative invisible labor of caregiving that is at the root of all US workplaces. To address the childcare crisis in academia requires a reflection on our value system as a society and where we place our priorities. In 2021, women took on an average of 173 extra hours of childcare, and men an average of 59 extra hours. The United States spends the least of any “developed” country on early childhood care, and families are left to fill this gap in infrastructure. This gap is particularly glaring within academia because the system is traditionally set up for a man of means who has someone at home full time to cook, clean, and raise their children. The academic structure was never intended for a faculty member to do research, teaching, and service at the university and then go home to do cooking, cleaning, and bath time. We need to examine our cultural values around what childcare is worth and why it is not considered a valid expense for something like federal grants.

To address the childcare crisis in academia requires a reflection on our value system as a society and where we place our priorities.

These are the questions we should be asking institutional program officers, because the COVID-19 pandemic has shown that “it’s the policy” is not a fixed answer: policies can be changed when necessary. If we can create budget lines for the travel required to do research, why do we dismiss the additional labor of caregiving required for parents to be in the field doing said research? The childcare crisis is particularly insidious given how academia as an industry requires graduate students, researchers, and faculty to move around to wherever we can find jobs—often separated from family and other networks of support and making us heavily reliant on paid care. Even if you have managed to secure regular childcare, university events and conferences do not line up with typical childcare arrangements. Weekend conferences and evening lectures require scrambling for additional care, and missing out on networking and the unsaid necessities between the lines of your CV can be detrimental to promotion and tenure evaluations.

As the poet and civil rights activist Maya Angelou said, “When someone shows you who they are, believe them the first time.” US policies on caregiving, parental leave, and bodily autonomy continue to show us repeatedly where our value systems reside. Yet there are still reasons for hope and possibility for better working conditions in US academic institutions. If we were able to reconfigure the entirety of academic life during the early days of the COVID-19 pandemic, we should be able to utilize pandemic workarounds and re-evaluate cultural norms around carework, especially childcare. Academic institutions should not be left to fix the problems of US working culture alone, though we should consider how our industry-specific norms, such as evening talks and weekend conferences, can be managed to produce family-friendly practices and child-friendly cultures.

Assistant Professor, School for the Future of Innovation in Society

Arizona State University

Zeeshan Habeeb discusses the urgent need for better childcare in academia, where options are currently few and costs high. I wholeheartedly agree, and can think of no more significant life event than parenting—but it also certainly tips the work-life balance. As a faculty member, I believe that this balance or imbalance should be acknowledged when faculty are up for merit and promotion.

There is a precedent for it. During the first years of the coronavirus pandemic, across institutions many faculty up for evaluation were allowed to submit a supplementary COVID-19 impact statement along with their files for consideration. The purpose was to illustrate for reviewers the impact of the pandemic on their academic productivity. While this was a step in the right direction, a system that wholly measures and values individuals by academic productivity, and evaluates everyone on the same playing field, in COVID times or otherwise, is flawed.

While many institutions discuss, and highlight their support for, work-life balance, it is often not the reality that faculty experience. As many faculty who have gone through the review process know, the area that often matters most is research, and the unrealistic expectation of applying for grants each cycle has led to the current situation of overwork, burnout, and productivity trumping all to move up the academic ladder. The most privileged move up the academic ladder most easily and are least likely to be criticized by similarly privileged committee members (e.g., white, male, straight, cisgender, and healthy, with educated parents, adult children, ability to travel for talks and to conferences, fewer significant disabilities, and basic needs met). This reality does not support building a diverse academic community and is inherently exclusionary.

A system that wholly measures and values individuals by academic productivity, and evaluates everyone on the same playing field, in COVID times or otherwise, is flawed.

In this light, I offer a modest proposal. Similarly to reporting on accomplishments in research, teaching, and service to achieve merit and promotion, faculty should also be allowed to report on their efforts in self-care activities and significant life experiences outside academia. This additional reporting would help address faculty retention, promote positive mental health, and acknowledge reality. Instead of having one-offs such as the COVID-19 impact statement or the next future emergency, work-life balance must be part of the norm in review.

My vision is not to deny tenure for those who don’t achieve their goals of self-care or for people who prefer to overwork. But just having such a work-life balance section will bring attention to its importance, provide a venue for people to reflect on their complete journey (which includes academia), and allow those who make decisions on merit and promotion files to understand context and to value work-life balance rather than the unhealthy academic norms with life taking a back seat to academic productivity. So maybe a particular assistant professor didn’t reach the established departmental productivity norms in one scholastic area, but look at that person’s experience outside academia and all they are doing for self-care. Having taken time for self may make them a better researcher, teacher, collaborator, and human being who will stick around academia. Shouldn’t that be the goal?

Professor of Medicine

Department of Social Medicine, Population, and Public Health

University of California, Riverside School of Medicine

Coordinating Against Disease

I support Anh Loan Diep’s call in “Finding My Future Beyond the Bench” (Issues, Fall 2022) for greater communication between researchers and the public. Diep began their undergraduate research in my laboratory, which studies the interactions between the parasite Toxoplasma gondii and the human immune system. For their PhD research, Diep worked on a relatively understudied but increasingly significant fungal infection that occurs in the southwestern United States: Valley fever.

The need for collaboration among clinicians treating patients, scientists studying the molecular mechanisms of the disease, and policymakers could not be greater.

Of all the major classes of infectious agents—viral, bacterial, parasitic, fungal—we know the least about how our immune system contends with and ultimately clears fungal pathogens. Certainly, there is great need for more research and public awareness regarding Valley fever. As Diep points out, the disease is underdiagnosed. In the clinic, it is also often mistaken for other respiratory diseases, delaying treatment and exacerbating symptoms for those with Valley fever. The need for collaboration among clinicians treating patients, scientists studying the molecular mechanisms of the disease, and policymakers could not be greater.

I second Diep’s suggestion that at leading scientific conferences in the field of Valley fever, devoting a section to fostering interactions among scientists, clinicians, and those in public health policy would have lasting impact on disease alleviation. As the author reasons, scientific conferences are something of a missed opportunity to build mutual trust between relevant stakeholders, and to build collaboration and momentum needed to tackle this insidious disease—a vision that many of us support.

Associate Professor

Department of Molecular and Cell Biology

University of California, Merced

Episode 28: Finding Collective Advantage in Shared Knowledge

The CHIPS and Science Act aims to secure American competitiveness and innovation by investing $280 billion in domestic semiconductor manufacturing, scientific innovation, and regional development. But if past government investments in science and technology are any guide, this will affect American life in unexpected and profound ways—well beyond manufacturing and scientific laboratories.

On this episode, Michael Crow, president of Arizona State University, talks to host Lisa Margonelli about the CHIPS and Science Act in the context of previous American security investments. Investments in food security and agriculture in the 1860s and nuclear security in the 1940s and ’50s created shared knowledge that benefited all Americans. Early agricultural programs, for example, turned farmers into innovators, resulting in an agricultural sector that can feed many people with very little labor. In similar ways, today’s quest for digital security could make the country more secure, while also changing how individuals live and work with information.


Transcript

Lisa Margonelli: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine and Arizona State University. In 2022, Congress passed an extraordinarily bipartisan initiative called the CHIPS and Science Act. The act is meant to make the US a leader in industries of the future. It has $52 billion for semiconductor chip development, $200 billion for science, and $10 billion for regional hubs. It’s a lot of money. In today’s dollars, it’s twice the cost of the Manhattan Project for the chips element alone. How could these investments transform American life?

I’m Lisa Margonelli, editor-in-chief of Issues. On this episode, we’re talking to Dr. Michael Crow, president of Arizona State University about previous government initiatives around science and security, and what they suggest about the chips initiative and our possible future. Michael, welcome.

Michael Crow: Hey, Lisa. Thank you. Glad to be here.

Lisa Margonelli: There’s a lot of talk about how CHIPS and Science is unprecedented, but how does it fit into the history of government investments in science and security?

Crow: You know, what’s funny—and a lot of Americans I don’t think remember this or have thought about it—but the American government from its design and its outset has always been scientifically driven. President Jefferson in 1804 formed the Corps of Discovery after the purchase of the Louisiana property from France, and then had Lewis and Clark, then as the captains of the Corps of Discovery, scientifically explore from the Mississippi River in St. Louis all the way to the coast of Oregon at the mouth of the Columbia River—an unbelievable scientific exploration. Then many times in the history of the United States, with the Coast and Geodetic Survey and all kinds of other things along the way, the country just became very, very science driven; very, very knowledge core driven.

Three times prior to the CHIPS and Science Act, the US government stepped up and decided to ensure national security around something that they felt was absolutely essential. The first was our moves in the nineteenth century, in the 1860s, with both the establishment of the Department of Agriculture and the land-grant universities, to make certain that food security would always be maintained in the United States. And now we’ve become the most agriculturally abundant, most agriculturally creative, most scientifically driven, food-secure place that’s ever existed. That was sort of case number one.

Case number two: following the Manhattan Project during World War II, nuclear security became a thing, where we had developed this scientific thing: atomic fission. We had done this during World War II; we’d built all of these labs, and now we knew we had this tiger by the tail that would have both civilian applications and weapons applications—which we needed to basically be the best at, forever, so that we could maintain the advantage that we’d gained. And so the Atomic Energy Commission was formed in 1946, later the ERDA, the Energy Research and Development Administration, in the early 1970s. And this really became a core thing.

A third thing, kind of on the side, was that we decided, after the launch of Sputnik in October of 1957, that we were going to be the masters of space technology. President Kennedy announced going to the moon. NASA was created from the previous agency that had existed since World War I. All kinds of things happened in that space. And in those three areas, food, nuclear, and space, the United States is able to protect all of its interests and to advance its knowledge-seeking requirements in those spaces to our advantage. And finally, now, just recently with the CHIPS and Science Act, we’ve decided that all things digital are so important to the future of the country, like food in the 1860s, that all things digital are so essential that we have to maintain technological—not superiority, but constant technological innovation, constant manufacturing capability, constant ability to be the best at all things digital. So the CHIPS and Science Act is like the agricultural project, the nuclear project, and the space project. They’re decisions by the country to maintain national security around a certain area of technology.

Margonelli: That’s really interesting. And I think what’s in the story of twentieth-century science—we’re pretty familiar with the Manhattan Project and the space program, but we’re a little bit less familiar with what happened in the 1860s. So I want to kind of dive down into that. There was the formation of the agriculture department, and there was also the formation of the land-grant universities. And these things had huge and long-lasting, transformative effects. So let’s talk a little bit about that.

Crow: So imagine it’s 1860. The country’s deeply divided. There’s three people running for president. A person is elected president with around 40% of the vote—that would be Abraham Lincoln. Several states secede from the union; the country’s in crisis. There’s about 30 million people living in the United States at the time, but it’s expanding wildly and quickly, particularly into the West. Food security becomes a question. And then also the notion of inequitable social outcomes becomes a question, as well as our agricultural productivity.

So with Congress realigned, with fewer states present in Congress, two things could be created. One was a national initiative in agriculture, agricultural science, agricultural trade oversight, agricultural ideas and thinking, and so forth: agricultural innovation. So that’s the Department of Agriculture. And then along the way, a guy named Justin Morrill, who was a congressman from Vermont at the time, had thought for some time that each state should sell some of the land given to the states by the federal government to build a college for agricultural and mechanical arts, open to the sons and daughters of farmers and mechanics (which was 90% of the population at the time).

And so that got passed in July of 1862. The states set up land-grant schools like the University of California, the University of Illinois, Michigan State, Purdue, Cornell, MIT. In each of those states and many others—Iowa State, where I went to undergraduate school, was one of those schools—those universities then became, and the history shows this, unbelievable, transformative elements on two dimensions relative to the United States. First, we moved into unbelievable agricultural security and agricultural productivity, and never had the food insecurity that then existed in Europe, existed in Asia, has existed in other places around the world. And then food has just been taken for granted in the United States because it’s been such a perfect area of national security. In addition to that, the innovation created out of these schools then became the driving force for the post-Civil War industrial success of the United States. A lot of the literature has looked at the role of the land-grants.

It’s really quite remarkable. Those land grants, several of them became among the first research universities at scale. The United States accelerated its economic evolution, its social evolution, all these things were driven by basically stabilization of agriculture, movement of agriculture into a powerful economic driver. And then all the engineering solutions and special training and special people that came out of these schools were really, really powerful to the late-nineteenth-century transformation of the American economy.

Margonelli: That’s really interesting because reading what you had written about this sort of sent me back to Hunter Dupree’s book on the history of science and the federal government. And two things came out of that that struck me. One thing is that that transformation of the US science and knowledge enterprise was not really anticipated when they started. When the agriculture department started, it was run by a milkman, I think, and it didn’t know how to generate knowledge. It didn’t know how to solve problems. The Texas fever among cattle got completely out of hand. They had all the wrong ideas, and they gradually moved towards this very unified way of looking at problems and solving problems. And they also kind of transformed, on a very intimate level, farmers all across the country into scientists.

Crow: Yes, you’re absolutely right with that history. And so what we learned was that there was collective advantage to shared knowledge; there was collective advantage to shared training and shared experience. So over time, county extension offices were built in every one of the 3,000-plus counties in the United States. There were agricultural extension specialists that were helping individual farmers to accelerate their innovation. Hybrid corn varieties, ways to take care of pests and insects and weeds, all kinds of things, all enhancing productivity and also enhancing farmer success. So throughout European history and other parts of the world, farm collapse, agricultural collapse, economic collapse, bread riots, food riots, starvation, all these things were avoided here because we found a way to turn every individual farmer into a state-of-the-art agriculturalist. They could use their own ingenuity, but then they could draw from the collective knowledge of the country.

And yes, the Department of Agriculture started the same way that the Department of State [did]—I mean, I think the first patent agents and spies for the United States in terms of acquiring other technology reported directly to Hamilton and Jefferson when they were both cabinet members in the first administration. And so all these departments started out as small, unorganized things. But what happened was then the value of connection and collective knowledge and core scientific knowledge and core technological knowledge became really, really important to the success of the country.

Margonelli: Yeah, it’s really a fascinating transformation. I think one of the other things that came up, another parallel to CHIPS and Science, which has been discussed as industrial policy or the government getting out of its lane and getting involved in working directly with industry, was that when these agricultural acts started, they essentially transformed the role of government into working on the general welfare and generating knowledge. And we have something sort of similar happening here.

Crow: Well you know, what’s weird about that is it’s always funny to me when people talk about interference of the government. In fact, they’ve forgotten to go back and read the founding documents or the debates that occurred in the summer of 1787. So a lot of things got left on the cutting room floor in Philadelphia. The summer of 1787 left a lot of things that were proposed and not brought into the Constitution and then those things that were put into the Constitution. And the “general Welfare” remains in there. And people just forget, what does that mean? Well, how about food security? How about nuclear security? How about making certain that we never have to live without the essential digital devices that we’re going to need for every aspect of our life, our drinking water, our clean air, our cars, our electric vehicles, our computational tools, our learning assistants, our everything—all these things require these digital assets.

If you go back, it’s kind of weird, all these people who are against earmarks. So Samuel Morse’s funding for the first telegraph was an earmark from Congress. The wind tunnel that ultimately became the Jet Propulsion Laboratory was an earmark. So this notion that somehow you can’t have politics involved in building national capability, I don’t get that. And then there’s just this weird thing about, “Well, the government shouldn’t be involved in this.” Well, it’s not the government that’s involved in this. The government is facilitating collective knowledge. It’s facilitating base knowledge from which everyone can benefit.

If you look at somebody like George Washington Carver and what he was able to do in organizing knowledge about the peanut and the growth of the peanut, helping Black farmers in the South after Reconstruction to gain wealth and move forward with things. I mean, no individual farmer could do that by themselves.

Every individual farmer could be a better farmer because of the collective knowledge. And then from that, the industries that were developed from that base in the United States are unbelievable. It’s almost 20% of the economy, if you look at all things that agriculture touches just in that particular area.

Margonelli: I think this is a good time to move on to the second major initiative, which is after World War II, when you had the science and security initiative that had three elements to it. It created an infrastructure, it mobilized talent, and it had critical supply chains attached to it. Can you talk a little bit about how that transformed?

Crow: Well, what was interesting is that President Franklin Roosevelt, in the summer of 1940, was already speculating that the United States was highly likely to become involved in World War II. We had not been attacked yet by the Japanese, and in general, the public did not want to get into the war. But the president’s job is also to be prepared for the war. So he called on a person named Vannevar Bush, who at the time was the president of the Carnegie Institution of Washington. He had been the vice president for research at MIT in the 1930s, and he was one of the founders, after his PhD in electrical engineering in the 19-teens, of a company called Raytheon. So he was sort of a polymathic computer person, design electrical engineer. He could do all these things.

Margonelli: He was an amazing writer too.

Crow: Yeah, he was, absolutely. So he was called upon by President Roosevelt to create a thing that was ultimately called the Office of Scientific Research and Development, OSRD. And that became the mechanism by which President Roosevelt said, “I want you to bring all of the talent of American universities and American science and American technology to bear so that when we enter this war, we can have as few casualties as possible and we can end this war as quickly as possible,” which is a fantastic objective. And then when we entered the war in December of ’41, he accelerated unbelievably the scientific capabilities of the United States, particularly at the universities, building the Manhattan Project, launching other initiatives. So as you said, he brought talent to bear, he brought ideas to bear. He brought structures and mechanisms and, in a sense, transformed the way that we thought about science as a mechanism to protect democracy—and science as a mechanism to advance our economic and health success.

So much so that by the end of the war, just as President Roosevelt had passed away in April of ’45, just prior to that, Bush had been asked to put together a report on what do we do with all this science capability? And he wrote the famous report, Science, the Endless Frontier came out in July of 1945. President Truman accepted it. And from that point forward, you see that we got out of that the Atomic Energy Commission, we got the National Science Foundation, we got the expansion of the National Institutes of Health. The United States became the most significant scientific place in human history in terms of discoveries and technologies and moving things forward. And research universities began growing up all over the place, and the economy began doubling and doubling and doubling and doubling. And so what happened was we secured ourselves, in a sense, nuclear defense, which has proven to be complicated but positive. But we also designed out of that an unbelievable creative enterprise engaging the entire country.

Margonelli: It’s also interesting because it also kind of remodeled the relationship between government, research universities, and industry. It’s been called sort of the golden triangle or “a new kind of post-war science” that blurred the traditional distinctions. And that has proven to be an incredibly powerful engine of change in innovation through the development of GPS as it migrated out of military applications and into our cars and our phones—and right now, as we talk, smartphones, AI, jet engines, all of this sort of stuff moved from the military and security sphere out into our lives.

Crow: Well, what happened was that these research universities, which began being built in the 1870s with Johns Hopkins, and in the 1890s with Stanford and the University of Chicago; then a bunch of the public universities and the land-grant universities came in and became research universities. But even by 1939, they weren’t heavily funded by the government. They were doing their own research. They were funded by some foundations, there were some private entities. And then when they were asked to rise up to the national challenge to carry out a global conflict to advance the United States to victory on two massive war fronts at the same time, technology played an unbelievably important role in all of that, from proximity fuses, to other kinds of devices, to code breakers, to atomic weapons designers and torpedo developers—everything that you can imagine. That quickly brought the war to an end; the main combatants in the form of Germany and Japan transformed forever into functional democracies of significant economic outcome.

This was perceived at the moment as an unbelievable transformation in the role of universities, and it just has never stopped. So what began in ’41 and ’42 accelerated in the fifties, accelerated in the sixties, and has continued to accelerate, which has then fueled, as you said, the internet and advanced technologies. It fueled us becoming the unbelievable developers of these advanced semiconductors and microchips, advanced materials research, advanced computation research, medical research. All these things got going, and now it is a core part of who we are—and in fact has been emulated by others, which is making others nervous now that other places are “catching up” or passing us or whatever, because they’ve decided to take on the same model, build research universities, fuel these research universities and become competitive with the unbelievably successful United States.

Margonelli: Yes, and that actually brings me to my next question, which is: you’ve called failing to secure digital security a strategic error. What do you mean there?

Crow: So what I mean by that: we developed the fundamental material sciences, the fundamental engineering, the fundamental designs, the breakthroughs in the first semiconductors, the breakthroughs in what was the first transistor, all the things that came—the transistor was ’47, and in the fifties and in the sixties, these semiconductor materials were being built. We then built the most advanced chips, microchips, built the most advanced systems. Then because of costs of manufacturing being potentially lower in other parts of the world, manufacturing got offshored, development got offshored—so much so that by the time we get to the 2020s, the late teens and the 2020s, we find ourselves with a small manufacturing base, a significant research base, and our supply chain interruptible. So the strategic error was to not see these as a national asset in the way that we see nuclear and the way that we see food, both of which are inseparable from our existence. In this case, we thought that this was only a commercial thing. It’s not only a commercial thing. These chips have become as essential as water to our success going forward.

Margonelli: That’s interesting too, because food, it has national implications, but it also has sort of personal implications, as we’re seeing with this talk of taking TikTok off of our phones and things like that.

Crow: Well, the technological applications using these technologies are slightly ahead of our social thinking right now and our ability to understand these things. So we’ve got all kinds of technology manifestations that are causing social disruption and social upset, and we have potential for security threats, we have potential for cultural threats. We’ve got all these things that are going on. All those things are transitory and will be addressed. What’s not transitory is the fact that our species is now enabled by these microchips, which are basically enhancing every single individual. All of us carry, or most of us carry, an iPhone or something like an iPhone, or an Android phone or something like this. Well, that’s a supercomputer attached to your body, connected to all the other supercomputers that are out there. And with ChatGPT and other things coming along, those will become, over time, powerful assistants to every person, every organization.

And so what’s going to happen here is that our species, for the first time, has now created a foundational tool—a computational device in the form of a semiconductor, which is an electronic system—which is then reducible because of advanced science to up to, I mean, I think the most advanced chip that IBM has has 50 billion transistors on a single microchip. My phone has, I think, only 12 billion transistors on the microchip. So everything will change. Medicine will change, business will change, computation will change, learning will change. Everything will continue to evolve. And so, like food and like nuclear, digital will be that kind of thing. And we’ve just come to that realization, and the CHIPS and Science Act is that.

Margonelli: Yeah. It’s so interesting when you’ve really put it in a larger context of how far this may take us and how it may change and transform our lives and our fundamental relationships. I think the question here is, what can we learn from the past about how CHIPS and Science can have the same transformative potential?

Crow: Well, one thing we need to learn from what we learned in agriculture is that you’ve got to work at the level of the people. You’ve got to think sociologically about the outcome of these kinds of technologies. You’ve got to do technology assessment. You’ve got to understand what these technologies might do. You’ve got to think about how to educate the people to then fully take advantage of the technology and become, as we have in agriculture—basically spurring development across the entire economy, not just in concentrated corporations. That will then get the most fueling of all of “Schumpeter’s forces of creative destruction,” the terms that he used, Schumpeter being the Austrian economist who thought about what is innovation? How do you drive innovation? So innovation can’t be just these big chip manufacturers or the big tool manufacturers only. They have to be then spurred by whole new ways of thinking about chips and using chips and using technology.

So that’s a lesson from the past. Another lesson from the past is to basically not take our foot off the gas. This can’t be on again, off again, on again, off again. It has to be continuous innovation, continuous forward movement.

The other thing is that competition is real. We can’t stop competition from other parts of the world. We can only win. And so if you try to stop something, you don’t win. If you try to block something, you will lose. And so you need to understand global competition.

And then I think the other thing that we need to think about in terms of a lesson from the past coming out of nuclear is that we were clueless as to all of the ultimate implications of nuclear weapons technologies, certainly. And so now we have unmitigated nuclear proliferation, which hasn’t been thought through, hasn’t been managed. And so how do we manage the negative outcomes of some of these technologies more carefully? That’s certainly a lesson from the past.

And I think another lesson from the past is that sometimes we don’t think about what it all means. So for instance, through agricultural technology development, we eliminated the agricultural workforce. OK, well, that happened kind of gradually, and we adjusted, but we had a deep cultural impact on the country because much of the country was agriculturally based. And so these digital technologies will also have huge workforce implications, and we should think about them ahead of these changes as opposed to during or after these changes. And so those are lessons from the past.

Margonelli: Yeah, recalling what happened with the agricultural act of transforming people’s ability to be scientists in their own lives and have that contribute to their own satisfaction and ability to feed themselves and their families has some interesting parallels for this.

Crow: Yes. And so one parallel is—it certainly is the case. In fact, on a project that I’m working on as a part of the National Advisory Committee on Innovation, which I’m a member of, I’ve been arguing that we need to make certain that we can have, down to the level of communities, incubators for the uses of chips in ideas that teenagers and others are coming up with. And how do you help people to build new kinds of chips and new kinds of activities and so forth. You got to look at these things as not just the realm of the massive global corporation, but the realm of any tinkerer, any innovator. If you read [Walter] Isaacson’s book, The Innovators, it’s a fabulous story about how some of these innovations in digital technologies emerged. They were not the product of just the big corporations. They were the product of all kinds of people and the big corporations.

And so what we need is both, and then more. So how do you facilitate all of that? And also, how do we get more even economic benefit across the country from these kinds of technologies? What could be developed using these kinds of technologies in new applications to help manage, I don’t know, the Mississippi River, or grow rice better in the delta regions of Arkansas and Louisiana along the Mississippi River and Tennessee? How do we do all those things? We need as much of this to be, like the agriculture department, as localized as possible.

Margonelli: That brings up two other interesting parallels to the agricultural act. One was the realization that manufacturing this knowledge could help raise everybody’s boat. And that is kind of called into question a little bit with CHIPS and Science: are we going to try to raise the knowledge only in the US and raise everybody’s boat in the US, or are we still a global knowledge producer? And that seems like something that’s going to have to be negotiated.

Crow: Well, I mean, yes, it’s complicated because some of these technologies can be particularly handy in weapons systems. And so what one wants to think about is how do we float all boats to drive up all economic activity? I’m going to round the global economy up to a hundred trillion dollars. Well, there’s no reason that it couldn’t be a thousand trillion dollars, be environmentally clean, drive up per capita income across the entire planet, drive us into all kinds of new things. Well, we’re not going to do that if we hold onto these digital technologies in a way that everyone doesn’t benefit. We just have to find a way to make certain that we reduce the probability of kinetic combat. There may be ways where these technologies can be very helpful to us in that also. We just have to think it through. We’re not thinking it through enough.

Right now we are heavily concerned about the rise of new major competition in China, new major competition in other parts of the world. I’m all for competition. Competition makes you perform better, harder, cheaper. There’s all kinds of ways that you solve things. We just have to make sure that what we get out of this is global evolution and fair competition.

Margonelli: We are at a very interesting point in history because we are at the beginning of this sort of arc of another 80 years. As you mentioned, we’ve had 80 years of transformation from the initial sort of nuclear security work. And we’ve had 150 years of evolution from the agricultural work. And as we start down that path, history shows us that we don’t actually know where we’re going, but we have to actually keep our eyes on what things are important as we go forward.

Crow: Well, what’s interesting, no one in 1940 would’ve predicted where we are now with either nuclear weapons, nuclear power, the emergence of fusion power, the Perseverance rover on the surface of Mars being nuclear powered, all these things that are happening—no one would’ve thought about any of that. We will have nuclear-powered spaceships, we’ll have all these things going on. All these things that are happening, no one would’ve predicted any of that. And then in agriculture, no one would’ve predicted that only 2% of the American population would be involved in production agriculture, feeding 340 million people in the United States and probably another 300 million people around the world, something like that, all from 2% of the American population. No one would’ve predicted that. No one would’ve thought about sustainable agriculture or whole new ways to build plant-based meats and all these other kinds of things that are going on. Not a single person could have thought of that.

And here we are now in 2023, thinking about what will happen between now and 2100 when in fact these technologies, these digitally based technologies will be more impactful than either nuclear or food. No one can predict where it’s all going, which means then, therefore, that we need more technology assessment capabilities, more predictive analytics, a deeper understanding of what these things might do, and just more thoughtfulness. Not to predict, because we’ll never get the predictions correct, but to understand and to adjust as we go along the way.

Margonelli: You have been really active in CHIPS and Science in Arizona, and as you think forward for how Arizona’s life, and not just the whole state, but the individual life could potentially be transformed, what are the things that you hope to steer towards and what are the things that you worry about?

Crow: One of the things I think that will happen for certain is that Arizona already is a huge manufacturing center for semiconductors and will become even more than that: it’ll become the most concentrated semiconductor manufacturing place on the planet. And then all of the supply chain related to that, which then also connects to the battery companies that are here and the electric vehicle companies that are expanding here. So empowerment of all kinds of renewable energy systems, renewable tools, renewable devices, all those kinds of things. All of that will be advanced here. And then I think, beyond that, then what happens in all of that is, how does one find a way in Arizona to become the place where the best renewable energy-based, best sustainability-based economy can be built using every electronic computational tool imaginable?

So you can better manage water with more data, more data, more data, more data, more data. You can better manage all complex systems, like adjustments to all of the complexities of global management, with more computational outcomes. We don’t have the computational capabilities to manage the complex interfaces that we have with the environment. So if we want to better manage our relationship with the environment, we need more intensive tools to do that, and we need companies building those tools. And so I’m hopeful that Arizona will be a place where a lot of those things grow.

Now, the downside here is there’s some chance of uneven economic opportunity for the population because of educational differences. And we’re working very heavily to address that at ASU by giving pathways to everyone to have a chance to participate. There are unresolved issues of the waste streams from these advanced digital technologies, which have to be very seriously thought about because the chemicals are particularly hazardous in many cases.

And then I’d say that there is a huge worker transformation that we have to worry about. So as these computational tools become—you know, the reason that autonomous vehicles don’t work as well as we would like them to work is that we don’t have computational tools that are good enough. You get a computational tool that’s 20 times better than a chip today, and you can now calculate almost anything, any error function. And then all of a sudden half the drivers don’t have jobs, half the servers don’t have jobs in restaurants, the grocery stores, as you’ve already seen if you’ve been to one lately. There’s nobody that works there. You just check out yourself. And so what that means then is that I think the downside that we have to think about is how do we build an economy that is robust for everyone with these technological breakthroughs driven by these digital technologies? And this happened in agriculture; it’s going to be more complicated with digital. And so we’re going to have to really, really worry about this significantly.

Margonelli: To learn more about previous science initiatives mentioned in this conversation, visit the podcast page at issues.org. You can email us at [email protected] with any comments or suggestions. And you can subscribe to The Ongoing Transformation wherever you get your podcast. Thanks to our podcast producer, Kimberly Quach, and audio engineer, Shannon Lynch. I’m Lisa Margonelli, editor-in-chief of Issues in Science and Technology. Thank you for joining us!

Revisiting the Wireless Body

Though the notion of a “wireless body” is often presented as a recent concern in the age of electronic medical records, or pandemic-era telemedicine, Jeremy Greene shows us that this fantasy has been a structuring concern in medical care for the past 70 years or so. To many people, this alone might be a stunning finding—but it stuns only if we are without our history.

The notion that we might be able to quantify continuously the inner workings of the body (and then surveil and disseminate that data), and therefore prevent excess death, increase wellness, and render transparent the black box of the body, may seem either still speculative, or like a moral good, or both. While the fantasy is simple and longstanding, its implications are less so.

In turning the patient’s inner workings into streams of data, we complicate the ethics of medicine by blending it with the ethics of data science.

The media of medicine have long promised not merely to reveal the body, but to democratize medical care in the United States. These media are often articulated as transcending barriers to care: if we only connect the right devices to the right people in the right system, we might retrieve those who traditionally have been excluded from the scene of care. For, so the story goes, the scene of care would go to them.

In “The Wireless Body” (Issues, Fall 2022), Greene says, not so fast. In turning the patient’s inner workings into streams of data, we complicate the ethics of medicine by blending it with the ethics of data science. And we do so haphazardly. Even the question of what counts as medical data is a tricky and deeply concerning one. What might one system of patient protection, say the Health Insurance Portability and Accountability Act (HIPAA), protect, and what might be designed to fall outside its remit? Greene shows us that when patients become users, and we turn our companion technologies into medical devices, we might open up the black box of the body, but we also hand over our most sensitive medical data. As the author briefly points out, the implications of that data can be volatile, as in the case of consumer apps that track menstruation and fertility in light of the US Supreme Court’s recent Dobbs decision on abortion. Markets shape the medium of care much more than practitioners and patients, and so do politics.

Greene shows us that always-on medical devices, from the wearable Holter monitor that records heartbeats to the tools that might be in your pocket right now, are not some neutral or obvious good. Instead, if we read them closely, we can use them to reveal the contradictions of American medicine and its competing impulses: to democratize and to scale it, to make it universal and to make it intimate with individuals, to provide patients freedom while indoctrinating them into bodily surveillance, to make them transparent objects of study while privatizing care and linking that care to proprietary devices and the biases and interests they carry.

Assistant Professor of Informatics

Indiana University

Can CHIPS and Science Achieve Its Potential?

Amid rising concerns over the United States’ capacity to innovate and address large-scale societal challenges, the CHIPS and Science Act represents a positive and well-timed achievement for legislators and their staff. As multiple authors point out in a special section of the Fall 2022 Issues that explores how to help the act deliver on its promises, the 400-page law seeks to address goals in a variety of areas: semiconductor production; skills development in science, technology, engineering, and mathematics; regional innovation; and discovery science, among others.

In “An Inflection Point for Technological Leadership?” Steven C. Currall and Venkatesh Narayanamurti raise a particularly salient and subtle point: the attempt in CHIPS to nudge parallel investments in discovery science and technological invention, primarily through reforms to the National Science Foundation. The intimate interplay between discovery and invention has yielded breakthroughs in the past, and such linked investments may offer potential going forward. Even before CHIPS, the NSF boasted an appealing mix of discovery science grant programs, multidisciplinary research centers, and industrial partnerships. The new law creates an opportunity to deepen these networks and expand the funding palette to accelerate discovery, invention, and commercialization together. Whatever the outcomes of CHIPS semiconductor funding, the other areas of science reform flowing from the legislation may yield real long-term benefit for US innovation.

But a throughline of the CHIPS coverage is the idea of “potential.” If the long-run goal is to enhance US science and innovation leadership and industrial capacity, then the bipartisan vote for the CHIPS and Science Act really represents the end of the beginning, as agencies move on to implementation and Congress performs oversight.

Whatever the outcomes of CHIPS semiconductor funding, the other areas of science reform flowing from the legislation may yield real long-term benefit for US innovation.

What comes next? The most crucial and immediate need is robust appropriations, as Currall and Narayanamurti correctly identify, including for newly created programs on regional technology hubs, nuclear research instrumentation, and microelectronics research and development, among other areas. As of mid-December 2022, Congress is already well past the fiscal deadline and has yet to reach a deal on the overall spending that would facilitate CHIPS investments. There’s a chance that appropriations may stretch into 2023, which would represent a failure of Congress’s first test to put the science provisions into practice. The multiyear time horizon for CHIPS means future Congresses—and possibly a new administration in 2025—will also be on the funding hook for fulfilling the CHIPS and Science vision.

Beyond appropriations, the federal government should think about honing its strategic acumen. CHIPS directs agencies to invest in critical technology areas and mandates a broad science and technology strategy from the White House Office of Science and Technology Policy (OSTP). In light of these directives, government should expand its capacity to analyze and understand data, trends, and national performance in emerging and critical technology. This would raise its tech IQ, improve federal ability to spot critical supply chain trouble, and ensure smart investment decisions with an eye to long-term competitiveness. Congress can help lead in this area, and should encourage OSTP to take its role as strategist and quarterback seriously.

The new Congress will also have an opportunity to focus on research and industrial capacity in a sector that didn’t get much attention in CHIPS: space. Congress and the agencies could work together to address space junk, expand technology investments in in-space manufacturing and advanced propulsion, modernize federal space technology acquisition, or reform global treaties to facilitate the peaceful development of space. These and other moves could do for space what the CHIPS and Science Act sought to do for microelectronics, in a similar spirit of enhancing US competitiveness in the long run.

Associate Director for R&D and Advanced Industry

Federation of American Scientists

The Urgent Need for Carbon Dioxide Removal

Carbon dioxide removal, or CDR, has emerged as a key climate mitigation activity for limiting the global temperature increase to well below 2 degrees Celsius. In 2022, the rise in global temperature reached 1.1 degrees, resulting in more intense and frequent droughts, floods, and fires, as well as rising sea levels. Without immediate emission reductions, carbon dioxide removal, or both, society is on course to exhaust within around 10 years the remaining carbon budget for staying below a 1.5-degree rise, given current levels of emissions from fossil-fuel combustion.
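
To see roughly where such a timescale comes from, consider a back-of-envelope sketch. The inputs below are my own illustrative assumptions (approximately the magnitudes reported by the Intergovernmental Panel on Climate Change), not figures drawn from the article under discussion:

```python
# Back-of-envelope check of the "around 10 years" carbon budget claim.
# Both inputs are illustrative assumptions (roughly IPCC AR6 magnitudes),
# not figures from the article.
remaining_budget_gt = 500  # assumed remaining CO2 budget for 1.5 C, in gigatons
annual_emissions_gt = 40   # assumed current fossil-fuel CO2 emissions, Gt/year

years_left = remaining_budget_gt / annual_emissions_gt
print(f"Budget exhausted in about {years_left:.0f} years at constant emissions")
# Prints about 12 years, in line with the "around 10 years" cited above.
```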

CDR technologies provide a critical opportunity to implement negative-emissions technologies to support the global transition toward renewable energy portfolios needed to reach net-zero carbon emissions. In “What Is the Big Picture in Carbon Removal Research?” (Issues, Fall 2022), Gyami Shrestha reviews the history and contemporary landscape of CDR research and observations supported by the US government. CDR emerges as an active area of interagency research, technological development, and implementation, now mapped by an extensive survey carried out by the Interagency CDR Research Coordination Group.

CDR technologies provide a critical opportunity to implement negative-emissions technologies to support the global transition toward renewable energy portfolios needed to reach net-zero carbon emissions. 

Several thematic areas emerge from the survey that link the federal activities across areas of technology development, implementation, and monitoring. Technology areas include direct air capture and storage, soil carbon sequestration, reforestation and afforestation, enhanced mineralization, ocean capture and storage, and biomass removal and storage. Technology transfers to the private sector allow for scaling of activities such as direct air capture or mobilization of resources for climate-smart agriculture and forest management. And monitoring includes enabling new multiscale measurements of carbon dioxide from tower networks, aircraft, and spacecraft missions. Combined, the investments situate CDR as an important component of a climate mitigation strategy.

Several challenges remain. These include evaluating the full effects of CDR technologies on other ecological services that society depends on; implementing CDR while taking into account issues related to environmental justice; and addressing uncertainties and responsibilities for monitoring, reporting, and verification. These are multidisciplinary challenges that require expertise in Earth system science, social science, policy, land management, and engineering, and that need to be facilitated with continued coordination and transparency. Establishing unique partnerships between the public and private sectors can provide novel opportunities to rise to the challenge, as demonstrated recently through new satellite constellations successfully detecting point-source emissions of methane from oil, gas, agriculture, and landfills.

Research Scientist

Earth Sciences Division

NASA Goddard Space Flight Center

Gyami Shrestha reflects on her experiences during her tenure as director of the US Carbon Cycle Science Program, especially with respect to coordination of federal research on carbon dioxide removal (CDR) approaches and technologies. She describes how the program, along with the Interagency Carbon Dioxide Removal Research Coordination Group (I-CDR-C), produced a compendium of federal research activities in this arena. This compendium, as she notes, helps to “identify gaps and establish fruitful new collaborations and partnerships” among participating agencies, a crucial element for ensuring the success of federal CDR efforts.

While the breadth of the compendium is exciting, it is an incomplete snapshot of the full scope of federal CDR research efforts. Contributions to the compendium, and in fact participation in the I-CDR-C itself, are voluntary efforts by participants who have the time, energy, and commitment to engage with their colleagues on this topic. There are federal departments, agencies, and programs that fund or conduct relevant research but are not currently involved with I-CDR-C work, whether due to lack of awareness or limitations on capacity. Without a mandate for all relevant agencies and programs to report and coordinate, there will be missed opportunities and perhaps unforeseen consequences in this rapidly evolving arena.

Without a mandate for all relevant agencies and programs to report and coordinate, there will be missed opportunities and perhaps unforeseen consequences in this rapidly evolving arena.

As reported, the I-CDR-C found that some of the activities cited in the compendium were not explicitly focused on CDR, but instead were carbon cycle science research efforts that are foundational to the design and implementation of CDR work. As Shrestha notes, there are “fundamental questions [about the carbon cycle] that have yet to be sufficiently answered.” Inclusion of these research activities in the compendium demonstrates an understanding of the importance of this underlying research to CDR, but there needs to be clearer articulation of pathways to support and enhance the connections of research to operations and back again. How can programs that are typically considered more “basic” science ensure that their insights inform applications like CDR? How can CDR activities potentially contribute to our fundamental knowledge of the carbon cycle and other Earth and social sciences?

Ongoing interagency collaboration through the I-CDR-C can help facilitate these matters, as can engagement with the broader research community. Programs such as the North American Carbon Program and the US Ocean Carbon and Biogeochemistry Program provide space for the ongoing development and support of communities of practice around carbon dioxide removal that span sectors and domains, and both programs have held trainings and workshops on the topic that have generated significant enthusiasm. The urgency and complexity of climate change threats require collaborative, iterative, adaptive approaches to research and applications that not only cross traditional disciplinary boundaries but will likely also require new organizational and institutional frameworks for how we approach science altogether.

Coordinator

North American Carbon Program

Missouri Lawmakers and Science

In “How Missouri’s Legislators Got Their ‘Science Notes’” (Issues, Fall 2022), Brittany N. Whitley and Rachel K. Owen examine the value and challenges of bringing scientific information into the vast array of decisions that state legislators must make each year. Of the more than 2,000 bills considered by the Missouri General Assembly annually, many impact the daily lives of people across the state, changing how they connect, work, learn, and live. And eventually, many of these seemingly “local” decisions add up to national impacts.

We often hear concerns that people do not care about science or do not trust experts. However, experts often do not show up in ways that center the user’s actual problem, respect the context and complexity of the decision being made, and provide the scientific information in a way that can be valued and understood across backgrounds and party lines. Whitley and Owen provide a compelling approach for addressing these challenges.

There is an additional underlying challenge I would like to bring to this discussion, one involving work that funders do not like to support or sustain: few groups have the breadth of expertise to understand the landscape of emerging scientific information, aggregate that learning into knowledge, and translate that knowledge into useful inputs for decisions.

There is a fundamental need for trusted translators in our society.

That funders of all types will pour billions of dollars into science, but far fewer dollars into sustained efforts to make science results useful, is a fundamental flaw in our system.

If we want the funding the United States pours into science—over $40 billion of federal funding for basic research annually—to lead to real-world solutions and to impact how policy leaders make decisions, it will be necessary to provide those leaders with information in ways they can hear, understand, and trust.

The scale of the challenge is often glossed over. Looking at just the Scopus database, curated by independent subject matter experts, we see that the overall scientific literature as characterized by the National Science Foundation is growing at almost 4% each year, from approximately 1.8 million publications in 2008 to 2.6 million in 2018. That adds up to roughly 20 million articles from Scopus alone in just a 10-year period, providing new information that is supposed to be building on itself. But the articles are often laden with jargon, highly academic, and focused on other experts in the field. They also tend to highlight novelty instead of offering a road map for how to actually use the information and aggregate it into knowledge.
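
As a quick check, the cited figures hang together; here is a minimal sketch of the arithmetic, assuming steady compound growth between the two endpoint counts quoted above:

```python
# Verify the publication-growth figures cited above: roughly 1.8 million
# Scopus-indexed articles in 2008 rising to roughly 2.6 million in 2018.
start, end, years = 1.8e6, 2.6e6, 10

# Implied compound annual growth rate between the two endpoints
rate = (end / start) ** (1 / years) - 1
print(f"Implied growth rate: {rate:.1%}")  # about 3.7% per year

# Cumulative articles over the decade, assuming steady growth from 2008
total = sum(start * (1 + rate) ** y for y in range(years))
print(f"Total over the period: {total / 1e6:.0f} million")  # about 21 million
```

The implied growth rate is just under 4%, and the cumulative total of about 21 million articles is consistent with the roughly 20 million cited.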

Who is supposed to aggregate knowledge from those tens of millions of papers over time? This is a daunting task for an expert; it is an unreasonable if not impossible expectation for legislative staff with little support. That funders of all types—federal, state, philanthropic, and industrial—will pour billions of dollars into science, but far fewer dollars into sustained efforts to make science results useful, is a fundamental flaw in our system.

I applaud the progress being made by the Missouri Science & Technology (MOST) Policy Initiative, as the authors document, along with the achievements of other state-level science programs in New Jersey, California, and Idaho, among others. But if we believe that evidence-based decisions are critical to solving society’s most pressing problems, we must rethink how we support and sustain those organizations actually doing the work.

Flagg Consulting

Former Deputy Assistant Secretary of Defense for Research

As a Missourian, I applaud the efforts of the visionary founders and the savvy of the current leadership of the Missouri Science & Technology (MOST) Policy Initiative and their contributions to sound science-based policy and legislation in the state. As Brittany N. Whitley and Rachel K. Owen describe, in a truly short time and in the context of a part-time legislature (never mind the restrictions of a global pandemic), the MOST fellows have already demonstrated remarkable effectiveness and have garnered the respect and appreciation of our state lawmakers.

I have always hoped that the highly successful model of the American Association for the Advancement of Science’s Science and Technology Policy Fellowships, which has helped integrate scientific knowledge into the work of the federal government, would be, in the true spirit of federalism, translated into state houses and local governments. MOST is among a small number of state-based programs demonstrating the effectiveness of such translation. Whitley and Owen provide an excellent historical record of the importance of having sources of solid, nonpartisan scientific knowledge informing the many decisions state governments make that can impact, for example, the health of their citizens, the future of their environmental resources, the strength of their economies, and the quality of their educational systems.

It is well known that private funders typically see themselves as the source of “start-up” funding and not a source of long-term sustaining support.

The piloting of state initiatives would not have been possible without the support of private funders. Investment by the Gordon and Betty Moore Foundation was integral to the launching of several state programs, and in Missouri, the James S. McDonnell Foundation contributed significantly to launching MOST and has recently renewed its support. So, in what is otherwise a positive article, I was surprised to read the negative slant of the discussion concerning the role of private funders.

I am sympathetic to the plight of programs, such as MOST, whose fellows are engaging in novel undertakings requiring both the building of trust relationships and the time to develop a record of accomplishment. However, it is well known that private funders typically see themselves as the source of “start-up” funding and not a source of long-term sustaining support. Philanthropy is a source of social venture capital—with the return on investment measured in terms of contribution to the common good. For philanthropy to continue as a source of venture funds, it has to carefully evaluate the duration of the commitment it can make to any one beneficiary. While I am certain MOST will continue to garner support from private funders, it is likely that the identities of the donors will change as the organization’s maturation changes its needs. It is also likely that the leadership of MOST will continue to grow more skilled at communicating its accomplishments and requesting the necessary resources to continue its essential work.

I fervently hope that MOST will garner long-term and sustainable support from the constituency it serves so well, and that the state of Missouri and its citizens will recognize the value that MOST fellows provide to informed governance by appropriating funds to the program.

President (retired)

James S. McDonnell Foundation

The Vital Humanities

The humanities bring a range of important perspectives to bear on the scientific and technological issues with which institutions such as Georgia Tech wrestle, as Kaye Husbands Fealing, Aubrey Deveny Incorvaia, and Richard Utz note in “Humanizing Science and Engineering for the Twenty-First Century” (Issues, Fall 2022). I want to start from that general argument in favor of interdisciplinarity and make a slightly different case, however.

First, it’s important to note that the humanities are not the arts. Humanists are trained in analysis and interpretation; they are not trained in aesthetic production. Thus, when the article cites case studies from a Georgia Tech publication called Humanistic Perspectives in a Technological World that turn out to be about technical writing, music composition, and theater production, I worry. None of those fields center analysis—they all focus on production. 

Collapsing the arts into the heading “humanities” is not uncommon. When I went to the Washington, DC, launch of the report cited in this article, The Integration of the Humanities and Arts with Sciences, Engineering, and Medicine in Higher Education: Branches from the Same Tree, I was struck by the poster presentations, which included projects that featured dance, music, art, and poetry. But not a single one of them included the humanities. Nothing analytical, critical, or interpretive. When “arts and humanities” is the framework being used, the humanities tend to disappear. It’s easier to talk about the arts: everyone knows what music or poetry is. It’s harder to be concrete in talking about philosophy or literary criticism. But what philosophers and literary critics do is just as essential as what musicians or poets do: they enable us to interpret the world around us and to posit a better one.

What philosophers and literary critics do is just as essential as what musicians or poets do: they enable us to interpret the world around us and to posit a better one.

So what I want to point out is the specific value of integrating humanities into science and engineering by recognizing the expertise of humanities practitioners. That expertise is in visual analysis (art history), ethics and problem-solving (philosophy), close reading and analysis (literary criticism), and interpretation of the past (history). The doctor who likes reading novels is probably not the right person to be teaching the narrative medicine course when you have experts in narrative theory on your campus. The article notes that Florida International University’s Herbert Wertheim College of Medicine “uses art analysis during museum tours as a practice analogous to detailed patient diagnosis.” I hope the art analysis is done by trained art historians.

Interpretation and analysis are important skills for practitioners in science, technology, engineering, and mathematics (STEM) to learn, certainly. But the humanities are valuable for more than “equipping STEM practitioners with a humanistic lens.” STEM researchers achieve the best interdisciplinary work not when they apply a humanistic lens themselves but when they partner with those trained in humanities disciplines. I think of Jay Clayton, for example, whose team of humanists at Vanderbilt University’s Center for Genetic Privacy and Identity in Community Settings, or GetPreCiSe, analyzes cultural treatments of the topic of genetics. How do novels, films, stories, and other cultural expressions address the moral and ethical consequences of developments in genetics, and what do those cultural texts tell us about our society’s changing sense of itself? How do such texts shape social attitudes? These are humanities questions calling for humanities methodologies and humanities expertise.

Executive Director

Modern Language Association

As an educator and researcher concerned with equity, I’m tasked with looking for and identifying useful connections between science, technology, engineering, and mathematics (the STEM fields) and the arts, a combination otherwise known as STEAM. My work amplifies the contributions of artists and cultural practitioners who are often left out of the discourse in STEM areas. For example, popular comics and movies give us Shuri in Black Panther, who uses her knowledge of science, culture, and the natural resources around her to design and build things. As an artist who uses artificial intelligence, I combine my knowledge of color theory, culture, literature, creative writing, and image editing to create unique art that captures the spirit of the present moment.

While reading the essay by Kaye Husbands Fealing, Aubrey Deveny Incorvaia, and Richard Utz, I thought about Albert Einstein, who used thought experiments as a way to understand and illustrate physics concepts. Einstein considered creative practice essential to problem-solving. He took music breaks and engaged in combinatory play, which involved taking seemingly unrelated things outside the realms of science (e.g., music and art) and combining them to come up with new ideas. These interludes at the violin helped him “connect the dots” of his thought experiments at opportune moments. Einstein’s ideas influenced musicians such as John Coltrane, who used theoretical physics to inform his understanding of jazz composition.

As an artist who uses artificial intelligence, I combine my knowledge of color theory, culture, literature, creative writing, and image editing to create unique art that captures the spirit of the present moment.

Scientists who embrace the arts use cognitive tools that the biologist and historian of science Robert Root-Bernstein identifies as observing, imaging, recognizing patterns, modeling, playing, and more to provide “a clever, detailed, and demanding fitness program for the creative mind” across scientific and artistic disciplines. A study led by Root-Bernstein considered the value of the arts for scientists. He and his collaborators found that scientists often commented on the usefulness of artistic practices in their work. They suggested that their findings could have important implications in public policy and education. Conducted over a decade ago, this research has not yet led to a marked shift in science policies and the development of STEM and STEAM curricula. Science, the arts, and the humanities are still siloed in most US institutions.

Many scientists and musicians never realize the links between physics and the polyrhythmic structures in music. K–12+ physics teachers don’t teach their students about the connections between theoretical physics and jazz. Music students never consider physics when learning to play Coltrane’s “Giant Steps,” and I think this is a missed opportunity for interdisciplinary learning. Scholars such as the multidisciplinary Ethan Zuckerman argue for combining technical and creative innovation through the use of artificial intelligence, which has potential for composing music, visualizing ideas, and understanding literature. The gaps or frictions among the sciences, the arts, and the humanities belie the fact that all these disciplines are charged with investigating what it means to be human and how we might improve our states of wellness and well-being. To create a more inclusive future inside and across disciplines, it’s up to all of us to make these connections more apparent, and our engagements with inclusion more intentional.

Assistant Director, Lesley STEAM Learning Lab

Lesley University

Kaye Husbands Fealing, Aubrey Deveny Incorvaia, and Richard Utz trace the historical development of the argument that science, technology, engineering, and mathematics (STEM) and the humanities, arts, and social sciences (HASS) should be integrated. In the first century BCE—far further back than Vannevar Bush’s landmark report or the Branches from the Same Tree and The Fundamental Role of Arts and Humanities in Medical Education (FRAHME) reports that Husbands Fealing et al. foreground—the Roman architect and engineer Vitruvius wrote that architecture should meet three criteria: firmness, fitness, and delight. Buildings should exhibit structural integrity but also meet the needs of their occupants and be pleasant spaces for them. Or in an example from today’s world, smartphones should not only work; they should also seamlessly fit into their owners’ daily lives and contribute to their self-identity.

Translated into Vitruvian terms, an overemphasis on STEM addresses firmness but neglects fitness and delight, which are where HASS can help. Critical analysis from the humanities and social scientific investigations play an essential role in assessing, predicting, and designing for fitness. And delight is the domain of the arts. (“Delight” is meant in a broad sense of aesthetic engagement, and may include provocative discomfiture if that is what is intended.)

By treating complex problems as STEM problems at their core, with HASS contributions as mere “add-ons,” we risk solving the wrong problems or only the narrow subproblems that are tractable using STEM methods.

Seen in this way, the authors are correct that considering fitness and delight too late is bound to lead to a narrow conception of whatever problem is being addressed. By treating complex problems as STEM problems at their core, with HASS contributions as mere “add-ons,” we risk solving the wrong problems or only the narrow subproblems that are tractable using STEM methods. In my experience, computing professionals often consider user interfaces to be window dressing—something to be addressed after the core functionality is nailed down. In contrast, the Human-Centered Computing PhD program at Georgia Tech—the authors’ own institution and formerly my own—takes the perspective that a human-centered computing solution to any problem has an intertwined Vitruvian structure. Students learn to conceive of technology as part of a broader social web. But they do more than study technology from the sidelines; they design and advocate, much as the FRAHME report’s “Prism Model” encourages medical students to do.

So far, so good, but there’s an elephant in the room. At STEM-focused universities, STEM and HASS are not just separate: they have different statuses. It is no accident that at Stanford University, the STEM and HASS students label themselves as “techies” and “fuzzies,” respectively. STEM professionals may magnanimously acknowledge that HASS contributions can help them, but that is not the same as treating STEM and HASS as co-equal contributors to sociotechnical challenges. My experience of the computational media program that Husbands Fealing et al. cite as an example of successful integration included conversations with colleagues and students who referred to it as “Computer Science Lite.” The valorization of technical rigor—or the mislabeling of epistemic rigidity as rigor—is a badge of masculinized self-identity in many STEM environments, and overcoming that will be a HASS problem in its own right.

Provost and Executive Vice Chancellor for Academic Affairs

Professor, Computer Science

Missouri University of Science and Technology

While science will benefit from greater integration with the humanities, arts, and social sciences (HASS), so too will the HASS fields benefit from greater knowledge of science, technology, engineering, and mathematics (STEM). What better place to do this than undergraduate bachelor’s programs across the country, as demonstrated by the many colleges and universities offering a liberal education that requires students of all majors to take courses across the disciplines? Many institutions moved away from such requirements in the 1960s and ’70s, and it is time to bring them back, for the reasons that Kaye Husbands Fealing, Aubrey Deveny Incorvaia, and Richard Utz articulate.

While science will benefit from greater integration with the humanities, arts, and social sciences, so too will these fields benefit from greater knowledge of science, technology, engineering, and mathematics.

In 2010, four Vassar College students produced a film, Still Here, for a class on documentary film. The film chronicled the life of Randy Baron, from New York City, who lived with HIV for 30 years. He survived for decades because of a rare genetic mutation, while watching almost all his friends die from the disease. The film, and the four students who produced it, demonstrates the value of a liberal education and exposure to a variety of disciplines. The film was powerful because it integrated the history of the 1980s; the public’s response to the HIV epidemic and the politics involved, including the Reagan administration’s slow response; the psychology and trauma of grief; and of course the science of the disease. Without this integration of history, politics, science, and art, the film wouldn’t have won the prize it did at Cannes for best student documentary.

The student director, Alex Camilleri, commented that to be an effective filmmaker, film had to be secondary in one’s life, behind first being human. This applies to all STEM and HASS majors. While expertise and depth of knowledge are of course important in all disciplines, understanding the context of your work in the world, as well as the variety of methodologies used by various disciplines, allows for greater creativity and the advancement of knowledge important to improving the human condition. The slow response to the AIDS epidemic and the development of antivirals in the 1980s and ’90s have clearly informed the world’s response to COVID-19, saving lives across the globe. It is to be hoped that historians, scientists, politicians, and storytellers of all types will study the COVID-19 episode to improve future responses to pandemics.

Some observers worry that requiring students to take courses that they wouldn’t otherwise choose will not be productive. But the point is in fact to expose students to ideas and disciplines that they are not familiar with and wouldn’t necessarily choose to pursue. If STEM and HASS faculty believe in the benefits of more interdisciplinary exposure for their majors, it can easily become part of the culture and understood to be an important part of the education that will benefit their work in the future, not take away from it.

Managing Director, Ithaka S+R

President Emerita, Vassar College

Kaye Husbands Fealing, Aubrey Deveny Incorvaia, and Richard Utz effectively address the critical importance of “intercultural” dialogue between science, technology, engineering, and mathematics (the STEM fields) and the humanities, arts, and social sciences to restore a holistic educational model that leverages diverse perspectives. It is indeed time to come together across sectors and disciplines to organize a cohesive response to the intractable problems that plague humankind, such as novel viruses, racism, radicalization, hunger, and damage to the earth.

The network I direct, the Alliance for the Arts in Research Universities (a2ru), is devoted not only to integrating the arts in STEM and other disciplines, but also to providing demonstrable examples of integrated research that help communicate the promise of these collaborations to policymakers and the public. A2ru’s Ground Works, a peer-reviewed platform of arts-integrated projects, features many examples of research that integrates art and the humanities in STEM.

It is indeed time to come together across sectors and disciplines to organize a cohesive response to the intractable problems that plague humankind, such as novel viruses, racism, radicalization, hunger, and damage to the earth.

For example, “Just-in-Time Ecology of Interdisciplinarity: Working with Viral Imaginations in Pandemic Times” is a fascinating example of where and how this kind of transdisciplinarity can exist in the research culture—in this case, calibrated as a coordinated response to a real-time, real-world problem. Other projects demonstrate close and innovative collaborations. “Greenlight SONATA,” a collaboration between a composer, an ethnomusicologist, and a civil engineer, tested the hypothesis that translating simulated traffic information into music could lead to musical resolution of persistent traffic congestion. “Unfolding the Genome,” a collaboration between scientists and artists, explored how the human genome, the DNA contained in every cell of the body, folds in 3D. A recent special collection on the platform, “Vibrant Ecologies of Research,” looks beyond the project level to the systems level. As its guest editor, Aaron D. Knochel, explains, “The project work and commentaries explore vibrant ecologies of research deepening our understanding of the institutional, social, and epistemological systems that effectively weave arts-based inquiry into the scholarly fabric of research.”

The a2ru network is seeing a subtle yet consistent and accelerating shift in STEM programs finding pathways for their students to benefit from arts and humanities studies, both to better prepare them for a changing workforce and to improve their individual wellness and ability to contribute to society within their chosen profession. There is a critical need to bridge a siloed disciplinary culture in higher education (and other sectors). We need networks such as a2ru in place to effectively build those bridges.

As the authors note, Branches from the Same Tree, a 2018 report by the National Academies of Sciences, Engineering, and Medicine, focused on integrating curricula within these various fields. A second phase of this work is surfacing examples of integrated research in not only higher education, but in the industry and civic spheres as well. Allied networks such as a2ru, designed and determined to work across differences, can lead us there.

Executive Director

a2ru

Semiconductors and Environmental Justice

In “Sustainability for Semiconductors” (Issues, Fall 2022), Elise Harrington and colleagues persuasively argue that the CHIPS and Science Act of 2022 offers the United States a unique chance to advance national interests while decoupling the semiconductor industry from supply chain vulnerabilities, public health problems, and environmental hazards. The path the authors suggest involves improving industry sustainability by, among other actions, circularizing supply chains—that is, by designing with a focus on material reuse and regeneration.

But any industrial effort to circularize the supply chain will face an uphill battle without addressing competition policy concerns. Today’s large companies and investors seem to consider it natural to seek illegal monopoly power, regardless of its toxic effects on society. Some companies may defend consolidation as a cost of US technological sovereignty, but consolidation is actually a threat to national security. Achieving a circular supply chain will require innovative policies for competition and coordination of pre-competitive interests across use, repair, reuse, refurbishment, and recycling of semiconductors and destination devices. Securing downstream product interoperability, rights to repair, and waste and recycling requirements would be a promising start.

Further, building a strong and sustainable semiconductor industry should not come at the expense of public and environmental health. Attention to environmental justice must be front and center. The European Commission is advancing a novel approach to regulating economic activities for sustainability through the principle of “do no significant harm.” However, the Commission, as well as Harrington et al., fixates on negotiating quantitative, absolute environmental targets to arbitrate the harm of an industrial activity. Harm and its significance are subjective and contingent on the parties involved (and the stage of the industrial lifecycle). Too often, research, development, and industrial policies end up with governments simply doling out permission to harm individuals and communities for the benefit of a few for-profit companies. Silicon Valley, with 23 Superfund sites, the highest concentration in the country, has a lot to answer for on this front.

Building a strong and sustainable semiconductor industry should not come at the expense of public and environmental health. Attention to environmental justice must be front and center.

Finally, the United States should avoid a “race to the bottom” in which state and local governments undercut each other to secure regional technology hubs. Too often, relocation handouts siphon tax dollars from communities and schools to attract corporations that prove quick to loot the public purse and leave for the next doting location. For example, the Mexican American environmental justice movement has noted how major semiconductor companies in New Mexico and Arizona regularly secured state subsidies yet provided few quality jobs and little community reinvestment, burdened communities with environmental wastes, and drained scarce water resources. To center environmental justice in semiconductor sustainability efforts, much can be learned from such highly effective good neighbor agreement efforts. Respecting small and medium-size industries, centering environmental justice, and fairly distributing the benefits of a semiconductor renaissance around the country would be not only good policy but also good politics, as shown in other industrial policy efforts. A sustainable semiconductor industry considering these strategies would be more likely to win political and public support and stand a better chance of genuinely benefiting the nation and its people.

Center for Innovation Systems & Policy

AIT Austrian Institute of Technology

Technology-Based Economic Development

In “Manufacturing and Workforce” (Issues, Fall 2022), Sujai Shivakumar provides a timely and important review of the CHIPS and Science Act. This landmark legislation aims at strengthening domestic semiconductor research, development, design, and manufacturing, and advancing technology transfer in such fields as quantum computing, artificial intelligence, clean energy, and nanotechnology. It also establishes new regional high-tech hubs and looks to foster a larger and more inclusive workforce in science, technology, engineering, and mathematics—the STEM fields. In a recent article in Annals of Science and Technology Policy, I noted that the act focuses tightly on general-purpose technologies, emanating from technology transfer at universities and federal laboratories. Shivakumar correctly notes that public/private investment in technology-based economic development (TBED) in manufacturing must be accompanied by workforce development to match the human capital needs of producers and suppliers.

I have two recommendations relating to workforce development, in the context of technology transfer. The first is based on evidence presented in a 2021 report by the National Academies of Sciences, Engineering, and Medicine titled Advancing Commercialization of Digital Products from Federal Laboratories. (In full disclosure, I cochaired that committee with Ruth Okediji of Harvard University.) The report concluded that accelerating commercialization of research requires that we achieve a better understanding of workplace and managerial practices relating to technology transfer, including individual and organizational factors that may inhibit or enhance the ability of scientists to engage in commercialization of their research. These factors include the role of pecuniary and nonpecuniary incentives, organizational justice (i.e., workplace fairness and equity), championing, leadership, work-life balance, equity, diversity and inclusion, and organizational culture. Understanding such issues will help identify and eliminate roadblocks encountered by scientists at federal labs as well as universities who wish to pursue technology transfer. It would also allow us to assess how “better performance” in technology transfer is achieved.

Accelerating commercialization of research requires that we achieve a better understanding of workplace and managerial practices relating to technology transfer.

A second recommendation concerns a major gap that needs to be filled, in terms of developing a more inclusive STEM workforce to implement these technologies. This gap centers on tribal communities, which are largely ignored in TBED initiatives and technology transfer. Unfortunately, economic development efforts for tribal communities have predominantly focused on building and managing casinos and developing tourism. Results have been mixed, with limited prospects for steady employment and career advancement.

Opportunities for TBED strategies to aid tribal communities might include the development of new investment instruments, the strategic use of incentives to attract production facilities in such locations, and the promotion of entrepreneurship to build out supply chains. This would require adapting tools for TBED to be better suited to the needs and values of the communities. That means developing a TBED/technology transfer strategy that simultaneously protects unique, Indigenous cultures and is responsive to community needs.

In sum, I agree with Shivakumar that workforce development is key to the success of the CHIPS and Science Act. Two complementary factors that will help achieve its laudable goals are improving our understanding of how to better manage technology transfer at universities, federal labs, and corporations, and involving tribal communities in technology-based economic development initiatives and technology transfer.

Foundation Professor of Public Policy and Management

Co-Executive Director, Global Center for Technology Transfer

Arizona State University

Sujai Shivakumar stresses the importance of building regional innovation capacity to bolster manufacturing innovation in the United States. He rightly notes that this needs to be a long-term cooperative effort, one requiring sustained funding and ongoing policy attention.

One of Shivakumar’s key points is the necessity of complementary state and local initiatives to leverage federal and private investments through the use of public-private partnerships. As he notes, the success of the nano cluster centered in Albany, New York, was initially based on collaboration with IBM and the state, especially through the College of Nanoscale Science and Engineering, but it reflected a 20-year commitment by a succession of governors, both Republican and Democratic, to developing a regional semiconductor industry.

We need to capitalize on current centers of excellence, even as we seek to create new ones in a spoke-and-hub model, using their proven strengths to reinforce this ambitious national undertaking.

Thomas R. Howell and I have documented the long-term nature of this effort in a recent study titled Regional Renaissance: How New York’s Capital Region Became a Nanotechnology Powerhouse. We describe in some detail (because the details matter) the role of regional organizations, such as the Center for Economic Growth and the Saratoga Development Commission, steadily supported by leaders of the state assembly, in finding a site, obtaining the necessary permits, building out infrastructure, and winning public support. The state also encouraged training programs in semiconductor manufacturing at institutions such as Hudson Valley Community College. Moreover, the semiconductor company AMD (now GlobalFoundries) was attracted by the resources and proximity of the College of Nanoscale Science and Engineering, building and expanding a semiconductor fabrication plant—or “fab,” in the field’s parlance—that has led to many thousands of well-paying jobs.

This model is especially relevant to meeting the needs of the growing number of semiconductor fabs encouraged under the CHIPS and Science Act. Indeed, when it comes to addressing the need for applied research, student training, and collaborative research among semiconductor companies, the Albany facility stands out, not least for its proven track record and its exceptional capabilities. Its commercial-scale fab is ideal for testing equipment and designs but unusual for a university.

This facility can and should be a central node in the semiconductor ecosystem that the nation seeks to strengthen. If we are to avoid an excessive dispersal of funds and the long lead times of new construction and staffing, we will need to draw on existing facilities that are already operational and can be reinforced by the resources of the CHIPS and Science Act.

In short, we need to capitalize on current centers of excellence, even as we seek to create new ones in a spoke-and-hub model, using their proven strengths to reinforce this ambitious national undertaking. Time is not our friend; existing assets are.

Adjunct Professor

Global Innovation Policy

Science, Technology, and International Affairs

School of Foreign Service

Georgetown University

R&D for Local Needs

In “Place-Based Economic Development” (Issues, Fall 2022), Maryann Feldman observes that the CHIPS and Science Act marks “an abrupt pivot in the nation’s innovation policy” away from the laissez-faire system of the past and toward a policy focused on addressing regional economic development. Central to this new course is the act’s directive for the National Science Foundation (NSF) to “support use-inspired and translational research” through its new Technology, Innovation, and Partnerships (TIP) directorate.

Yet nowhere within the statute are these terms defined or described. The phrase “use-inspired research” was coined in 1997 by the political scientist Donald Stokes in his seminal work, Pasteur’s Quadrant, in which he sought to break down the artificial distinctions between scientific understanding and wider use while rejecting overly limiting terms such as basic and applied research. For Stokes, research focused on real-world societal problems—such as French chemist Louis Pasteur’s work on anthrax, cholera, and rabies—can spark both new fundamental knowledge and applied breakthroughs.

But what potential uses will inspire the next generation of innovators? If we look to the text of the CHIPS and Science Act, the legislation outlines 10 technology focus areas and five areas of societal need to guide use-inspired research overseen by the TIP directorate. Beyond these lists, however, there is another source of inspiration that is strongly implied by the legislative language: regional societal and economic needs—specifically, the needs of the places where scientists live and work.

What potential uses will inspire the next generation of innovators?

While this observation may sound simple, implementation is not. Indeed, researchers at the University of Maine previously described in Issues the intricate challenges of crafting a regional use-inspired research agenda, creating community partnerships, engaging stakeholders, and breaking through institutional and cultural barriers that transcend publish-or-perish incentives to help produce real-world solutions. 

The CHIPS and Science Act has launched such an endeavor on a national scale with NSF as the driver. It is a new place-based research enterprise that finds inspiration from the needs of diverse geographic regions across the United States. The statute is an important step, though many bumps in the road lie ahead, including securing the necessary appropriations. However, by focusing more on the needs of geographic regions through use-inspired research, NSF can better meet the mandate of CHIPS and Science to address societal, national, and geostrategic challenges for the benefit of all Americans.

President, Arch Street

Former professional staff member, US House Committee on Science, Space, and Technology

The recent CHIPS and Science Act ensures that the invisible hand in the capitalist US economy is now far from invisible. With nearly $280 billion in planned investments by the federal government for domestic research, development, and manufacturing of semiconductors, officials are hoping this support will lead to a technological paradigm shift for future generations. Officials contend this sizeable investment will decrease the nation’s dependence on countries such as South Korea, Taiwan, and China, which have dominated the semiconductor industry for the past two decades. Through their dominance, these countries effectively control the US supply chain and thus threaten the nation’s current and future national security. Credit is due to federal elected officials for realizing the need for such critical investment in the US economy and semiconductor industry. How that funding and support is distributed, however, remains a critical component of the legislation.

In her essay, Maryann Feldman argues that “the United States needs a bold strategic effort to create prosperity.” While one could argue that the CHIPS and Science Act is the bold public policy needed to ensure the nation’s technological independence and innovation, from an economic and public policy perspective I would argue that the most critical components of the act will be contract award processes directly connected to defined requirements; strong agency oversight; engagement throughout the award timeline; early definition of and commitment to creating commercialization pathways; implementation of award support for research personnel in flyover states; and a commitment to assess program results as they relate to the requirements that generated the awards.

With nearly $280 billion in planned investments by the federal government for domestic research, development, and manufacturing of semiconductors, officials are hoping this support will lead to a technological paradigm shift for future generations.

Additionally, contemporary research from TechLink, a US Department of Defense partnership intermediary, suggests that identifying the most effective research personnel in flyover states—those individuals able to ensure that their technology innovations overcome the well-known “valley of death” on the way to commercialization—will be critical for developing innovative, successfully commercialized technology and for place-based economic impacts and outcomes.

The CHIPS and Science Act needs to ensure that when technology decisions and appropriations occur at the practical level, funding goes to a diverse set of independent entrepreneurs, innovators, nonprofit organizations, universities, small businesses, local government partners, and educated citizens in research parks—those whose sole purpose is developing the most advanced technology conceivable. COVID-19 changed how researchers can coordinate remotely with leading technology experts in the field, regardless of their physical location. Technological innovation and commercialization must take precedence over quid-pro-quo politics in Washington, DC, or the nation’s attempt to become the global leader in the semiconductor industry will have started and ended with federal public policy officials and bureaucrats. If the recommendations that Feldman makes regarding place-based economics, public policy implementation, and economic development are implemented, I’m confident the United States will surpass China, Taiwan, and South Korea in a semiconductor paradigm shift that will last for decades to come.

Department Head, Economic Impacts

TechLink

In her excellent article, Maryann Feldman makes a strong case that one of the most important strategic requirements for future growth in high-income jobs is expanding what the regional economic growth policy arena calls “innovation hubs.”

Feldman states that “place-based policy recognizes that when firms conducting related activities are located near each other, this proximity to suppliers and customers and access to workers and ideas yield significant dynamic efficiency gains. These clusters may become self-reinforcing, leading to greater productivity and enhanced innovation.”

There are a lot of economic rationales and policy implications packed into this summary statement. From an economist’s perspective, innovation hubs are an essential industrial structure for future economic development, first and foremost because they enable the realization of “economies of scope.” This distinguishes them from the Industrial Revolution era, in which “economies of scale” dominated.

More specifically, scale is the dominant driver when product technology is relatively simple and product differentiation is therefore limited. In such cases, the emphasis is largely on reducing unit cost; that is, price is the basis for competing. In contrast, modern technology platforms offer a much wider “scope” of potential product applications, which requires more sophisticated process technologies in terms of both quality and attribute flexibility. The increasingly differentiated needs of modern high-tech supply chains mean that economies of scope with respect to emerging technology platforms are now the major policy driver.

More technically demanding product and process technology development and use require higher and diversified labor skills. As the complex set of labor inputs changes continuously with the evolution of technology development, responsive educational institutions are essential to update and refocus workers’ skills. The resulting diverse local (and hence mobile) labor pool is essential to meeting rapidly changing skill requirements across firms in a regional innovation cluster.

The increasingly differentiated needs of modern high-tech supply chains mean that economies of scope with respect to emerging technology platforms are now the major policy driver.

Further, the potential for economies of scope provides many opportunities for small firms to form and pursue niche markets. But doing so requires the availability of a local start-up infrastructure embodying such institutional entities as “accelerators” and “incubators” to facilitate evolution of optimal industry structures.

The extremely dynamic character of technology-based competition, determined to a significant extent by the economies of scope inherent in modern technology platforms, means considerable shifting of skilled labor among competing firms as new application areas are developed and grow. Co-location of a large skilled labor pool and a supporting educational infrastructure is therefore essential. Similarly, the extreme dynamics of the high-tech economy that afford opportunities for new firms to form and prosper work well only if a significant venture capital infrastructure is present.

These factors—facilitation of economies of scope in research and development; a diverse and skilled local labor pool; start-up firm formation; risk financing; and technical infrastructure—collectively promote the innovation hub concept. As Feldman states, “For too long, the conventional policy approach has been for government to invest in projects and training rather than in places.”

In summary, the complexity of modern technology development and commercialization demands a four-element growth model: technology; fixed capital (hardware and software); skilled labor; and a complex supporting element comprising technical, educational, and business infrastructure. All four assets must be co-located to achieve economies of scope and hence broad-based technology development and commercialization.

Research Fellow, Economic Policy Research Center

University of Washington

The Problem With Subsidies

The United States government is waging a “chip” war with China. The war is fought on two fronts: one is preventing China from accessing the latest artificial intelligence chips and manufacturing tools, and the other is subsidizing large firms to bring manufacturing back to the United States. But as Yu Zhou points out in “Competing with China” (Issues, Fall 2022), “whether [the CHIPS and Science Act] will improve US global competitiveness and prevent the rise of China is uncertain.”

A race to subsidies as America’s solution is uncertain and problematic because it is based on a misunderstanding of how innovative Chinese firms advanced their technologies. Contrary to the popular belief that China built its technology industry through massive subsidies, China’s record of state-sponsored technology investment is often spotty. The prime example is the semiconductor industry: despite billions of dollars invested by the state over the last three decades, the industry’s advancement has consistently fallen short of government targets. The real secret of the Chinese high-tech industry is indigenous innovation—that is, innovative Chinese firms sense unfulfilled domestic demands, innovate to generate localized products at a lower cost, build on access to the vast Chinese market to scale up, and eventually accumulate problem-solving capabilities to approach the technological frontier. Ironically, the US government’s chip war is creating a space for indigenous innovation by Chinese semiconductor companies that was previously absent when China relied on American chips.

A race to subsidies as America’s solution is uncertain and problematic because it is based on a misunderstanding of how innovative Chinese firms advanced their technologies.

Taking the wrong lessons from China could have unintended consequences for the US industry. Since leading American high-tech firms have spent some $633 billion on stock buybacks over the past decade, it can hardly be assumed that their lack of enthusiasm for investing in semiconductor manufacturing is because of a lack of cash. But showering money on business, as China’s experience showed, would certainly lead to unhealthy state-business relations. Already, the CHIPS and Science Act has created perverse incentives for lobbying for more and more subsidies.

Instead of competing on subsidies, the United States should compete with China in areas where it excels, namely innovation in emerging technologies. Historically, the United States has had few successes in reviving mature industries through subsidies, whether it was steel, automobiles, or memory chips. But it has consistently led the world in new technological revolutions over the past century. In a world facing a climate crisis, the United States should compete with China to bring innovations to solve the nation’s existential problems and win the ultimate prize of technological leadership that benefits all humankind. After all, the subsidy war benefits few but corporate bottom lines.

Assistant Professor of Innovation Policy

School of International Relations and Public Affairs

Fudan University

Shanghai, China

Author of China’s Drive to the Technological Frontier (Routledge, 2022)