Lessons From a Decade of Philanthropy for Interdisciplinary Energy Research

In 2016, we at the Alfred P. Sloan Foundation put out a request for proposals to advance research on the economics of energy efficiency. In response, a group of researchers from Western Washington University proposed to investigate how such investments affected housing prices. By performing energy-efficiency audits on home sales across Washington State, they intended to determine whether sharing this information had a detectable impact on final sale price. As funders, we felt the project addressed a unique set of questions and would provide practical, actionable insights into how consumer preferences are reflected in housing prices, so we supported the study.

However, within the first year, the team encountered complicated disclosure rules that delayed their research. To try to speed things up, they redesigned the energy scorecard used to present audit results to would-be homebuyers. When faced with low recruitment rates, the team opened their study to include homes currently on the market along with homes that had recently been sold. Even so, by 2020, two years after the project’s planned end date, the research team had been able to secure data on only a fraction of the homes they had originally intended to include in the study. 

Instead of abandoning this focused research effort, the team shifted to a much wider array of research methodologies—including surveys, interviews, and modeling—to study a more complex set of issues surrounding energy-efficiency interventions: motives behind consumer adoption of energy-efficient technologies, the impact of home energy-efficiency labels, and policymaker perspectives on electrification. The researchers even explored how data from test homes can be used to understand real-world behavior patterns. Though the original approach would have provided an informative and robust analysis of the relationship between energy efficiency and housing prices, the results would have been very specific to the study region. With their expanded approach, the team is producing a more varied set of insights on the opportunities and barriers policymakers might encounter when trying to craft decarbonization policies. 


As funders, we followed along as these researchers pivoted and adapted their aims to study home energy use through a wider lens. In these shifts, we see important lessons about how philanthropic funders assess the impacts of the programs they support. There is often a tension between measuring the progress of grantees toward project-specific milestones and measuring the overall progress of a funding program toward its stated mission.

Grants such as this show that one consideration informs the other: program goals inform project selection, and tracking a grant’s path to impact contributes to a funding program’s influence and strategic direction. Understanding this feedback loop can provide funders valuable insight into their grantmaking strategies. 

As energy and environmental philanthropy grows, it is vital for grantmakers to look inward. Funders need to assess both how their grantmaking efforts fit within their stated program goals and how they can best leverage their resources toward their mission moving forward. Taking time to reflect and evaluate can increase the impact of philanthropic investments in knowledge generation, enabling funders to direct resources more swiftly in support of decarbonizing energy systems and addressing climate change. To accelerate progress, we believe it is necessary for foundations to enable collective learning by being transparent about their findings; in doing so, they can contribute additional guidance for new and legacy funders alike on how to better allocate their resources or define programmatic impact.

A growing and evolving area for philanthropy

According to a 2023 report by the ClimateWorks Foundation, philanthropic funding directed at climate change mitigation efforts tripled between 2015 and 2021, but has since plateaued. As philanthropies experiment with novel forms of grantmaking, these practices open the field to new opportunities and can direct resources to critically important topics in new ways. However, overall philanthropic giving on these topics remains surprisingly small, representing less than 2% of philanthropic funding globally. And although interdisciplinary academic research on energy, environment, and climate issues is key to developing socially engaged responses to climate change, energy and climate philanthropies in the United States direct a small fraction of their resources toward this end. Support for interdisciplinary social science research remains even scarcer, among both philanthropic and government funders, with one study showing that scholarship on energy systems decarbonization and climate change mitigation received only 0.12% of research dollars globally.

Formalized in 2014, the goal of the Alfred P. Sloan Foundation’s Energy and Environment Program is to support academic research, education, networking, and dissemination efforts to inform the societal transition toward low-carbon energy systems in the United States. Of the top 50 energy and climate funders in the country, we are the only one fully dedicated to supporting academic scholarship. To date, the Energy and Environment Program has awarded just over $107 million across more than 300 grants, including some initial exploratory grantmaking before 2014.

As the program reaches its 10-year anniversary, it is a natural time to reflect on our first decade of grantmaking and take stock of our efforts thus far. How have we improved the world? How can we use evaluation to achieve our programmatic goals and to refine how we do grantmaking in the future? What have we learned that could be useful to other funders in this area?


To answer these questions, we reviewed our full catalogue of annual grantee reports and tracked all the outputs supported by our program, including publications, conferences and workshops, students and early-career scholars supported, and additional funding raised by grantees as a direct result of our initial grantmaking. This multiyear impact assessment enabled us to see the full reach of our program, providing a basis to ask a wide range of questions while revealing important stories, like that of the team at Western Washington University. Here we have distilled some of the most important lessons learned from this undertaking.

Providing qualitative context for quantitative metrics

To make sense of our program’s overall impact, we need both specific measures of grantee progress—publications produced and workshops organized—and contextual descriptions of how the research fits into the overall academic and policy landscapes. It is only by reading our grantees’ narratives that we learn what a grant can actually accomplish—which is sometimes well beyond the scope of the initial application.

This process of learning must start even before the grant itself begins. Many funders, including the Sloan Foundation, work with their grantees to identify a set of goals and metrics to gauge progress and measure output. Though establishing such goals might seem somewhat perfunctory, this list of metrics sends a powerful signal about funder priorities—through both what is included and what is left out.

This signal-setting is particularly relevant for academic research, as it is easy to overemphasize conventional indicators of productivity such as publication counts or citations. However, more salient measures include collaboration with external partners, interdisciplinary engagement, and dissemination of findings to decisionmakers. Incorporating qualitative goals and metrics broadens our view of success for each grant.


The narrative descriptions in grantee reports also help funders contextualize research impact and see past the completion of specific grant goals to appreciate how scientific research findings might influence real-world decisionmaking. For example, grantee narratives showed us that Sloan-supported research on how utility disconnections disproportionately impacted vulnerable households helped influence policymaking during the height of the COVID-19 pandemic. One study even prompted the governor of Illinois to announce an $80 million financial aid program to prevent electricity disconnections.

Sometimes these impacts reach well beyond what we anticipated when we set the goals of the grant. We learned that our support to the environmental think tank Resources for the Future to update the framework for estimating the social cost of carbon went beyond simply accomplishing technical improvements for modeling. Since 2017, this project has helped catalyze a wide-ranging, highly influential initiative featuring a publicly available, open-source modeling framework and data explorer tool. Once launched, the initiative drew additional funding from multiple sources, and some of its insights have been incorporated into federal and state decisionmaking processes.

Goals and metrics that encourage grantees to think broadly about the reach of their work have provided us a more holistic vision of impact and a better understanding of the ripple effects of our funding. We would have missed these insights if we had assessed grants solely through quantitative metrics.

Letting grantees adapt

Though grantmaking and grant reporting tend to follow a regular schedule, the research process is nonlinear: projects change, collaborations develop, and results point in unexpected directions. Investigators need autonomy over how best to adapt their projects. Unlike larger federal funders, philanthropies can allow much more flexibility in how research is conducted. We know that research impact is not predetermined, and funders should encourage adaptation when necessary. Leaving room for grantees to make these changes also requires funders to adjust how they evaluate progress.

Our grantee portfolio has many examples of projects that pivoted productively midway. There may be no more salient example of the need for flexibility than the COVID-19 pandemic, which put innumerable research and dissemination efforts on pause. When original plans became infeasible, many grantees modified their work to take full advantage of online platforms. For instance, in 2020 and 2021, the annual Environmental and Energy Policy and the Economy Conference organized by the National Bureau of Economic Research occurred virtually and attracted more participants than the usual in-person gathering. The subsequent 2022 and 2023 meetings were then held in a hybrid format that allowed virtual participation along with in-person attendance. This change had unexpected benefits, allowing researchers and decisionmakers from a broader range of locations to participate. Although pandemic disruptions are a magnified example, deviations from initial workplans can provide openings for grantees to pursue important research questions and innovate in how their findings are shared. 

In fact, ambitious research that could drive social adaptation toward clean energy systems requires the type of flexibility that philanthropies are well suited to provide. Cutting-edge inquiries that involve multiple disciplines, methods, and institutions require a high degree of flexibility integrated into researchers’ workplans from the start.


In particular, we support many studies incorporating participatory research, and these efforts need time to develop trusting relationships between the research team and the communities with which they intend to collaborate. Clinging to an overly rigid set of goals and metrics with minimal flexibility might preclude such projects from advancing, potentially leading to imbalances in the research design that perpetuate the same cycles of inequity these projects are meant to interrogate and disrupt. Such interdisciplinary programs also tend to be neglected by federal funding, which is often siloed by discipline. Philanthropy can capitalize on its inherent flexibility to support projects like these, and it also can work to ameliorate obstacles that arise with traditional peer review evaluation that may discourage interdisciplinary scholarship.

Finally, funder self-evaluation also requires a degree of flexibility and reflection. As projects evolve and programmatic priorities get codified, existing evaluation tools and techniques might become unsuitable for the task at hand. We went through multiple iterations of our own internal assessment process to determine how much time to ask of our grantees so that they could provide an appropriate level of information without facing an unnecessary burden.

Taking a long-term view

As we’ve looked back over the past 10 years, we’ve seen ever more clearly that impact assessment is a dynamic process, cutting across many timescales and requiring a long-term view to appreciate it fully. Impact can be seen almost immediately at the granular level of an individual scholar publishing novel results or a graduate student learning a new research method and is evident over the medium term as forums that bring together multiple communities begin to forge new connections. But over the years, some projects continue to develop in surprising directions that funders need to pay attention to when conducting impact assessments. Many times, the most significant impacts might be difficult for funders to appreciate because they take place well after the grant reporting period has ended. 

To get a handle on the longer-term influence of our program, we have begun to follow two factors over time: whether the research is used by other scholars and decisionmakers and whether the grantees secure further funding to expand or continue the research. These direct indicators of impact extend beyond the lifetime of a grant and help us measure progress against our own programmatic objectives of linking research with practice and disseminating information for decisionmaking. These factors are also useful for their tractability: both uptake by decisionmakers and the securing of additional funding can be easily gleaned from grantee reports, and neither requires much effort by grantees to summarize.


From 2014 to the present, Energy and Environment Program grantees have raised an additional $75.2 million from a variety of funding sources to support the development of their research. All data were collected and analyzed from internal Sloan Foundation grantee reporting.

The first indicator of impact we track is how research results are incorporated and used by decisionmakers, which indicates that a project fills a knowledge gap. For example, in 2021 we made a grant to a scholar at The George Washington University to study the timing of electric vehicle (EV) rebate provisions. This research found that lower-income EV buyers strongly preferred up-front rebates instead of waiting to receive a credit on their annual tax filing. These findings were incorporated into provisions in the Inflation Reduction Act that allow EV dealers to provide rebates at the time of purchase. Though we had not anticipated this rapid application of research results, the project’s real-world relevance was obvious from the start, which was why we chose to support it.

Research projects can also prove useful by establishing foundational methodologies that can be applied in other contexts. An example is a 2013 grant we made for a coordinated field research campaign led by Environmental Defense Fund to investigate methane emissions from oil and gas infrastructure in the Barnett Shale region of Texas. The aim of the original grant was to test and compare different approaches for monitoring methane emissions in the United States. But over the past decade, amid declining costs for remote-sensing capabilities, the analytical tools tested in this effort led to the development of a high-precision satellite capable of monitoring methane emissions from space that is expected to launch soon. In this way, a more narrowly focused domestic project paved the way for a global remote-sensing effort that goes well beyond both the initial grant scope and the reporting period. A better gauge of the factors that contribute to such successes can help us identify similar qualities in future projects.

The second indicator of impact we track is the ability of grantees to secure subsequent follow-on funding. Positioning early-career scholars to be successful in securing additional funding is a particularly important catalytic role for our program, as faculty at this stage are just beginning to formulate their research agendas and gain leadership experience in managing complex teams. Securing seed funding for their initial ideas is crucial, especially for scholars pursuing interdisciplinary research or working at less-well-resourced institutions. Initial support from a foundation can accelerate the launch of a project, produce preliminary results that boost confidence in the work, and eventually attract other funders to help the work grow.

To that end, our grantees collectively report leveraging our initial support to raise at least $75.2 million in additional funding to advance their scholarship, compared with approximately $107 million granted from our program. This greatly exceeded our expectations and was an impact of which we were only partially aware before we began collecting information systematically.

This experience also helped us recognize the ways that philanthropy can help position early-career scholars for future success by providing continued guidance throughout the grantmaking process. Early-career researchers often find working with foundations to be opaque, and greater candor and collaboration between funders and grantees can provide invaluable experience that they can apply to future opportunities. At the Sloan Foundation, we often iterate on proposals with prospective grantees, providing feedback and comments on drafts to help set up their projects for the best chance of eventual success. Offering substantive feedback early in proposal preparation helps proposers clarify their arguments and deepen their thinking about how to approach their research project.

Advancing catalytic, interdisciplinary research

After a decade of formal grantmaking on energy and the environment, we are just now beginning to see the durable effects of our program. A major takeaway from our assessment is the unique role that philanthropy can play by supporting interdisciplinary, early-career scholars working on energy and climate research. We need researchers with as many perspectives as possible working within and across the social sciences, engineering, and basic science to produce the integrated knowledge necessary to solve the complex energy and environmental problems we face today.

That knowledge is poised to come from early-career scholars conducting interdisciplinary, policy-relevant research, a group that is systematically overlooked and disincentivized within academic institutions. As the average age of principal investigators receiving their first large-scale grant has increased across fields of science, the scientific enterprise is missing out on years of potential innovation from promising scholars.

Philanthropy has a unique opportunity to leverage its funding flexibility to support early-career scholars. Expanding diverse ideas and perspectives associated with energy system decarbonization can enable progress on one of society’s most pressing, complex problems. Supporting these scholars is a primary way for philanthropy to have a lasting, positive impact on the field. More foundations need to devote resources to learn how best to fill this gap. We expect that the findings presented here can help provide a roadmap for others who are developing similar programs.


Additionally, integrating holistic evaluation has better prepared us to analyze our impact over time and assess our program’s place as we approach our next strategic program review. This assessment exercise made us think about what we signal to grantees in our goals and metrics, how to identify illustrative indicators of influence and impact, and how to contextualize tangible grant outputs within broader programmatic efforts aimed at energy system decarbonization.

Developing a habit of conducting routine self-evaluations and, most importantly, sharing these insights publicly supports successful, sustained growth in energy and environmental philanthropy. Sharing learning from these assessments can help inform practices by other funders, prompting continual reflection and refinement within the field that can support more effective and strategic grantmaking overall. As the role of science philanthropy grows, it is important to develop a culture of transparency and engage in ongoing, meaningful self-reflection with the scientific and public communities around us. 

Translocations

We de-notate and detonate. At breakpoint, we will close our eyes.
We replicate abnormally. We suffer from the rational.
We radiate excessively, fold into flats then flex our thighs.
We cycle then proliferate. We implicate, psychologize.

We replicate abnormally. We suffer from the rational.
Both balanced and unbalanced, we will call ourselves reciprocal.
We cycle then proliferate. We implicate, psychologize.
Our eyes will form before the splice. Our brains will not be typical.

Both balanced and unbalanced, we will call ourselves reciprocal.
Our arms are short. Our legs are long. Our sex will be consensual.
Our eyes will form before the splice. Our brains will not be typical:
Both fibrate and synovial, homologous and cauterized.

Our arms are short. Our legs are long. Our sex will be consensual.
We radiate excessively, fold into flats then flex our thighs:
Both fibrate and synovial, homologous and cauterized.
We de-notate and detonate. At breakpoint, we will close our eyes.


Illustration by Shonagh Rae.

A Great Bioeconomy for the Great Lakes

Nourished by academic research, venture capital investments, and a vibrant community, the bioeconomy is emerging as a transformational force in places such as Boston and San Francisco. Not surprisingly, today’s bioeconomy reflects the priorities of those regions, producing more than seven times as many patents for pharmaceuticals as for, say, plants, pesticides, and herbicides.

We believe that, with proper attention and investment in building a community for bioinnovation, a bioeconomy could take root in the Midwest and Great Lakes area, transforming the region’s stagnating economy and addressing some of its unique ecological challenges, including the legacy of pollution from earlier industries. Importantly, a bioeconomy attuned to this region’s priorities could shape the national industry as a whole, opening up new areas of innovation.

In agricultural production, for example, the cumulative output of five states (Illinois, Indiana, Michigan, Ohio, and Wisconsin) in the Great Lakes region surpasses that of California, with a 16% share of the nation’s output versus 10.4%. Engineered biological technology that fosters agriculture and reduces contamination could gain a strong foothold here, while providing an expanded template for the bioeconomy in other agricultural areas.


Despite this potential, there are many more biotechnology firms in Massachusetts alone than in those five Great Lakes states combined. Venture capital investment in Massachusetts was more than $4,350 per capita, according to a 2021 analysis, while Michigan’s rate was $110. Achieving the kind of holistic, decentralized, and integrated bioeconomy that has been promoted by the Biden administration will not happen without deliberate actions to overcome specific regional obstacles.

States in the region have worked to foster the bioeconomy. Michigan, for example, has made significant investments: starting in 2012, the Michigan Translational Research and Commercialization Program established innovation hubs across the state, with the Michigan State University Innovation Hub for AgBio tying the university to regional research and commercialization. Other programs directly aid early-career researchers and support early-stage proof-of-concept development of technologies within the state.

As two educators and a high school student who are passionate about biotechnology, we have witnessed the excitement around the regional bioeconomy as well as its hurdles in the Great Lakes region. We have given a lot of thought to how to nurture a bioeconomy where we live, and we suggest that this effort begins in both college and high school classrooms and then builds connections to a community that spans students, scientists, entrepreneurs, and administrators alike.

This is similar to the community-based model that has built the nexus of bioeconomy innovation and activity in Massachusetts and California over the last two decades.  Investment in educational institutions, coupled with established connections to venture capital, helped to germinate successful enterprises, which spawned further interest and growth in the local bioeconomy, inviting younger participants into the excitement. 

In 2008, researchers from the Massachusetts Institute of Technology founded Ginkgo Bioworks in Cambridge. As the company grew, it helped to create more local opportunities through the usual business practices of partnerships, start-ups, and acquisitions. At the same time, Ginkgo and its community have invested resources in outreach activities that welcome young synthetic biologists into the field. For example, Ginkgo’s founders have long been involved with the International Genetically Engineered Machine (iGEM) competition, which has historical roots in the Boston area. Over the last two decades, the annual iGEM competition has become the world’s largest for synthetic biology, with more than 400 teams from 45 countries submitting projects in 2023. Ginkgo’s founders were also involved in creating the BioBuilder Foundation, which brings educational activities involving synthetic biology to high school students and educators in nearly every state and 55 countries.


Here in the Great Lakes, building a similarly supportive community of industry and educators will take time and will necessarily look quite different from communities in areas with a more established bioeconomy. Our experiences have given us ideas about how a thoughtful, local biotech community might be built in our area by focusing on engaging educators and students starting in middle and high school.

First off, students in our region are interested and motivated to be involved in biotechnology when given the opportunity. In 2023, Ohio State University, Wisconsin Lutheran College, the University of Chicago, Wright State University, the University of Michigan, and Alma College all fielded undergraduate teams to the iGEM competition, while the Illinois Institute of Technology fielded a graduate team.

At Alma College, a small liberal arts school in rural Michigan, one of us (Camenares) helped lead development of a synthetic biology curriculum that is integrated within biochemistry courses and extracurricular activities. Since 2017, this curriculum has included a special one-month course during the college’s signature spring semester when students work on the year’s iGEM competition project. Camenares’s experiences with the project have been striking, in part because of the way they have revealed students’ hunger for involvement with this potentially transformative technological field.

In 2019, the Alma College iGEM team’s challenge focused on combating atherosclerosis. Students connected with the challenge because of family histories with the disease, and some were inspired by family members who had worked at Dow Chemical, also located in Michigan. They presented their work at the end of the month to an audience that included their parents and other students.

One student was a graduating senior who took the course despite not needing to. After the presentation, the student’s parents pulled Camenares aside to say that they initially doubted their child’s decision to take the course, but upon seeing the final project—a genetically modified probiotic to eliminate an atherosclerosis-promoting metabolite from the diet—they understood the exciting and empowering work their child had become immersed in by taking the course. In places with a low concentration of science, technology, engineering, and math—STEM—careers, the field may be seen as abstract or elitist; but by exploring social concerns, the challenge enables students to envision themselves using biotech to care for their families and communities. Importantly, students who participate in competitions have been found to be more likely to express interest in pursuing a STEM-related career than those who do not, and participating in multiple competitions strengthens that interest.

Another benefit of the iGEM competition is that it connects students with peers regionally, nationally, and internationally, helping them understand the global nature of biotechnology research and business.

It must be said, however, that iGEM also lets students see how other places are preparing their peers to participate in this growing field. Of 124 high school teams competing in 2023, just 10 were from the United States, and 6 of those were from California. The polish and success of the team from Lambert High in Suwanee, Georgia, always stand out (the school has been participating since 2013). But there have not been any high school competitors from the Great Lakes states since 2019, when Dayton, Ohio, fielded a team. Once students have understood the bioeconomy as a growing global phenomenon, they worry about it passing them by. Students frequently asked Camenares why their high school didn’t have a program or an iGEM team. “Why am I only learning about this fantastic and accessible field now?” they wondered.

Far from being a rhetorical question, we see it as a call to action. Students call on us to find ways to adopt iGEM, BioBuilder, or similar programs in many more high schools in our region. They also inspire us to confront the barriers to bringing this experience—and the bioeconomy—to all students who are interested.

At the moment, these barriers are so profound that only the most determined surmount them. In 2021, when one of us (Subramanian) was a freshman at Waubonsie Valley High School in Aurora, Illinois, he cold-emailed community college professors in hopes of finding research to participate in. He was able to connect with a researcher at West Chicago’s BioBlaze Community Bio Lab who was working on a commercial enzyme project for the iGEM competition. When the community lab closed, Subramanian searched again for biotech opportunities before finally becoming involved as an individual in the iGEM community in 2023.

Ideally, students would have access to synthetic biology in high school and college classrooms, but many educators lack confidence that these programs can be conducted at smaller institutions. Although examples like Alma College and BioBlaze Community Bio Lab show what is possible, it is difficult for educators with limited knowledge of synthetic biology to teach this field without access to a community of peers. When we have corresponded with local educators, many have said that without easy ways to engage with a community or explore the subject, they are hesitant to invest time and resources in this emerging field. They worry that the benefits to their students may not be worth the considerable investment.


To encourage more widespread adoption of synthetic biology educational programming, educators need to be convinced that the work is worth it, and perhaps not as difficult as it seems at first. A common theme among educators we have spoken with is that, given all the challenges in both secondary and higher education and the many demands on their time, they hesitate to change their practices, even though they are aware that synthetic biology could advance a regional bioeconomy.

One way to support educators is to build a regional community to provide resources and encouragement for such programs to flourish. Such communities can help build a pipeline from junior high school through college, so a student’s positive experience in synthetic biology need not be their last, but rather can feed into even more exciting opportunities in research, academia, industry, or the larger community. 

Another way to build educators’ confidence is to contextualize synthetic biology within existing educational models, such as those that rely on the engineering design process (EDP). These principles and practices were originally developed for industry, but have since been adapted in various forms for the classroom. There is research suggesting that employing EDP in biology classrooms leads to improvement in critical thinking, application, and problem-solving.

However, there are other ways to establish the carefully interwoven relationships at the local level that a budding bioeconomy requires. In contrast to supporting an iGEM team, which requires a significant investment of time, resources, and expertise, educators may find that supervising students as they establish connections with local experts from industry, research labs, or local biofoundries can be enriching and easier to navigate. Importantly, building these connections helps students learn how to interface with and listen to local stakeholders, contributing to the growth of bottom-up bioeconomies. Nurturing these relationships can help students learn more about what is going on around them and actually help bring about innovative solutions to problems, drawing on regional identities and values.

The three of us recently formed the Great Lakes SynBio Association to try to bridge the gap between students, educators, and industry. As we work to build partnerships in the area, we hope that this group can be the start of a vibrant community that leverages the strengths and energy of the place we know and love.

Creating a bioeconomy in the Great Lakes region can build on models found elsewhere in the country, but it will require interventions that are finely calibrated to local people and resources. Rather than adopting programs like iGEM as a one-size-fits-all curriculum, it may be helpful to create regional initiatives that foster local communities among education, industry, and government or nonprofit organizations first. This may take different forms in different cities, counties, and regions. But this targeted approach reflects a bigger truth about the bioeconomy: unlike past types of industrialization, it may succeed the most when it is hyperlocal. The insight that every region has unique problems and opportunities is central to the goal of spurring development of the bioeconomy everywhere.

Revisiting the Connection Between Innovation, Education, and Regional Economic Growth

Forty years ago, Bruce Babbitt, then governor of Arizona, wrote in the first issue of this magazine that state and local governments had “discovered scientific research and technological innovation as the prime force for economic growth and job creation.” The last four decades have tested the soundness of this claim. Though advancements in research and technology have undoubtedly transformed regional and national economies, technological innovation alone has not been an economic silver bullet. In fact, the impacts of innovation have been far broader—disruptive technologies have driven industry shifts, transformed the nature of work, connected partners across the globe, and affected many aspects of society in ways that were likely unthinkable in 1984.

Although some regions managed to harness innovation as an economic force, many places across the United States still struggle to assemble the components necessary to realize sustained economic growth. We now know that regional growth requires a deliberate blend of ideas, talent, placemaking, partnerships, and investment. First, it calls for dynamic research and development capacity, usually provided by research universities or federal, nonprofit, or industry research labs, to continuously foster discovery and development of new knowledge and concepts. Second, a large and diverse talent pool with expertise and experience relevant to the industrial sectors in the region is paramount. Third, a physical place or an innovation hub is needed to foster dynamic interactions and collaborations among academic researchers, industry partners, entrepreneurs, and community leaders. Fourth, financial and policy support from state and local governments is critical to direct resources and remove barriers. Finally, a growing regional economy often has robust venture capital capacity and a healthy entrepreneurial ecosystem.

Governor Babbitt used gardening metaphors to talk about technology’s impact over time: “rooting,” “blooming,” “ripening,” and “harvesting.” In hindsight, those metaphors leave out the collective, intentional, and coordinated work that must be done to make regional change happen, not only for jobs, but across society. The term I would use is “nucleating,” which refers to creating a central ecosystem that can support continual outward growth. Nucleation requires persistence and intent, and its effects can be far-reaching. 

Economic value of academic research

For much of the 1980s, translating research findings and breakthroughs from universities and government labs into commercially viable products or services was seen as the key to gaining a competitive advantage in the global economy. At the time, Babbitt observed increasing levels of investment in university research and development, coupled with a recognition that “the fruits of university research and development activity have little economic value unless they are systematically harvested in the marketplace.”

Then and now, one could argue that not all academic research should be motivated by economic potential, though many academic research efforts contribute to solutions that have economic value. Following the passage of the Bayh-Dole Act of 1980, many research universities installed technology transfer offices to harvest the economic value of inventions resulting from academic research funded by the federal government and other sponsors. Over the last few decades, these offices have played a significant role in bringing the concept of technology commercialization to university campuses, and have established best practices and policies in patent management and licensing agreements.

However, most university technology transfer offices cannot break even financially. A 2013 Brookings Institution report on university startups estimated that from 1992 to 2012, on average, 87% of technology transfer offices did not generate enough licensing income to cover the wages of their technology transfer staff and the legal costs of filing patents. Today, many technology transfer offices face greater pressure to generate more licensing income, which requires balancing necessarily robust patent portfolios with the cost of maintaining such operations.


From invention disclosures and patent applications to licensing, follow-on R&D investment, and sometimes clinical trials and regulatory approval, it generally takes years for a new technology to reach the marketplace. The process is more frequently iterative than linear, requiring deep engagement and collaboration between academic inventors and the industry or startup licensees. To facilitate this untidy process successfully, universities must connect technology transfer offices with corporate partnerships and entrepreneurial activities on campus, which can be organizationally challenging. A number of other pitfalls may prevent academic inventions from realizing their full economic potential, including the lack of a place for technology incubation, insufficient funding to bridge the “valley of death,” and an inadequate understanding of market need or addressable market size for the product. For these reasons, technology commercialization takes integrated efforts and partnerships—it is an ongoing process of investing in the future.

Over the last 50 years, many federal initiatives have been created to foster long-term partnerships and investment to address critical challenges within the research ecosystem. For instance, in 1973, the National Science Foundation (NSF) launched the Industry-University Cooperative Research Centers program to develop long-term partnerships among industry, academia, and government.

In 1985, NSF established the Engineering Research Center (ERC) program. Each center is designed as a 10-year endeavor, and the program has become a successful platform for faculty, students, and staff in academia to collaborate with industry while working on complex long-term challenges; producing new knowledge, technologies, and startups; and preparing talent for emerging technological sectors. 

In 2007, the National Academies of Sciences, Engineering, and Medicine released a congressionally mandated report, Rising Above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future. The report recommended federal policymakers take actions to enhance the science and technology research enterprise with the goals of creating high-quality jobs and meeting the nation’s needs in clean, affordable energy. That same year, the America COMPETES Act was signed into law, which officially authorized the creation of the Advanced Research Projects Agency-Energy (ARPA-E). The ARPA model stresses the importance of agile but potentially transformative investments in project-based, high-risk research and technology development. Though the arc of an ARPA project may be just a few years, the existence of an agency—or multiple agencies—to coordinate such investments is itself a long-term, future-oriented effort. 

In 2014, the Revitalize American Manufacturing and Innovation Act authorized the Department of Commerce to initiate the National Network for Manufacturing Innovation, now known as Manufacturing USA, to secure US leadership in advanced manufacturing. Today, Manufacturing USA is a national network of 17 linked regional manufacturing institutes, where academic, industry, and other stakeholders collaboratively develop new technologies, test prototypes, and enable the future manufacturing workforce.

Efforts to capitalize on university research continue. The CHIPS and Science Act of 2022 created NSF’s Regional Innovation Engines and the Economic Development Administration’s Regional Technology and Innovation Hubs programs. These programs are new commitments to the enduring idea that long-term investment that focuses on critical challenges is needed to nucleate and expand innovation ecosystems. In this sense, Babbitt’s initial insight about the centrality of research and technological innovation to regional economic health stands the test of time and has become more significant, albeit in a far more complex form.

Evolution of place-based innovation

Beyond the efforts of federal initiatives and universities, the goal of nucleating and growing innovation ecosystems has sparked new models of place-based innovation at regional and state levels over the past several decades. State and local governments as well as regional business communities played significant roles in the establishment of these place-based innovation ecosystems, which continue to shape the landscape of innovation. It is important to note that the role of state and local government is neither passive nor confined to a single lever such as zoning or tax incentives.

The first university research park—now a widely adopted model in the United States and worldwide—was formed in the 1950s. City and university leaders partnered to allow Palo Alto, California, to annex land from Stanford University for R&D industrial development. The dynamic mix and high concentration of companies that formed across Stanford Research Park became one of the driving forces behind the development of Silicon Valley. Although Stanford Research Park is only two miles away from the university campus, these companies technically do not co-locate with university researchers. 


Another well-known research park is North Carolina’s Research Triangle Park. Leveraging the capacity of three nearby research universities—the University of North Carolina at Chapel Hill, North Carolina State University, and Duke University—Research Triangle Park was established in the late 1950s with strong support from the state, cities, local business leaders, and universities. Today, numerous businesses and employees call Research Triangle Park home, and its high density of companies and talent helps attract research-driven organizations and people, fueling regional economic growth.

In recent decades, a new model has been emerging: the co-location of university research and education facilities, industry partners, startup companies, retail, maker spaces, and even apartments, hotels, and fitness centers. This “innovation district” model features a high density of companies and talent; open and highly connected placemaking; and a culturally dynamic living, working, and social environment that enables ideation and collaboration. Researchers, industry partners, entrepreneurs, and investors work and socialize in these innovation districts, bouncing ideas, forming partnerships, and starting new ventures.

Kendall Square in Cambridge, Massachusetts, is a well-known example of an innovation district. Originally an industrial district, Kendall Square has, since the 1990s, developed a concentration of offices and lab space for large corporations, startups, and incubators, along with apartments, hotels, restaurants, and retail. The dynamism of the square mile comprising the district, within walking distance of the Massachusetts Institute of Technology, provides an intellectually stimulating and socially interactive environment that catalyzes partnerships and attracts more co-locating businesses and organizations.

Innovation districts do not just happen spontaneously; they require tremendous attention to placemaking. Details such as the design of lab and office space, connectivity between buildings, location of open space, position of parking garages, and the density of restaurants and coffee shops can all influence a district’s overall environment.

Today, regional innovation ecosystems have become globally networked as well as regionally clustered and place-based. Thanks to the widespread adoption of virtual meeting platforms, researchers, business leaders, and entrepreneurs can now connect across the globe. But research, innovation, and technology development often call for deeper collaboration and in-person interactions; therefore, it is unlikely that virtual platforms will replace place-based innovation. Instead, they will complement each other, making regional ecosystems even more effective.

This new trend in connectivity may also enable more distributed economic growth. In the last few decades, research-driven economic growth has occurred mainly along the coasts or in major metropolitan areas. Virtual networks may now be helpful in nucleating growth in regions that have struggled economically, for example, by bringing funding to regions that currently lack a venture capital or angel investment community.

Inspired by new patterns of public and private cooperation and approaches to stimulate education reform, Babbitt said it was still “too early to pick the fruit.” A lesson from the last 40 years is that successful efforts take deliberate actions. Regions that can master the art of cultivating partnerships and nucleating place-based innovation will be well positioned for the future.

Propagating entrepreneurial ecosystems

Technology-based startups are a key component of an innovation economy. They hold high potential for generating financial returns, but more importantly, they enable new jobs, business models, and even industry sectors. They drive the dynamics of a regional ecosystem, stimulate excitement and creativity, and attract talent and investors who share their motives and passion.

But technology-based startups also face unique risks associated with technology development: a frequently long runway to commercialization, sizable capital investment, uncertain team dynamics, and emerging and ever-changing markets.

These combined risks are often referred to as the valley of death. Since the 1980s, many programs have attempted to bridge the valley of death by “de-risking” technology-based startups. For example, in 1982, through the Small Business Innovation Development Act, the Small Business Innovation Research, or SBIR, program was created to stimulate technological innovation and support small businesses. In 1998, Maryland established the Maryland Technology Development Corporation to facilitate the creation of early-stage companies, provide funding, and support their growth. And around 2000, Kentucky stood up the Kentucky Enterprise Fund, providing pre-seed and seed-stage venture capital-type investments to high-growth startups. In 2011, the NSF launched the Innovation Corps (I-Corps) program, providing experiential learning of market discovery for entrepreneurial teams to evaluate the market need and potential of their inventions.

This constellation of federal and state investments in pre-seed or seed-stage startups has been effective but not sufficient. Substantial follow-on private investment is frequently needed for technology-based startups to develop a market-viable product or service, build business partnerships, establish manufacturing or distribution channels or both, and ramp up revenue streams. Venture capital funds and angel investment networks are essential for the growth of a regional entrepreneurial ecosystem.

However, US venture capital funding is highly concentrated in a few metropolitan areas. According to CB Insights, US venture funding reached a total of $198 billion in 2022, of which about $128 billion was invested in the Silicon Valley, New York, Los Angeles, and Boston areas.

Only concerted efforts among state government, research universities, philanthropy, and local startup incubators can build the resources to host and retain startups in a region, provide seed funding, and cultivate a compelling, high-quality deal pipeline, which will in turn attract more capital investment to regions.

STEM education and talent for new challenges

Babbitt observed the increasing sophistication of science, technology, engineering, and mathematics—STEM—careers and called for education reforms that could prepare a new workforce to brave the coming “information revolution.” But even this insight fell short of understanding the many ways the acceleration of innovation would affect jobs, the economy, and communities.

STEM employment has grown considerably, and since the 1980s technology has transformed health care, banking, insurance, legal services, manufacturing, agriculture, transportation, and retail. Today, STEM jobs are found across almost all business sectors. For instance, the use of predictive analytics to establish customers’ purchasing patterns to manage supply chains has created demand for STEM jobs in the retail industry. In fact, from 1990 to 2016, STEM employment grew by 79%, while overall employment grew by only 34%. With generative artificial intelligence, the future of STEM jobs remains in flux—a 2023 McKinsey report predicted that an additional 12 million US workers may need to transition to different occupations by 2030.

Today’s societal challenges need more than traditional STEM education. Pressing needs for innovation in energy, water, food, land use, environmental sustainability, health care, and education require solutions that stretch beyond science, engineering, and technology. To be prepared, today’s STEM students need to learn the most advanced knowledge in their fields, in addition to understanding business and policy principles and being able to discern different cultural, societal, and historical contexts. They need to be collaborative team players, creative and critical thinkers, motivated value creators, and effective communicators.

Traditional classroom learning is no longer sufficient to prepare the next generations of STEM workers and leaders. To keep pace, STEM education must provide both foundational knowledge and hands-on experience and skills. For decades, universities have experimented with modalities of experiential learning, ranging from internships and co-ops to on-campus capstone projects and off-campus project-based learning. These are no longer optional, but required.

Worcester Polytechnic Institute (WPI), where I am now president, has been providing project-based learning since the 1970s. Today, WPI students form interdisciplinary teams and immerse themselves in real-world settings, working in one of WPI’s global project centers to solve problems full time for a period of seven weeks. This transformative learning experience prepares students to work as a team, learn how to learn, communicate and collaborate, see the world from different cultural perspectives, and most importantly, be motivated to address problems that truly matter to society. As a result, WPI graduates are sought out by employers. They are not only knowledge- and job-ready, but also career-ready.  

But reimagining STEM education must also happen beyond college-level preparation. It is widely known that academic interest in STEM is developed in early childhood and middle school. However, there are still many K–12 schools across the country without sufficient access to STEM curricula or extracurricular activities. While this issue is complex and requires persistent effort and sustained investment, one challenge policymakers must face head-on is the K–12 teacher shortage. Babbitt mentioned teacher shortages in science and mathematics in the 1980s. The problem has not budged. A 2023 Learning Policy Institute report estimated that about 1 in 10 teaching positions nationally were either unfilled or filled by teachers not fully certified for their assignments. The long-term impact of K–12 teacher shortages is significant and may play a role in undoing other efforts to catalyze economic growth.


Cultivating a large K–12 STEM talent pool calls for collaborative and innovative approaches to nurturing curiosity and inspiring deep, lasting interest among learners of all ages. To complement classroom learning, nonprofit organizations and programs such as museums, competitions, networks, and clubs can offer interactive and motivating experiences where this kind of inspiration is often sparked. For instance, For Inspiration and Recognition of Science and Technology, or FIRST, the community behind the youth-serving robotics competition founded in 1989, provides engaging robotics activities that have opened horizons for generations of students to access the power of knowledge, creativity, and teamwork.

Numerous STEM outreach programs have been established over the last few decades. To benefit more students and deliver lasting impact, these programs need to achieve not only learning outcomes, but also scalability and affordability.

The importance of considering societal impact

The world Babbitt was writing from in 1984 looks markedly different from today: we are now exponentially more connected, we generate and depend on vastly more data, and technology has made many aspects of life and work more convenient and efficient. On the other hand, some technologies have created unintended sociological, societal, and environmental problems. It is useful to contemplate what the differences between the two eras might tell us about the future as we consider these challenges, still looking for answers to many of the same questions while facing another dramatic industrial shift.

From his perch, Babbitt saw technological innovation driving economic development at regional and state levels to form a nationwide trend. These shifts were related to the emergence of personal computers and the internet, which transformed business sectors and ultimately enabled new technologies and jobs in the following decades. What Babbitt couldn’t foresee were the ripple effects of changes made to the regional, national, and global economic landscapes, as well as to our daily lives.

Today, we can imagine an analogous multidecade shift as generative and applied artificial intelligence, robotics, and life science breakthroughs—along with the vast data generated by ubiquitously connected devices—enable new technologies, businesses, and types of jobs. We must try to anticipate how such cascading changes will impact people’s lives, society, culture, policy, and the planet.

More than ever, societal impact must be integrated with technological advancements, STEM education, and economic growth. It cannot be an afterthought; building a healthier, more bountiful, vibrant, and resilient society must be the guiding vision, as well as the goal, of a regional innovation ecosystem.

Alta Charo Considers Ethics for Stem Cells and CRISPR

A lawyer and bioethicist by training, Alta Charo has decades of experience in helping to formulate and inform science policy on new and emerging technologies, including stem cells, cloning, CRISPR, and chimeras. The Warren P. Knowles Professor Emerita of Law and Bioethics at the University of Wisconsin-Madison, she served on President Clinton’s National Bioethics Advisory Commission, was a member of President Obama’s transition team, was an advisor for the Food and Drug Administration, and served on more than a dozen study committees for the National Academies of Sciences, Engineering, and Medicine.

In the fourth episode of our Science Policy IRL series, Alta joins Issues contributing editor Molly Galvin to explore how science policy can and does impact people’s lives in real and profound ways. She also describes what it’s like to be one of the few non-scientists at the science policy table, how helping a close friend who died of ALS continues to inspire her work, and why science policy can help us become techno-optimists.

Is there something about science policy you’d like us to explore? Let us know by emailing us at podcast@issues.org, or by tagging us on social media using the hashtag #SciencePolicyIRL.



Transcript

Molly Galvin: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine, and by Arizona State University. I’m Molly Galvin, a consulting editor for the journal.

Today, I’m joined by Alta Charo, the Warren P. Knowles Professor Emerita of Law and Bioethics at the University of Wisconsin-Madison, for the next installment of our Science Policy IRL series, where we’re pulling back the curtain to learn more about the community of people involved in making science policy happen by interviewing real people about their everyday experiences in the field. If you haven’t already, please check out our previous episodes below.

A lawyer and bioethicist by training, Alta Charo has decades of experience in helping to formulate and inform science policy on new and emerging technologies — from stem cells and cloning, to CRISPR and chimeras. She served on President Clinton’s National Bioethics Advisory Commission and was an advisor at the Food and Drug Administration from 2009 to 2011. She was also a member of President Obama’s transition team where she focused particular attention on transition issues related to the National Institutes of Health, the FDA, bioethics, stem cell policy, and women’s reproductive health. From 2021 to 2023, she served as the lead co-chair of the Safety, Security, Sustainability and Social Responsibility Unit of the U.S. Department of Defense. She is a member of the National Academy of Medicine and has served on more than a dozen National Academies study committees. She was also the inaugural David A. Hamburg Distinguished Fellow at the Nuclear Threat Initiative.

First thing I want to ask you seems like a basic question, but it could also be a very big question. How do you define science policy?

Alta Charo: Well, to give myself time, I’m going to say thank you for having me. It’s a pleasure to see you again. I think science policy has two sides of the same coin. One is the policy about how we are going to fund and perform scientific research and to deploy the results of that research. And so that’s really kind of the government role. The flip side of science policy is looking at the societal implications of what science can accomplish. And these are obviously related: what the government or the philanthropies choose to fund is influenced by what they’re hoping to achieve in terms of social effects. But they are different areas of analysis that tend to call on different skills.

Galvin: Can you tell us how you got involved in science policy?

Charo: Well, I had a very confused childhood. I was fascinated both by the classic humanities, history and literature, and I loved to write. I wrote lots of really bad short stories. And I also loved science and I loved especially the kind of social policy implications of science. In the 1960s, we were in the middle of huge debates about nature versus nurture in the context of the civil rights movement. And so, I went to college originally, if you look at my freshman yearbook, announcing that I plan to double major in English and Mathematics and I compromised and I decided to major in Biology instead. And that turned out to be a very lucky choice because in addition to really liking the science, I lucked into classes on biology and social issues, biology and behavioral determinism, biology and evolutionary theory taught by people like E.O. Wilson and Stephen Gould and Richard Lewontin and Ruth Hubbard and George Wald. It was an incredible time to be watching these titans of the field debating one another in my classroom. So when I decided to use the biology degree more for social purposes than for basic science research, law became a tool more than a goal in and of itself. And so I don’t think it’s surprising I wound up doing science policy in many ways, although I thought it would be environmental policy and then it became therapeutics and reproductive genetics and such. It seems like there was always going to be a path that led to this place.

Galvin: You’ve been involved in advising on so many issues, including human genome editing, gene therapy, stem cell research, biosecurity. Can you talk a little bit about one or two projects that really resonated with you?

Charo: Sure. Let me briefly mention, I think, three different things. So one has to do with stem cells and regenerative medicine. One of my best friends from childhood developed ALS (amyotrophic lateral sclerosis) when she was only 30 years old. And for the first six years of my time at Wisconsin, I flew back to New York very, very frequently to help take care of her and keep her company as she went through the six years of decline and paralysis before she died. And then the opportunity arose to take all the experience I had developed in the area of human reproduction: work on abortion policy and contraceptive policy at USAID (United States Agency for International Development) and the Alan Guttmacher Institute, work on in vitro fertilization and surrogacy for the former Congressional Office of Technology Assessment, and then academic research on so-called reproductive technologies.

It all came to a head when on my own campus, Jamie Thomson was immortalizing embryonic stem cells for the first time. And it was at the center of a debate that was about everything except embryonic stem cells. It was a debate about abortion. It was a debate about the role of women. It was a debate about the nature of personhood and human life, but it wasn’t actually about embryonic stem cells. It became a stand-in for everything. And it was clear we were at risk of having overreaction at the legislative level, at the state level or at the federal level, and that the scientific community would benefit from some consensus about how to proceed in this very tricky area. And I co-chaired the National Academies Committee on Embryonic Stem Cell Guidelines. And I do believe that the very extensive guidelines and continual revision of those guidelines that we developed helped to stabilize the field. And once there was a change in presidential policy on this with regard to funding, it also helped to shape the federal rules that now govern funding in this area. So, I feel like there was a very concrete outcome.

The second thing I want to mention is also something that came out of work with the Institute of Medicine and it’s not something that people talk about very much, but some of the research on preventing maternal to child transmission of HIV took place in parts of Africa where the American standard approaches simply were not feasible. They were too expensive. They involved too many visits and people were not actually showing up for prenatal care until very close to delivery. It was not going to work. And so there were tests to see which of these drugs that we were now working with could be used late in pregnancy in order to help save children’s lives. And it was a touchy area of research for all kinds of reasons, not the least of which is that women who were HIV positive were often stigmatized or even thrown out of their homes. They were at great risk. So you had to approach this research very carefully. And as a result, it was very carefully reviewed both locally and in the United States before it was started.

Despite that, there were critics of the research and complaints that in some cases, some of the paperwork to document these reviews had not been done. And there was the threat that some of the more successful interventions were going to be pulled out of the president’s PEPFAR (President’s Emergency Plan for AIDS Relief) program. And I was one of the people called in by Tony Fauci to review, yet again, this set of experiments and clinical trials in order to confirm that they were done ethically and that we could therefore use the results. And it was a very difficult task because we were trying to recreate a lot of events that took place halfway across the world. And I remember sitting in the briefings and trying to point out that if you were sitting in an airplane and they forgot to do the checklist, they need to be disciplined because the checklist serves to reduce the risk of accidents. But if the plane still took off, it doesn’t mean that they didn’t actually fly the plane. And in this case, if they didn’t actually go through the checklist on the ethics reviews, it doesn’t mean that the experiment was unethical, but they do need to be told how to do it right the next time so that you reduce the risk of actually having an unethical experiment. And this turned out to be helpful in getting people to separate in their minds what the problems were from what the results should be. And I mean, it’s incredibly self-centered of me, but I do like to think that there are children that are alive today and healthy because I worked with other people to try to protect an area of research that had really concrete benefits for their mothers and themselves.

And the last is, I think going to be the genome editing report that again, I co-chaired. And here we were looking at a technology that was caught up once again in a conflation of debates around notions of reproductive autonomy and notions of eugenics and fears of eugenic eras and callbacks to World War II. And at the same time vast potential for advancing the diagnosis and the analysis and ultimately the treatment of a whole range of diseases and conditions. So our committee took this task very, very seriously. There were really very few guidelines to follow because the technology was so terribly new. I was fortunate that Jennifer Doudna had decided early on that she wanted to convene people to talk about this with her. She brought in Paul, the late Paul Berg, and David Baltimore who were veterans of Asilomar, and they put together a very small, very brief meeting in Napa Valley that I was included in along with Hank Greeley from Stanford as the two ethics people with scientists and journalists in a kind of mini replication of Asilomar.

And that set the stage in terms of having a background here and understanding that we needed to be very clear about separating heritable and non-heritable forms of this research. There’s a lot of science, regulatory science, needed to evaluate the science and its effects, but we had the tools. And in the area of heritable editing, we had to actually challenge the comfortable consensus that you would never make heritable changes, a consensus that was developed at a time when you couldn’t do it. And to ask, is it absolutely ethically indefensible under every possible imagined circumstance to make a change that would be heritable? And after many months, we concluded the answer is no. But finding such a circumstance and having the right preclinical evidence and regulatory apparatus to do it would be very, very hard to achieve. So we are miles away from doing it.

I think it was a valuable thing to trigger the regulatory apparatus around the world, in every country, to use their tools for somatic editing. And we are now seeing the success in approvals from the FDA of genome editing therapies that are profound and life-changing, as well as being the victim of a business model that creates extraordinarily high upfront costs, which may make sense over a lifetime, but we have to change the business model. And I think equally valuable was that we forced open the debate about heritable changes and forced people to become more nuanced in their critiques and more precise in their predictions about the kinds of things that might happen if this were made available.

Galvin: You’ve been involved in so many of those different, really controversial, and very high level, important policy discussions. And most of the time it seems that your role is an advisory role. Could you talk a little bit about how that works as an advisor versus someone in the government implementing policy?

Charo: Well, I think I have to preface it by saying that even inside the government, many roles are advisory, including the roles I’ve played. I’ve worked for the Office of Technology Assessment as a staffer, and we wrote reports for Congress that were deliberately bipartisan and laid out options for congressional action that ranged from doing nothing to doing something dramatic and that were not geared to any particular political party’s agenda, based upon our analysis of a technology and prediction of its effects and the advice of the advisory committees that we created. So there, the role as an advisor was to simply create in some great detail a set of options, but to let somebody else choose among them. So the great thing is that you become the people who really understand this business, but on the other hand, you have the frustration of not being able to express your opinion.

I worked for the US Agency for International Development. There, I was actually implementing. I was there as an AAAS fellow (American Association for the Advancement of Science Science & Technology Policy Fellow), which is something that people should definitely consider if they are thinking about moving from bench science to policy. I was there to implement executive policy, in this case having to do with access to family planning in Francophone Africa and Latin America. And I can tell you that there was at least one project I worked on which I was not particularly fond of. I thought that the focus was unbalanced and reflected a set of priorities that I thought actually could backfire, but my job was to make that project work as well as possible, and that’s what I did. I wasn’t happy about that. And so I suppose it’s no surprise that, after the experience of not being able to express my opinion when I worked for Congress and not being able to actually follow through on my opinion when I worked for the executive branch, I wound up in academe, where you can express your opinion all you want, very rarely having an effect on the world.

These are the trade-offs, right? These are the trade-offs. Once you get into academe, you either have to be so profound that you become fundamentally influential, I mean, on the order of, you know, a Karl Marx who starts an entire movement for the globe, or you can be fortunate enough to be plucked out to serve in these committee roles for the Academies, for government advisory committees. I served on President Clinton’s Bioethics Commission. And there you get to, once again, advise. Here you’re allowed to express your opinion, because they’re getting you specifically for your expertise and opinions, but once again, you have to then hand it off to see if somebody else will implement. Right. So on the Clinton Bioethics Commission, we wrote a number of reports with very strong recommendations for federal agency action to improve how human subjects research is done. They didn’t take all of our recommendations, but some of them they did, and some of them were influential on others, but there were other things where we were not influential at all. So we participated in yet another effort that’s been going on for many decades to have a better set of rules for research with people who’ve got impaired competence, like people who are developing dementia. And it has still never made it into public policy, formalized in federal regulation or funding rules. So it’s a lottery, whether or not your work will actually have an effect. And I think you have to just keep throwing that pasta against the wall and hope that some of it sticks.

Galvin: And how about working on these very controversial issues? Do you feel like you’re kind of drawn to those for any particular reason, or does it just so happen that bioethics and biotech are such huge issues in general that a lot of that does become controversial?

Charo: Yeah, I think it’s kind of the latter. I mean, we’re talking about biotechnology, life sciences technology, and so you’re talking about things that touch on what’s most personal to people. So I’ve worked on things having to do with human reproduction. I think that’s probably got the highest controversy associated with it because it is the most intensely personal thing that happens to most of us. When I was at the FDA, I worked on, among other things, the approval process for engineered foods, bioengineered foods. Again, highly controversial because food is something we all take into our bodies. It’s something that we feel intimate with because we touch it, we taste it, we make it, we grow it. And so even aside from its economic importance, there is just a kind of visceral sense that it is something that should be accessible to the ordinary person. If you’re going to talk about, I don’t know, a super collider, it’s just not something that most of us, myself included, can really understand and get emotional about because we don’t feel personally attached to it unless we’ve worked with one or we’ve been closely involved in the kind of research that depends upon its results. But the stuff I work on, because it’s about lives and food and babies and how we die and what it feels like when we get sick, these are things that everybody experiences. So everybody has an opinion.

Galvin: A lot of times when you’re involved in these projects, you are one of the few non-scientists in the room, and I was wondering if you could tell us a little bit about what it’s like for you to work so closely with scientists and how you have viewed that throughout your career.

Charo: I think that one of the reasons I’ve had as good a run for my money as I have in the area of science policy is I think science is cool, and I think scientists are cool. I see too many meetings where the non-science people around the table are either uncomfortable with or even somewhat suspicious or even hostile toward people who are embedded in the scientific world. And that just sets us up for failure. The fact that I think scientists are cool and that science is cool, doesn’t mean I think it should be done without any kind of public input, public guidance, et cetera. That’s not the point. But the point is you have to be on the same team, right? We have to both be rooting for the same thing, which is for science to flourish and do something good for the world. Now, how do we all get there together?

Both my older brothers became scientists. They’re both doctorates, one in bioengineering and the other one in physics. And so I grew up in a household where science was something that we talked about. My middle brother had a telescope and would bring it up to the roof of our apartment building and we would try to see the stars in the overly lit skies of Brooklyn, New York. It was a very small apartment. I shared a bedroom with both my brothers. I remember late at night, it’s ridiculous, it’s coming to memory, late at night, and my oldest brother would be lying in the dark and he’d say things like, so if you’re sitting in a train and you throw a ball up in the air, how come you’re able to catch it instead of it just flying behind you while you keep going? And he was doing this to make me try to understand what he was already studying.

He was almost 10 years older than me. He was already studying physics and learning about the theory of relativity. Now I’m struggling to understand what happens in a train. So for me, it was a game and it’s part of what made the whole thing fun. So I don’t feel uncomfortable at those tables. Having the biology undergraduate degree, and then having sat through so many scientific meetings, really trying to hear what they’re saying, to grasp as much as I can, and to read the scientific journals, I feel like I’ve had a continuing education program. It does not make me a scientist, but it has helped me to hear and understand enough to be able to figure out how that might affect something that I need to know about and to ask questions. I have to say that asking questions is probably the most important thing at the table.

Galvin: I think different people and different disciplines just think differently, and with the questions that scientists might be asking, sometimes they’re not thinking about the practical applications of science and the real-world effects. One thing that we talk about a lot is how to bring the public perspective into these big questions for genome editing or stem cell research. We talk a lot about public participation. I wonder if you have thoughts about how we can really do that on a practical level, how to make the public part of policymaking.

Charo: Yeah, I have had some thoughts about that. Having sat through many different settings in which there’s some form of public participation, whether it’s the public hearings where people could testify before the Bioethics Commission or it’s sitting in an IRB meeting where you’ve got one public member who’s supposed to somehow represent the world out there. So a couple of things. I think the first is that it’s almost always a mistake to have one public member, one non-specialist member. It puts that person in a terrible position because somehow he or she is supposed to represent all other people besides the experts, and it can feel like you are being drowned out, and the conversation can also get away from you as it gets hyper-technical and all the people who are in the field start using their acronyms and they’re making references to experiments and you’re sitting there like, “What are they talking about?” And it can make you very wary about even participating.

So I think the first thing is that you have to have a critical mass so that you have a range of people who are the non-experts and also people who can help give each other kind of the emotional wherewithal to wade into this thing. The second is to be clear about what you’re expecting from them. And I’m going to draw now on the IRB experience because in a sense that’s also policymaking. And actually, along with that, the same thing applies when it comes to how you talk to people about end-of-life care. Let’s talk about end of life, actually. It’s a big mistake to ask somebody, do you want to continue antibiotics or do you want to be put on a ventilator? Because these are technical questions. What you really want to be getting from somebody who’s not a physician or a bioengineer is do you want to continue if it’s going to be painful? Do you want to continue if it means you’ll never leave the hospital? Do you want to continue if it means you will be attached to a machine for as long as we can tell? In other words, you ask people about how they want to live and then you tell them, well, if that’s how you want to live, then this is the machine that we should use or shouldn’t use, or this is the drug we should use or shouldn’t use.

In some ways, I think bioethicists have created a monster in the form of informed consent because they’ve now driven it to the point where there’s the sense that somehow the individual patient has to make all the decisions, but the whole reason that they’re seeing a professional medical provider is because they’re looking for professional advice. What the patient is an expert on is what the patient’s life preferences are, not on the tools that will achieve that.

Similarly, if you’re in a committee or in an IRB or something, I think what the public members are there to do is to bring back the human element. So you can have people, let’s say, talking about the genetic technologies that might reduce the severity of some early onset disease, but it’s the non-experts who may be the ones who are in the best position to say, let’s talk about the family dynamics. Let’s talk about how this affects the way in which somebody is going to grow up or their relationship with their siblings. This is not to say that I’m telling you you should do it or shouldn’t do it, or it’s good medicine or bad medicine, but let’s broaden the picture so that instead of treating bodies as if they are machines with a broken part, we recognize that human beings have a physical, an emotional, and a psychological component that all have to be considered together. And in that way you can actually get the same kind of help on more public policy questions, too, like what we should be doing about biospecimen gathering, et cetera.

Again, the scientists will be able to tell you lots about what they could learn from the specimens and how that might be usable in this area or that area of research, but it’s going to be from the non-scientists that you’re more likely to have somebody talk about, well, what if somebody uses that for something that I think is horrible? I mean, what if they want to do, I don’t know, intelligence research, and I think that’s terrible stuff. I want to be able to say no to that.

Galvin: One question that we’re asking everybody participating in this series, and we’ve kind of already talked about this a little bit, but what are the big questions that motivate you to do this work?

Charo: Well, what motivates me is two things. I mean, one is this just kind of human curiosity that started back, as I said in the 1960s with the debates around nature and nurture and all that that led to, and the second is again, what I mentioned before, which is that really tragic six years of watching my friend die slowly from ALS and the frustration at the randomness of the universe. There’s no particular reason why she should have gotten this, and there’s no reason in the world that I can imagine that somebody should have to suffer like that. It’s one of those diseases that just makes you question the point of the universe. And the idea that anything I do like trying to save embryonic stem cell research and regenerative medicine research from being squelched by overreactive legislation, anything I can do that may help to make that situation something that’s only a matter of the past is incredibly motivating.

And so whether it’s testifying about the use of fetal tissue in research before Congress, or it’s in the work on regenerative medicine and genome editing for the Academies or the Bioethics Commission on sensible regulation of research, that doesn’t stop. It feels like I’m doing it, in a weird way I feel like I’m doing it all for her. The second is that I’ve been quoted this way before. I really do kind of divide the world into bio-pessimists and bio-optimists, and I’m a bio-optimist. I’m not foolish enough to think that technologies are all good, every technology is dual use. If we were to have imagined before there was fire, all the things that fire could do, we would’ve gone, wow, that’s great, and oh my God, that could destroy whole swaths of Canada and could send smoke flying all the way down to Washington, D.C.

But overall, I do feel like science and technology can be gathered and made into a force for good and progress, and we do see life expectancies increasing, disease rates dropping, famine rates dropping. I mean, we live in a better world as human beings than we would have 20,000 or 10,000 years ago, 5,000 years ago. We just do. So I’m waiting for the world in which we use biology for so many more things. I’m waiting for the world where our buildings are not built out of bricks, but they are grown from the ground up, in which we have living skins that react to the external environment, in which our parks are lit by bioluminescent plants instead of eating up fossil fuel that increases the greenhouse gas problem, where foods are cultivated in ways that don’t involve the use of animals to the point of animal torture and are actually healthier for us anyway. I see this world of possibility. It’s probably the Star Trek coming out in me, but the thing that frustrates me more than anything else is that I’m now old enough to know that I’m not going to see a lot of this and I really want to be around to see it all happen.

Galvin: What role do you see science policy playing in making that innovation happen? How important is it and how can science policy be crafted to both protect people’s interests and protect from risks, but also maximize those benefits?

Charo: I think we’ve made tremendous strides in having a set of ethical principles and actual governing rules to protect people and the environment while we do scientific research. I do think we’ve made vast progress, not only in the United States, but around the world, in doing that. I think there’s a different challenge and it’s a more difficult challenge, and that is figuring out how to marshal the power of technology and innovation in a way that quickly makes its benefits more evenly and more equitably distributed among people and its risks and burdens more equitably distributed among people. We live in a largely capitalistic globe. That’s the dominant system, and it has had vast, vast benefits for us because of the way it has incentivized innovation. But even the purest capitalist economist will tell you that it’s not perfect and that there are places where you need to intervene in order to direct innovation and human efforts in ways that the profit motive will not, at least not initially.

And so the National Academies has embarked on an effort to look at ways that we can better embed ethical principles into the innovation process, and recently released a report specifically on embedding equity into the innovation life cycle and looking at ways that government and funders, including nonprofit funders, as well as the individual scientists and the private equity, all of them, can be thinking early on about ways either to direct research with an anticipation of how to make sure that its benefits are going to flow out more equitably or to create parallel lines of research. Now, I’ll give you a very concrete example. At the Second International Summit on Genome Editing, we were already hearing about the possibility of developing a treatment for sickle cell disease. Now, sickle cell disease is endemic in West Africa, although we have plenty of it here in the United States, especially in the African-American population, and the technology was going down the road of an ex vivo methodology. You take cells out of a human body, you edit them and you put them back in the human body. It’s incredibly difficult, can be very painful, very expensive, requires very elaborate facilities, et cetera, but it made more sense because it gets away from the problem of trying to do something in the body without the genome editing going wild and going into organs and systems you don’t want it to go in. It makes sense to do that research.

The Gates Foundation was thinking, well, we need to have a parallel line of research that will slowly get us to the point of so-called in vivo editing, where you don’t have to do all that, because otherwise it’s unlikely that we will ever be able to see this work in places like West Africa, which have far fewer medical facilities. That is the kind of thinking I think we need to encourage so that we can all see the benefits of science flowing out to as broad a population as possible as early as possible, and in that sense, I think that we can really help to promote science policy for the public’s benefit.

Galvin: Alta, it’s been such a great conversation. Really appreciate you taking the time. Thank you so much for joining us.

Charo: Oh, it was really my pleasure. You’re very welcome.

Galvin: If you would like to learn more about Alta Charo’s work, check out the resources in our show notes. Is there something about science policy you’d like to know? Let us know by emailing us at podcast@issues.org or by tagging us on social media using the hashtag #SciencePolicyIRL. You can subscribe to The Ongoing Transformation wherever you get your podcasts. Thanks to our podcast producers, Sydney O’Shaughnessy and Kimberly Quach, and our audio engineer Shannon Lynch. I’m Molly Galvin, contributing editor at Issues in Science and Technology. Thank you for listening.

How Health Data Integrity Can Earn Trust and Advance Health

Imagine a scenario in which Mary, an individual with a rare disease, has agreed to share her medical records for a research project aimed at finding better treatments for genetic disorders. Mary’s consent is grounded in trust that her data will be handled with the utmost care, protected from unauthorized access, and used according to her wishes. 

It may sound simple, but meeting these standards comes with myriad complications. Whose job is it to weigh the risk that Mary might be reidentified, even if her information is de-identified and stored securely? How should that assessment be done? How can data from Mary’s records be aggregated with data from patients in health systems in other countries, each with its own requirements for data protection and formats for record keeping? How can Mary’s wishes be respected, both in terms of what research is conducted and in returning relevant results to her?
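One way to make the reidentification question concrete is to measure how unusual a patient looks within a dataset. Below is a minimal sketch in Python, using purely hypothetical records and field names (age_band, zip3, diagnosis), that computes k-anonymity: the size of the smallest group of records sharing the same combination of quasi-identifiers. Real assessments weigh much more, including what outside data an attacker could link against, but this conveys the flavor of the calculation.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the size of the smallest group of records that share the same
    combination of quasi-identifier values. A dataset is k-anonymous if this
    number is at least k."""
    groups = Counter(
        tuple(record[field] for field in quasi_identifiers) for record in records
    )
    return min(groups.values())

# Hypothetical de-identified records: no names, but age band, ZIP prefix, and
# diagnosis can still single someone out in combination.
records = [
    {"age_band": "40-49", "zip3": "021", "diagnosis": "rare_disease_X"},
    {"age_band": "40-49", "zip3": "021", "diagnosis": "rare_disease_X"},
    {"age_band": "30-39", "zip3": "100", "diagnosis": "rare_disease_X"},
]

print(k_anonymity(records, ["age_band", "zip3"]))  # 1: the third record is unique on these fields
```

A result of 1, as in the toy data here, signals that at least one supposedly de-identified record is still unique on those fields and could potentially be singled out.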

From electronic medical records to genomic sequencing, health care providers and researchers now have an unprecedented wealth of information that could help tailor treatments to individual needs, revolutionize understanding of disease, and enhance the overall quality of health care. Data protection, privacy safeguards, and cybersecurity are all paramount for safeguarding sensitive medical information, but much of the potential that lies in this abundance of data is being lost because well-intentioned regulations have not been set up to allow for data sharing and collaboration. This stymies efforts to study rare diseases, map disease patterns, improve public health surveillance, and advance evidence-based policymaking (for instance, by comparing effectiveness of interventions across regions and demographics). Projects that could excel with enough data get bogged down in bureaucracy and uncertainty. For example, Germany now has strict data protection laws—with heavy punishment for violations—that should allow de-identified health insurance claims to be used for research within secure processing environments, but the legality of such use has been challenged.

Health data integrity can serve as a guiding principle that embodies the collective conscience of health care.

What will help is to step back from focusing on the minutiae and embrace a larger principle: health data integrity. We see this term as encompassing both technical safeguards (accuracy, security, access) and ethical values (protecting patients, respecting their wishes, and advancing equitable, high-quality health care). As the Belmont Report and the Declarations of Helsinki and Taipei did for clinical research on human subjects, we believe that an international, multistakeholder effort to define and commit to health data integrity can help facilitate frameworks and cultural norms that justify Mary’s trust that her data will not be altered or misused, that her privacy will be respected, and that her contribution to medical science will be meaningful and secure. In other words, health data integrity can serve as a guiding principle that embodies the collective conscience of health care.

Integrity in a big data era

When Nick Schneider was a boy living in Argentina in 1988, he was hit by a car. After a brain scan resulted in an (incorrect) incidental finding of early dementia, his parents were able to mail medical data to experts in Germany and the United States for helpful second opinions. Today, such cross-country consulting could put health care providers in legal limbo. Numerous other practices that could potentially benefit individual patients are often challenging to navigate. For example, one of us (Lennerz) regularly encounters situations where he needs to identify patients similar to Mary in national or international databases. However, finding patients with identical or related genetic alterations and obtaining dependable medical information is a challenging—if not impossible—task, not because patients have opted out, but because health systems aren’t set up to enable it.

The regulatory and legislative frameworks governing health care data have, in many cases, struggled to keep pace with the requirements for collaborative research and innovation. The late Robert Eiss, who helped coordinate international projects at the US National Institutes of Health (NIH), highlighted several significant consequences of data sharing restrictions: almost 50 clinical research sites in the European Union were prevented from participating in NIH-sponsored COVID-19 trials, and dozens of EU projects assessing genetic and environmental factors for cancer risk were stalled. Prohibitions against exporting data prevent EU-run trials from submitting evidence to non-EU regulators, including the US Food and Drug Administration.

Different specifications around essential ethical practices—such as protecting sensitive data and obtaining informed consent—can also prevent collaborations. And the practical realities of working with real-world data, such as the heterogeneity of electronic medical records, often undercut efforts to put data to use. As health data science advances, the need for coordinated, internationally standardized, and reliable frameworks grows more apparent.

Effective frameworks for establishing health data integrity need to accomplish many aims simultaneously. They should honor informed consent and balance privacy needs with the benefits of sharing data—while also encouraging collection of the broadly representative data required to inform equitable health care practices. Frameworks should provide overarching requirements that ensure ethical data handling, responsible data use, and the transparent operation of language models to prevent fraud and abuse; and they need to enforce strict authentication protocols. International data sharing might seem to add to the complexity of these tasks, but we think it could actually ease them. These multifaceted ethical, regulatory, and practical challenges are best tackled via collaboration across countries and functions.

Solutions to these disparate problems share a common prerequisite: health care depends upon trust. Trust in the context of health data science encompasses trust between researchers, between patients and their health care providers, between humans and the technology they apply, and between nations in transnational collaborations. Health care workers must also trust that, say, blood samples and biopsies are analyzed in ways that enable good decisionmaking and patient care. And trust is earned through integrity.

Dedication to integrity

The need for integrity as a larger principle was brought home to us several years ago in a fortuitous encounter between two of us, Karl Lauterbach, Germany’s health minister, and Jochen Lennerz, who, at the time, ran a technology assessment laboratory at Massachusetts General Hospital. Lauterbach, an epidemiologist, faces national and supranational barriers to enabling health data research that improves care while simultaneously addressing privacy concerns in Germany and Europe. Lennerz faces practical and regulatory challenges to introducing cutting-edge diagnostics into cancer and other clinical care. For both, proper, effective handling of highly complex data is of paramount concern.

We joined forces with our third author, Nick Schneider, who negotiated the European Union’s General Data Protection Regulation (GDPR) on behalf of the German Federal Ministry of Health and led both the taskforce to adapt German federal health laws to the GDPR and the current negotiations toward a European Health Data Space, an infrastructure and framework set up to empower patients, protect their data, and foster health data science. Together we organized a high-level brainstorming meeting in Berlin, hoping to set the stage for strategic alignment.

Integrity extends beyond the realm of data accuracy or security; it encompasses a commitment to fairness, honesty, and respect throughout the entire health data life cycle.

This Data for Health conference brought together over 400 international stakeholders in the summer of 2023—representatives from industry, academia, law, biomedical sciences, and civil society. There were patient advocates, legislators, health policy advisors, consultants, students, ethicists, creative commons legal experts, data protection and cybersecurity experts, government and tech industry representatives, as well as private citizens and patients.

Instead of the usual conference setup with lectures and posters, we had panel discussions and participant-driven conversations, following the BarCamp format. This let us delve into some of the most pressing questions in health data science: assuring adequate levels of data protection and consent, assessing current data transfer practices, identifying legal bases for transfers, implementing additional safeguards within a legal vacuum, and creating mechanisms that enable health data to be treated differently from consumer data. We made much of the content available online for anyone who wants to follow the conversation and perhaps join in the future.

In these sessions, we also learned the depth of the conundrum this effort faces: discrepant regulatory and legislative frameworks on either side of the Atlantic lack any appropriate, practical working guidelines for enabling collaborative research. The threats to inadequately secured data are very real, as made clear by the list of breaches maintained at the US Department of Health and Human Services. A better focus on the most pressing risks could improve both data security and health data science. One theme that came up at the conference was that data protection officers within hospitals and health agencies often see their roles as solely protecting data against, say, an abstract risk of reidentification or unauthorized disclosures—rather than considering how data could be used to advance health care or how patients wish their data to be used.

The concept of health data integrity emerged as a guiding principle that resonated throughout the gathering in Berlin, surprising even the most seasoned participants. Integrity extends beyond the realm of data accuracy or security; it encompasses a commitment to fairness, honesty, and respect throughout the entire health data life cycle. It includes enabling appropriate use of data to advance health care, drive innovation, and enhance the well-being of diverse populations.  

It also intertwines with the pursuit of equitable health care. Health data integrity is essential in any effort to share sensitive data, and sharing diverse, representative datasets is the only way to gain insights across a spectrum of patients and so enable a more comprehensive understanding of health patterns, treatment efficacy, and various health influences on different demographic groups. In this context, health record vendors, health care providers, or any other stakeholders that interfere with permitted access, exchange, or use of health data violate integrity by hindering research and patient care. Recent laws in both the United States and Europe already ban this kind of interference as “information blocking,” but it still happens in practice. The development of common patient information and consent forms, as well as collaboratively written codes of conduct, can serve as practical means to ensure transparency, shareability, and accountability across the system. This was proposed by the Council of the European Union in its conclusions on COVID-19 lessons learned in health and confirmed by conference participants in Berlin.  

We continued to address these topics in a follow-up workshop in Boston last fall. The initiative demonstrated that commitment to integrity is an essential enabler; without that assurance, the medical field will not be able to move on from outdated, contradictory frameworks and embrace overarching ones to protect patients and advance research. The situation demands a concerted, comprehensive effort to produce an effective regulatory landscape. By embracing integrity, health care professionals, vendors, researchers, and policymakers can establish a financially sustainable health data science ecosystem that honors data subjects and drives improved patient care.

Creating a cultural imperative

At this point, instead of getting bogged down in detailed discussions about the numerous complex regulations that complicate the landscape, it might be more effective and efficient to simply commit to a clear declaration: it is not enough to merely share data; it must be done with integrity.

This approach has helped before. The Declarations of Helsinki (established in 1964 and updated several times) and Taipei (established in 2002) have long served as beacons of ethical conduct in medical research. The first declaration sets rules for medical research involving human subjects, and the second sets rules for research on health databases, big data, and biobanks. The infamous Tuskegee syphilis study in the United States also brought substantial changes in ethical guidelines for medical research. This study, conducted by the US Public Health Service from 1932 to 1972, withheld treatment from African American men with syphilis without their knowledge or consent. The resulting outrage led to the US National Research Act of 1974 and Belmont Report of 1979, which mandated the creation of institutional review boards and established basic bioethical principles, such as respect for persons, fair treatment, and an expectation that research subjects will benefit from participation. Together these declarations built up a cultural imperative to uphold ethical research on human subjects.

Now it’s time to extend the conversation to a new ethical and moral code for the use of data technologies in medicine. The medical profession, research communities, patient organizations, and civil societies need to set clear ethical and moral boundaries to underpin technical and legal requirements. The cultural imperative of health data integrity should be made strong enough to prevent health care providers, researchers, or vendors from violating the spirit of integrity, with appropriate legal implications.

Reasons why people wouldn’t want their data shared should be proactively assessed and honored, and everyone within health data science should be frank about how data might be used, including that it may not be feasible to retrain models if patients opt out and that there can be no guaranteed protections against hacking, resale of data, or nefarious unanticipated uses of data.

A cultural imperative compels people in the field to focus on more than meeting requirements and avoiding liability; they will be expected to do right by their patients and to enable data practices that produce better health care.

Regulatory and legislative governance structures ensure that ethical standards are upheld, patient rights are protected, and data privacy is maintained. We argue that elevating health data integrity to a cultural imperative can achieve a kind of commitment that frameworks alone cannot. A cultural imperative compels people in the field to focus on more than meeting requirements and avoiding liability; they will be expected to do right by their patients and to enable data practices that produce better health care.

Imagine a future where the consent that Mary gives within her health care setting is compatible with processes used around the world. When she signs the forms, she is provided with realistic options to opt out in the context of a conversation with her trusted provider. In this future, trans-agency coordination fosters health data integrity across health care institutions and regulatory bodies; seamless collaboration and information sharing are designed to benefit patients while upholding ethical standards. Additionally, trans-Atlantic consent mechanisms are established, integrating the requirements of both sides to foster cross-border health care data exchange that respects individual privacy and security needs.

Through the Data for Health Initiative, we have uncovered more than a dozen forms of integrity across various technical, professional, and other contexts. To move forward, several concepts must converge and harmonize with secure data practices to enable the power of large language and other artificial intelligence models. For instance, interoperability can enhance data sharing and collaboration, tokenization can provide a secure way to handle sensitive information, and blockchain can ensure the transparency and integrity of data—all of which are essential to unleash the potential of these technologies to transform health care while safeguarding patient privacy and security. We cannot imagine accomplishing these huge, important tasks without the concept of health data integrity to unite and motivate efforts.
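To ground two of the mechanisms named above, here is a minimal sketch, assuming a Python environment, with a made-up key and hypothetical record fields. It shows tokenization as keyed pseudonymization (only a data steward holding the key can link a token back to a patient) and a simple hash chain, the basic ingredient that blockchain-style ledgers build on to make after-the-fact tampering with shared records detectable. It is an illustration of the concepts, not a production safeguard.

```python
import hashlib
import hmac
import json

# Hypothetical key; in practice it would be held by a trusted data steward.
SECRET_KEY = b"replace-with-a-key-held-by-a-trusted-data-steward"

def tokenize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed pseudonym (token). Only a party
    holding SECRET_KEY can recompute the mapping; the token alone reveals nothing."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

def append_record(chain: list, record: dict) -> None:
    """Append a record to a simple hash chain: each entry stores the hash of the
    previous entry, so any later alteration breaks the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})

def verify_chain(chain: list) -> bool:
    """Recompute every hash and confirm the links are intact."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

chain: list = []
append_record(chain, {"patient": tokenize("mary-1972-04-17"), "result": "variant_of_interest"})
append_record(chain, {"patient": tokenize("patient-0042"), "result": "no_variant"})
print(verify_chain(chain))                    # True: records are intact
chain[0]["record"]["result"] = "edited"
print(verify_chain(chain))                    # False: tampering is detectable
```

In practice, who holds the key, what gets logged, and who is allowed to verify the chain are exactly the governance questions a health data integrity framework would have to settle.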

We implore the medical profession, research communities, patient organizations, and civil society at large to take proactive steps in shaping the future of health data science. By embracing integrity as a cultural imperative, stakeholders can navigate the complexities of health data science with ethics, transparency, responsibility, and improved care as guiding stars. This will help overcome the challenges of interdisciplinary miscommunication and other barriers to drive meaningful advancements in health care. A culture of health data integrity can ensure that patients have less to risk when they share data, and more to gain.

Rebuilding Public Trust in Science

As Kevin Finneran noted in “Science Policy in the Spotlight” (Issues, Fall 2023), “In the mid-1950s, 88% of Americans held a favorable attitude toward science.” But the story was even better back then. When the American National Election Study began in 1948 asking about trust in government, about three-quarters of people said they trusted the federal government to do the right thing almost always or most of the time (now under one-third and dropping, especially among Generation Z and millennials). Increasing public trust in science is important, but transforming new knowledge into societal impacts at scale will require much more. It will require meaningful public engagement and trust-building across the entire innovation cycle, from research and development to scale up, commercialization, and successful adoption and use. Public trust in this system can break down at any point—as the COVID-19 pandemic made painfully clear, robbing at least 20 million years of human life globally.

For over a decade, I had the opportunity to support dozens of focus groups and national surveys exploring public perceptions of scientific developments in areas such as nanotechnology, synthetic biology, cellular agriculture, and gene editing. Each of these exercises provided new insights and an appreciation for the often-maligned public mind. As the physicist Richard Feynman once noted, believing that “the average person is unintelligent is a very dangerous idea.”

The exercises consistently found that, when confronted with the emergence of novel technologies, people voiced similar concerns and demands. For instance, there was little support for halting scientific and technological progress, with some noting, “Continue to go forward, but please be careful.” Being careful was often framed around three recurring themes.

As the physicist Richard Feynman once noted, believing that “the average person is unintelligent is a very dangerous idea.”

First, there was a desire for increased transparency, from both government and businesses. Second, people often asked for more pre-market research and risk assessment. In other words, don’t test new technologies on us—but unfortunately this now seems the default business model for social media and generative artificial intelligence. People voiced valid concerns that long-term risks would be overlooked in the rush to move products into the marketplace, and there was confusion about who exactly was responsible for such assessments, if anybody. Finally, many echoed the need for independent, third-party verification of both the risks and the benefits of new technologies, driven by suspicions of industry self-regulation and decreased trust in government oversight.

Taken as a whole, these public concerns sound reasonable, but remain a heavy lift. There is, unfortunately, very little “public” in the nation’s public policies, and we have entered an era where distrust is the default mode. Given this state of affairs, one should welcome the recent recommendations proposed to the White House by the President’s Council of Advisors on Science and Technology: to “develop public policies that are informed by scientific understanding and community values [creating] a dialogue … with the American people.” The question is whether these efforts go far enough and can occur fast enough to bend the trust curve back before the next pandemic, climate-related catastrophe, financial meltdown, geopolitical crisis, or arrival of artificial general intelligence.

Visiting Scholar

Environmental Law Institute

Coping in an Era of Disentangled Research

In “An Age of Disentangled Research?” (Issues, Fall 2023), Igor Martins and Sylvia Schwaag Serger raise interesting questions about the changing nature of international cooperation in science and about the engagement of Chinese scientists with researchers in other countries. The authors rightly call attention to the rapid expansion of cooperation as measured in particular by bibliometric analyses. But as they point out, we may be seeing “signs of a potential new era of research in which global science is divided into geopolitical blocs of comparable economic, scientific, and innovative strength.”

While bibliometric data can give us indicators of such a trend, we have to look deeper to fully understand what is happening. Clearly, significant geopolitical forces are at work, generating heightened concerns for national security and, by extension, for the security of information pertaining to scientific research. The fact that many areas of cutting-edge science also have direct implications for economic competitiveness and military capabilities further reinforces these security concerns and raises barriers to cooperation.

Forms of cooperation remain, continuing to give science a sense of community and common purpose.

Competition and discord in international scientific activities are certainly not new. Yet forms of cooperation remain, continuing to give science a sense of community and common purpose. That cooperative behavior is often quite subtle and indirect, as a result of multiple modalities of contact and communication. Direct international cooperation among scientists, relations among national and international scientific organizations, the international roles of universities, and the various ways that numerous corporations engage scientists and research centers around the world illustrate the plethora of modes and platforms.

From the point of view of political authorities, devising policies for this mix of modalities is no small challenge. Concerns about maintaining national security often lead to government intrusions into the professional interactions of the scientific community. There are no clearer examples of this than the security policy initiatives being implemented in the United States and China, the results of which appear in the bibliometric data presented by the authors. At the same time, we might ask whether scientific communication continues in a variety of other forms, raising hopes that political realities will change. In addition, what should we make of the development of new sites for international cooperation such as the King Abdullah University of Science and Technology in Saudi Arabia and Singapore’s emergence as an important international center of research? Further examination of such questions is warranted as we try to understand the trends suggested by Martins and Schwaag Serger.

It is tempting to discuss this moment in terms of the familiar “convergence-divergence” distinction, but such a binary formulation does not do justice to enduring “community” interests among scientists globally.

In addition, there is more to be learned about the underlying norms and motivations that constitute the “cultures” of science, in China and elsewhere. Research integrity, evaluation practices, research ethics, and science-state relations, among other issues, all involve the norms of science and pertain to its governance. In today’s world, that governance clearly involves a fusion of the policies of governments with the cultures of science. As with geopolitical tensions, matters of governance also hold the potential for producing the bifurcated world of international scientific cooperation the authors suggest. At the same time, we are not without evidence that norms diffuse, supporting cooperative behavior.

We are thus at an interesting moment in our efforts to understand international research cooperation. While signs of “disentanglement” are before us, we are also faced with complex patterns of personal and institutional interactions. It is tempting to discuss this moment in terms of the familiar “convergence-divergence” distinction, but such a binary formulation does not do justice to enduring “community” interests among scientists globally, even as government policies and intellectual traditions may make some forms of cooperation difficult.

Professor Emeritus, Political Science

University of Oregon

In Australia, the quality and impact of research are built upon uncommonly high levels of international collaboration. Compared with the global average of almost 25% cited by Igor Martins and Sylvia Schwaag Serger, over 60% of Australian research now involves international collaboration. So the questions the authors raise are essential for the future of Australian universities, research, and innovation.

While there are some early signs of “disentanglement” in Australian research—such as the recent mapping of a decline in collaboration with Chinese partners in projects funded by the Australian Research Council—the overall picture is still one of increasing international engagement. In 2022, Australian researchers coauthored more papers with Chinese colleagues than with American colleagues (but only just). This is the first time in Australian history that our major partner for collaborative research has been a country other than a Western military ally. But the fastest growth in Australia’s international research collaboration over the past decade was actually with India, not China.

At the same time, the connection between research and national and economic security is being drawn more clearly. At a major symposium at the Australian Academy of Science in Canberra in November 2023, Australia’s chief defense scientist talked about a “paradigm shift,” where the definition of excellent science was changing from “working with the best in the world” to “working with the best in the world who share our values.”

This is the first time in Australian history that our major partner for collaborative research has been a country other than a Western military ally.

Navigating these shifts in global knowledge production, collaboration, and innovation is going to require new strategies and an improved evidence base to inform the decisions of individual researchers, institutions, and governments in real time. Martins and Schwaag Serger are asking critical questions and bringing better data to the table to help us answer them.

As a country with a relatively small population (producing 4% of the world’s published research), Australia has succeeded over recent decades by being an open and multicultural trading nation, with high levels of international engagement, particularly in our Indo-Pacific region.

Increasing geostrategic competition is creating new risks for international research collaboration, and we need to manage these. In Australia in the past few years, universities and government agencies have established a joint task force to address foreign interference, and there is also increased screening and government review of academic collaborations. But to balance the increased focus on the downsides of international research, we also need better evidence and analysis of the upsides—the benefits that accrue to Australia from being connected to the global cutting edge. While managing risk, we should also be alert to the risk of missing out.

Executive Director, Innovative Research Universities

Canberra, Australia

Igor Martins and Sylvia Schwaag Serger’s commentary is closely in tune with recent reports published by the Policy Institute at King’s College London. Most recently, in Stumbling Bear; Soaring Dragon and The China Question Revisited, we drew attention to the extraordinary rising research profile of China, which has disrupted the G7’s dominance of the global science network. This is a reality that scientists in other countries cannot ignore, not least because it is only by working with colleagues at the laboratory bench that we develop a proper understanding of the aims, methods, and outcomes of their work. If China is now producing as many highly cited research papers as the United States and the European Union, then knowing it only by reading is blind folly.

A strong, interconnected global network underpins the vast majority of highly cited papers that signal change and innovation. How could climate science, epidemiology, and health management work without such links?

These considerations need to be set in the context of international collaboration, which has risen over the past four decades as travel became cheaper and communications improved. In 1980, less than 10% of articles and reviews published in the United Kingdom had an international coauthor; that share now approaches 70% and is greatest among the leading research-intensive universities. A similar pattern occurs across the European Union. The United States is somewhat less international, having the challenge of a continent to span domestically. However, a strong, interconnected global network underpins the vast majority of highly cited papers that signal change and innovation. How could climate science, epidemiology, and health management work without such links?

The spread across disciplines is lumpy. Much of the trans-Atlantic research trade is in biomedicine and molecular biology. The bulk of engagement with China has been in technology and the physical sciences. That is unsurprising, since this is where China had historical strength and where Western researchers were more open to new collaborations. Collaboration in the social sciences and humanities is sparse because many priority topics are regional or local. But collaboration is growing in almost every discipline and is shifting from bilateral to multilateral. Constraining this to certain subjects and politically correct partners would be a disaster for global knowledge horizons.

Visiting Professor at the Policy Institute, King’s College London

Chief Scientist at the Institute for Scientific Information, Clarivate

Founder and Director of Education Insight

Visiting Professor at the Policy Institute, King’s College London

Former UK Minister of State for Universities, Science, Research and Innovation

In Plain Sight

Community colleges are the workhorses of American higher education. They are what this book refers to as “the ‘hidden engines’ that can drive shared prosperity.” Community colleges are tied tightly to their localities, engaging students in associate degrees, pre-baccalaureate courses, skills-based certificates, and noncredit classes. Those enrolled at community colleges are more likely than students at four-year schools to be first-generation college students or come from disadvantaged backgrounds. By design, community colleges are embedded in the economies of their communities and responsive to the needs of local firms.

America’s Hidden Economic Engines profiles five community colleges and the impact these institutions have on their local economies. Edited by Robert B. Schwartz and Rachel Lipson, this slim volume is the eighth installment in Harvard Education Press’s Work and Learning series. The cases are authored by graduate students and were written over three months, with each community college providing access to someone on the senior leadership team as a point of contact. The authors describe the cases they examine as exemplars, but I suspect they are a fairly representative sample of community colleges throughout the United States and the creative ways they have adapted to the needs of their communities amid decades of economic changes.

The first case study examines Lorain County Community College in Ohio, which has taken a thoughtful approach to matching students with career opportunities, highlighting the importance of jobs in manufacturing. However, the chapter credits this single institution as a driver of growth for northeastern Ohio, ignoring the other community colleges and institutions of higher education that pepper that landscape. This is not to take anything away from Lorain County Community College, but simply to point out that economic development is never attributable to a single organization or institution—it always involves an ecosystem. Just as we now recognize that most successful innovations come from teams rather than lone geniuses, I don’t think it’s productive to look at single institutions without appreciating their context and role in the local economy.

The cases each differ in focus, geography, and client population. The other colleges the book examines include Mississippi Gulf Coast Community College, Northern Virginia Community College, Pima Community College in Arizona, and San Jacinto Community College in Texas. Disappointingly, the cases do not follow a similar template, making it difficult to compare them with one another. The editors make other questionable choices. For example, tables report enrollment by race using absolute numbers rather than percentages, and the definition of each college’s service area shifts inconsistently between counties and full metropolitan areas.

Just as we now recognize that most successful innovations come from teams rather than lone geniuses, I don’t think it’s productive to look at single institutions without appreciating their context and role in the local economy.

Although well intentioned, the case study write-ups can reflect the elite orientation of the authors, who sometimes seem surprised by their discovery of these important institutions. Here the use of the term “hidden” in the book’s title is revealing. These important institutions are certainly not hidden from the 10 million students and the numerous educators who teach there. Local policymakers and employers who depend on community colleges are also aware of their potential. This book can be useful for raising awareness among policymakers who do not support or understand community colleges, but its identification of the cases as exceptional is problematic.

The comprehensive reference guide to community colleges in the United States is The American Community College by Carrie B. Kisker, Arthur M. Cohen, and Florence B. Brawer. The book’s seventh edition is the latest installment in a four-decade effort to provide information and statistics about community colleges. It provides a comprehensive understanding of the role community colleges play in the American educational system and economy, including their collaborations with community, economic, and workforce development organizations; recent efforts to improve student persistence and attainment through guided pathways and equity-minded student supports; and the growing emphasis on preparing a skilled workforce via noncredit training, credit for prior learning, microcredentials, and community college baccalaureate programs. This reference work demonstrates how essential community colleges are to American education in a way that’s difficult to grasp from a select few case studies.

Community colleges are important and often underappreciated institutions in the American economy and deserve more study and consideration. America’s Hidden Economic Engines will help draw attention to the role of community colleges and the way they operate. However, much more work needs to be done to better understand how to harness these remarkable economic engines, best engage with local companies, and provide reliable funding to increase opportunity for a full range of Americans.

The Technologist

Soon after completing a master’s program in advanced manufacturing and design, Neha was hired as a process engineer at a semiconductor wafer-cutting plant. Not long after starting, she realized multiple processes at the plant were inefficient and out of control, but she could not solve all the issues simultaneously without the help of the other workers. She began meeting with operators and technicians from across the plant during coffee breaks to teach the process-level principles she had learned in her master’s program. By working with the operators to implement changes, she was able to get multiple processes at the plant back on track.  

A few years later, Neha was hired as the production supervisor of two interconnected nanoparticle factories in New England. Again, she quickly recognized a significant quality problem spanning both plants, which could only be solved by running complex production-scale experiments. And again, she realized resolution would require buy-in from the operators and technicians. Over the course of several months, she built trust, rapport, and understanding around systems-level operations, variation, and flow among the workers in both plants. Their mutual learning paved the way for plant-wide experiments that eventually resolved the quality issue and yielded significant improvements for her business. 

At first glance, this is a story of Neha’s ingenuity and persistence. But at a deeper level it exposes a problematic gap in the US manufacturing workforce. Engineers, who are expected to know systems and processes, are generally separated from operators, who are often only trained on specific machines. New manufacturing technologies, whether in robotics or digital production, are beginning to transform factory floors, requiring more workers to bridge this gap. Advanced manufacturing requires workers with a technician’s practical know-how and an engineer’s comprehension of processes and systems. Companies that want to move into advanced manufacturing often struggle to find people on the ground who know how to integrate technologies to optimize the whole system, manage technological advances, and drive innovation. Workers who have these mixed skills are hard to find.  

We call this new type of worker the “technologist.” As advanced technological manufacturing progresses, technologists will be essential in the adoption of next-generation factory systems. We believe that training programs for technologists can empower both incumbent and aspiring workers to be knowledgeable, productive, and adaptable contributors to a more robust US manufacturing economy.  

Training programs for technologists can empower both incumbent and aspiring workers to be knowledgeable, productive, and adaptable contributors to a more robust US manufacturing economy.

The need for technologists has been created in part by the deep disconnect between manufacturing and workforce development in the United States. For decades, US companies have sought gains in productivity by investing in capital equipment over labor. But when capital goods are globalized, so are the productivity gains, resulting in a much shorter-term competitive advantage. By contrast, Germany prioritizes investment in the development of human capital for manufacturing through a robust system for workforce education, training, and apprenticeship. Notably, German manufacturers pay employees about 60% more than US companies do. These investments have given Germany a distinct competitive advantage. The United Nations Industrial Development Organization has ranked the country as having the world’s most competitive manufacturing sector each year since 2001.

Although worker training is a reliable way to effect productivity change, the United States has historically underinvested in workforce education. Schools nationwide lack robust vocational tracks—education reforms starting in the 1970s required more college-prep courses for high school graduation and hollowed out funding for technical education—so few high school students are ready to transition to jobs in advanced manufacturing. Community colleges are too often underfunded, have low completion rates, and lack programs in advanced manufacturing technologies. As a result, innovative companies struggle to build and maintain a qualified workforce. 

Without a systemic approach to workforce education, companies’ expectations for worker qualifications can be inconsistent. While developing our ideas about technologists, we talked to scores of industry executives who said they wish their technicians had more analytical skills and their engineers had more shop-floor skills. Studies have found that companies increasingly want their technicians to have the problem-solving skills of engineers—who might be less likely to visit the shop floor. 

Traditional manufacturing training, which continues to be equipment-focused, does not nurture such skill sets. Companies and manufacturing education programs tend to train operators and technicians for specific machines, such as those for computer numerical control machining, welding, or injection molding. In prior eras of manufacturing, the hands-on skills this type of machine-specific training imparted could last a career. Today, however, workers need to understand the “why” of operations. As the pace of technology development and implementation accelerates, conventionally trained workers struggle to keep pace. Technicians who lack an understanding of systems-level processes are less able to engage in problem-solving and innovative thinking or to pick up new skills as processes evolve.

For many such technicians, there is no system in place for the kind of upskilling needed to assume more adaptive roles. Though shop floors are rich with opportunities to improve processes, quality, and design, technicians are rarely encouraged or taught to build the skills required to inform such changes. This is why Neha’s experience stands out. We have heard from workforce boards that companies try to promote their strongest technicians to lead teams in order to adopt or optimize new processes; however, those technicians often fumble without sufficient training in how to recognize what process-level changes might increase productivity. 

In view of the failures in the current workforce training system and the large gap between technicians and engineers, we believe that new types of education to train and support technologists should be encouraged. Creating the position of technologist within the workforce can not only improve advanced manufacturing and boost industry productivity—it could also create a category of workers with more satisfying and resilient careers. 

Training technologists 

The integration of systems-based technologies into advanced manufacturing processes requires reconsidering the division of labor in manufacturing, and correspondingly reevaluating the skills and concepts necessary for workers at different levels. 

The Massachusetts Institute of Technology first tackled this challenge 20 years ago, developing an educational framework for a one-year master of engineering degree in advanced manufacturing and design. The program is intended to train manufacturing engineers and eventual plant managers or entrepreneurs for settings such as automotive factories and state-of-the-art semiconductor foundries. The framework—built on years of research and manufacturers’ operational insights—is designed to teach critical-thinking skills using the four “whys” of manufacturing: the concepts of flow and variation in (1) manufacturing processes, (2) manufacturing systems, (3) supply chains, and (4) the management of people. Students begin with an introduction to foundational principles and are given opportunities to practice analyzing the fundamental building blocks that compose manufacturing. They then have hands-on experiences applying these principles to manage manufacturing operations. By the end of the program, students are expected to know how to evaluate emerging technologies and understand how to make their company’s operations more agile and resilient. Recently, MIT began to broadly disseminate the core of this curriculum online through the Principles of Manufacturing MicroMasters program. The MicroMasters is aimed at engineers, product designers, and technology developers with an interest in advanced manufacturing. To date, more than 200,000 students have enrolled.

Building on the lessons learned by starting the master of engineering program and the MicroMasters, the MIT team—with support from the US Department of Defense’s Industrial Base Analysis and Sustainment (IBAS) program—is now turning its attention to adapting the curriculum to train technologists. Improved training is part of IBAS’s mission to forge more robust and efficient manufacturing capacity for the Department of Defense’s industrial base, which employs more than a million workers and includes over 56,000 companies.  

Candidates for the technologist program will generally have solid factory floor experience, a community college education, or both, but there are no required credentials for enrollment. The curriculum builds on “earn and learn” approaches. Students learn hands-on knowledge of manufacturing skills using the online simulations and lab activities developed for community colleges. And as part of the program, all students participate in paid internships or apprenticeship programs to gain further hands-on experience. 

The new program is structured in a hub-and-spoke model, where the core curriculum (the hub) covers the four “whys,” and elective classes (the spokes) cover topics in advanced manufacturing such as digital production, robotics, additive manufacturing, and data analytics (see Figure 1).  

Figure 1: The Hub-and-Spoke Model for Training Technologists. The hub-and-spoke model relies on a core curriculum broadly applicable across all manufacturing industries. Once students have completed core courses that provide a systems framework, they branch out and specialize, with education in specific manufacturing technologies. Recent research shows that the priority skills New England companies are seeking (shown in the orange boxes) map directly onto the model. Source: John Liu and Randolph Kirchain.

The entire sequence of courses is designed to take nine months. Students who complete this intensive program earn a certificate at participating community colleges. They also earn credentials that US manufacturers across industries will recognize. 

The technologist program aims to draw on the best of online and in-person instruction, based on the science of how people learn. First, studies show that people learn best in bite-sized chunks, so the online content includes video segments lasting five to seven minutes, broken up by assessments and feedback loops to reinforce learning and correct misunderstandings. Second, people forget what they don’t use over time, so the curriculum prompts students to periodically retrieve key pieces of knowledge. Third, people need authentic environments to try things and learn from mistakes, so the hands-on labs enable students to practice and acquire new technical competencies.

Another feature of the technologist program is its accessibility as an educational pathway. Many nontraditional students face significant barriers to education, including distance, financing, and family or work arrangements. Blending digital content with classroom sessions provides flexibility. Hands-on experience will remain vital for contextualizing theory and learning new manufacturing technologies, but conveying basic information online can cut costs and accommodate student schedules. If in-person sessions can be scheduled on weeknights or weekends, students can keep their jobs and take classes when their schedules allow. Regional partners and community colleges can support students on the in-person side by offering free or low-cost services such as childcare and transportation and by offering mentorship support to help keep students on track to complete the program. 

Creating the position of technologist within the workforce can not only improve advanced manufacturing and boost industry productivity—it could also create a category of workers with more satisfying and resilient careers.

It is also important to consider affordability at the institutional level. Curriculum specialists can limit costs by developing courses once so that many students can access the content. Institutions can reduce the cost of hiring expert instructors by relying instead on facilitators for in-person labs and discussion sessions as partial substitutes. The lab component may require schools to purchase materials and equipment to augment makerspaces, machine shops, and fabrication labs, which can cost millions of dollars. Where costs are prohibitive, virtual reality simulations can provide training for in-demand skills. For industry, since the online content is modular, courses can be used and reused without needing to constantly develop new content.

Most importantly, the technologist program has the potential to create a career advancement pathway for manufacturing workers. Prepared with a foundational understanding of how manufacturers approach production in the digital age, a worker with the technologist certification might be able to progress from technician to technologist, with a new range of responsibilities, and then with further education and experience, to engineer. Graduates of the program would be able to take on engineering tasks, qualifying them for substantial raises. Together, the curriculum and job redefinition could help break manufacturing out of its recent history of wage stagnation by creating significant new economic value and productivity advances for employers and employees. 

Creating opportunity through partnership  

The pilot technologist program has brought together community colleges, area employers, workforce boards, and regional universities. Similar partnerships and collaborations between sectors will be necessary to sustain the development of the technologist occupation—as well as its benefits for industry—across the country. 

Community colleges are central partners in the effort to build the advanced manufacturing workforce. These schools can form vital connections between state and local governments, universities, industry, and a region’s workforce. To best leverage the nation’s community college resources to advance the technologist occupation, reforms that prioritize the goals of applicability, accessibility, and certification should be adopted across more schools. The first step is to fill the need for more community college courses and degrees that qualify new students to work in cutting-edge manufacturing fields. Second, more programs must be created for incumbent workers to earn certificates that qualify them for higher-skilled jobs. To accommodate the schedules and personal situations of these workers, programs should be short and skills-oriented, with certifications for each stage of skill completion. And third, courses and programs should adopt measures and tools that keep students on track with coursework. Program completion should be the goal for workers and institutions alike.

Community colleges are central partners in the effort to build the advanced manufacturing workforce. These schools can form vital connections between state and local governments, universities, industry, and a region’s workforce.

Manufacturing is inherently tied to regional economies and actors. Community college curricula, therefore, need to tie into local workforce needs. In our program, community colleges in Massachusetts, Rhode Island, and Connecticut are advising on curricula so that students will be able to land jobs as technologists in regional manufacturing industries, such as submarine production, electronics, and medical devices. Area workforce boards and government partners help coordinate resources, advertise programs, and recruit students. Importantly, participating companies have pledged to hire entering technologists at a wage more than 30% higher than that of many entry-level operators in the region. This New England example can be replicated in other regions, serving as a model for industry-community college partnerships throughout the United States.

We’ve found that industry partnerships work best with groups of employers. Community colleges partnering with a single company or industry often struggle because a company’s level of engagement may rise or fall depending on the business climate and its need for workers. But to survive, community college programs need a steady flow of students. Working with groups of employers tends to mitigate the ebb and flow of individual company needs. If groups of employers and colleges work together, it’s possible to train workers who are able to assume new roles in different kinds of jobs. State and local governments should encourage these kinds of collaborations.

Community colleges working with industry have already been important partners in establishing educational programs that move students from technician to technologist. Several of these colleges are well positioned to educate new technologists in the United States because of their future-oriented approaches to educating technicians. Students completing programs like those highlighted below would be exceptional candidates to pursue further training as technologists.

Lorain County Community College (LCCC) in Elyria, Ohio, has developed a group of certificate programs in advanced manufacturing skills, including industrial robotics, data analytics, digital fabrication, cyber and information security, microelectronics manufacturing, manufacturing engineering, and automation systems. The certificates can stack into associate degrees and applied bachelor’s degrees. LCCC offers learn-and-earn options where students work at area companies while taking courses. Community college mentors coordinate company and school programs, providing support for students and helping place them in appropriate new jobs. Because of the effectiveness of these programs, LCCC was named the top community college in the country for excellence in student success by the American Association of Community Colleges.

Monroe Community College in Rochester, New York, offers accelerated programs in advanced manufacturing. Students move through the intensive program together as a cohort, working from 6:30 a.m. to 2:30 p.m. at partner employers and taking classes in the afternoons and evenings. Program administrators find this cohort approach fosters a supportive environment among the students and can help lower attrition rates. The program in advanced machining lasts 22 weeks and takes advantage of a new advanced equipment center, where companies as well as students can try out new technologies. The school has worked with area industry partners to develop advanced manufacturing curricula and uses a “guided pathways” approach where students plan and complete courses aligned to their own career and skill roadmaps. 

With 44 full and satellite campuses throughout the state, Ivy Tech Community College is the sole community college in Indiana. It has developed a unified manufacturing curriculum across all its campuses, which helps employers understand the school’s degrees and certificates—and thus graduates’ qualifications. Individual campuses have flexibility to tailor part of their program to particular employers’ needs. Because Indiana is home to many production industries, there is an especially robust need for advanced manufacturing skills. Ivy Tech consequently built new elements into both its short and long manufacturing technician programs to train students to work in areas such as logic control, computer-aided design, automation, mechatronics, and robotics. It recently added systems integration and data analytics programs.

Examples such as these show that the potential to create skilled technologists exists in many parts of the country—but there is a problem of scale. The US manufacturing economy produces $1.6 trillion in exported goods annually, but federal investment in manufacturing programs across agencies is in the hundreds of millions. Federal support at the Department of Education is focused on higher education rather than workforce education. The Department of Labor’s workforce programs are underfunded, don’t reach incumbent workers, and don’t promote advanced manufacturing skills. The National Science Foundation’s Advanced Technological Education program, the National Institute of Standards and Technology’s Manufacturing Extension Partnership, and the 16 Manufacturing USA institutes support integral efforts to develop curricula and partnerships for advanced manufacturing education and training, but they receive limited funding. Expanding and reforming all these programs to strengthen the connective tissue across the workforce education system can create more opportunities for meaningful employment earlier in students’ careers and form the foundations for developing the new technologist workforce.

Technologists and revitalized manufacturing  

American manufacturing jobs once promised economic security and mobility to the middle class. But the erosion of union power and intensified global competition have stagnated manufacturing wages. Decades of shifting production overseas have shrunk the US share of global manufacturing to only 17%, compared with China’s 30%—over the past 25 years the countries have traded places in the rankings. The United States lost a third of its manufacturing workforce between 2000 and 2010 as its output share fell.

If groups of employers and colleges work together, it’s possible to train workers who are able to assume new roles in different kinds of jobs.  

With a ready workforce, advanced manufacturing can help the United States regain this ground. But there is much to do to get this workforce in place. At the firm level, manufacturers are increasingly desperate for an educated and motivated workforce. Meanwhile, workers without university degrees—the largest base of the manufacturing workforce—continue to face educational barriers and career dead ends. Until this is fixed, US manufacturing will be stuck in a low-tech, low-skill rut.

Creating the new worker category of technologist can bridge the gap between labor demand and stymied human capital. A systematic effort to train technologists can create an empowered and resilient workforce, new pathways for more equitable careers in advanced manufacturing, and re-energized factory floors across the United States. 

But a revitalized manufacturing sector cannot be realized without systemic investment in manufacturing labor. To incentivize and enable workers to pursue educational advancement in manufacturing, companies need to offer employees high-wage jobs. A starting technologist wage will be lower than what workers with university degrees in technical fields can expect—but it is still a major bump from typical entry-level technician earnings. All of this is a reminder that healthy economies are bolstered by both innovation and production, and investing in American technologists is an important first step toward restoring American manufacturing competitiveness.

In the Heart of the Yakni Chitto

Monique Verdin began documenting the lives of her relatives in the Mississippi delta in 1998, when she was 19. That year, her grandmother Armantine Marie Billiot Verdin and other Houma elders traveled by boat to the point of land in the heart of the Yakni Chitto (Big Country) where they were born. Verdin raised her camera and began snapping photos, unaware that she was beginning her life’s work of understanding the profound ways that climate, the fossil fuel industry, and the shifting waters of the Gulf of Mexico would change the place that had been a refuge and a retreat for her Houma ancestors.

MONIQUE VERDIN Headwaters : Tamaracks + Time : Lake Itasca, 2019, digital assemblage. Photograph taken in 2019; United States War Department map of the route passed over by an expedition into the Indian country in 1832 to the source of the Mississippi River.

In one photo, her grandmother and her best friend are telling stories about the arrival of the oil and gas men during their childhood, in front of what Verdin calls a “ghost forest.” “There’s one dead tree and one living tree and at 19, you know, they were trying to tell me something.” She’s spent the years since trying to understand that story of home and dislocation, working in photography, film, performance, collaborative community projects, and installations. “Art has been a real gift in that it has been my teacher, requiring me to do the research and to literally frame what I’ve seen.”

Top to bottom: MONIQUE VERDIN Armantine and Jeanne Verdin : Pointe-aux-Chênes : Lost Treasure Map, 2022, digital assemblage, 36 x 24 inches. Photograph taken in 2000; United States Geological Survey map, Lake Bully, 1994. Clarice Friloux : Grand Bois : Lost Treasure Map, 2022, digital assemblage. Photograph taken in 2008; United States Geological Survey map, Bourg, 1998. Janie Luster : Bayou Dularge : Lost Treasure Map, 2022, digital assemblage. United States Geological Survey map, Lake Theriot, 1994.

Some of that framing has taken Verdin back in time to understand how her grandparents’ generation was pushed out of their homes by the oil industry. That caused her to research the histories of Native Americans, many of whom migrated to the bayous after being forced off land elsewhere by European colonists. This layered legacy of previous dislocations is represented in the collages of Lost Treasure Maps, which show members of her community juxtaposed against historical maps. “This struggle with land losses is not new for us,” she says, explaining that understanding it brought her a deeper awareness of the future: “We’re not going to be able to outrun climate change. Anywhere we go there will be consequences. And we have the right to remain.”

When she first started making art about Yakni Chitto, Verdin thought that the story would be about rising waters and land loss. But the area was then hit by one disaster after another: Hurricane Katrina, the BP drilling disaster, flooding in chemical waste pits, more hurricanes. “I don’t think we should even call them disasters anymore. This is where we’re at,” she observes, speculating that her future—and many peoples’—will be a process of retreat and return. Collaborating with scientist Jody Deming on the Ocean Memory Project gave her a new appreciation of the relationship between climate, ocean, rivers, and atmosphere, connecting her work in the delta to a global conversation.

Recently the question of what survival means has become a focus of Verdin’s art as she works with her communities: Houma people, family, scientists, and fellow artists. What started as a look backward has become something else: “I feel like I’m downloading sci-fi information to my community these days.”

MONIQUE VERDIN Ghost Forest : Pointe-aux-Chênes : Lost Treasure Map, 2022, digital assemblage. Photograph taken in 2011; United States Geological Survey map, Terrebonne Bay, 1983.

MONIQUE VERDIN Abandoned Camp on Vanishing Land, Pointe-aux-Chênes, Louisiana, 2000.
MONIQUE VERDIN Ghost Forest : Pointe-aux-Chênes : Lost Treasure Map, 2017, digital assemblage. Photograph taken in 2011; United States Geological Survey map, Lake Bully, 1932 + 1994.

Zach Pirtle Explores Ethics for Mars Landings

NASA’s Artemis program aims to establish a long-term human presence on the moon—and then put astronauts on Mars. So in addition to designing rockets and spacesuits, NASA is also exploring the ethical and societal implications of living in space. In the third episode of our Science Policy IRL series, Zach Pirtle, who earned undergraduate degrees in engineering and philosophy at Arizona State University, explains how he came to work in the agency’s Office of Technology Policy and Strategy, where he recently organized a seminar on space ethics. He also serves as a program executive within the Science Mission Directorate, working on Commercial Lunar Payload Services. Zach joins Issues editor-in-chief Lisa Margonelli to talk about how he almost accidentally found his way to a perfect career, and how agencies engage hands-on in science policy as they figure out how to implement legislation.

Is there something about science policy you’d like us to explore? Let us know by emailing us at podcast@issues.org, or by tagging us on social media with the hashtag #SciencePolicyIRL.



Transcript

(This transcript is AI-generated and may contain errors.)

Lisa Margonelli: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine and Arizona State University. My name is Lisa Margonelli and I’m the editor-in-chief at Issues. This is the third episode in our Science Policy IRL series, where we explore what science policy is and how people build careers in it.

We often think of science policy as happening at high levels. Congress decides how much money to appropriate for scientific research, or the president sets a goal for getting to the moon or curing cancer. But a lot of hands-on science policy is made within federal agencies as they define and pursue their mission. In this episode, we talk with Zach Pirtle about doing science policy at NASA. Zach works in NASA’s Office of Technology Policy and Strategy, and he’s also a program executive within the Science Mission Directorate working on Commercial Lunar Payload Services. Welcome, Zach.

Zach Pirtle: Thank you, Lisa. Very happy to be here.

Margonelli: Yeah, I’m really looking forward to this conversation. So my first question is the same first question as always: How do you define science policy?

Pirtle: I’ve thought a lot about how to answer this. By training, I’m an engineer, although I also have a degree in philosophy, and I work at NASA headquarters. I think science policy, or science and engineering policy, which is sort of my passion, is this way in which all the programmatic things surrounding the technical work that scientists and engineers do are shaped in order to provide some deeper benefit to society, to the public. And I think there’s different levels at which policy occurs. A lot of people focus on national policy, things that come out of Congress, out of the White House. But having spent my career at NASA headquarters for 13 and a half years now, there’s so many things that happen to implement all of that, that the implementation itself almost becomes policy. The nuances of how do you set up and manage a program, how do you decide what the long-term planning should be and what the rules should be that inform that.

Right now NASA’s been doing a lot of work to plan how we’re going to go to explore not just to the moon, but also to go on towards Mars and beyond. And there’s been a lot of effort to figure out what the architecture for that should look like. And so from a systems engineering sense, that’s what are all the pieces you need to explore into deep space and what functions do they need to accomplish and how do you manage the art of balancing everything together to have a program that can explore towards the moon and beyond in a way that’s safe, that is reasonably timely and that can accomplish a myriad of scientific and other goals. And so there’s this constant balance that while Congress has passed many different laws about how we need to explore and go beyond, there’s a lot of things that are left to a federal agency to decide about how to go about and to implement that.

And so I think for me, this effort to benefit society, policy work at NASA, is oftentimes trying to think through how do we strategically identify what our long-term goals are? How do we get from where we are today towards those goals and how do we also manage this huge institution that we have? I think looking at a workforce, looking at people, looking at all that I think matters deeply. And there’s also an implicit thing that I really care deeply about as an engineer, that as we’re building systems today, we’re often locking in consequences that will shape what’s done, what we actually do on the moon. So it’s really exciting that we’re getting ready to have our first commercial moon landings that are set to occur this January. And the types of science that we’re doing is really influenced by engineering decisions that were made a long time ago. And I think there’s more that engineers could do to help reflect on that because in some ways engineers are implicitly shaping policy by some of the technical decisions that they make.

Margonelli: You’ve opened up so many different things in this, one of them being that science policy for you is not just about implementing or getting to, say, the moon. It is about building and setting up systems that enable things for the future, which is a really interesting perspective. So I want to ask you a little bit about how do you do science policy in your day-to-day life? What’s a day in the life of Zach Pirtle?

Pirtle: So for my work for the Office of Technology Policy and Strategy, the policy shop is really focused on providing good evidence-backed advice to the NASA administrator and trying to provide strategic assessments and studies to help the administrator achieve the goals for the country that we’re looking for from the space program. And so sometimes there are special requests that pop up about looking at what are the major policy questions leading up to an upcoming Artemis landing. Artemis for us is our series of human missions that will land on the moon and that will help to pave the way towards increasingly more complex missions towards Mars eventually. And so there’s different things that pop up, specific studies that I’m called to help serve on. For example, I organized a workshop in April about what are the ethical and societal implications of the overall moon to Mars effort and how should we think about them?

So from a day-to-day perspective, it’s almost like organizing a wedding, trying to get a lot of great experts, different perspectives. We really sought to get in people from social science and humanities backgrounds that could help to understand the broader societal impact of space flight in a way that trained engineers often aren’t encouraged or asked to look into. There were different NASA engineers and scientists in the room along with these outside scholars. It was a bit of trying to get people to talk the same language, since people are coming from very different backgrounds, but it ended up being very exciting and affirming in that we had a wonderful conversation and that a lot of people are trying to think through how could we do better and how could we try to think about these longer term implications. Now, there’s a lot to unpack in that. If I were to more literally say what’s done in a day, there could be three or four engineering status tag-ups that I need to dial into in a day.

Margonelli: What is an engineering status tag-up?

Pirtle: So we’ve got different deliveries that we’re sending science and technology payloads to the moon on, and I have to help oversee two of those deliveries, including the third Intuitive Machines delivery, which is a company that’s part of our Commercial Lunar Payload Services work that’s going to deliver a rover to Reiner Gamma, which has these fascinating magnetic swirls on the surface of the moon. And we don’t understand exactly what past magnetic activity caused and created those swirls. So we’re hoping to do really good science on that, but we’ve been tracking and working to look at: are the instruments on track? Are we going to be okay with having the lunar lander ready? We’re actively managing budgets and trying to make sure that we’re able to balance and look at all the funding that we have to get our activities to work together towards doing more science on the moon. So there’s a lot going on there. NASA headquarters is this fun barrier between the NASA centers that are leading the work on the technical side, and we’re trying to provide a strategic and unified vision on how to help our stakeholders in Congress and the White House and internationally as we’re trying to move forward. So there’s a lot that goes on there from an engineering and policy perspective.

Margonelli: So you have these phone calls where you check in on a rover that’s going to drive around these magnetic swirls and measure them. Yeah,

Pirtle: Make sure the rover’s on track to succeed too. Yeah, want to make sure they’re going to succeed.

Margonelli: Wow. Well that sounds pretty exciting. I mean, what’s your favorite part of your job?

Pirtle: I have to say the work I’m doing for OTPS, the policy shop, and helping engineers think about this bigger picture. I’m very lucky I’m able to do this and stay connected to engineering work. We’re going back to the moon for the first time in 50 years, but to also try to think bigger and to try to do these studies that are pushing the boundary on how are the engineering decisions we’re making today really going to shape humanity’s future in space and are there ways we should be doing it better? I feel that’s something that NASA can help lead, and I’m so excited that OTPS and NASA have been pursuing this work.

Margonelli: Okay, so you’ve got a degree in engineering and you’ve got a degree in philosophy and you end up at NASA doing sort of a mixture of engineering things that are going to the moon and the philosophy of space travel to Mars at a certain level. And so how did you get this job? What was your path? First of all, you start off as an engineer. Why did you become an engineer?

Pirtle: In high school, I was a speech and debate fan. I just loved trying to have principled arguments, to try to think through the pros and cons of something in a debate. And I also love science fiction deeply. I kind of stumbled into engineering. My dad was an engineer and my brother ended up also becoming an engineer. And I think I was trying to search for something that was more than just the technical work. And I was lucky as an undergrad at Arizona State, I took a philosophy of science class about my second or third year and it helped really crystallize for me like, holy cow, the reason why you do all these endless homework assignments is that you’re actually learning a paradigm about how to do engineering and that the pain is actually for a deeper and higher purpose. And eventually I was able to get a job as an engineering intern, and I got to see the context there. And then I was so lucky that I ended up working with people like Dan Sarewitz at Arizona State’s Consortium for Science Policy and Outcomes. And a lot of what I was saying earlier really follows from Sarewitz’s vision about whether science policy matters and how you can do more good for society. And that for me, that was crystallizing. I ended up getting hired after my master’s degree through something called the Presidential Management Fellowship.

Margonelli: Just to back up for a second. Okay, so you got a master’s degree in philosophy or you got the master’s degree in engineering? 

Pirtle: Got the master’s degree in Civil and Environmental Engineering, but I had a philosopher as my co-chair for my committee. At a certain point in time, I really thought I was going to go become a philosopher of science, but one who focused on engineering in a very rigorous and technical way. And after that degree I was actually hired by NASA and ended up getting into an engineering role, and I did later pursue and finish my PhD in systems engineering, again with a philosopher of science on my committee. So I’ve tried to keep staying in both of those worlds, and philosophy of science is really, it’s something special about getting towards the conceptual foundations and also the values that underlie a lot of what scientists and engineers do. So a lot of my work was on modeling, and how do you think about values and sort of the epistemic limits of modeling?

Margonelli: It’s not like an easy jump. Just to go from this interesting mix of things that you were studying to the government. How did you get involved in government work?

Pirtle: I was very fortunate. I was going to apply for PhD programs right after my master’s degree. At the time, I was actually interning at the National Academies through their Mirzayan program, which is a wonderful policy fellowship; I encourage people to look at it. I was working for the Center for Engineering Ethics and Society, and someone told me to apply for the Presidential Management Fellowship, PMF. An informal way to describe it is that it’s a way for anyone with a recent graduate degree who’s a US citizen to get hired while sort of skipping the USAJOBS hiring process. It makes it much easier to get hired. It’s just as open. And I think there’s a lot of pros to the PMF program versus other more well-known science policy programs like the AAAS Science Policy Fellows program. With PMF, you’re a civil servant from day one, and there’s generally a clear intent that they hire you on long-term. I wouldn’t be here if it hadn’t been for the PMF and getting hired on at NASA in 2010.

I didn’t know anyone as a civil servant growing up. I didn’t know how things worked inside government. I’d worked with some great people like Dan Sarewitz who knew a lot about how government worked, but they were never inside the executive branch, which is where most of the science policy jobs, I think, are. And so for me it was transformative to get into the federal government. I began to realize that there’s so many decisions being made inside of a government agency that outside academics have a hard time knowing about or even knowing the context of how to make their own research relevant. And it became very clear and important to me that there could be a role of helping to improve policymaking and policy reflection from inside government. It’s also good for a government agency too. I think it’s been helpful for NASA to think about this more deeply and to have a little bit more structured time on how we can best accomplish our goals.

Margonelli: So you got the Presidential Management Fellowship and you immediately got snatched up by NASA? Was it clear that you were going to NASA when you applied for the fellowship?

Pirtle: So this is getting a little personal. I was so naive as a 24-year-old looking to get hired. I told a lot of agencies that I wanted to serve for two years as a fellow and then go off into a PhD elsewhere. And after having nine interviews at the job fair, the only agency that actually called me back was NASA, because they were willing to take a chance on me. It turned out I ended up being able to finish my PhD on the side while working here at NASA, and it was wonderful. The George Washington systems engineering PhD program and my advisors were the perfect fit for me. But yeah, I didn’t understand that the PMF really was a way that agencies look to hire for a career purpose. I thought it was just another fancy fellowship that one could explore. And so I’m very glad that NASA took that chance on me. But once you’re hired and once you’re inside the government, you don’t need to look back. You’re in. I do spend a lot of time talking to science policy graduate students or STEM grad students about the importance and the ways in which they could contribute inside government. I do strongly encourage people who are interested in science policy to check out the PMF.

Margonelli: It seems like it was really fortuitous that you got picked up by NASA, but maybe you can explain just the fact that they were willing to take a chance on you suggests that some agencies are different than other agencies. I mean, each agency kind of has its own mission and do they also have their own personality?

Pirtle: Yeah, I think that’s very true. As a PMF, I was lucky to be in a cohort with people at different agencies. I’m also lucky that NASA has been voted the best agency to work at for 11 years running, based on employee satisfaction. I do think the way people make decisions at NASA reflects a long-term strategic perspective; we do have a complex interplay in how we interact with industry, but it is very different from other agencies that are a lot more regulatory in focus. From an engineering perspective, there’s a deep richness of jobs, whereas at some other agencies it’s harder to be able to do engineering work directly. So NASA, as an engineering, mission-focused agency, is special in that sense. I’ve been involved in hiring for other people. Sometimes it’s a complex mix of what an agency is looking for and also just trying to find someone that you can shape.

That’s why for me, even though I probably wasn’t the perfect candidate for NASA back when I was 24 in 2010, I could see there are a lot of people who come out of a master’s degree or a PhD that can still be shaped by an agency. So even if you think you’re not a perfect match, there are ways in which an agency can look at you and find a lot of value. It does depend a lot on the culture of the agency you’re going to, and a lot on the individual supervisor that you have, but once you’re inside the federal government, there are a lot of different worlds you can explore.

Margonelli: So you have a really solid foot in academia and you’ve thought about the way academia looks at science policy and the way that it looks at what goes on inside agencies. And then you’ve also got this other foot inside NASA. And what have you learned about the difference between the way we think policy gets made and the way you see it getting made on the ground or in space?

Pirtle: That’s a great question. When I was hired in 2010, it was right as the space shuttle was ending and the Constellation program was being ended as well, moving towards the Space Launch System, which flew successfully as part of the Artemis I mission last year. I was very lucky to be able to spend several years working to set up the management and systems engineering function that led our NASA human space flight efforts. It was so exciting to be there setting up the new human space flight programs, the Space Launch System and Orion. There are some constraints, historically, that government agencies have to operate under. There can be politics that play a part in shaping and influencing what’s done, and there’s also a lot of deep uncertainty about how much something is going to cost and when something is going to be able to fly.

And just the art of how you manage that and manage a lot of people can be very difficult. There are lots of people who can come up with a great PowerPoint idea, but it is much harder to have something that is realistic and executable, to use an engineering term, something you can actually perform to go do the mission. I think a lot of academics on the outside don’t understand the things that people inside are focused on to make decisions. We did touch on this a little bit in the ethics workshop; some of the things that we talk about in our report are about the challenges of thinking long-term about the ultimate benefits of space flight and how you think about it, and even coming to a vocabulary that engineers can use to think and reflect on that.

So I think that there are deep complexities. What I wish people really knew about that interface is that it requires careful cultivation between academics and government personnel. There needs to be time to develop trust. There needs to be time to talk and communicate and develop that framework about how to do things. But one amazing thing about being a civil servant is that you are doing things for the greater good. It’s clear as an agency what your ultimate goals are. And it’s a lot of fun when I and a lot of my very busy colleagues are able to sit back and reflect for a little bit: hey, if we did this slightly differently, we could have this much better impact for society. And then people are like, wow, that’s a great idea. Let’s try to explore that.

Margonelli: That’s cool, because if you were sitting at a large corporation and you sat back, what is best for society is not necessarily what you would be discussing. So after you got into NASA, were there further ways that you got into policy? How did you end up in policy, or did they just plop you into policy?

Pirtle: It’s a great question. I focused on my work in the missions at NASA, which we call Mission Directorates, which are responsible for executing our mission and running and managing our programs. And I focused on being very good at helping things pass and evolve through the NASA headquarters ecosystem. I kept my interest going academically; I was doing my PhD on the side. There was one effort, four years after I started, where there was a citizen forum that another part of NASA had worked to organize to think through what NASA’s goals should be with the asteroid mission. And that was my first chance to actually dip into policy formally inside NASA. Being a civil servant, you’re inside the building. I just offered and volunteered, with my boss’s approval: hey, I could help out. And I ended up helping to co-lead a lot of the effort to execute the citizen forum with Jason Kessler.

And I know you’ve helped publish a little bit of that journey at the Issues website, but for me, that was the first chance to actually implement a lot of the ideas I’d written about academically. Thinking about when I worked with Dan Sarewitz, I’m really proud of the publications that we did, thinking through how engineers could use some of this public opinion and public values to think through how we might approach something like redirecting an asteroid so that humans can engage with it. How should we think about some of the challenges with going towards Mars? And I’ve kept up work in the policy sphere. I try to always have one toe dipped in there despite my engineering work. I’ve been able to do other things, such as a historical article on where innovation comes from and the DOD’s old Project Hindsight report. There are a few different things that I’ve been able to keep an interest in and to work on that help me stay sharp and stay creative, and they give me an excuse to go and talk to really smart academics on the outside that I hope to learn from, and to help bridge their ideas inside of NASA so that their work can helpfully make people more reflective as well.

Margonelli: So you had Dan Sarewitz as a mentor in academia, and you probably had other mentors as well, but did you find mentors within NASA?

Pirtle: I think of two of my first bosses, Dan Dumbacher and Bill Hill; those were some of the best years of my career, working for them. They had a passion for how we do the mission and also a desire to think about how we can do the mission better. NASA was sort of relearning how to do big rocket development for the first time in generations, and the sky was sort of the limit on how we think about this. They really empowered me to engage on these policy projects. I’m very thankful that my current office in the Science Mission Directorate is supportive of my working on this big-picture work on the ethics of Moon to Mars. I think there are a number of engineers who get excited for a chance to think about the questions that sometimes you’re just too busy to be able to think about, and there needs to be some small part of NASA that’s thinking about those in a more structured way.

Margonelli: Well, this is all really cool. So I want to switch gears a little bit and talk about what are the big questions that motivate you to do this work? You’ve talked a little bit about the ethics of space travel. You’ve talked about engineering ethics. What are the big questions that keep you up at night or get you out of bed in the morning?

Pirtle: Yeah. The report that we published based on our workshop, Artemis, Ethics and Society, is available, and we focus on a few things that I think are vitally important. One of them I already alluded to, which is just the cultural challenge of how you work with scientists and engineers to think about and talk about these topics. How do you have a language for thinking about these things? There are sometimes engineers who are like, my job is purely technical, how am I even supposed to think about the broader societal impact? And there are ways, if you start to break it down and dig into it, but it takes time and effort to attend to that. And we’re lucky that in the workshop report, which reflects the discussion by participants, not necessarily NASA’s views per se, we’re able to talk about different ideas and concepts for thinking about that.

I think some of the deep questions that participants raised at the workshop, and that come to my mind on a regular basis, are: How can we best understand what our ultimate impact on society is? Are there ways in which we can steer what we’re doing today towards more beneficial goals, and also avoid negative unintended consequences? I think we’re going to be learning a lot in the next few years, especially if things stay on track for our January landing with commercial landers. But the idea is that we’re doing so many things right now for the first time, and it might set precedents for decades to come. Are we doing all that we can to try to do the right thing, to nudge it? We do talk in the report about whether there are policy and management mechanisms by which we could think about these issues a little bit more, and whether there are avenues by which we could talk to other countries and think about how humanity collectively should explore these process questions of how to best be reflective about ethical and societal issues.

And so I am passionate about that. There’s a lot of work there, and the NASA web feature that announced the release of our report had some comments about how NASA is going to continue to do work in this area. When we’re able to talk about it, we will; a lot of it is in formulation. But I do think these deep questions are important for engineers, and for anyone who’s interested in space travel, to reflect on a bit more. If someone listening to this is interested in space but doesn’t have a technical background, I think their perspective can still be vitally valuable in reflecting on what the overall objectives should be for Moon to Mars and what some considerations are about how we should implement it. I do think everyone can have an important viewpoint that can help shape this.

It’s the ultimate question. This comes back to something that a philosopher of science would say is a basic insight: for many of these questions about what we should do in space, there’s no technically right answer. A lot of it is based on what our values and our ultimate goals are, and how we should help to steer that in more beneficial ways. I think it’s something that engineers have a responsibility to consider, because engineers are shaping things in ways that will set these precedents for decades. But it’s also this idea of science policy in the broader sense of how we ultimately benefit society. It’s something that everyone should be able to reflect on.

Margonelli: What are the things that you worry about as maybe negative outcomes of space travel or engineering decisions?

Pirtle: Some of the concerns that people raised at the workshop were tied to how we actually make sure that we’re able to share the benefits of what we do in space. NASA does a lot to ensure transparency and sharing of data as we do science, but as people look more towards resource utilization and other activities on the moon, it’s a bit more uncertain how benefits are going to be shared. I do think that there are cultural sensitivities tied to the moon and to what payloads are taken there. Famously, going back to Apollo, Buzz Aldrin took communion after the Apollo 11 landing, and there were lots of different viewpoints about that back on Earth. I do think that topics tied to that could continue to pop up based upon what payloads are privately sent to the moon.

I think we need to make sure we think about opportunity costs and that we’re able to do enough science as we’re going out for human exploration. That’s something that I also care a lot about in my Science Mission Directorate job: trying to make sure that we’re able to translate the science requirements that we get from the National Academies and the decadal survey, and to make those salient to the systems engineering planners who are doing our Moon to Mars architecture. And I do think it’s really tough to think about all these issues. Oftentimes at NASA we’re given challenges where we have a lot of content, a lot of work to do, and there’s not necessarily enough money to do all of it. So what do we do? We manage that carefully, and I think sometimes you have to make a decision that is the right thing for that time, but it’s one that you hope you can improve on later.

Margonelli: It’s an interesting portrait that you’ve given us of this job where it’s really beyond the sky is the limit in terms of what you’re supposed to do. Your space is the limit. Mars is the limit. Somewhere beyond Mars is the limit. You don’t really know where the limit is. And at the same time, there are lots and lots of limits. There are ethical limits, there are practical limits, there are time-based limits. It’s a lot to balance.

Pirtle: I think that’s actually going back to your first question on what science policy is. I think science and engineering policy is a matter of dealing with all these programmatic factors, these things that influence how you manage, how you plan, and how you implement, and you do all this work. So I think it all hangs together. It’s not just thinking about what’s the ultimate dollars for science or exactly what science you’re going to get. It’s more holistic, more things are involved.

Margonelli: You have two little kids. Do you hope that they get to travel to Mars?

Pirtle: That’s a great question. Personally, I don’t want to risk the chance of them not coming back, but if it really meant something to them, I think I’d be excited about it. And, putting my philosopher hat on for a second, I do hope they live in a world where, given just how much science and engineering affect our lives, we’re reflective about this and we’re trying to do the best we can to make the world a better place, where engineers are trying to do the best that they can to do that. I do believe that something exciting happening in space is part of that future world I hope my kids grow up in, but being astronauts themselves might make me a little bit too scared as a dad.

Margonelli: Thank you, Zach. It’s been great to talk to you about this, and I’m really happy to know that someone who’s thinking about all these things is also thinking about how to get us all into space.

Pirtle: Thank you, Lisa.

Margonelli: If you’d like to learn more about Zach’s work, check out the resources in our show notes. You can subscribe to The Ongoing Transformation wherever you get your podcasts. Thanks to our podcast producers, Sydney O’Shaughnessy and Kimberly Quach, and our audio engineer Shannon Lynch. I’m Lisa Margonelli, editor-in-chief at Issues in Science and Technology. Thank you for listening. 

A Scientific “Forced Marriage” Takes on the Mysteries of the Loop Current

As it enters the Gulf of Mexico, the shape-shifting stream of tropical water known as the Loop Current passes between the Yucatan Peninsula to the west and Cuba to the east. When the swift current extends into the northern Gulf, its clockwise spin turns east and south to form a loop before it exits through the straits of Florida and joins the Gulf Stream. Lurking beneath its undistinguished name is a powerful and erratic force that intensifies hurricanes, disrupts oil and gas supplies, and influences fisheries in an area that provides 40% of US seafood harvests.

At times, the extended loop becomes unstable and pinches off an eddy that detaches from the current. The unfettered eddy may take months to lumber west across the Gulf. At 200 to 400 kilometers in diameter, these eddies are so formidable that, like hurricanes, they are named. In June 2010, during the massive oil spill caused by the explosion of BP’s Deepwater Horizon oil rig, a giant named “Eddy Franklin” broke off. Spill responders had worried that the Loop Current would carry the oil out of the Gulf to Cuba, the Bahamas, and the Florida Keys. But Eddy Franklin ultimately kept the oil inside the Gulf—highlighting just how little is understood about the current and its eddies.

“You don’t ass around with the loop,” retired Lieutenant General Russel Honoré told me.

“You don’t ass around with the loop,” retired Lieutenant General Russel Honoré told me. Honoré led the Defense Department’s response to Hurricane Katrina, which rapidly intensified after it gained energy from the dangerously warm 84-degree Loop Current before going on to devastate New Orleans. Anticipating when the loop will expand into the northern Gulf—and more importantly, shed an eddy—is crucial information for oil production, fisheries, flooding, shipping lanes, and search and rescue operations. Honoré said a better understanding of Loop Current norms is necessary for policymakers to prepare for active hurricane seasons, especially as the climate changes. “Loop [Current dynamics] should be part of any risk assessment,” he said. But so far the Loop Current’s behavior has proved devilishly difficult to predict.

A uniquely chaotic system

The Gulf of Mexico is a semi-enclosed mini-ocean; it holds numerous distinct water masses that combine with freshwater flowing in from rivers, all superimposed over submarine mountains and shelves. These dynamic features create a uniquely chaotic system. On average, the Loop Current sheds an eddy once every nine months, but that statistic is deceptive. Some years have none; other years have many. The trickiness of predicting Loop Current behavior was clear even when the first scientific paper on it was published in 1972.

In 2018, a disparate group of international researchers began the task of identifying new observations to improve Loop Current prediction. Now, in the third and final phase of the Understanding Gulf Ocean Systems (UGOS) Initiative, three teams of scientists are installing subsurface monitoring equipment and preparing to launch dozens of floats, underwater drones, and advanced sensing equipment to surveil the loop’s surface currents in real time for the 2025 hurricane season.

The project is unique in that the real-time measures require coordination across US, Mexican, and Cuban waters. It is also an outlier because it is funded by the National Academies of Sciences, Engineering, and Medicine (NASEM), which is not primarily a grantmaking body; its power is in convening top scientists to tackle big problems. However, in 2013, $500 million in criminal settlement money from the 2010 BP oil spill was funneled to NASEM to start the Gulf Research Program.

The goal of the UGOS initiative is to extend accurate Loop Current predictions from days to months, explained Michael Feldman, senior program manager at NASEM’s Gulf Research Program. Timely predictions could boost safety for oil and gas work, marine traffic, and search and rescue operations in the Gulf, while also improving forecasts of hurricanes, fisheries productivity, climate change-induced coastal flooding, and sea level rise. Longer range Loop Current predictions are also crucial for anticipating two other phenomena that affect the Gulf’s economy as well as its ecosystems: the extent of the annual Dead Zone, a roughly 5,000-square mile expanse of oxygen-free water resulting from excessive nutrients, and the movements of Sargassum seaweed, extensive mats that form in the Atlantic and move toward shore.

Timely predictions could boost safety for oil and gas work, marine traffic, and search and rescue operations in the Gulf, while also improving forecasts of hurricanes, fisheries productivity, climate change-induced coastal flooding, and sea level rise.

Predicting such complex processes is never an easy task, but the UGOS project has had to navigate an unusual set of tensions—including an unanticipated collaboration of cross-disciplinary rivals, a remit to get research outputs applied to agencies’ operations, and the risk that the capricious Loop Current will go quiet as soon as all the expertise and equipment have been assembled to measure it.

Already, the clock is ticking toward the study’s end in 2027, noted Francis Wiese, who as chair of the UGOS standing committee is charged with guiding the effort. “If this information is to be meaningful for resilience and the safety of people in the Gulf, it has to move from research to operations,” he told me. As I followed the team this fall, it became clear that science as usual would not automatically yield improved forecasts—and that the uncharted waters of “transition to operations” can run counter to traditional academic research.

Creating a “forced marriage” among researchers

Until recently, the puzzling behavior of the Loop Current was largely a concern of oil and gas companies. During one unusually active period, between July 2014 and July 2015, the Loop Current shed five eddies into the Gulf, which disrupted work on Chevron’s Big Foot oil installation for months, costing the company at least $1 billion. Eddies can also shut down production entirely, damage equipment, and potentially exacerbate oil spills.

At its creation in 2013, NASEM’s Gulf Research Program was tasked with investing in three decades of research to improve oil system safety, human health, and environmental resources in the Gulf of Mexico and the US outer continental shelf. Predicting the Loop Current quickly emerged as a priority, and in 2017 a consensus committee on Advancing Understanding of Gulf of Mexico Loop Current Dynamics was convened. Its 2018 report identified knowledge gaps and called for an international campaign of complementary research, observation, and analysis to build an observing system for the Loop Current, at a cost of about $120 million. Ultimately, the current plan is a smaller-scale version of the original and focuses on collecting the most critical data at roughly one-third that cost.

“With limited resources, we have to be strategic where and when we make observations,” said Ruoying He, a physical oceanographer at North Carolina State University who has spent more than 20 years modeling oceans and is a coauthor of the consensus report. Still, He said the researchers are enthusiastic the project will achieve its goal. He will assimilate the resulting data into a regional ocean model to be used by the National Oceanic and Atmospheric Administration (NOAA). Together with colleagues, He has already spent considerable effort analyzing simulated data to identify what types of observations have the most potential to improve Loop Current models, notably to predict how eddies are shed.

Today, a large swath of ocean observations comes from satellites, which unfortunately can have low resolution or be hampered by cloud cover. Although a network of 1,600 sensors, operated by a collection of academic and government entities called the Gulf of Mexico Coastal Ocean Observing System, has provided real-time measurements of temperature, salinity, wind, and current speed since 2005, those efforts are concentrated in the northern Gulf. To observe and model the Loop Current, researchers must look to the southern Gulf and then strategically choose which elements to monitor there.

To help, oil and gas companies have shared at least $20 million of proprietary data collected by ships crossing Loop Current fronts since the early 2000s. “That’s a pretty special thing,” explained Feldman. “For a very long time, industry wouldn’t share any of this, but there was enough belief in what we’re doing that they are willing to share with UGOS.” 

Data from existing sensors, including those from Mexico as well as industry, led UGOS researchers to conclude that understanding two particular regional features would be essential to improve predictions. The first is understanding how warm water flows through the Yucatan strait into the Gulf. The second is understanding the current’s subsurface structure—both the density and the velocity of the water below the Loop Current. Understanding these two features could help researchers model how eddies on the surface interact with those thousands of feet below to shape the current—while also influencing the way the current sheds and reabsorbs eddies.

In 2022, the Gulf Research Program solicited proposals for a five-year effort to improve Loop Current forecasts to support offshore energy safety, fisheries’ resilience, and hurricane safety. “We hoped to get proposals from integrated teams that combined modeling, observations, and application,” said Wiese. When that didn’t happen, UGOS leaders selected proposals from three separate groups: a real-time observational system team, led by Amy Bower of the Woods Hole Oceanographic Institution; an adaptive surface current sampling team, overseen by Steven DiMarco at Texas A&M University; and a modeling team, helmed by Eric Chassignet at Florida State University.

As I followed the team this fall, it became clear that science as usual would not automatically yield improved forecasts—and that the uncharted waters of “transition to operations” can run counter to traditional academic research.

After the awards were made, however, Feldman informed the teams they would be required to work closely together to accomplish the overarching goal. All of the team leads were caught off guard. “This was a forced marriage among competitors,” Ruoying He told me. “It’s not something everyone was happy about, but looking at the big picture, it was a necessary action to take.”

To achieve the scientific goal, the combined group knew it needed to integrate observationalists and modelers. To make something useful to agencies and industry players, they would also have to include members of those organizations. Having seen other projects fail to transition their output into practice, UGOS members included researchers from, for example, NOAA and the US Navy. “We set up the [first in-person] meeting to drive ‘transition to operations’ from the get-go,” said Wiese.

Observers and modelers unite to find the “white whale”

Learning to work together has been a time-consuming process. For two months in 2022, Bower, DiMarco, and Chassignet met twice weekly to hammer out a structure involving six working groups. At the most recent UGOS meeting, in Tallahassee, Florida, in October, the consortium decided on another, less radical, reorganization of teams and goals as they continue to refine their processes and prepare to place equipment into the Gulf. 

The ambition and scale of the project were clear in Tallahassee as Bower gave a status update to the roughly 50 people present who make up the three teams. With her guide dog, Intrepid, looking on, Bower, who is legally blind, detailed the vast array of instruments being deployed to fulfill the project’s mission. First, the addition of 35 UGOS-funded floats in the southern Gulf will expand coverage of temperature and salinity already gathered by existing floats, which are funded by NOAA. As a result, she explained, the Gulf of Mexico currently has the “highest density of floats in the world.” Five pressure-equipped inverted echo sounders (PIES) are stationed in the Yucatan Channel to monitor deep ocean currents. Additional PIES that also measure current will be installed in an array with other current profilers on the northern Gulf’s continental slope next year, which Bower said will “serve as an antenna,” detecting energy transfers deep in the water column. Beyond that, high-frequency (HF) radar installed in the Yucatan Channel will provide near real-time measures of the speed and direction of surface currents.

Then DiMarco’s adaptive sampling team from Texas A&M described how they will deploy observational equipment in the Yucatan Channel—including gliders, floats, and most notably airplanes equipped with a Remote Ocean Current Imaging System (ROCIS) to get high-resolution data of sea surfaces, ideally during key moments of the Loop Current extension and eddy shedding.

The ROCIS system was originally built for the Department of Defense and was declassified for commercial use in 2016. “This project will be the first time ROCIS is used for science,” explained Jan van Smirren, UGOS technical coordinator and consultant with Ocean Sierra, LLC, a private company that provides the oil and gas industry with oceanographic and meteorological insights.

“Deciding how to adaptively sample the Gulf of Mexico really depends on what stage the loop is at,” explained Ruoying He. “It will require timely two-way communication between observers who are running the surveys and modelers who are assimilating real-time observations and making model forecasts of the Loop Current.” But the whole team is aware that the success of the project hinges on figuring out how to strategically coordinate the observers and modelers to conduct adaptive sampling.

In mid-January 2024, if all goes well, the team is scheduled to conduct a series of adaptive sampling tests to determine how best to coordinate ROCIS, underwater gliders, and other measures across the Gulf. The team has learned “how challenging a multi-instrument, multi-institution, multination coordinated field effort actually is,” noted Feldman. Four autonomous gliders—two from the Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE) in Mexico and one each from Texas A&M and Rutgers University—will be released into the Yucatan strait, where the warm waters first rush into the Gulf from the Caribbean. The gliders will spend 45 to 60 days zigzagging across the strait, depending on their battery life.

“This was a forced marriage among competitors,” Ruoying He told me. “It’s not something everyone was happy about, but looking at the big picture, it was a necessary action to take.”

While the gliders are unmanned, navigating their deployments remotely in this complex submarine bathymetry is tricky, Enric Pallas of CICESE explained at the Tallahassee meeting. The gliders ascend and descend through the region’s submarine mountain range to collect current, salinity, and temperature data while contending with a strong secondary northward current and a great deal of marine traffic. “It can be quite scary for the pilots to fly a glider in this scenario and avoid the thousands of boats in the Yucatan area,” said Pallas.

In July 2023, researchers lost contact with a launched glider after 10 days due to what they now believe was a surface crash with a boat. It was ultimately recovered—in two parts. It appeared that someone, possibly a fisherman, had opened it and thrown it on the beach.

Equipment failures and subsurface terrain are simpler to navigate than political boundaries. Because the Loop Current crosses multiple borders, the gliders need to operate in three different exclusive zones: Mexico, Cuba, and the United States. It took nearly a year to get the necessary permissions. “It is not the Gulf of the United States, it’s the Gulf of Mexico,” observed DiMarco. “International collaborations here are just crucial.”

All of these novel data—built up from disparate teams—are necessary to unearth key predictive metrics, Bower told me before the Florida meeting. When I asked her what UGOS’s “white whale” is, she said, “a more comprehensive understanding of the interaction between the lower and upper layers of the Gulf of Mexico.” Teasing apart this interplay requires extraordinary collaboration between researchers who normally work independently. She observed, “In my experience, this closeness of relationships between such large groups of observationalists and modelers is unusual—and it’s hard.” 

Watching the current in real time

New technologies and sampling regimes are already offering researchers an unprecedented ability to see what the current is doing. It had long been theorized that the Loop Current might reverse from time to time, but until recently that had never been observed. During 2022’s Hurricane Ian, DiMarco’s team documented a short-term reversal of the Loop Current flow west of Cuba due to wind. Where the winds were strong, the surface layers of the Loop Current arrested and reversed. Although it is unclear whether the flow reversals are indicative of more significant changes to the current, this observation demonstrates that it is now possible to document what is happening in the Gulf in real time. 

During the 2023 hurricane season, the project gave other exciting glimpses of insights to come. On August 26, Hurricane Idalia began to form as a tropical depression in the Caribbean. On the HF radar, Scott Glenn, a Rutgers University researcher and member of the adaptive sampling team, watched as it turned into a tropical storm. Unfortunately, communications went down, and the team feared data collection had stopped. About three days later, communications came back up, and all the data filled in, giving a picture of how Idalia rapidly intensified from Category 1 to Category 4 as it passed over the Loop Current near western Cuba before making landfall on August 30. “We have three stories here,” Glenn explained. First, HF radar identified the storm formation in the Yucatan. Then, the floats documented the storm interaction with the Loop Current. Finally, the gliders recorded the conditions during the rapid intensification in shallow water.

UGOS’s critical data-gathering push—dubbed the Grand Adaptive Sampling Experiment (GrASE)—is expected to begin in 2025. GrASE will require observers and modelers to communicate weekly to determine when ROCIS flights should take off or when gliders, floats, or drifters should be deployed. GrASE’s success will hinge on how well the teams sustain what will be months-long collaboration. The lead for the GrASE project is Steve Morey, oceanographer at Florida Agricultural and Mechanical University, who told me that the overarching goal is to capture upstream data as waters enter the Yucatan Channel from the south, once the Loop Current is already extended enough to shed an eddy.

Timing will be key. Model forecasts are very sensitive to the initial conditions of the ocean, and some features are undersampled—notably, the small frontal cyclones that develop around the Loop Current and introduce instabilities that grow over time and influence whether eddies separate. As a result, forecast models are lousy at predicting whether and when the Loop Current will shed eddies after it extends out.

One enormous challenge is getting data on all the different types of eddies that influence the Loop Current’s behavior. “Eddy shedding events are driven by Loop Current instability; many variables interact nonlinearly,” explained Luna Hiron, a postdoctoral researcher on the modeling team at Florida State University. Hiron’s PhD research demonstrated how small frontal counterclockwise eddies near the boundary of the Loop Current can shape how it meanders. Simultaneously, there are also deep eddies roughly 3,000 meters below the surface that can influence when Loop Current eddies on the surface separate. And, if this isn’t confusing enough, Loop Current eddies can detach and reattach multiple times.

The whole team is aware that the success of the project hinges on figuring out how to strategically coordinate the observers and modelers to conduct adaptive sampling.

To enhance predictability of the Loop Current’s behavior in both the short and long term, Hiron said the team analyzed how well the models perform. Presently, she explained, satellites provide reasonable data on larger surface counterclockwise eddies found on the northern and eastern sides of the Loop Current, but that’s not the case for the smaller, more dynamic eddies on the western side. “We don’t see them at all on satellites,” Hiron said. The smaller eddies can introduce instabilities that grow over time and influence when eddies separate. New data from HF radar or information from the ROCIS flyovers may offer insights that can assist in making predictions. 

HF radar is also able to measure the position of the current in the Yucatan Channel. The water moving through the Yucatan Channel seems to prefer flowing east or west of a submerged bank called Banco Arrowsmith, but not directly over it, DiMarco told me. And that east-west wobble has downstream impacts. Mexican mooring data, published in 2023 by CICESE’s Julio Sheinbaum and colleagues, demonstrated that shifts in the position of the Loop Current as it enters the Yucatan Channel can determine whether the current extends north into the Gulf or retracts. The findings suggest that eddies only get shed when the Loop Current extends more than 1,800 kilometers from the Yucatan Channel.

“Understanding the flow out of Yucatan is critical,” agreed Tim Gallaudet, an oceanographer who is a retired rear admiral in the US Navy. “Any model is dependent on reducing uncertainty in the initial environmental conditions. Everything upstream matters,” he said.

In the middle of all these meticulous preparations flows another anxiety: What if the Loop Current doesn’t extend or shed eddies during the couple of years the project has left? UGOS teams have spent over a year coordinating the movement of equipment and securing permits to deploy equipment in the Mexican Caribbean—all the time knowing that the Loop Current may not cooperate. Ruoying He said these worries are well founded: “It’s happened before in another research program. We planned a comprehensive campaign, but nature didn’t give the variability we’re trying to measure.”

Turning knowledge into predictive insights

Although there is little doubt that UGOS will offer an unprecedented characterization of the Loop Current, the project is ultimately meant to improve the data that feed disparate models. It is not as simple as producing a standalone model that industry or federal agencies could easily plug into existing operations. Improving multiple models, therefore, will be an iterative process of data assimilation unique to each application and agency.

Researchers have been keenly aware of this gap from the beginning. “Research-to-operations transitions have been and remain one of the biggest challenges in all scientific research,” explained Feldman. Scientists, Hiron noted, have a tendency to try to understand the whole system, whereas UGOS is focused on more specific goals: to improve the operational forecasts of the Loop Current system, so the outcomes provide what governments and companies need to make practical decisions. “If we achieve what the stakeholders want, then that’s going to make the transition into operational use much more effective,” she said.

The oil and gas industry, which needs a significantly better system for forecasting Loop Current behavior, has identified a metric to help its operations in the short term. Jim Stear, senior principal offshore engineer at Chevron, explained that when currents are above 1.5 knots, oil and gas operations need to shut down because oscillations in floating platforms can weaken the mooring system. Today, there are 57 permanent deepwater structures in the Gulf, which may soon be joined by offshore wind platforms. Satellite imagery and models are able to deliver, at best, a few-day forecast of the Loop Current. Getting UGOS to deliver the position of the 1.5 knot line along the boundary of the current would be a good first step toward anticipating the loop’s behavior because knowledge of the line’s position would help workers determine when to shut down operations before faster waters arrive. 

The energy industry will likely be the speediest to turn UGOS’s data into predictions because it has a strong economic incentive. Still, Stear is not expecting that papers and codes from UGOS will be taken up within months to produce an operating, dependable system. “I think that transition period is probably going to be several years,” he said.

It had long been theorized that the Loop Current might reverse from time to time, but until recently that had never been observed. During 2022’s Hurricane Ian, DiMarco’s team documented a short-term reversal of the Loop Current flow west of Cuba due to wind.

The way forward to assist the National Weather Service’s hurricane prediction, NOAA’s fisheries, and the Coast Guard is less straightforward. Hurricane prediction, for example, makes use of a dozen disparate models. Although UGOS findings have the potential to be integrated on different timescales into this alphabet soup of models—named HYCOM, ROMS, and MOM6—each of these models and the stakeholders that incorporate them into their various forecasts work on different time horizons.

Matthieu Le Henaff, a hurricane modeler at the University of Miami, is a stakeholder who is exploring how to incorporate UGOS data into NOAA models. According to Le Henaff, data from gliders, floats, and PIES will be assimilated into the MOM6 model, the ocean component of global climate and earth system models, which will ultimately feed into the Hurricane Analysis and Forecast System model. But the MOM6 ocean model won’t be operational for at least a couple of years, he noted. In particular, he anticipates that glider data will better characterize how the Mississippi freshwater river plume spreads on the ocean surface, resulting in a large density gradient that can favor hurricane intensification—and is currently something tricky for models to represent. However, assimilating data from ROCIS and HF radar will require more work.

Fisheries biologists work on even longer timescales. “We tend to update our stock assessment models, and therefore catch advice, to fisheries managers every five to seven years,” said Mandy Karnauskas, a fisheries biologist at NOAA’s Southeast Fisheries Science Center in Miami. Catch limits, for example, are based on historical population dynamics, response to fishing pressure, and estimates of recruitment of new fish into the population, which can be highly influenced by large-scale ocean dynamics such as the Loop Current. “When the Loop Current is extended to the north, it acts as a barrier to transport from the shelf to offshore—which favors recruitment. UGOS will investigate Loop Current influence on recruitment of red snapper, arguably the most important commercial fish in the Gulf of Mexico,” said Ana Vaz, a fisheries modeler at the University of Miami who works with Karnauskas. On the other hand, the Loop Current might influence environmental conditions that indirectly shape species distributions. For example, the greater amberjack lives around Sargassum seaweed during its early pelagic years, and when the Loop Current brings the seaweed rafts into the Gulf it changes their population dynamics.

Several of the researchers I spoke with were concerned that, without defined transition pathways and metrics, UGOS’s diffuse insights risk not being operationalized, even though UGOS members are working with representatives of federal agencies like Le Henaff, Karnauskas, and Vaz. And there is a range of opinions regarding the UGOS team’s responsibility to transition to operations. “We are a scientific community,” Chassignet, lead of the UGOS modeling team, told me. “The best thing for us to do is science.” If the science is good, he reasoned, it will get taken up into models. With almost 300 scientific publications and over 13,000 citations, Chassignet exemplifies a perspective rooted firmly in academia.

However, several other UGOS members, notably Glenn and van Smirren, think UGOS needs to actively engage stakeholders to ensure these new insights and capabilities are taken up by agencies—or they could have trouble justifying the millions spent. “We can’t just say we improved the Loop Current forecast by some percentage and then we left the room,” said Glenn. “I want this work to have an impact by saving lives and improving the economy.”

Perhaps proving the point that getting new insights incorporated into long-standing models requires a hands-on approach, Ruoying He is certain that the model he oversees will readily accept UGOS outputs: “I know for sure the ROMS [regional ocean modeling system] is intended to be adopted by NOAA by the end of the project.” As the person coleading that charge, he said, “It will be an operational forecast system for the ocean circulation of the Loop Current in the Gulf of Mexico and beyond.”

Hard-won results

After 18 sometimes bumpy months, Amy Bower reflects that the project’s level of integration has come at a cost. Battling meeting fatigue and more work than UGOS leadership may have anticipated, she said it’s been an iterative process of reevaluating goals, metrics, and output. “I see the original consortia falling more into the history books, to have almost disappeared now,” she explained.

Scientists have a tendency to try to understand the whole system, whereas UGOS is focused on more specific goals: to improve the operational forecasts of the Loop Current system, so the outcomes provide what governments and companies need to make practical decisions.

“It’s not a common approach because it is very complicated and costly,” agreed Le Henaff, but he’s convinced this level of coordination will lead to practical insights. For example, when the teams are in the same room, they are able to navigate how everyone’s biases impact the goals. “We will learn things we would not be able to learn if people were doing things on their own without consulting each other,” he said.

While the forced marriage of competitors was a gamble, NASEM’s Feldman stands by the decision. “Forcing this collaboration was without a doubt the right thing to do,” he said. “[Loop Current prediction] had been a problem unsolved for so long, and we had the opportunity, resources, and attention of people to take our best shot at trying to solve it.” In hindsight, he might have crafted the call for proposals such that the teams would organize themselves. He estimates the need to restructure and rethink how each project would run probably caused six to nine months of delay. Nevertheless, “I would still do it again,” he affirmed. “Everyone has put ego aside to work together,” he said. “And that is a really rare thing.”

The UGOS program still has years to go, but the hard-won cooperation between the teams may account for its legacy and ultimate success. In Ruoying He’s view, one of the most important outcomes of UGOS has been creating a strong partnership between observers and modelers—while overcoming the challenges of sustained science operations and their adoption by agency and industry partners. “This is what makes UGOS transformative—making it operational to deliver outputs to stakeholders,” he told me. “Otherwise, it will just be a new set of scientific papers that will be produced from the program. We’ve been doing that for the last 50 years.”

Living Computers

In 1967, the idea of computer science as a distinct discipline seemed outlandish enough that three leaders of the movement felt the need to write a letter to Science addressing the question, “What is computer science?” In their conclusion, Allen Newell, Alan J. Perlis, and Herbert A. Simon firmly asserted that computer science was a discipline like botany, astronomy, chemistry, or physics—a study of something dynamic, not static: “Computer scientists will study living computers with the same passion that others have studied plants, stars, glaciers, dyestuffs, and magnetism; and with the same confidence that intelligent, persistent curiosity will yield interesting and perhaps useful knowledge.”

Speed forward six decades, and the application of “intelligent, persistent curiosity” has succeeded in routing nearly all aspects of daily life through computing machines. Objects, such as cars, that once seemed reassuringly analog now re-render themselves on a regular basis; in the past month alone, 2 million electric cars were recalled for updates to their self-driving software. Meanwhile, generative artificial intelligence powers hundreds of internet-based news sites, fueling concerns about misinformation and disinformation—not to mention fear for the profession of journalism. And digital communication has become a front in modern armed conflict: the Russian military has been accused of hacking Ukraine’s cell and internet service, which shut off streetlights and missile warning systems. Shifting meanings of “truth” and “news”—let alone “war”—are not strictly within the scope of computer science, but none of these concepts would be intelligible without it.

Given the eventual success of computer science as a discipline, the defensive tone of Newell, Perlis, and Simon’s letter is surprising. Just two years before, in 1965, the three had founded one of the country’s first computer science departments at Carnegie Mellon University. And they had little patience for doubters. “There are computers. Ergo, computer science is the study of computers…. It remains only to answer the objections posed by many skeptics,” the authors quipped in their letter. They efficiently dismissed six objections, as handily as you’d expect: Newell and Simon were members of the National Academy of Sciences, and Perlis was a member of the National Academy of Engineering. Newell and Perlis won the Turing Award and Simon the Nobel Prize.

A computer lab in the early 1960s at Carnegie Institute of Technology. General photograph collection. Carnegie Mellon University Archives.

In defining the discipline of computer science, the three had what seems to be a premonition of today’s hybrid reality in which computers have spilled across boundaries to mediate the world. “‘Computers’ means ‘living computers’—the hardware, their programs or algorithms, and all that goes with them. Computer science is the study of the phenomena surrounding computers.”

With so many aspects of life now fitting under the phenomena of “living” computation, the initial logic behind the creation of the discipline of computer science is getting turned on its head. The totalizing power of computational machines means that making sense of the present requires insights from disciplines that once seemed hopelessly removed from technology—like philosophy, history, and sociology. 

In this issue, philosopher C. Thi Nguyen writes about what his field has revealed about the inherent subjectivity and potential weaknesses of data. “When a person is talking to us, it’s obvious that there’s a personality involved,” he writes. “But data is often presented as if it arose from some kind of immaculate conception of pure knowledge,” obscuring the political compromises and judgement calls that make the gathering of data possible. He quotes the historian of science Theodore Porter on this sleight of hand: “Quantification is a way of making decisions without seeming to decide.”

The same could be said for living computers. As algorithms have become embedded in our lives, through social media and now artificial intelligence, it’s increasingly difficult to tell where the decisions are made. In this issue, a collection of historians, sociologists, communications scholars, and an anthropologist share useful insights into generative AI’s effect on society. They explore how the technology is changing cultural narratives, redefining the value of human labor, and outstripping reliable conventions of knowledge, all in the interest of protecting society from AI’s harms. 

The totalizing power of computational machines means that making sense of the present requires insights from disciplines that once seemed hopelessly removed from technology—like philosophy, history, and sociology. 

By looking at generative AI through the lens of the humanities, these scholars reveal new pathways for equitable governance. “The destabilization around generative AI is also an opportunity for a more radical reassessment of the social, legal, and cultural frameworks underpinning creative production,” write AI researcher Kate Crawford and legal scholar Jason Schultz. “Making a better world will require a deeper philosophical engagement with what it is to create, who has a say in how creations can be used, and who should profit.”

These insights have long underpinned Issues’ work, but they are more urgent now. In 2018, an Academies report, known as Branches From the Same Tree, proposed that humanities education be more tightly coupled with training in science, technology, engineering, and mathematics (STEM), along with medicine. “Given that today’s challenges and opportunities are at once technical and human, addressing them calls for the full range of human knowledge and creativity. Future professionals and citizens need to see when specialized approaches are valuable and when they are limiting, find synergies at the intersections between diverse fields, create and communicate novel solutions, and empathize with the experiences of others.” Already, this somewhat defensive advocacy for the value of the humanities is starting to seem as prescient as the 1967 call for computer science.

The other half of the story of living computers is that over the last 50 years, fostering technological industry has become a policy imperative at federal, state, and local levels. In this magazine’s first issue, in 1984, then governor of Arizona Bruce Babbitt wrote that state governments had discovered scientific research and technological innovation as “the prime force for economic growth and job creation.” Pointing to the University of Texas at Austin’s success with the Balcones Research Center, Babbitt compared the frenzy to turn university research into an economic propellant to the nineteenth century’s Gilded Age, “when communities vied to finance the transcontinental railroads.”

Living in the age of living computers, and profiting from it, requires understanding how societies work, how people get along, and how they make meaning together. Now should be a time of reinvigorated collaboration between STEM and humanities at every level.

Forty years later, the search for the keys to enduring regional growth has become ever more frantic, while income inequality has grown tremendously. Even as economic stagnation and declining global competitiveness contribute to a sense of drag, faith remains in technological innovation as a silver bullet. The White House heralded the 2022 CHIPS and Science Act as positioning US workers, communities, and businesses to “win the race for the twenty-first century.”

But as Grace Wang argues in this issue, the old, simplistic sense of how innovation can catalyze regional economies has been surpassed by a recognition of the complexity of that process. Today’s innovation districts involve dense concentrations of people with “colocation of university research and education facilities, industry partners, startup companies, retail, maker spaces, and even apartments, hotels, and fitness centers.” If the traditional vision of harvesting the fruits of university innovation involved the provision of durable goods like laboratories and supercomputers, today’s research clusters require an entire upscale digital lifestyle: good coffee, good venture capital, good vibes, and good gyms to counteract all that screen time.

The harder trick may be helping those place-based ecosystems to persist. Wang observes that for a regional innovation center to last, it must draw a steady stream of new workers. The entire society around the region must be transformed so that children can imagine themselves as part of this innovation ecosystem from an early age. And even a traditional STEM education is not enough to create the kinds of workers who can thrive in a global competition. “They need to be collaborative team players, creative and critical thinkers, motivated value creators, and effective communicators.”

“Winning” the twenty-first century, whatever that comes to mean, will require soft skills as well as software. In a sense, Wang ends at the same place as our philosophers, sociologists, and the National Academies’ Branches report: living in the age of living computers, and profiting from it, requires understanding how societies work, how people get along, and how they make meaning together. Now should be a time of reinvigorated collaboration between STEM and humanities at every level.

Treating STEM and the humanities as mortal competitors for scarce funding—or worse, as a moral competition between “problem solvers” and “problem wallowers”—is not a wise industrial strategy.

But it is not. In September, West Virginia University announced that it was eliminating 28 majors, shutting down the department of world languages and linguistics, and cutting faculty in law, communications studies, public administration, education, and public health. It is just one of many state universities that have cut non-STEM programs over the past few years, in states including Missouri, Kentucky, New York, Kansas, Ohio, Maine, Vermont, Alaska, and North Dakota.

Rural states, in particular those that have lost jobs, increasingly see STEM as their lifeline. But in places with few options, eliminating humanities risks creating environments that fall further behind on providing the soft services and the critical thinkers necessary for industrial competitiveness. STEM degrees may initially make students a better fit for employers, but who wants to hang around and engineer innovations in a place without coffee shops and art, music, and theater? Social transformation is an inherently cultural activity. Treating STEM and the humanities as mortal competitors for scarce funding—or worse, as a moral competition between “problem solvers” and “problem wallowers”—is not a wise industrial strategy.

Newell, Perlis, and Simon’s vision of “living computers” has come to pass, but paradoxically that has only increased the need for other disciplines to understand and remake the world.

Lessons from Ukraine for Civil Engineering

The resilience of Ukraine’s infrastructure in the face of both conventional and cyber warfare, as well as attacks on the knowledge systems that underpin its operations, is no doubt rooted in the country’s history. Ukraine has been living with the prospect of warfare and chaos for over a century. This “normal” appears to have produced an agile and flexible infrastructure system that every day shows impressive capacity to adapt.

In “What Ukraine Can Teach the World About Resilience and Civil Engineering,” Daniel Armanios, Jonas Skovrup Christensen, and Andriy Tymoshenko leverage concepts from sociology to explain how the country is building agility and flexibility into its infrastructure system. They identify key tenets that provide resilience: a shared threat that unites and motivates, informal supply networks, decentralized management, learning from recent crises (namely COVID-19), and modular and distributed systems. Resilience naturally requires coupled social, ecological, and technological systems assessment, recognizing that sustained and expedited adaptation is predicated on complex dynamics that occur within and across these systems. As such, there is much to learn from sociology, but also from other disciplines, as we unpack what’s at the foundation of these tenets.

Agile and flexible infrastructure systems ultimately produce a repertoire of responses as large as or larger than the variety of conditions their environments produce. This is known as requisite complexity. Thriving under a shared threat is rooted in the notion that systems can do a lot of innovation at the edge of chaos (complexity theory), if resources including knowledge are available and there is flexibility to reorganize as stability wanes. The informal networks Ukraine has used to source resources exist because formal networks are likely unavailable or unreliable. We often ignore ad hoc networks in stable situations, and even during periods of chaos such as extreme weather events, because the formal organization is viewed as unable to fail, and so it too often falls back on siloed, rigid structures that deal ineffectively with prevailing conditions.
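In cybernetic terms, this is a cousin of the law of requisite variety; a rough gloss (my formulation, not the authors') is:

```latex
% A rough gloss on the law of requisite variety (my formulation, not the authors'):
% V(\cdot) denotes variety, R the system's repertoire of responses,
% and D the disturbances its environment can produce. A system stays viable only if
V(R) \geq V(D)
```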

Thriving under a shared threat is rooted in the notion that systems can do a lot of innovation at the edge of chaos, if resources including knowledge are available and there is flexibility to reorganize as stability wanes.

Ukraine didn’t have this luxury. Management and leadership science describes how informal networks are more adept at finding balance than are rigid and siloed organizations. Relatedly, the proposition of decentralized management is akin to imbuing those closest to the chaos, who are better attuned to the specifics of what is unfolding, with greater decisionmaking authority. This is related to the concept of near decomposability (complexity science). This decentralized model works well during periods of instability but can lead to inefficiencies during stable times. During rebuilding, you may not want decentralization as you try to use limited resources efficiently.

Lastly, modularity and distributed systems are often touted as resilience solutions, and indeed they can have benefits under the right circumstances. Network science teaches us that decentralization shifts the nature of a system from one big producer supplying many consumers (vulnerable to attack) to many small producers supplying many consumers (resilient). Distributed systems link decentralized and modular assets together so that greater cognition and functionality are achieved. But caution should be used in moving toward purely decentralized systems for resilience, as there are situations where resilience is more readily realized with centralized configurations.
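To make the producer-consumer contrast concrete, here is a minimal sketch (my own illustration, not the authors' analysis) using the open-source networkx library to compare how a hub-and-spoke network and a more distributed one hold up when their most connected node is knocked out:

```python
# A toy illustration (not from the authors): compare how a hub-and-spoke
# supply network and a distributed one degrade when the most connected
# node is attacked. Requires the networkx package.
import networkx as nx

def largest_component_after_attack(graph):
    """Remove the highest-degree node and return the fraction of the
    original nodes that remain in the largest connected component."""
    g = graph.copy()
    hub = max(g.degree, key=lambda pair: pair[1])[0]
    g.remove_node(hub)
    if g.number_of_nodes() == 0:
        return 0.0
    biggest = max(nx.connected_components(g), key=len)
    return len(biggest) / graph.number_of_nodes()

# Centralized: one producer (the hub) supplies 50 consumers.
centralized = nx.star_graph(50)

# Distributed: 51 nodes, each linked to a handful of nearby peers.
distributed = nx.connected_watts_strogatz_graph(51, k=4, p=0.1, seed=1)

print("centralized:", largest_component_after_attack(centralized))  # collapses to a sliver
print("distributed:", largest_component_after_attack(distributed))  # stays mostly intact
```

In the star network, losing the single hub strands nearly every consumer; the distributed network keeps most nodes connected to one another.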

Fundamentally, as the authors note, Ukraine is showing us how to build and operate infrastructure in rapidly changing and chaotic environments. But it is also important to recognize that infrastructure in regions not facing warfare is likely to experience shifts between chaotic (e.g., extreme weather events, cyberattacks, failure due to aging) and stable conditions. This cycling necessitates being able to pivot infrastructure organizations and their technologies between chaos and non-chaos innovation. The capabilities produced from these innovation sets become the cornerstone for agile and flexible infrastructure to respond at pace and scale to known challenges and, perhaps most importantly, to surprise.

Professor of Civil, Environmental, and Sustainable Engineering

Arizona State University

Coauthor, with Braden Allenby, of The Rightful Place of Science: Infrastructure and the Anthropocene

In their essay, Daniel Armanios, Jonas Skovrup Christensen, and Andriy Tymoshenko provide insightful analysis of the Ukraine conflict and how the Ukrainian people are able to manage the crisis. Their recounting reminds me of an expression frequently used in the US Marines: improvise, adapt, and overcome. Having lived and worked for many years in Ukraine, and having returned for multiple visits since the Russian invasion, I am convinced that while the conflict will be long, Ukraine will succeed in the end. The five propositions the authors lay out as the key to success are spot on.

Ukraine’s common goal of bringing its people together (the authors’ Proposition 1), along with the Slavic culture and a particular legacy of the Soviet system, combines to form the fundamental core of why the Ukrainian people not only survive but often flourish during times of crisis. Slavic people are, by my observation, tougher and more resilient than average. Some will call it “grit,” some may call it “stoic”—but make no mistake, a country that has experienced countless invasions, conflicts, famines, and other hardships imbues its people with a special character. It is this character that serves as the cornerstone of their attitude and, in the end, their response. Unified hard people can endure hard things.

Some will call it “grit,” some may call it “stoic”—but make no mistake, a country that has experienced countless invasions, conflicts, famines, and other hardships imbues its people with a special character.

A point to remember is that Ukraine, like most of the former Soviet Union, benefits from a legacy infrastructure based on redundancy and simplicity. This is complementary to the authors’ Proposition 5 (a modular, distributed, and renewable energy infrastructure is more resilient in time of crisis). It was Vladimir Lenin who said, “Communism equals Soviet power plus the electrification of the whole country.” As a consequence, the humblest village in Ukraine has some form of electricity, and given each system’s robust yet simple connection, it is easily repaired when broken. Combine this with distributed generation (be it gensets or wind, solar, or some other type of renewable energy) and you have built-in redundancy.

During Soviet times, everyone needed to develop a “work-around” to source what they needed or wanted. Waiting for the Soviet state to supply something could take forever, if it ever happened at all. As a consequence, there were microentrepreneurs everywhere who could source, build, or repair just about anything, either for themselves or their neighbors. This system continues to flourish in Ukraine, and the nationalistic sentiment pervading the country makes it easier to recover from infrastructure damage. As the authors point out in Proposition 3, decentralized management allows for a more agile response.

As the authors describe, the “lessons learned” from the ongoing conflict include, perhaps most importantly, that learning from previous incidents can help develop a viable incident response plan. Such planning, however, should be realistic and focus on the “probable” and not so much on the “possible,” since every situation and plan is resource-constrained to some degree. The weak link in any society is its civilian infrastructure, and failure to ensure redundancy and rapid restoration is not an option. Ukraine is showing the world how it can be accomplished.

Supervisory Board Member

Ukrhydroenergo

When Farmland Becomes the Front Line, Satellite Data and Analysis Can Fight Hunger

It is difficult to predict exactly how events like extreme weather, pandemics, conflict, and politics will disrupt global food systems and cause people to go hungry. The destabilizing impacts of the COVID-19 pandemic (inflation, employment crises, supply chain disruptions, higher prices for fertilizer and fuel), exacerbated by climate disruptions and war, have led 200 million more people to experience higher levels of food insecurity than before the pandemic. Today, 1 in 10 people around the world are food insecure, but forecasting how global events will affect insecurity remains a challenge.  

Timely, transparent, actionable crop production data aggregated from local to global levels are necessary to inform farmers, policymakers, and humanitarian organizations making decisions about food production and distribution. Most countries publish crop information that has been collected through ground-based surveys of farmers’ production levels, acreage, and yields. Governments and international organizations also track supply chain disruptions, trade flows, food stocks, and market data on consumption. These data are then used to anticipate how supply and demand will affect prices and, by extension, food insecurity and social unrest.

Today, 1 in 10 people around the world are food insecure, but forecasting how global events will affect insecurity remains a challenge.

When a shock to the global food system occurs—such as during the Russian invasion of Ukraine in 2022—collecting the usual ground-based data is all but impossible. The Russia–Ukraine war has turned farmland into the front lines of a war zone. In this situation, it is unreasonable to expect civilians to walk onto fields riddled with land mines and damaged by craters to collect information on what has been planted, where it was planted, and if it could be harvested. The inherent danger of ground-based data collection, especially in occupied territories of the conflict, has demanded a different way to assess planted and harvested areas and forecast crop production.

Satellite-based information can provide this evidence quickly and reliably. At NASA Harvest, NASA’s Global Food Security and Agriculture Consortium, one of our main aims is to use satellite-based information to fill gaps in the agriculture information ecosystem. Since the start of the Russia–Ukraine conflict, we have been using satellite imagery to estimate the impact of the war on Ukraine’s agricultural lands at the request of the Ministry of Agrarian Policy and Food of Ukraine. Our work demonstrates how effective this approach can be for delivering critical and timely insights for decisionmakers.

Prior to the war, Ukraine accounted for over 10% of the world’s wheat, corn, and barley trade and was the number one sunflower oil exporter, accounting for close to 50% of the global market. In other words, food produced in Ukraine is critical for its national economy, for global trade, and for feeding millions across the globe. As such, it was immediately important to have an accurate picture of how the war was impacting agricultural production. How much cropland was being abandoned due to Russian occupation or proximity to active fighting? How much cropland had been damaged by artillery craters? How would land losses impact future crop yields? How were planting and harvesting progressing on unoccupied land? Satellite data collected from multiple sources, including Planet Labs (a satellite imaging company), NASA, and the European Space Agency, provided the only means to answer these questions quickly.

Ukrainian Crop Types

Map of Ukraine by planted crop types at 3-meter resolution, summer 2022. (Satellite data sources: PlanetScope and Copernicus Sentinel 1 and Sentinel 2; cropland extent based on ESA WorldCereal. Crop type map produced by Inbal Becker-Reshef, Josef Wagner, Shabrinath Nair, Sergii Skakun, Abdul Qadir, Yuval Sadeh, Sheila Baber, Fangji Li, Mehdi Hosseini, Saeed Khabbazan, and Blake Munshell; NASA Harvest.)

CROPLAND DAMAGED BY WAR

NASA Harvest mapped ~2.5 million artillery craters across the front line, with 1.2 million craters falling within 81,000 agricultural fields. This image covers the frontline battlefields of Horlivka, Velyka Novosilka, and Vuhledar in Ukraine in 2022, showing that many fields contain more than 1,000 craters each. (Date: May to August 2022. Satellite data sources: MAXAR and Planet SkySat. Artillery crater mapping by Erik Duncan and Sergii Skakun; field boundaries by Yuval Sadeh; map composition by Shabrinath Nair.)

Our analysis found that Russia occupied approximately 22% of Ukraine’s cropland. While many observers speculated that in 2022 production would be significantly reduced—with the winter crop harvest (mainly wheat and barley) and the spring crop planting (largely corn and sunflower) 30% to 50% lower than previous years—the satellite data revealed a different story. We found that close to 90% of the wheat crop had been harvested, and that the large majority of available croplands were planted with spring crops. Planting and harvesting losses were concentrated along the front line. We estimate the amount of abandoned cropland in Ukraine in 2023 is equivalent to about 7.5% of total cropland in the country. Still, that’s a lot of land; had this land been planted, it could have produced enough to feed 25 million people for one year.

In 2022, production of staple crops in Ukraine was only slightly below the five-year average and in 2023 it was close to or slightly above average (depending on the crop), largely owing to good yields. Crucially, however, a large proportion of this production—approximately 22% of wheat and 10% of sunflower—was harvested in the Russian-occupied territories. To put this in perspective, the wheat harvested in the occupied territories in 2023 is roughly equivalent to the 2023 wheat harvest of Kansas (the second-largest producing state in the United States) and represents about 60% of total wheat imports to Egypt (the world’s largest wheat importer). To date, NASA Harvest is the only entity reporting on the production of crops in the occupied territories, which are continuing to produce a sizable amount of staple crops critical for global food supplies.

Our work in Ukraine underscores the potential for satellite data and analysis to fill serious gaps in agricultural information during food system shocks and when ground access is disrupted.

Taras Vysotskyi, first deputy minister of Agrarian Policy and Food of Ukraine, noted that NASA Harvest’s assessment has helped his government understand the real state of agricultural production in Ukraine, evaluate how best to ensure regional food security, and determine export levels to support global food security.

Indeed, information about food production is its own kind of currency. “Assessing the global supply situation and being able to predict unexpected shortfalls is a critical task to guarantee global food security,” according to Abdolreza Abbassian, former secretary of the Agricultural Market Information System (AMIS) and senior economist at the Food and Agriculture Organization of the United Nations. Reliable data about agricultural commodities can enable governments and other organizations to buffer global price volatility and ensure well-functioning markets and trade relationships that are critical for ensuring food security and access, particularly in the world’s poorest regions and in countries dependent on food imports. Quantifiable, science-based information on crop production and potential shortfalls can inform policy decisions on trade, infrastructure, agricultural investments, and farmer safety-net programs—and, most directly, which crops to plant when and where. These insights can be used to shape markets and steer humanitarian relief to help prevent suffering and political upheaval.

ABANDONED CROPLAND ON THE FRONT LINES

Satellite data show unplanted or abandoned fields in 2023 in red, along the war’s front lines. (Satellite data source: PlanetScope used for analysis. July 2023 PlanetScope image displayed as background. Front lines from Institute for the Study of War and American Enterprise Institute’s Critical Threats Project. Analysis by Joseph Wagner, Shabrinath Nair, and Inbal Becker-Reshef.)

IRRIGATION SYSTEMS BEFORE AND AFTER THE KAKHOVKA DAM COLLAPSE

After the destruction of the Kakhovka Dam on June 6, 2023, all four of the major dam inlets that supply irrigation canal networks were disconnected from the dam. Satellite observations show a narrowing of the canals shortly after the dam collapse, cutting off water supplies for critical irrigation systems in this semi-arid region of Ukraine. (Dates: June 3, 2023, and June 19, 2023. Satellite data source: PlanetScope.)

Our work in Ukraine underscores the potential for satellite data and analysis to fill serious gaps in agricultural information during food system shocks and when ground access is disrupted. In our experience, despite a large and growing demand for such analysis, there is a notable deficit in institutional capacity to deliver the kinds of rapid, satellite-driven assessments necessary to guide policy and humanitarian decisions. Many national and international organizations use remote sensing technologies for agricultural assessments, but a dedicated, state-of-the-art agricultural analysis facility does not exist. The need for such standing capacity has been recognized by multiple US government agencies, as well as by other national governments, United Nations organizations, humanitarian organizations, and policy frameworks such as AMIS. The demand for such analyses is currently unmet.

We propose establishing a dedicated facility that can be activated whenever events threaten agricultural production, distribution, or information transparency. The facility should focus on rapid, satellite-driven agricultural assessment in support of decisionmaking. It should be connected to ongoing efforts in this space, including the Group on Earth Observations Global Agricultural Monitoring (or GEOGLAM) initiative, an open community that leverages international agricultural remote sensing capacity developed across the globe. And it should be a hub where stakeholders, including national government and humanitarian agencies, can guide analysis requests.

We propose establishing a dedicated facility that can be activated whenever events threaten agricultural production, distribution, or information transparency. The facility should focus on rapid, satellite-driven agricultural assessment in support of decisionmaking.

Today, capacity to provide information and analysis with each new crisis grows and shrinks, depending on the resources allocated to the response. A standing facility would create more stable capacity to leverage the full suite of satellite, ground, and socioeconomic data; cloud computing and machine learning models; domain expertise; and a network of diverse partners to quickly and accurately prepare assessments for crises that affect agriculture. This would allow for actionable science-driven information to be produced in a timely fashion in response to political, economic, and humanitarian needs and end-user requirements.

The facility’s focus would be on analyzing three primary types of food system shocks: armed conflict and war; extreme weather events, such as drought and floods or natural disasters; and regions with high agricultural uncertainty or low data transparency. The facility would build a sustainable system prepared to fill key agricultural information gaps. In addition to producing rapid agricultural assessments in response to requests, the center would simultaneously develop methodologies that could be shared with the international community to build further capacity for agricultural monitoring and decisionmaking.

As disruptions from severe climate-related events and armed conflicts are projected to increase and as agricultural market transparency in critical producer countries declines, rapid satellite-based assessment will become ever more vital. NASA Harvest has recently received requests for assessments on the food emergency impacts of the conflicts in the Democratic Republic of the Congo and in Ethiopia’s Tigray region; the effects of late rains over wheat-growing regions in China; the outcomes of efforts to map Togo’s smallholder croplands in support of the government’s aid programs to its smallholder farmers during COVID-19; and the enduring consequences of the 2022 floods in Pakistan. We can certainly expect, and must plan to meet, an increasing demand for food-related information on our rapidly changing planet.

Attaining global food security is necessary for a healthier, more prosperous, and more equitable world. Preventing or efficiently managing food system disruptions requires timely and reliable information. With a rigorous, sustainable approach to data collection and analysis, satellite data can improve our understanding of global food systems and prepare society to respond to the next crisis.

To Reckon with Generative AI, Make It a Public Problem

Often, problems that seem narrow and purely technical are best tackled if they’re recast as “public problems,” a concept put forth almost a century ago by philosopher and educator John Dewey. Examples of public problems include dirty air, polluted water, global warming, and childhood education. Public problems bring harms that are not always felt individually but that nonetheless shape what it means to be a thriving person in a thriving society. These problems need to be noticed, discussed, and collectively managed. In contrast to problems that are personal, private, or technical, Dewey wrote, public problems happen when people experience “indirect consequences” that need to be collectively and “systematically cared for,” regardless of an individual’s circumstance, wealth, privilege, or interests. Public problems define our shared realities.

Although generative AI has been framed as a technical problem, recasting it as a public problem offers new avenues for action. Generative AI is quickly becoming a language for telling society’s collective stories and teaching us about each other. If you ask generative AI to make a story or video that explains climate change, you are actually asking a probabilistic machine learning model to create a statistically acceptable account of a public problem. Tools such as ChatGPT and Midjourney are fast becoming languages for understanding public problems, but with little analysis of their power to shape the stories that humans use to understand the shared consequences that Dewey told us create public life.

All members of society should reject the assertions of technology companies and AI “godfathers” who claim that generative AI is both an existential threat and a problem that only technologists can manage. Public problems are collectively debated, accounted for, and managed.

To grapple with generative AI effectively, consumers and developers alike need to see it not only as biased datasets and machine learning run amok, but as a fast-emerging language that people are using to learn, make sense of their worlds, and communicate with others. In other words, it needs to be seen as a public problem.

First, researchers need to see generative AI as a powerful language—as the “boundaries,” “infrastructures,” and “hinges” that scholars of science and technology tell us create technologies. This means tracing the connections among the people and machines that make synthetic language: engineers who build machine learning systems, entrepreneurs who pitch business models, journalists who make synthetic news stories, and audiences who struggle to know what to believe. These are the complex and largely invisible relationships that make generative AI a language for representing knowledge, fueling innovation, telling stories, and creating shared realities.

Second, as a society, we need to analyze the harms created by generative AI. When statistical hallucinations invent facts, chatbots misattribute authorship, or computational summaries bungle analyses, they produce dangerously wrong language that has all the confidence of a seemingly neutral, computational certainty. These errors are not just rare and idiosyncratic curiosities of misinformation; their real and imagined existence makes people see media as unstable, unreliable, and untrusted. Society’s information sources—and ability to gauge reality—are destabilized.

Finally, all members of society should reject the assertions of technology companies and AI “godfathers” who claim that generative AI is both an existential threat and a problem that only technologists can manage. Public problems are collectively debated, accounted for, and managed; they are not the purview of private companies or self-identified caretakers who work on their own timelines with proprietary knowledge. Truly public problems are never outsourced to private interests or charismatic authorities.

A public problem is not merely a technical curiosity, a moral panic, or an inevitable future. It is a system of relationships between people and machines that creates language, makes mistakes, and needs to be systematically cared for. Once we understand generative AI as a vital language for creating shared realities and tackling collective challenges, we can start to see it as a public problem, and then we will be in a better place to solve it.

How AI Sets Limits for Human Aspiration

We are watching “intelligence” being redefined as the tasks that an artificial intelligence can do. Time and again, generative AI is pitted against human counterparts, with textual and visual outputs measured against human abilities, standards, and exemplars. AI is asked to mimic, and then to better, human performance on law and graduate school admission tests, advanced placement exams and more—even as those tests are being abandoned because they perpetuate inequality and are inadequate to the task of truly measuring human capacity.

The narratives trumpeting AI’s progress obscure an underlying logic requiring that everything be translated into the technology’s terms. If it is not addressed, that hegemonic logic will continue to narrow viewpoints, hamper human aspirations, and foreclose possible futures by condemning us to repeat—rather than learn from—past mistakes.

The problem has deep roots. As AI evolved in the 1950s and ’60s, researchers often made human comparisons. Some suggested that computers would become “mentors” and “colleagues,” others “assistants,” “servants,” or “slaves.” As science and technology scholars Neda Atanasoski, Kalindi Vora, and Ron Eglash have shown, these comparisons shaped the perceived value not only of AI, but also of human labor. Those relegating AI to the latter categories usually did so because they believed computers would be limited to menial, repetitive, and mindless labor. They were also reproducing the fiction that human assistants are merely mechanical, menial, and mindless. On the other hand, those celebrating potential mentors and colleagues were tacitly assuming that human counterparts could be stripped of everything beyond efficient reasoning.

The portrayal of AI’s history is usually one of progress, where constellations of algorithms attain humanlike general intelligence and creativity. But that narrative might be more accurately inverted with a shrinking definition of intelligence that excludes many human capabilities.

Comparisons between AI and human performance often correlate with social hierarchy. As science and technology scholars Janet Abbate, Mar Hicks, and Alison Adam have shown, in the 1960s and 1970s, women and minorities were encouraged to advance in society by learning to code—but those skills were then devalued, while domains dominated by white men were seen as the realm of the truly technically skilled. More recently, OpenAI’s measurement of its models against standardized exams endorses a positivist, adversarial, and bureaucratic understanding of human intelligence and potential. Similarly, AI-generated “case interviews” and artworks encode mimicry as the definition of intelligence. For a result from generative AI to be validated as true—or to strike others as “true”—it has to be plausible, that is, recognizable in terms of past values or experiences. But looking backward and smoothing out outliers forecloses the rich wellsprings of humanity’s imagination for the future.

Such practices will ultimately affect who and what is perceived as intelligent, and that will profoundly change society, discourse, politics, and power. For example, in “AI ethics,” complex concepts such as “fairness” and “equality” are reconfigured as mathematical constraints on predictions, collapsed onto the underlying logic of machine learning. In another example, the development of machine learning systems for game-playing has led to a reductive redefinition of “play” as simply making permissible moves in search of victory. Anyone who has played Go or chess or poker against another person knows that, for humans, “play” includes so much more. 
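As an illustration of that collapse (my own sketch, not drawn from the essay), "fairness" is often operationalized as nothing more than a demographic parity gap, a single number comparing positive-prediction rates across groups:

```python
# A minimal sketch (not from the essay) of "fairness" reduced to a metric:
# demographic parity asks only that positive prediction rates match across
# groups, ignoring everything else people might mean by fairness.

def demographic_parity_gap(predictions, groups):
    """Difference in positive-prediction rates between two groups.
    predictions: list of 0/1 model outputs; groups: list of "A"/"B" labels."""
    rate = {}
    for label in ("A", "B"):
        outcomes = [p for p, g in zip(predictions, groups) if g == label]
        rate[label] = sum(outcomes) / len(outcomes)
    return abs(rate["A"] - rate["B"])

# Hypothetical loan-approval predictions for ten applicants.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(demographic_parity_gap(preds, groups))  # 0.4: the model "fails" this one narrow test
```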

The portrayal of AI’s history is usually one of progress, where constellations of algorithms attain humanlike general intelligence and creativity. But that narrative might be more accurately inverted with a shrinking definition of intelligence that excludes many human capabilities. This narrows the horizon of intelligence to tasks that can be accomplished with pattern recognition, prediction from data, and the like. We fear this could set limits for human aspirations and for core ideals like knowledge, creativity, imagination, and democracy—making for a poorer, more constrained human future.

History Can Help Us Chart AI’s Future

Current technical approaches to preventing harm from artificial intelligence and machine learning largely focus on bias in training data and careless (even malicious) misuse. To be sure, these are crucial steps, but they are not sufficient solutions. Many risks from AI are not simply due to flawed executions of an otherwise sound strategy: AI’s penchant for enabling bias and misinformation is built into its “data-driven” modeling paradigm.

This paradigm forms the foundation of present-day machine learning. It relies on data-intensive pattern recognition techniques that generalize from past examples without direct reference to, or even knowledge about, what is being modeled. In other words, data-driven methods are designed to predict the probable output of processes that they can’t describe or explain. That deliberate omission of explanatory models leaves these methods particularly receptive to misdirection.

Today, this data-intensive, brute-force approach to machine learning has become largely synonymous with artificial intelligence and computational modeling as a whole. Yet history shows that the rise of data-driven machine learning was neither natural nor inevitable. Even machine learning itself was not always so data-centric. Today’s dominant paradigm of data-driven machine learning in key areas such as natural language processing represents what Alfred Spector, then Google’s vice president for research, lauded in 2010 as “almost a 180-degree turn in the established approaches to speech recognition.”

Data-driven methods are designed to predict the probable output of processes that they can’t describe or explain. That deliberate omission of explanatory models leaves these methods particularly receptive to misdirection.

Through its early decades, AI research in the United States fixated on replicating human cognitive faculties, based on an assumption that, as historian Stephanie Dick puts it, “computers and minds were the same kind of thing.” The devotion to this human analogy began to change in the 1970s with a highly unorthodox “statistical approach” to speech recognition at IBM. In a stark departure from the established “knowledge-based” approaches of the period, IBM researchers abandoned elaborate formal representations of linguistic knowledge and used statistical pattern recognition techniques to predict the most likely sequence of words, based on large quantities of sample data. Those very researchers described to me how this work owed much of its success to the unique computing resources available at IBM, where they had access to more computing power than anyone else. Even more importantly, they had access to more training data in a period when digitized text was vanishingly scarce by today’s standards. During a federal antitrust case against the company from 1969 to 1982, IBM had digitized over 100,000 pages of witness testimony using a warehouse facility full of keypunch operators who manually encoded the text onto Hollerith punched cards. This material was repurposed into a training corpus of unprecedented size for the period, at around 100 million words.
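The statistical idea can be sketched in a few lines (a toy of my own, vastly simpler than IBM's system): count which words follow which in a corpus, then predict the most probable continuation, with no representation of grammar or meaning anywhere.

```python
# A toy bigram language model in the spirit of the statistical approach
# described above (my own illustration, not IBM's system): it predicts the
# most likely next word purely from co-occurrence counts in sample text.
from collections import Counter, defaultdict

corpus = "the court will hear the witness and the court will recess".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word` in the training text."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))    # 'court' -- the most common continuation
print(predict_next("court"))  # 'will'
```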

What resulted was an abandonment of knowledge-based approaches aimed at simulating human decision processes in favor of data-driven approaches aimed solely at predicting their output. This signaled a fundamental reimagining of the relation between human and machine intelligence. Director of IBM’s Continuous Speech Recognition group Fred Jelinek described their approach in 1987 as “the natural way for the machine,” quipping that “if a machine has to fly, it does so as an airplane does—not by flapping its wings.”

The success of this approach directly triggered a shift to data-driven approaches across natural language processing as well as machine vision, bioinformatics, and other domains. In 2009, top Google researchers pointed to the earlier success of the statistical approach to speech recognition as proof that “invariably, simple models and a lot of data trump more elaborate models based on less data.”

Framing machine intelligence as something fundamentally distinct from, if not antithetical to, human understanding set a powerful precedent for replacing expert knowledge with data-driven approximation in computational modeling. Generative AI takes this logic a crucial step further, using data not only to model the world, but to actively remake it.

Large language models are both ignorant of and indifferent toward the substance of the statements they generate; they gauge only how likely it is for a sequence of text to appear. If the results pushed to our social media feeds are decided by algorithms that are intentionally designed only to predict patterns, not to understand them, can the flourishing of misinformation really come as a surprise?

A failure to recognize how such problems may be intrinsic to the very logic of data-driven machine learning inspires oft-misguided technical fixes, such as increased data collection and tracking, which can lead to harms such as predatory inclusion (in which outwardly democratizing schemes further exploit already marginalized groups). Such approaches are limited because they presume more machine learning to be the best recourse.

But the lens of history helps us break out of this circular thinking. The perpetual expansion of data-driven machine learning should not be seen as a foregone conclusion. Its rise to prominence was embedded in certain assumptions and priorities that became entrenched in its technical framework and normalized over time. Instead of defaulting to tactics that augment machine learning, we need to consider that in some circumstances the very logic of machine learning might be fundamentally unsuitable to our aims.

Ground Truths Are Human Constructions

Artificial intelligence algorithms are human-made, cultural constructs, something I saw first-hand as a scholar and technician embedded with AI teams for 30 months. Among the many concrete practices and materials these algorithms need in order to come into existence are sets of numerical values that enable machine learning. These referential repositories are often called “ground truths,” and when computer scientists construct or use these datasets to design new algorithms and attest to their efficiency, the process is called “ground-truthing.”

Understanding how ground-truthing works can reveal inherent limitations of algorithms—how they enable the spread of false information, pass biased judgments, or otherwise erode society’s agency—and this could also catalyze more thoughtful regulation. As long as ground-truthing remains clouded and abstract, society will struggle to prevent algorithms from causing harm and to optimize algorithms for the greater good.

Ground-truth datasets define AI algorithms’ fundamental goal of reliably predicting and generating a specific output—say, an image with requested specifications that resembles other input, such as web-crawled images. In other words, ground-truth datasets are deliberately constructed. As such, they, along with their resultant algorithms, are limited and arbitrary and bear the sociocultural fingerprints of the teams that made them.

Ground-truth datasets are deliberately constructed. As such, they, along with their resultant algorithms, are limited and arbitrary and bear the sociocultural fingerprints of the teams that made them. 

Ground-truth datasets fall into at least two subsets: input data (what the algorithm should process) and output targets (what the algorithm should produce). In supervised machine learning, computer scientists build new algorithms using one portion of the output targets annotated by human labelers, then evaluate the resulting algorithms on the remaining portion. In the unsupervised (or “self-supervised”) machine learning that underpins most generative AI, output targets are used only to evaluate new algorithms.
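A schematic sketch of that difference (my own simplification, not the author's code or data): in the supervised case the human-annotated output targets are used both to build and to evaluate the model, while in the self-supervised case they are consulted only at evaluation time.

```python
# Schematic only (my own simplification, not the author's code): how human-
# annotated output targets are used differently in supervised and
# self-supervised ground-truthing.
inputs  = [[1.0, 0.2], [0.9, 0.1], [0.1, 0.8], [0.2, 0.9]]  # data the algorithm processes
targets = ["cat", "cat", "dog", "dog"]                       # human-annotated output targets

def nearest_label(example, known_x, known_y):
    """A stand-in 'model': predict the label of the closest known example."""
    distances = [sum((a - b) ** 2 for a, b in zip(example, x)) for x in known_x]
    return known_y[distances.index(min(distances))]

# Supervised: the annotations are split; one part builds the model,
# the held-out part evaluates it (ground truth used in both roles).
train_x, train_y = inputs[:3], targets[:3]
test_x,  test_y  = inputs[3:], targets[3:]
predictions = [nearest_label(x, train_x, train_y) for x in test_x]
print("supervised accuracy:", sum(p == t for p, t in zip(predictions, test_y)) / len(test_y))

# Self-supervised: the model organizes the inputs without labels (here, a
# crude grouping by which feature dominates); the annotations appear only
# afterward, to evaluate the result (ground truth used once, to judge).
clusters = ["cat" if x[0] > x[1] else "dog" for x in inputs]  # unlabeled structure, named post hoc
print("self-supervised agreement:", sum(c == t for c, t in zip(clusters, targets)) / len(targets))
```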

Most production-grade generative AI systems are assemblages of algorithms built from both supervised and self-supervised machine learning. For example, an AI image generator depends on self-supervised diffusion algorithms (which create a new set of data based on a given set) and supervised noise reduction algorithms. In other words, generative AI is thoroughly dependent on ground truths and their socioculturally oriented nature, even if it is often presented—and rightly so—as a significant application of self-supervised learning.

Why does that matter? Much of AI punditry asserts that we live in a post-classification, post-socially constructed world in which computers have free access to “raw data,” which they refine into actionable truth. Yet data are never raw, and consequently actionable truth is never totally objective.

Algorithms do not create so much as retrieve what has already been supplied and defined—albeit repurposed and with varying levels of human intervention. This observation rebuts certain promises around AI and may sound like a disadvantage, but I believe that it could instead be an opportunity for social scientists to begin new collaborations with computer scientists. This could take the form of a professional social activity: people working together to describe the ground-truthing processes that underpin new algorithms, and so help make them more accountable and worthy.

The Question Isn’t Asset or Threat; It’s Oversight

As part of a research group studying generative AI with France’s Académie Nationale de Médecine, I was surprised by some clinicians’ technological determinism—their immediate assumption that this technology would, on its own, act against humans’ wishes. The anxiety is not limited to physicians. In spring 2023, thousands of individuals, including tech luminaries such as Elon Musk and Steve Wozniak, signed a call to “pause giant AI experiments” to deal with “profound risks to society.”

But the question is more complex than restraint versus unfettered technological development. It is about different ways to articulate ethical values and, above all, different visions of what society should be.

A double interview in the French newspaper Le Monde illustrates the distinction. The interviewees, Yoshua Bengio and Yann LeCun, are friends and collaborators who both received the 2018 Turing Award for their contributions to computer science. But they have radically different views on the future of generative AI.

The solution is oversight of the corporations building AI.

Bengio, who works at a nonprofit AI think tank in Montreal, believes ChatGPT is revolutionary. That’s why he sees it as dangerous. ChatGPT and other generative AI systems work in ways that cannot be fully understood and often produce results that are simultaneously wrong and credible, which threatens news and information sources and democracy at large. His argument mirrors philosopher Hans Jonas’s precautionary principle: since humanity is better at producing new technological tools than foreseeing their future consequences, extreme caution about what AI can do to humanity is warranted. The solution is to establish ethical guidelines for generative AI, a task that the European Group on Ethics, the Organisation for Economic Co-operation and Development, UNESCO, and other global entities have already embraced.

LeCun, who works for Meta, does not consider ChatGPT revolutionary. It depends on neural networks trained on very large databases—all technologies that are several years old. Yes, it can produce fake news, but dissemination—not production—is the real risk. Techniques can be developed to flag AI-generated outputs and reveal what text and images have been manipulated, creating something akin to antispam software today. For LeCun, the way to quash the dangers of generative AI will rely on AI. It is not the problem but the solution—a tool humanity can use to make better decisions. But who defines what is a “better decision”? Which set of values will prevail? Here I see in LeCun’s arguments parallels to the economist and innovation scholar Joseph Schumpeter, who argued that within a democracy, the tools humans use to institutionalize values are the law and government. In other words, regulation of AI is essential.

These radically disparate views land on solutions that are similar in at least one aspect: whether generative AI is seen as a technological revolution or not, it is always embedded within a wider set of values. When seen as a danger for humanity, ethics are mobilized. When social values are threatened, the law is brought in. Either way, the solution is oversight of the corporations building AI.

This opens a door for the public to weigh in on future developments of generative AI. A first step is to identify the interests and stakeholders clustering around each position and draw them into conversations about how to better inform the development and regulation of AI. As with every other technological advance, humans can still decide things in their own way.

Protect Information Systems to Preserve Attention

Already, content generated by artificial intelligence populates the advertisements, news, and entertainment people see every day. According to OpenAI’s cofounder Greg Brockman, the technology could fundamentally transform mass culture, making it possible, for example, to customize TV shows for individual viewers: “Imagine if you could ask your AI to make a new ending … maybe even put yourself in there as a main character.”

Brockman meant this as a sort of paradise of customization, but it’s not hard to see how such tools could also spew misinformation and other content that would disrupt civic life and undermine democracy. Bad content would drive out good, enacting “Gresham’s Law”—the principle that “bad money drives out good”—on steroids. Even top AI executives are begging for regulation, albeit at the level of individual products and their potential dangers. I think a more productive way to frame regulation is as a means of protecting the shared information environment.

For democracy to function, people need to pay attention to matters of public import. In an information environment swamped with automatically generated content, attention becomes the scarce resource.

In decades past, the rationale for regulating the information space pivoted on the limited availability of broadcast channels, or “channel scarcity.” Public attention can also be considered a finite resource, rationed by what information theorist Tiziana Terranova describes as “the limits inherent to the neurophysiology of perception and the social limitations to time available for consumption.” For democracy to function, people need to pay attention to matters of public import. In an information environment swamped with automatically generated content, attention becomes the scarce resource.

A world in which attention is monopolized by an endless flow of personalized entertainment might be a consumer’s paradise—but it would be a citizen’s nightmare. The tech sector has already proposed a model for dispensing with public attention, one that is far from democratic. In 2016, a team at Google envisioned a “Selfish Ledger”—a data profile that would infer individuals’ goals and then prompt aligned behavior, such as buying healthier food or locally grown produce, and seek more data to tweak the customized model. Similarly, physicist César Hidalgo suggested providing every citizen with a software agent that could infer political preferences and act on their behalf. In such a world, the algorithm would pay attention for us: no need for people to learn about the issues or even directly express their opinions.

Such proposals show how important it is for citizens to actively regulate the information commons. Preserving scarce attention is essential to recapturing an increasingly elusive sense of shared, overlapping, and common interests. The world is moving toward a state where the data we generate can be used to further capture and channel our attention according to priorities that are neither our own, nor those of civic life. Software, and whoever it serves, cannot be allowed to substitute for citizenship, and the economic might of tech giants must be balanced by citizens’ ability to access the information they need to exercise their political power.

Needed: Ways for Citizens to Sound the Alarm About AI’s Societal Impacts

As part of my job, I give talks about how artificial intelligence affects human rights: to criminology experts, schoolteachers, retirees, union members, First Peoples, and more. Across these diverse groups, I hear common themes. One is that although AI programs could impact how they do their jobs and live their lives, people feel their experience and expertise are completely left out before programs are deployed. Some worry, legitimately, about facing legal action if they protest.

Plans and policies to regulate AI systems in Europe, Canada, and the United States are not likely to improve the situation. Europe plans to assign regulatory requirements based on application. For example, the high-risk category includes technology used in hiring decisions, police checks, banking, and education. Canadian legislation, still under review by the House of Commons, is based on the same risk assessment. The US president has outlined demands for rigorous safety testing, with results reported to the government. The problem is that these plans focus on laying out guardrails for anticipated threats without establishing an early warning system for citizens’ actual experiences or concerns.

The problem is that these plans focus on laying out guardrails for anticipated threats without establishing an early warning system for citizens’ actual experiences or concerns.

Regulatory schemes based on a rigid set of anticipated outcomes might be a good first step, but they are not enough. For one thing, some harms are only now emerging. And they could become most entrenched for marginalized, underserved groups because generative AI is trained on biased datasets that then generate new datasets that perpetuate the vicious cycle. A 2021 paper shows how prediction tools in education systems incorporate not just statistical biases (by gender, race, ethnicity, or language) but also understudied sociological ones such as urbanity. For instance, rural learners in Brazil are likely to differ from their urban counterparts with regard to fluency in the official state language and their access to relevant educational materials, up-to-date facilities, and teaching staff. But because there aren’t enough data on specific groups’ learning and schooling issues, their needs would be aggregated into a larger dataset and made invisible. Given the lack of knowledge, it would be difficult to even predict any kind of bias.

What’s needed are mechanisms that support citizens’ direct engagement with AI deployments to document, from the ground, potentially high-risk impacts on collective equity. There are democratic formats already in place to support citizens’ perspectives. In Canada, for example, the mandate of the general solicitor or privacy commissioner could be strengthened to review AI deployments in the public sector (audits of datasets, mandatory impact assessments, etc.). These mechanisms would provide transparent and accountable standards to keep citizens adequately informed about AI deployments, help balance the civic power dynamic, and strengthen social justice.

Citizens’ direct engagement could also be supported through access to courts. There are few (if any) direct legal recourses available for ordinary people to challenge algorithmic harms in current AI regulatory schemes. Access to courts—and implicitly to justice—could send a clear message about citizens’ power to corporations, governments, and, most importantly, to citizens themselves. In combination with other mechanisms to increase citizen oversight, legal suits would not only offer access to rightful reparations but also provide societal recognition of citizens’ rights.

Sometimes at my talks people tell me they feel illegitimate asking questions about AI’s impacts, given their lack of expertise. What I tell them is that they don’t need to be mechanics to know how bad it would be to be hit by a car. Harms from AI are bound to be more subtle, but the point stands. Citizens are the ones primarily affected, so they must have an active role within AI governance. Emerging regulatory systems should highlight the role of citizens as social actors who contribute—as they should—to the collective good.

AI Aids the Pretense of Military “Precision”

Artificial intelligence is the latest promise of a technological solution to the intractable “fog of war.” In Ukraine and Gaza, enthusiasts have proclaimed the advent of AI-driven warfighting. In October 2023, Ukrainian technologists confirmed that AI-enabled drones identify and target 64 types of Russian “military objects” without a human operator; meanwhile, the Israel Defense Forces website states that an AI system generates recommended targets, reportedly at an unprecedented rate. Enormous questions arise regarding the validity of the assumptions built into these systems about who constitutes an imminent threat and about the legitimacy of their targeting functions under the Geneva Conventions and the laws of war.

Considering military investments in AI as part of a sociotechnical imaginary is helpful here. Developed within the field of science and technology studies, the concept of sociotechnical imaginaries describes collectively imagined forms of social order as materialized by scientific and technological projects. These include aspirational futures that sustain investments in the military-industrial-academic complex. Iconic examples of AI-enabled warfighting in the present moment include battle management interfaces like Palantir’s AI platform.

We should be deeply skeptical of the promotion of AI as a solution to the fog of war, which imagines that the right technology will find the important signals amid the noise.

To function in the real world, these platforms require very large, up-to-date datasets (of labeled “military objects” or biometric profiles of “persons of interest,” for example), from which models can be developed. In the case of threat prediction and targeting, neither the US Department of Defense nor allied militaries make public the details necessary to assess validity. But in the case of predictive policing, an investigation by The Markup found that fewer than 1% of data-based predictions actually lined up with reported crimes. And generative AI introduces new uncertainties: both the provenance of the data and reliability of information are hard to check. That is particularly dangerous for “actionable military intelligence,” which is used for targeting and to designate imminent threats.

We should be deeply skeptical of the promotion of AI as a solution to the fog of war, which imagines that the right technology will find the important signals amid the noise. This faith in technology constitutes a kind of willful ignorance, as if AI is a talisman that sustains the wider magical thinking of militarism as a path to security. In the words of performance artist Laurie Anderson (quoting her meditation teacher), “If you think technology will solve your problems, then you don’t understand technology—and you don’t understand your problems.”

Critical inquiry into the realities of war can help challenge the logics through which militarism perpetuates its imaginary of rational and controllable state violence while obscuring war’s ungovernable chaos and unjustifiable injuries. Although there are valid reasons that military forces exist in today’s world, we should question the narratives that underwrite the billions of dollars funneled into algorithmically based warfighting. We need to redirect resources to creative projects in de-escalation, negotiated settlements that offer true security for all, and eventual demilitarization. While the techno-solutionist imaginaries of militarism are longstanding, so are their limits as a basis for sustainable peace.

AI Lacks Ethics Checks for Human Experimentation

Following Nazi medical experiments in World War II and outrage over the US Public Health Service’s four-decade-long Tuskegee syphilis study, bioethicists laid out frameworks, such as the 1947 Nuremberg Code and the 1979 Belmont Report, to regulate medical experimentation on human subjects. Today social media—and, increasingly, generative artificial intelligence—are constantly experimenting on human subjects, but without institutional checks to prevent harm.

In fact, over the last two decades, individuals have become so used to being part of large-scale testing that society has essentially been configured to produce human laboratories for AI. Examples include experiments with biometric and payment systems in refugee camps (designed to investigate use cases for blockchain applications), urban living labs where families are offered rent-free housing in exchange for serving as human subjects in a permanent marketing and branding experiment, and a mobile money research and development program where mobile providers offer their African consumers to firms looking to test new biometric and fintech applications. Originally put forward as a simpler way to test applications, the convention of software as “continual beta” rather than more discrete releases has enabled business models that depend on the creation of laboratory populations whose use of the software is observed in real time.

Generative AI is an extreme case of unregulated experimentation-as-innovation, with no formal mechanism for considering potential harms.

This experimentation on human populations has become normalized, and forms of AI experimentation are touted as a route to economic development. The Digital Europe Programme launched AI testing and experimentation facilities in 2023 to support what the program calls “regulatory sandboxes,” where populations will interact with AI deployments in order to produce information for regulators on harms and benefits. The goal is to allow some forms of real-world testing for smaller tech companies “without undue pressure from industry giants.” It is unclear, however, what can pressure the giants and what constitutes a meaningful sandbox for generative AI: given that the technology is already being incorporated into the base layers of applications we would be hard-pressed to avoid, the boundaries between the sandbox and the world are blurry at best.

Generative AI is an extreme case of unregulated experimentation-as-innovation, with no formal mechanism for considering potential harms. These experiments are already producing unforeseen ruptures in professional practice and knowledge: students are using ChatGPT to cheat on exams, and lawyers are filing AI-drafted briefs with fabricated case citations. Generative AI also undermines the public’s grip on the notion of “ground truth” by hallucinating false information in subtle and unpredictable ways.

Much of current regulation places the responsibility for AI safety on individuals, whereas in reality they are the subjects of an experiment being conducted across society.

These two breakdowns constitute an abrupt removal of what philosopher Regina Rini has termed “the epistemic backstop”—that is, the benchmark for considering something real. Generative AI subverts information-seeking practices that professional domains such as law, policy, and medicine rely on; it also corrupts the ability to draw on common truth in public debates. Ironically, that disruption is being classed as success by the developers of such systems, emphasizing that this is not an experiment we are conducting but one that is being conducted upon us.

This is problematic from a governance point of view because much of current regulation places the responsibility for AI safety on individuals, whereas in reality they are the subjects of an experiment being conducted across society. The challenge this creates for researchers is to identify the kinds of rupture generative AI can cause and at what scales, and then translate the problem into a regulatory one. Then authorities can formalize and impose accountability, rather than creating diffuse and ill-defined forms of responsibility for individuals. Getting this right will guide how the technology develops and set the risks AI will pose in the medium and longer term.

Much like what happened with biomedical experimentation in the twentieth century, the work of defining boundaries for AI experimentation goes beyond “AI safety” to AI legitimacy, and this is the next frontier of conceptual social scientific work. Sectors, disciplines, and regulatory authorities must work to update the definition of experimentation so that it includes digitally enabled and data-driven forms of testing. It can no longer be assumed that experimentation is a bounded activity with impacts only on a single, visible group of people. Experimentation at scale is frequently invisible to its subjects, but this does not render it any less problematic or absolve regulators from creating ways of scrutinizing and controlling it.

Generative AI Is a Crisis for Copyright Law

Generative artificial intelligence is driving copyright into a crisis. More than a dozen copyright cases about AI were filed in the United States last year, up severalfold from all filings from 2020 to 2022. In early 2023, the US Copyright Office launched the most comprehensive review of the entire copyright system in 50 years, with a focus on generative AI. Simply put, the widespread use of AI is poised to force a substantial reworking of how, where, and to whom copyright should apply.

Starting with the 1710 British statute, “An Act for the Encouragement of Learning,” Anglo-American copyright law has provided a framework around creative production and ownership. Copyright is even embedded in the US Constitution as a tool “to promote the Progress of Science and useful Arts.” Now generative AI is destabilizing the foundational concepts of copyright law as it was originally conceived.

Typical copyright lawsuits focus on a single work and a single unauthorized copy, or “output,” to determine if infringement has occurred. When it comes to the capture of online data to train AI systems, the sheer scale and scope of these datasets overwhelms traditional analysis. The LAION-5B dataset, used to train the AI image generator Stable Diffusion, contains over 5 billion images and text captions harvested from the internet, while CommonPool (a collection of datasets released by the nonprofit LAION in April to democratize machine learning) offers 12.8 billion images and captions. Generative AI systems have used datasets like these to produce billions of outputs.

US courts are likely to find that training AI systems on copyrighted works is acceptable under the fair use exemption, which allows for limited use of copyrighted works without permission in some cases.

For many artists and designers, this feels like an existential threat. Their work is being used to train AI systems, which can then create images and texts that replicate their artistic style. But to date, no court has considered AI training to be copyright infringement: following the Google Books case in 2015, which assessed scanning books to create a searchable index, US courts are likely to find that training AI systems on copyrighted works is acceptable under the fair use exemption, which allows for limited use of copyrighted works without permission in some cases when the use serves the public interest. It is also permitted in the European Union under the text and data mining exception of EU digital copyright law.

Copyright law has also struggled with authorship by AI systems. Anglo-American law presumes that a work has an “author” somewhere. To encourage human creativity, authors are given the economic incentive of a time-limited monopoly on making, selling, and showing their work. But algorithms don’t need incentives, so according to the US Copyright Office they aren’t entitled to copyright. The same reasoning has applied in other cases involving nonhuman authors, including one in which a macaque took selfies using a nature photographer’s camera. Generative AI is the latest in a line of nonhumans deemed unfit to hold copyright.

Nor are human prompters likely to have copyrights in AI-generated work. The algorithms and neural net architectures behind generative AI produce outputs that are inherently unpredictable, and any human prompter has less control over a creation than the model does.

Where does this leave us? For the moment, in limbo. The billions of works produced by generative AI are unowned and can be used anywhere, by anyone, for any purpose. Whether a ChatGPT novella or a Stable Diffusion artwork, output now exists as unclaimable content in the commercial workings of copyright itself. This is a radical moment in creative production: a stream of works without any legally recognizable author.

This is a radical moment in creative production: a stream of works without any legally recognizable author.

There is an equivalent crisis in proving copyright infringement. Historically, identifying who made an unauthorized copy has been straightforward, but when a generative AI system produces infringing content, be it an image of Mickey Mouse or Pikachu, courts will struggle with the question of who is initiating the copying. The AI researchers who gathered the training dataset? The company that trained the model? The user who prompted the model? It’s unclear where agency and accountability lie, so how can courts order an appropriate remedy?

Copyright law was developed by eighteenth-century capitalists to intertwine art with commerce. In the twenty-first century, it is being used by technology companies to allow them to exploit all the works of human creativity that are digitized and online. But the destabilization around generative AI is also an opportunity for a more radical reassessment of the social, legal, and cultural frameworks underpinning creative production.

What expectations of consent, credit, or compensation should human creators have going forward, when their online work is routinely incorporated into training sets? What happens when humans make works using generative AI that cannot have copyright protection? And how does our understanding of the value of human creativity change when it is increasingly mediated by technology, be it the pen, paintbrush, Photoshop, or DALL-E?

It may be time to develop concepts of intellectual property with a stronger focus on equity and creativity as opposed to economic incentives for media corporations. We are seeing early prototypes emerge from the recent collective bargaining agreements for writers, actors, and directors, many of whom lack copyrights but are nonetheless at the creative core of filmmaking. The lessons we learn from them could set a powerful precedent for how to pluralize intellectual property. Making a better world will require a deeper philosophical engagement with what it is to create, who has a say in how creations can be used, and who should profit.

How Generative AI Endangers Cultural Narratives

Sometime last summer, I needed to install a new dryer in my home in Bergen, Norway. I opened a localized version of Google and typed a request for instructions in Norwegian. Everything the search engine returned was irrelevant—most results assumed my dryer relied on gas, which is not a thing in Norway. Even refining responses for electric dryers assumed configurations that do not exist in my country. I realized that these useless results must be machine-translated from elsewhere. They appeared Norwegian, but they couldn’t help me get a dryer running in Norway. In this case, the solution was trivial: a trip to a neighborhood hardware store got me wired in.

But my experience underscores an underappreciated risk that comes with the spread of generative artificial intelligence: the loss of diverse cultural narratives, content, and heritage. Failing to take the cultural aspects of generative AI seriously is likely to result in the streamlining of human expression into the patterns of the largely American content that these systems are trained on.

Failing to take the cultural aspects of generative AI seriously is likely to result in the streamlining of human expression into the patterns of the largely American content that these systems are trained on.

As generative AI is integrated into everyday tools such as word processors and search engines, it’s time to think about what kinds of stories it can generate—and what stories it will not generate. It’s no secret that AI is biased. Researchers recently asked the image generator Midjourney to create images of Black physicians treating impoverished white children, but the system would only return images depicting the children as Black. Even after several iterations, Midjourney failed to produce the specified results. The closest it got to the prompt was a shirtless medicine man with feathers, leather bands, and beads, gazing at a similarly garbed blond child.

Here’s something that hits close to home: the potential loss of Cardamom Town. Thorbjørn Egner’s Folk og røvere i Kardemomme by (When the Robbers Came to Cardamom Town) is a children’s book and musical well known to anyone who grew up in Norway or Denmark after 1955. The songs and stories have been played, read, and sung in homes and preschools for decades; there’s even a theme park inspired by the book in the city of Kristiansand. The story features three comical thieves who steal food because they are hungry and don’t understand that work is necessary. After being caught stealing sausages and chocolate, they are rehabilitated by the kind police officer and townsfolk, then end up saving the town from a fire.

This story is more than a shared cultural reference—it supports the Norwegian criminal justice system’s priority of rehabilitation over punishment. It is distinct from Disney movies, with their unambiguous villains who are punished at the end, and from Hollywood bank heists and gangster movies that glorify criminals. Generative AI might well bury stories like Cardamom Town by stuffing chatbot responses and search results worldwide with homogenized American narratives.

Narrative archetypes give us templates to live by. Depending on the stories we hear, share, and create, we shape possibilities for action and for understanding. We learn that criminals can be rehabilitated, or that they deserve to come to a bad end. The humanities and social sciences have studied and critiqued AI for a long time, but almost all development of AI has happened within quantitative disciplines: computer science, data science, statistics, and mathematics. The current wave of AI is based on language, narratives, and culture; unchecked, this wave threatens to impoverish the world’s cultural narratives. We have reached a point where AI development needs the humanities. Not just so I can figure out how to install my appliances, but so we don’t lose the stories that shape our communities.

An AI Society

Artificial intelligence is reshaping society, but human forces shape AI. In a collection of eleven essays, social scientists and humanities experts explore how to harness the interaction, revealing urgent avenues for research and policy.

Turning a Policy Idea into a Pilot Project

By day, Erica Fuchs is a professor of engineering at Carnegie Mellon University. However, for the past year she’s also been running a pilot project—the National Network for Critical Technology Assessment—to give the federal government the ability to anticipate problems in supply chains and respond to them. 

The trip from the germ of a policy idea to pilot project in the National Science Foundation’s new Technology, Innovation and Partnerships directorate has been a wild ride. And it all started when Erica developed her thoughts on the need for a national technology strategy into a 2021 Issues essay. Two years later, the network she called for, coordinating dozens of academic, industry, and government contributors to uniquely understand how different supply chains work, was a real, NSF-funded pilot project. In this episode of The Ongoing Transformation, Erica talks with Lisa Margonelli about how she took her idea from a white paper to the White House, and the bipartisan political support that was necessary to bring it to fruition.

Transcript

Lisa Margonelli: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academies of Sciences, Engineering and Medicine, and by Arizona State University. I’m Lisa Margonelli, editor-in-chief at Issues.

By day, Erica Fuchs is a professor of engineering at Carnegie Mellon University. For the past year, though, she’s also been running a demonstration project to give the federal government the ability to anticipate problems in supply chains so that they can respond appropriately. The road from policy idea to demonstration project in the National Science Foundation’s new Technology, Innovation and Partnerships directorate, which is known as the TIP Directorate, has been a wild ride. We thought it would be fun to ask Erica to talk about how she accomplished this, especially since in 2021 she wrote a piece for Issues that laid out the very basic idea for the project. Ultimately, the demonstration project coordinated dozens of academic, industry, and government contributors to uniquely understand what could go wrong in different supply chains, giving policymakers new tools to respond. In this episode, I’ll ask her about the process of turning an academic inspiration into a government capacity.

Hi, Erica. Welcome.

Erica Fuchs: Thank you. Hi, Lisa. Great to be here.

Margonelli: We first met back in the summer of 2021, and since that time you have set up something called the National Network for Critical Technology Assessment, which is the project under or at the TIP Directorate, which is the new directorate at the National Science Foundation for Technology, Innovation and Partnerships. And what you’re doing is you’re exploring building a whole new sort of government capacity for understanding how technology works in the global situation and how to maximize the value of the money that taxpayers spend on technology and scientific investment in building supply chains, building jobs in the United States, all of these things. And I really want to talk to you about how you stood this up, how this happened.

Fuchs: Well, I would say that the desire to have analytics and better data and analytics to inform national technology strategy, as I had written in the Issues in Science and Technology article that we wrote together, really came from a place of frustration.

Margonelli: Why were you frustrated?

Fuchs: Well, I had come to a point in time where I believed that the government was flying blind on a number of issues. And if I were going to give two examples, I would give: (1) the example of masks and respirators during the pandemic and (2) the example of the semiconductor shortage.

Margonelli: Okay, so you were frustrated because the government seemed to be flying blind in trying to make decisions basically during the pandemic about how to deal with the semiconductor shortage and how to deal with the shortage of masks. What do you mean flying blind? Where were they flying to?

Fuchs: I dunno. (laughs) So first, I think during the pandemic, I had a fantastic student, Nikhil Kalathil, who realized that he could use publicly available data and large language models, scraping that publicly available data. And that data was specifically small and medium-sized enterprises posting on a B2B site their capability to produce masks.

Margonelli: Okay, so this is small manufacturers in the US who were posting on a site about how they could produce masks and your student went in and scraped it.

Fuchs: Yup, and what he found is that while the government, doing the classic textbook thing of bringing together the big five companies that produced masks, thought that it had half the capacity it needed, if you looked at the small and medium-sized enterprises pivoting into this across the country, we had almost twice the capacity that the government at that time thought. And so the government needed that information to make good policy and also to know what those small and medium-sized firms’ challenges were.

And funnily enough, if I take that then and go to the semiconductor example because it’s so importantly different, the government came back to us, and this is such an important piece of this problem, they came back and they said, could you do what you did in masks and respirators in semiconductors? We’ve got this shortage. And the government was worried about the semiconductor industry being not transparent about their actual capacity.

Meanwhile, the semiconductor industry was upset with the automotive industry for essentially doing just-in-time manufacturing. And the automotive industry was upset with government and the semiconductor industry. Everybody was upset with each other. And what we came back and said to the government, we brought together an integrated interdisciplinary team that had social scientists and technologists. And I want to double underline “and technologists” because it really was the engineers who helped us reframe the problem. And what we said to government is you’re asking the wrong question. The problem was that designs had been designed to single lines and single fabs. And so the supply chain was so rigid that even if they had extra capacity, they couldn’t move that design to another line and produce it because the lines were tailored to a single design. So their problem, and that’s what the engineers helped us understand, was the rigidity of the supply chain leading to supplier monopolies and an absolute inability to flex. And what they needed to do is think about design commonalities.

Margonelli: Let’s stop here because this is a really interesting thing. So the government asked and you answered two questions. First you answered on masks, you were like, okay, actually you have more capacity to make masks. And the thing that all of us noticed during the pandemic was that there were many different kinds of masks that you could use. Whereas in semiconductors, what we realized during the pandemic is that each chip is headed for one thing. It’s headed for one single little thing on your GM car and a different chip is headed for the same thing on your Ford car. And that means that there’s no wiggle room. All the wiggle room in the masks came from the interchangeability. And then you had a completely different set of questions in semiconductors. So let’s go back to flying blind. So you’re frustrated. The government doesn’t know how to ask the questions, how to formulate the questions, and that affects the ability to make decisions and to move resources around.

Fuchs: Can I add one? Ask the questions, formulate the questions, and get the right data?

Margonelli: Yeah. So, sometime in the middle of 2021, I got an email from you and that email was that you had an idea and where was your idea at that point?

Fuchs: So I had been starting to testify in discrete contexts. First, before the Ways and Means Subcommittee on Trade around the masks work, and then before the House Research and Innovation Subcommittee on Technology. And I had just been asked to write an article, a short article, in response to someone else’s views on industrial policy in the United States. And I felt so, again, frustrated that they weren’t even asking the right questions. And in particular, I was pulling on these two examples I had just given you about the importance of the right questions, the right data, technological depth, but also how different government’s questions are than a firm’s. A firm is maximizing profit and the government has multiple objectives. And so how do we really think about national technology strategy when we have multiple missions. We have security. We have the economy. We have societal wellbeing. And there’s going to be both win-wins and trade-offs and I felt like I couldn’t, in response to this one article, say what needed to be said. I needed a fresh sheet of paper and I needed to get this down.

Margonelli: The piece you eventually wrote is, “What a National Technology Strategy Is—and Why the United States Needs One.” And in that you started to argue that the United States needed to build some kind of government-private-public partnership thing that was able to look into the future and ask these complicated questions.

Fuchs: Yes, and for a matter of fact, the very concept that we didn’t have to wait until we had a semiconductor shortage or until we had an infant formula shortage. That some of these, we could see coming. And not only could we see them coming, we could analyze scenarios and the vulnerabilities associated with those scenarios and potentially, solutions to reduce those vulnerabilities. And that there even was an opportunity to quantify the value to different missions, the value to security, the value to the economy, the value to population health and societal wellbeing. And that then legislators could themselves decide what their values were and the trade-offs, but that we could put those trade-offs and the win-win opportunities in front of them so that you could get past some of these bottlenecks with data.

Margonelli: So this was a good idea. And you published the piece in Issues in early September (2021).

Fuchs: Am I allowed to joke how good an idea it is remains to be seen? (laughs) It was a bold idea.

Margonelli: It was a bold idea. (laughs) And what struck me actually was how much work you were doing on it. Because one morning you called me to make some last minute corrections and you were like, “I have to call you on my watch.” And I thought, okay, well, she’s calling on her watch. And then you said, “Because I accidentally packed my phone into my toddler’s backpack and sent him off to preschool.” And you were getting all these phone calls at the time from the White House. So as you were doing these policy memos, the sort of air around you was heating up, things were starting to boil, you were getting tons of phone calls, you were up all night, you were doing these things, you were keeping the policy memos going forward, and all sorts of testimony and conversations with people. How did you build support for this idea?

Fuchs: First, I would like to say that this past year, but also the period leading up to this year, helped me realize that as a professor, you have maybe your research group, but also the importance of coalitions, of a number of people who see a common vision being involved. So I think honestly of my VP of Government Relations at Carnegie Mellon, while I know officially they’re registered as a lobbyist. I remember at one point in time when I got to my third testimony, I said to him, “Oh my God, third time, what am I going to say?” And he said, “We didn’t get this far for me to tell you what to say.” So just people who saw this as meaningful and wanted to come in behind it. And the fact that as we built these demonstrations, the mask demonstration, the semiconductor demonstration, people in government across the White House, across agencies were starting to say, “We need this.”

And for a matter of fact, sometimes the answer was, we just don’t have the capacity inside. We just don’t have enough resources, enough people. But also then, the belief of the House Science staff on both sides, the majority and the minority, that we need this. Here’s an example. Our semiconductor policy brief on what needed to happen went from the National Economic Council and the Council of Economic Advisers to OSTP, the Office of Science and Technology Policy, also at the White House. OSTP then passed it along, including to the National Security Council, which is also at the White House. They then passed the policy brief, I believe, to the Department of Defense Microelectronics Cross-Functional Team and to DARPA, the Defense Advanced Research Projects Agency. I felt like it was like a hot cake. It was going around. People were finding it helpful, that reframing of it, and of course the Department of Commerce.

Margonelli: To give a little context, part of the reason that they were excited about what you were doing and what you were suggesting about standing up this capacity was that one of the alternatives was to set, say, 10 target technologies that the US was going to work on and have a list. And what you were talking about was something that was much more expandable and responsive. This plan had this adaptability, and so it was attractive to lots of people across the government who were worried about multiple different problems having to do with their own agencies’ missions as well as cross-cutting missions.

Fuchs: I think in the beginning on the policy brief around semiconductors, how that went like hotcakes through the various agencies, kind of being passed along, I think was a lot just about this is useful. It was literally just, this is something different than anyone else has told us. Then there was that separate comment of, “I wish I had this capability already now. We don’t have this more broadly.” Some of the things we were saying we wanted to do or try to do, and what you’re raising about the set 10 target technologies and adaptability, I would actually see that as potentially even complementary, right? There’s a difference between saying, this may be important for our country, and here’s an easy list of experts thinking this may be important, and what to do. Where are the bottlenecks? Where are the opportunities for investment? Do we actually need investment in R&D, or do we need to get regulation out of the way to have this have impact, or get regulation in the way? So the action orientation and the quantification of impacts for different missions, that was really new and helpful.

Margonelli: So it was going around within the government, you have testimony that you’re giving, you’re publishing things.

Fuchs: I had testified twice before the House, and the House staffers on the Science, Space, and Technology Committee had gotten this into legislation in the CHIPS and Science Act on the House side. And it then goes into what’s called conference between the House and the Senate, and they decide on what goes forward. We even made it into conference in that version, and that required support from the White House as well as from staffers saying, “We want this to go into legislation.” But then, in conference between the House and the Senate, we actually submitted, with a series of luminaries, a letter to Congress signed by university presidents and Norm Augustine and John Hennessy, people like that, saying this would be a good thing. So we got some coalition building going, and then it unfortunately did not make it through conference.

Margonelli: So that must’ve been incredibly disappointing because here you are, you’re rocketing through that fall and things are going on and the sort of tension is going up and up and up, and then it turns out, okay, you’re not in the CHIPS and Science bill. So what happens next? How do you regroup?

Fuchs: Well, it happened all so fast. So the CHIPS and Science legislation has this incredible, really unprecedented mandate. One is for the science advisor to have a national technology strategy. And of course I had written with Issues in Science and Technology about what a national technology strategy is and why the US needs one. And then the second is the mandate for NSF TIP, with an inter-agency working group, to, one, identify five societal, geostrategic, national challenges; two, identify 10 key emerging technologies; and three, identify how investments in technology could potentially be used to address those national, societal, inequality, and geostrategic challenges. And when I looked at that, no one really knows how to do that. And that was so close to what we were talking about. And I had throughout, interestingly enough, been talking to both the science and the commerce side of the agencies. And so I’m not even sure if I could backtrack how it happened, but Erwin Gianchandani said to me, “Why don’t you submit to this BAA — broad agency announcement?” So NSF has these open-ended announcements of how you would do this.

Margonelli: So you submitted.

Fuchs: Yes! And he had some things that he thought were important. This shouldn’t be about a center for a single university. How could we bring the best minds together in the country to think about what the country could do in this area?

Margonelli: So that’s decentralized by definition. So that’s a really interesting idea right there to begin with.

Fuchs: There was no time to be depressed. No time to breathe. (laugh) It was like, hey, the phone rings and you’re like, why don’t you submit your ideas over here? And it shouldn’t be like a center. It should be how do you bring together the best minds in the nation to demonstrate what we can do today? What are our gaps and what a vision would be for how the country should do this? And then he mentioned, oh, by the way, you have four weeks to submit.

Margonelli: And how many pages is this thing?

Fuchs: Well, the original proposal was 10 pages, 22 PIs spanning 13 universities across the country. I would laugh that we had about a week to search, two weeks to write and a week to submit.

Margonelli: Wow! That’s fast.

Fuchs: And the money showed up four weeks later.

Margonelli: The money showed up four weeks later for doing this. That’s amazing. That hardly ever happens.

Fuchs: No, never happens.

Margonelli: You also wrote a big piece for Brookings. So Brookings is a think tank. And somewhere in this process, you stepped out and you developed the idea even more. Because the idea that you first published in Issues in Science and Technology was really the start of a whole bunch of ideas.

Fuchs: So, interestingly enough, Brookings had approached me about building on the Issues in Science and Technology piece directly. Like, okay, well how would you do this? And as an academic, I have to tell you, I almost didn’t want to write the paper. I was like, well, I don’t know.

The questions were hard and uncomfortable and not the ones I would have naturally asked from an academic institution, beyond putting the theoretical concept out there. And it was someone in the White House at the time who said, hey, write this up like this. And it was so helpful. And then likewise, Wendy and Estee at the Hamilton Project were sort of pushing me into that uncomfortable zone of, well, exactly how would this happen? And what was amazing is that I was working on that, sort of dragging my feet trying to do that, when the TIP opportunity came. And so by the time the paper came out, we had just won TIP. And so thank goodness they pushed me.

Margonelli: Okay. Alright. So you got the finances a month after you wrote the BAA application, and then did you get extra funding or is it all just TIP funding?

Fuchs: So we had reason to believe that TIP had about $3 to $4 million to do this. And I think my eyes are easily bigger than my stomach, and I really had certain people I wanted at the table. And those were academics who I knew could demonstrate, in specific areas, capabilities that I thought really mattered for this capability for the nation. But what we didn’t have from NSF TIP was government and industry: they were all academics, and you have to have government and industry at the table in making these types of assessments. And so I had been in dialogue with the Sloan Foundation and Danny Goroff at the Sloan Foundation, and then afterwards Sloan came through to convene academia with industry and government so that we could transparently have an open dialogue about the analytics needed, and a multilateral conversation and influencing throughout the process.

Margonelli: And just even having those people in the room allows you to get to different levels of information and also different levels of dissemination of what you’re doing. It’s not just pulling the information in, it’s also pushing the analysis out.

Fuchs: Absolutely. And getting early stakeholder feedback was so important because, for example, as we started doing the analytics in semiconductors, we had early results and we had no idea what the stakeholders were going to think. And we actually thought companies like Intel would be opposed. And when they were like, “we are on board, we think this is what’s needed,” we were shocked. And so knowing that we had stakeholder alignment for what our analyses were suggesting was incredibly important. When you think about transition and change in DC, thanks to the Sloan funding we could have this transparent, open, back-and-forth dialogue with industry, academia, and government.

Margonelli: You could really pull all the people together. So what you’ve done basically is, for the past year, you’ve stood up something that’s like what you want to create, and you’ve done kind of a test case, or do you have a word for what you’ve done?

Fuchs: I would argue that in certain ways we did. Whereas in the beginning, what we talked about was that I might’ve done some demonstration examples of specific analytics that could be helpful out of teams at CMU or our research group. We did a demonstration of how you could do this, leveraging the distributed capability of the nation at scale, and then evaluated, with those people and the stakeholders we had brought together, what the gaps were. So here’s some demonstrations of what we can do; here’s some gaps, what we can’t do and what we should be working on to make this better. And then here’s a vision for how this should go forward.

Margonelli: And you’re now sort of at the vision stage. You released a really big report in September, which will be linked to in the show notes to this podcast. Give me a little sense of where you see it going.

Fuchs: I think looking back at what was truly a herculean year, that was insanely fast. I mean, doing demonstrations in six months, we then ran the entire set of demonstrations through a review. So we had roughly 21 reviewers spanning academia, industry, and government for each of the area demonstrations, for research integrity, and then a review of the entire vision and the entire report that spanned academia, industry, and communication in DC for policy readiness. This was fast, and it’s one thing at the beginning of the year or in testimony to say, this is what we should do, and it’s another thing to try to implement it. And I guess to your question of what’s that vision? I think what I want to say first is that the insane speed with which we implemented this past year demonstrated that academia is an under-leveraged capacity for the US government, but it’s going to have to be bent. There’s a whole bunch of orchestration around it, to bend academia to the government’s problems in a way that could inform national technology strategy and to bring together academia, industry, and government in a way that would lead to fruitful outcomes. And then we can answer your question.

Margonelli: There were many things that struck me as I was reading the latest report. One of them was the recurrence of the word disruptive, and that this needs to take things out of their comfort zone. Academics have to come out of their comfort zone. The people from industry need to come in, government needs to be thinking around corners in a way that it hasn’t been thinking. And we have a story of how we do innovation in this country, which we call linear. And the idea is that you sort of put money into basic science in academia and things trundle along and they gradually become products out in the marketplace. And of course, over the years, we know that that’s not actually accurate. What we also have is a highly chaotic, globalized R&D and translation system. It’s wild. And you’ve been looking at this on its own terms, which is really interesting because a lot of times what gets proposed is that the US adopt an industrial policy model that’s a little bit more like something like what China might do. China has a very high level look at things and then says, okay, pull four more factories over here and do this. Although they also have a very chaotic system. So that story that we tell about that isn’t actually accurate, but that is the story that we tell. You’ve chosen a modified chaos system to meet a very chaotic system and see it on its own terms. I hope you see that as a compliment.

Fuchs: I love it. I love it. I’m going to modify chaos to meet chaos. I just love it. I argue, and we argue as this national network in the report, that there’s a possibility, in the same way that DARPA orchestrates technology outcomes, to have an analytic ARPA that orchestrates the diverse and rich variety of institutions at the frontier of analytic capabilities across disciplines, across academia, FFRDCs (federally funded research and development centers), and government and industry. In the same way DARPA does that for creating technology, we can use that type of program management to synthesize and orchestrate the analytic capacity of the country to inform national technology strategy in a way that is trusted. And I don’t know if anything is objective, but at least a trusted third party.

Margonelli: Any sort of new entity in government has to ultimately have political support to carry on. NASA has worked for 40 years to build political support to survive and is beloved. All new capacities need to have political support. How do you do that?

Fuchs: Well, I think that that is particularly challenging in this moment because as we write in the report, we lack today the intellectual foundations really for how to do this. And so there is a science of various different disciplines, but it can often be hard for those disciplines to talk to each other. So for example, the number of people who can be multilingual and both understand what is being done across the social sciences and engineering and then pair them to national problems is small. And so interestingly enough, I think that one of the political steps is literally taking this report and going across universities and saying, there is this thing. So there were only two pairs, four people, who had ever co-authored together. 80% of the people in the national network had never met each other before this year. And that’s because they came from psychology and data science and sociology and engineering. There’s no reason they should have talked together before.

So step one is saying there’s something different than what exists right now in economics by itself or in engineering by itself. Step two is, I actually would argue that I never dreamed that the agencies and the White House and also Congress, in terms of bipartisan support, would be as receptive to the need for this. And I think there is a really important need to help the community understand how they can come together so that they fight for this together, so that actually academia and the FFRDCs, RAND and SRI and MITRE, are stronger together and will move together more in the way the country needs if we had this type of ARPA-like entity. And so all of us selling that together to government, I think, is really important.

And the last I would say is finding a home. So I think that we started this story with Commerce and then NSF, the White House. We were talking to the White House. And so in the long term, where does this capacity for the country belong? I had been told by many people that by being a BAA on the outside, we were able to do a lot of things you can’t normally do inside government. I kept having people inside government saying, “Oh, leverage that. Wow. We could have never have done that.” And at the same time, there is a certain protection of government. There can be questions about why this leadership or why this, who gets to run this? And if it has to be orchestrated, it has to be somewhere. You can’t continually submit BAAs. So I think figuring that out is going to be a dialogue inside government, and I do have ideas about that.

Margonelli: Well, I’m so excited to see whatever happens next. So we should stay tuned and we hope to interview you in a year and see what’s happened since then. Thank you so much, Erica. It’s been a great pleasure to talk to you.

Fuchs: This has been fantastic, Lisa. Thank you for your great questions.

Margonelli: If you would like to learn more about Erica’s work, check out the resources in our show notes. We have links to all of her white papers. You can subscribe to The Ongoing Transformation wherever you get your podcasts. Thanks to our podcast producers, Sydney O’Shaughnessy and Kimberly Quach, and our audio engineer Shannon Lynch. I’m Lisa Margonelli, editor-in-chief at Issues in Science and Technology. Thank you for listening.

A Road Map for Sustainable Chemistry

In January 2021, Congress enacted the Sustainable Chemistry Research and Development Act to better coordinate federal and private sector investments in sustainable chemistry research and development, commercialization, and scaling. Since passage of the act, the federal landscape for sustainable chemistry has changed dramatically, providing important strategic opportunities to advance US leadership in the field. Notably, through legislation that includes the Inflation Reduction Act and the CHIPS and Science Act, the federal government has made massive investments in decarbonization, resilient domestic manufacturing, and job creation, and environmental justice has become a national priority. Additionally, new initiatives at global, state, and market levels are putting pressure on firms to find solutions that reduce both climate impacts and chemical pollution. Rapidly advancing sustainable chemistry can contribute substantially to all these goals, but it requires an ambitious, focused, and coordinated strategy at the federal level.

With little fanfare, the National Science and Technology Council’s interagency Strategy Team on Sustainable Chemistry published its first report in August 2023, entitled Sustainable Chemistry Report: Framing the Federal Landscape. A two-year effort that engaged more than 14 federal agencies and was cochaired by the White House Office of Science and Technology Policy, the National Institute of Standards and Technology, and the National Science Foundation, the report is a laudable survey of the range of sustainable chemistry activities across the federal government. The interagency team is now beginning work on a federal strategic plan for advancing sustainable chemistry in the United States. This plan should provide an actionable road map with a clear and measurable direction for innovation, links to government priorities as well as business and societal needs, and incentives for adoption in the marketplace. Given that chemistry is a major driver of US gross domestic product and plays a central role in solving many of the country’s most pressing environmental challenges, any federal strategy on sustainable chemistry will need clear leadership and coordination to be successful in achieving its goals.

As a starting point, a road map should give funding agencies, investors, businesses, and others clear ideas of how to direct their investments. Although aspirational, the 87-word definition of sustainable chemistry (see box) detailed in the Strategy Team on Sustainable Chemistry’s report misses the mark. On the one hand, it is too restrictive in requiring the use of renewable feedstocks, renewable power, and “optimal” efficiency—a standard that few major chemical projects in the United States could meet today. On the other hand, it is too permissive in failing to exclude activities that create risks to human health and the environment, despite meeting climate-focused criteria. For example, benzene could be produced using renewable power and feedstocks operating at optimal efficiency, without regard for the fact that it is carcinogenic and harmful to the communities where it is produced. Despite its length, the definition is accompanied by the caveat that “advancement in one of these areas should not be at the detriment of another area” and by some criteria for measuring sustainable chemistry. However, the definition is too complicated to use in a policy or investment context.

For comparison, the Expert Committee on Sustainable Chemistry proposed a much clearer, shorter, working definition: “Sustainable chemistry is the development and application of chemicals, chemical processes, and products that benefit current and future generations without harmful impacts to humans or ecosystems.” More importantly, the definition ties to specific criteria for which metrics and tools can be used to guide investments that clearly advance sustainable chemistry and do not lead to regrettable solutions or shift impacts to communities that have previously been harmed. Businesses and investors require this type of clarity. Such definitions should be designed to interact with other efforts, such as the European Commission’s criteria for “safe and sustainable by design” chemicals.

Secondly, sustainable chemistry investments must be tied to the ongoing priorities of the Biden administration and Congress, as well as those of voters and consumers. In addition to the passage of the Inflation Reduction and CHIPS and Science Acts, the Infrastructure Investment and Jobs Act and the Executive Order on Advancing Biotechnology and Biomanufacturing Innovation for a Sustainable, Safe, and Secure American Bioeconomy together represent a once-in-a-generation opportunity to invest in sustainable chemistry. Additional administration priorities related to environmental justice, supply chain resilience, and domestic manufacturing are also inextricably linked to chemistry and the chemical industry. The federal government must explicitly incorporate sustainable chemistry into implementation of these new laws and initiatives.

As a starting point, a road map should give funding agencies, investors, businesses, and others clear ideas of how to direct their investments.

Progress in sustainable chemistry has already been identified as key to addressing climate change, because the chemical sector is the largest domestic industrial source of greenhouse gases. The recent report from a Department of Energy cross-sectoral roundtable (cohosted by Change Chemistry) notes sustainable chemistry investments can simultaneously support decarbonization of chemical production as well as environmental justice through “detoxification” of chemistry. Similarly, the administration’s high-profile Bold Goals for US Technology and Biomanufacturing report calls for the United States to produce at least 30% of its chemical demand, as well as 90% of recyclable-by-design polymers, via sustainable and cost-effective biomanufacturing pathways within 20 years—which will be nearly impossible to achieve without massive investments in sustainable chemistry. Sustainable chemistry investments can also play a role in ending and remedying the disproportionate impacts of pollution on marginalized communities, as outlined in the administration’s Justice40 Initiative, while creating new economic opportunities for them.

Achieving these goals by transitioning to a safer and more sustainable chemical sector will require coordinated action across agencies and clear integration into priority administration programs. Specifically, the Qualifying Advanced Energy Project Credit (also known as 48C), the Department of Energy’s Loan Programs Office, the $6 billion Industrial Demonstrations Program, and the Greenhouse Gas Reduction Fund are programs that could support commercial-scale sustainable chemistry manufacturing projects.

Considering chemistry’s large footprint—which spans many federal agencies—and a decades-long transition timeline, a dedicated champion is needed to coordinate government action across agencies, the private sector, investors, research and education institutions, workers’ organizations, and advocates. While the creation of the interagency strategy team is a good first step, it is insufficient given the range of agencies involved and the small number of people who have the broad cross-agency and cross-sectoral knowledge required. Only a comprehensive and highly coordinated approach across agencies and industrial sectors can simultaneously identify needs for safer and more sustainable alternatives; communicate with researchers, investors, and manufacturers; evaluate hazards from potential alternatives; and target funding, research, recognition, and incentive efforts to promote safer, more sustainable chemistries.

The National Nanotechnology Coordination Office provides an example of how a strong coordinating body can bring together federal and industry stakeholders to speed investment and advance broader societal goals while shaping an emerging sector. Similar strong federal coordination strategies have also been used in semiconductors and with the so-called climate czars who have coordinated climate change actions under the Obama and Biden administrations.

Importantly, a federal coordinating body could assimilate the emerging state and European policies, as well as market and investor demands for eliminating chemicals of concern and finding safer and more sustainable alternatives. For example, it is necessary to address scientific, market, and administration concern about contamination from PFAS, or “forever chemicals,” with a coordinated response. Simply cleaning up PFAS contamination is not enough; these high-performing chemistries—which are now used for many essential purposes, from electronics to health care—must be quickly replaced with safer alternatives. Coordinating a rational substitution strategy while considering the evolving global regulatory landscape will take a deliberate, concerted effort; it cannot be left to chance or managed as a purely “environmental” issue. An executive branch coordinating body will be able to bring stakeholders and resources to bear on the complex challenges posed by a chemical transition and carry that work on across multiple presidential administrations. 

Considering chemistry’s large footprint—which spans many federal agencies—and a decades-long transition timeline, a dedicated champion is needed to coordinate government action.

Finally, the sustainable chemistry strategy cannot rely entirely on voluntary commitments. Commercialization, adoption, and scale of sustainable chemistry solutions face significant incumbency barriers as existing chemistry is optimized, capitalized, and integrated into complex supply chains. According to the International Monetary Fund, fossil fuels are directly subsidized at more than $1.3 trillion per year globally (or $7 trillion, if external costs are included), putting sustainable chemistry at a disadvantage. To be competitive, investments need to be linked to subsidies and incentives that accelerate pathways to market, adoption, and scale, as well as policies that disincentivize business as usual.

For example, successes in decarbonizing US electricity production and electrifying the transportation sector over the past decade were driven primarily by federal tax credits that reduced the cost difference between new, cleaner technologies and incumbent technologies, as well as procurement guidelines that drove demand. A coordinated approach for sustainable chemistry that includes production or investment tax credits; incentives for adopting more sustainable chemicals and products that ensure faster market approvals, recognition for demonstrated safety, or both; and federal procurement requirements would help drive investment in and adoption of safe and sustainable chemicals and materials.

Given innovation and capital cycles, transforming the chemical sector toward sustainable chemistry will require a clear and compelling strategic road map and coordination to pace actions over the decades needed to transition the industry. This road map must not be a purely aspirational document, but should outline a federal commitment to ambitious goals, establish strong market signals, and align finance, regulatory policy, and industrial strategies. 

This sounds audacious, but today’s generation of chemistries was launched in part by a similar program over the course of a few years during World War II. Just as rubber became increasingly necessary for the war effort, the United States lost access to 90% of its rubber suppliers in Southeast Asia. In response, President Roosevelt established the Rubber Reserve Program in 1940. As it became clear that stockpiling rubber supplies was insufficient, the program incentivized the creation of synthetic rubber and engaged the four largest rubber companies in the quest. By creating coordinating bodies to manage research and development across academia, industry, and government, the collaboration produced synthetic rubber—as well as what we now know as the petrochemical industry—within a few years. A similarly expedited all-of-government technology approach today could guide the development of a new generation of more resilient, equitable, and sustainable chemicals that addresses some of the nation’s most pressing needs while launching new industries.