Combining Tradition and Technology

In “Reform Federal Policies to Enable Native American Regenerative Agriculture” (Issues, Spring 2024), Aude K. Chesnais, Joseph P. Brewer II, Kyle P. Whyte, Raffaele Sindoni Saposhnik, and Michael Kotutwa Johnson provide a useful baseline on the history of regenerative agriculture and its use on tribal lands. Regenerative agriculture is inherent to tribal producers, who have practiced it since time immemorial using traditional ecological knowledge (TEK), which works with ecosystem function through place-based innovation.

Often, TEK is dismissed despite thousands of years of responsive adaptation. Many methods used by tribal producers yield outcomes equivalent to or better than those of the practices stipulated for reimbursement by the US Department of Agriculture’s Natural Resources Conservation Service, yet they are not always eligible for the same payments because reimbursement is tied to prescribed methods rather than equivalent outcomes.

Native systems recognize that soil health cannot be siloed from water quality, habitat preservation, or any other element because of its interconnectedness across all parts of the ecosystem.

TEK is a living, adaptive science that fuses traditional knowledge with current conditions and new technologies to create innovative Indigenous land stewardship. Not only do Native systems of regenerative agriculture assist in carbon sequestration, but they also focus on whole-ecosystem function and interaction. This creates a more sustainable regenerative system over the long term. Native systems recognize that soil health cannot be siloed from water quality, habitat preservation, or any other element because of its interconnectedness across all parts of the ecosystem.

The right to data sovereignty, resource protection, and cultural information is integral to the progress of regenerative agriculture on tribal lands. From historical inequities to current complex jurisdictional issues, Native producers face challenges not faced by other producers. Most tribal land is marginalized, contaminated, less productive, and thought to be less desirable. Tribes have experienced historical barriers that have led to problems in accessing or controlling their own data and to misuse of their data by outside entities. Moving in the right direction, tribally focused data networks will help tribal nations combine tradition and technology for optimal land stewardship.

Natural Resources Director

Intertribal Agriculture Council

The Anthropocene: Gone But Not Forgotten

In “A Fond Farewell to the Anthropocene” (Issues, Spring 2024), Ritwick Ghosh advances an insightful—and often neglected—analysis. The long-running controversies around the geophysical science of the Anthropocene have not only demonstrated the political nature of the scientific enterprise itself; more importantly, as Ghosh attests, they have illustrated how the political questions that define society-nature relations tend to be covered up or suppressed by the very attempt to displace political conflict onto the assumedly neutral terrain of science.

Thus, the nonrecognition of the Anthropocene as a geological epoch by the International Union of Geological Sciences is to be truly welcomed. It formally ends the inherently fraudulent attempt to base decisions about the fate and future of Earth and its inhabitants on a “scientific” notion rather than on a proper political basis. Indeed, I would argue that the rejection makes it possible to foreground the political itself as the central terrain on which to debate and act on policies to protect and perhaps even improve the planet. Politicizing such questions does not depend on the inauguration of a geophysical epoch. We already know that some forms of human action have profound terra-transforming impacts, with devastating consequences for all manner of socio-ecological constellations.

The socio-ecological sciences have systematically demonstrated how social practices radically transform ecological processes and produce often radically new socio-physical assemblages. The most cogent example of this is, of course, climate change. The social dimensions of geophysical transformations demonstrate beyond doubt the immense inequalities and social power relations that constitute Earth’s geophysical conditions.

The nonrecognition of the Anthropocene as a geological epoch by the International Union of Geological Sciences is to be truly welcomed.

The very notion of the Anthropocene decidedly obfuscated this uncomfortable truth. Most humans have no or very limited impact on Earth’s ecological dynamics. Rather, a powerful minority presently drives the planet’s future and shapes its decidedly uneven socio-ecological transformation. Humanity, in the sense that the Anthropocene (and many other cognate approaches) implies, does not exist. It has in fact never existed. As social sciences have systematically demonstrated, it is the power of some humans over others that produces the infernal socio-environmental dynamics that may threaten the futures of all.

Abandoning the Anthropocene as a scientific notion opens, therefore, the terrain for a proper politicization of the environment, and for the potential inauguration of new political imaginaries about what kind of future world can be constituted and how we can arrange human-nonhuman entanglements in mutually nurturing manners. And this is a challenge that no scientific definition can answer. It requires the courage of the intellect that abandons any firm ontological grounding in science, nature, or religion and embraces the assumption that only political equality and its politicization can provide a terraforming constellation that would be supportive for all humans and nonhumans alike.

Professor of Geography

University of Manchester, United Kingdom

Ritwick Ghosh closes the door on this newly named period of geological time—without fully understanding the scientific debate. Let me make it very clear: we are in the Anthropocene. Science shows that we are living in a time of unprecedented human transformation of the planet. How these manifold transformations of Earth’s environmental systems and life itself are unfolding is messy, complex, socially contingent, long-term, and heterogeneous. Most certainly, they cannot be reduced to a single thin line in the “mud” dividing Earth’s history into a time of significant human transformation and a time before. This is why geologists have rejected the simplistic approach of defining an Anthropocene Epoch beginning in 1952 in the sediments of Crawford Lake, in Canada.

Science shows that we are living in a time of unprecedented human transformation of the planet.

Geologically, the Anthropocene is better understood as an ongoing planetary geological event, extending through the late Quaternary: a broad general definition that captures the diversity, complexity, and spatial and temporal heterogeneity of human societal impacts. By ending the search for a narrow epoch definition in the Geologic Time Scale and building instead on the more inclusive common ground of the Anthropocene Event, attention can be turned toward more important and urgent issues than a start date.

The Anthropocene has opened up fertile ground for interdisciplinary advances on crucial planetary issues. Understanding the long-term processes of anthropogenic planetary transformation that have resulted in the environmental and climate crises of our times is critical to help guide the societal transformations required to reduce and reverse the damage done—while enhancing the lives of the planet’s 8 billion people. The Anthropocene is as much a commentary on societies, economic theory, and policies as it is a scientific concept. So I say in response to Ritwick Ghosh: welcome to the Anthropocene. The Anthropocene Epoch may be dead, but the Anthropocene Event and multiple other interpretations of the Anthropocene are alive and developing—and continually challenging us to do something about the polycrisis facing humanity.

Professor of Earth System Science, Department of Geography

University College London

Global Diplomacy for the Arctic

The Arctic was long known as a region where the West and Russia were able to find meaningful ways to collaborate, motivated by shared interests in environmental protection and sustainable development. The phenomenon even had a name: Arctic exceptionalism. Most of that was lost when Russia invaded Ukraine in February 2022.

There has been much hand-wringing about the extent to which the severing of political and economic ties should apply to scientific collaboration. In the Arctic, science has often persevered even when other forms of engagement were cut off. The International Polar Year 1957–58, the 1973 Agreement on the Conservation of Polar Bears, and the 1991 Arctic Environmental Protection Strategy are prominent examples.

A culture of Arctic scientific collaboration has also defined the work of the Arctic Council, the region’s main intergovernmental forum. Incremental efforts to resume collaboration have focused on allowing the council’s Working Groups—populated largely by experts and researchers, and focused on scientific projects—to resume their work, albeit virtually rather than in person. There have been many discussions on whether the Arctic Council should continue without Russia; the conventional wisdom is that climate change and other issues are so important that we can’t afford to cut ties completely.

In the Arctic, science has often persevered even when other forms of engagement were cut off.

Academics and scientists were often encouraged to collaborate with Russians on Arctic research between 1991 and 2021. Now it is becoming taboo. As Nataliya Shok and Katherine Ginsbach point out in “Channels for Arctic Diplomacy” (Issues, Spring 2024), “The invasion prompted many Western countries to impose a range of scientific sanctions on Russia … The number of research collaborations between Russian scientists and those from the United States and European countries has fallen.” In fact, a colleague of mine was terminated from the University of Lapland for attending a conference in Russia where he spoke about climate change cooperation.

Shok and Ginsbach do an admirable job of framing this context. But they go beyond that, reminding us of the importance of scientific collaboration on human health in the Arctic region. Some of us may recall, and the authors recount, when a heat wave in 2016 released anthrax bacteria long buried in permafrost in Russia’s Arctic Yamal Peninsula. The outbreak killed thousands of reindeer and affected nearly a hundred local residents. It had us asking, what else will a warmer Arctic bring back into play? A study conducted by a team of German, French, and Russian scholars before the invasion of Ukraine sought to help answer this, identifying 13 new viruses revived from ancient permafrost.

This type of research is now under threat. There’s a case to be made that regional collaboration on infectious disease is even more urgent than that on melting permafrost or other consequences of climate change. It’s not a competition, but in general, better understanding Arctic sea ice melt or Arctic greening won’t prevent climate change from happening. Understanding the emergence of new Arctic infectious diseases, by contrast, can be used to prevent outbreaks. Shok and Ginsbach recommend, at a minimum, that we establish monitoring stations in the high-latitude Arctic to swiftly identify pathogens in hot spots of microbial diversity, such as mass bird-nesting sites.

There is no easy answer to the question of whether or how to continue scientific collaboration with Russia in the wake of the illegal invasion of Ukraine. But it is undoubtedly a subject that needs contemplation and debate. Shok and Ginsbach provide a good start at that.

Managing Editor, Arctic Yearbook

Director of Energy, Natural Resources and Environment, Macdonald-Laurier Institute, Ottawa, Ontario

The COVID-19 pandemic powerfully demonstrated the importance of international scientific cooperation in addressing a serious threat to human health and the existence of modern society as we know it. Diplomacy in the field of science witnessed a surge in the race to develop vaccines and treatments for the SARS-CoV-2 virus. The thawing Arctic region is at risk of giving rise to a new virus pandemic, and scientific collaboration among democratic and authoritarian regimes in this vast geographical area should always be made possible.

International science collaboration and science diplomacy, however altruistic, risk being run over by the global big-power rivalry between players such as Russia, China, and the United States, with the European Union and the BRIC countries acting as players in between. In its recent report From Reluctance to Greater Alignment, the German Marshall Fund argues that Russia’s scientific interests in the Arctic, beyond security considerations, are mostly economic with a focus on hydrocarbon extraction and development of the Northern Sea Route, trumping any environmental or health considerations.

The thawing Arctic region is at risk of giving rise to a new virus pandemic, and scientific collaboration among democratic and authoritarian regimes in this vast geographical area should always be made possible.

Russian and Chinese scientific cooperation in the Arctic has increased significantly since their first joint Arctic expedition in 2016. China was Russia’s top partner for research papers in 2023, and scientists from both countries have downplayed the military implications of their scientific collaborations in the Arctic, emphasizing their focus on economic development. However, many aspects of this collaboration, such as the Arctic Blue Economy Research Center, include military or dual-use applications in space and deep-sea exploration, and have proven links to the Chinese defense sector.

Given Russia’s scientific isolation after its invasion of Ukraine in 2022, scientific collaboration on health and environmental concerns under the auspices of the Arctic Council and other international organizations seems to be among the last benign avenues for Russian scientific collaboration with other Arctic powers and the West at large. Russia’s pairing up with China on seabed and mineral exploration in the Arctic does not, however, strengthen trust and confidence that Russian efforts on health and environmental issues are free of military and security policy aims.

The last frontier of scientific cooperation for the benefit of health and environmental stability in the Arctic region stands to be overrun by global power politics, with science diplomacy being weaponized as a security policy tool, among others. This is a sad reality acknowledged by the seven countries—Canada, Denmark (Greenland), Finland, Iceland, Norway, Sweden, and the United States—that, along with Russia, exercise sovereignty over lands within the Arctic Circle. (The seven suspended cooperation with Russia in the Arctic Council after its invasion of Ukraine.) It is therefore worth considering whether Russia’s Arctic research interest in the fields of health and environment would benefit from a decoupling from China and any other obvious military or dual-use application.

Senior Fellow, Transatlantic Defense and Security

Center for European Policy Analysis, Washington DC

Boosting Hardware Start-ups

In “Letting Rocket Scientists Be Rocket Scientists: A New Model to Help Hardware Start-ups Scale” (Issues, Spring 2024), John Burer effectively highlights the challenges these companies face, particularly in the defense and space industries. The robotics company example he cites illustrates the pain points of rapid growth coupled with physical infrastructure, demonstrating the different dynamics of hardware enterprises as compared with software.

However, I believe the fundamental business issue for hardware start-ups is generating stable, recurring revenue when relying on sales of physical items that bring in a one-time influx of revenue but bear no promise of future revenue. Consider consumer companies such as Instant Pot and Peloton, cautionary tales that rode a wave of virality to high one-time sales and then suffered when they failed to create follow-on products to fill production lines and pay staff salaries.

Further analysis of the issues Burer raises would benefit from exploring how the American Center for Manufacturing and Innovation’s (ACMI) industry campus model or other solutions directly address this core problem of revenue stability that any hardware company faces. Does another successful product have to follow the first? Is customer diversity required? Even hardware companies focusing solely on national security face this problem.

While providing shared infrastructure is valuable, more specifics are needed on how ACMI bridges the gap to full-scale production beyond just supplying space. Examining the broader ecosystem of hardware-focused investors, accelerators, and alternative models focused on separating design and manufacturing is also important. The global economy has undergone significant reconfiguration, with much of the manufacturing sector organizing as either factoryless producers of goods or providers of production-as-a-service, focusing on core competencies of product invention and support, or supply chain management and pooling demand. This highly digitally coordinated model can’t work for every product, but the world looks very different from the golden age of aerospace, when it made sense to make most things in-house or cluster around a local geographic sector specialized in one industry.

Overall, Burer identifies key challenges, but the hardware innovation community needs a broader conversation on business demands, especially around revenue stability, a wider look at the hardware start-up ecosystem, and concrete evidence of the ACMI model’s impact. I look forward to seeing this important conversation continue to unfold.

Senior Fellow, Center for Defense Concepts and Technology, Hudson Institute

Executive Partner, Thomas H. Lee (THL) Partners

The author is a former program manager and office deputy director of the Defense Advanced Research Projects Agency

John Burer eloquently describes a new paradigm to strategically assemble and develop hardware start-up companies to enhance their success within specific industrial sectors. While the article briefly mentions the integration of this novel approach into the spaceflight marketplace, it does not fully describe the tremendous benefits that a successful space systems campus could provide to the government, military, and commercial space industries, as well as academia. Such a forward-thinking approach is critical to enable innovative life sciences and health research, manufacturing, technology, and other translational applications to benefit both human space exploration and life on Earth.

The advantages of such an approach are clearly beneficial to many research areas, including space life and health sciences. These research domains have consistently shown that diverse biological systems, including animals, humans, plants, and microbes, exhibit unexpected responses pertinent to health that cannot be replicated using conventional terrestrial approaches. However, important lessons learned from previous spaceflight biomedical research revealed the need for new approaches in our process pipelines to accelerate advances in space operations and manufacturing, protect the health of space travelers and their habitats, and translate these findings back to the public on Earth.

A well-integrated, holistic space campus system could overcome many of the current gaps in space life sciences and health research by bringing together scientists and engineers from different disciplines to promote collaboration; consolidate knowledge transfer and retention; and streamline, simplify, and advance experimental spaceflight hardware design and implementation. This type of collaborative approach could disrupt the usual silos of knowledge and experience that slow hardware design and verification by repeatedly requiring reinvention of the same wheel.

A well-integrated, holistic space campus system could overcome many of the current gaps in space life sciences and health research.

Indeed, the inability of current spaceflight hardware design and capabilities to perform fully automated and simple tasks with the same analytical precision, accuracy, and reproducibility achieved in terrestrial laboratories is a major barrier to space biomedical research—and creates unnecessary risks and delays that impact scientific advancement. In addition, the inclusion and support of manufacturing elements in a space campus system can allow scaled production to meet the demands and timelines required for the success of next-generation space life and health sciences research.

The system described by Burer has clear potential to optimize our approach to such research and can lead to new medical and technological advances. By strategically nucleating our knowledge, resources, and energy into a single integrated and interdisciplinary space campus ecosystem, this approach could redefine our concept of a productive space research pipeline and catalyze a much-needed change to advance the burgeoning human spaceflight marketplace while “letting rocket scientists be rocket scientists.”

Professor, School of Life Sciences

Biodesign Center for Fundamental and Applied Microbiomics, Biodesign Institute

Arizona State University

Aerospace Technologist, Life Sciences Research, Biomedical Research and Environmental Sciences Division

NASA Johnson Space Center, Houston, Texas

The Naval Surface Warfare Center Indian Head Division (NSWC IHD) was founded more than 130 years ago as the proving ground for naval guns, and later shifted focus to the research, development, and production of smokeless powder. We have remained a reliable provider of explosives, propellants, and energetic materials for ordnance and propulsion systems through every national conflict, leading us to be recognized as the Navy’s Arsenal.

But this arsenal now needs rebuilding to strengthen and sustain the nation’s deterrence against the growing power of the People’s Republic of China, while also countering aggression around the world.

At the 2024 Sea-Air-Space Exposition, the chief of naval operations, Admiral Lisa Franchetti, discussed how supporting the conflict in Ukraine and the operations in the Red Sea is significantly depleting the US ordnance inventory. NSWC IHD is an aging facility but has untapped capacity, and the Navy is investing in infrastructure upgrades to restore wartime readiness of its arsenal. This investment will modernize production, testing, and evaluation capabilities to allow for increased throughput while maintaining current safety precautions.

Having nearby cooperative industry partners would reduce logistical delays and elevate the opportunity for collaborations and successful technology demonstrations.

NSWC IHD believes that an industrial complex of the type that John Burer describes is worth investigating. While our facility is equipped to meet current demand for energetic materials, we anticipate increased requests for a multitude of products, including precision-machined parts and composite materials. Having nearby cooperative industry partners would reduce logistical delays and elevate the opportunity for collaborations and successful technology demonstrations.

Such a state-of-the-art campus would also provide a safe virtual training environment for energetic formulations, scale-up, and production processes, eliminating the risks inherent in volatile materials and equipment. This capability would allow the personnel delivering combat capability, to paraphrase Burer, to continue to be rocket scientists and not necessarily trainers.

The Navy recognizes the need to modernize and expand the defense industrial ecosystem to make it more resilient. This will require working in close contact with its partners, including Navy laboratories and NSWC IHD as its arsenal. We must entertain smart, outside-the-box concepts in order to outpace the nation’s adversaries. With these needs in mind, exploring the creation of an industrial campus is a worthwhile endeavor.

Technical Director

Naval Surface Warfare Center Indian Head Division

The growth of the commercial space sector in the United States and abroad, coupled with the increasing threat of adversarial engagement in space, is rapidly accelerating the need for fast-paced development of innovative technologies. To meet the growing demand for these technologies and to maintain the US lead in commercial space activities while ensuring national security, new approaches tackling everything from government procurement processes to manufacturing and deployment at scale are required. John Burer directly addresses these issues and suggests a pathway forward, citing some successful examples including the new initiative at NASA’s Exploration Park in Houston, Texas.

Indeed, activities in Houston, and across the state, provide an excellent confluence of activities that can be a proving ground for the proposed industry campus model in the space domain. The Houston Spaceport and NASA’s Exploration Park are providing the drive, strategy, and resources for space technology innovation, development, and growth. These efforts are augmented by $350 million in funds provided by the state of Texas under the auspices of the newly created Texas Space Commission. The American Center for Manufacturing and Innovation (ACMI), working with the NASA Johnson Space Center, is a key component of the strategy for space in Houston, looking to implement the approach that Burer proposes.

To maintain the US lead in commercial space activities while ensuring national security, new approaches tackling everything from government procurement processes to manufacturing and deployment at scale are required.

There is a unique opportunity to bring together civil, commercial, and national security space activities under a joint technology development umbrella. Many of the technologies needed for exploration, scientific discovery, commercial operation, and national security have much in common, often with the only discriminator being the purpose for which they are to be deployed. An approach that allows knowledge exchange among the different space sectors while protecting proprietary or sensitive information will significantly improve the technology developed, provide the companies with multiple revenue streams, and increase the pace at which the technology can be implemented.

Going one step further and creating a shared-equipment model, which Burer briefly alludes to, would allow small businesses and start-ups access to advanced equipment that would normally be prohibitively expensive, with procurement, installation, and management wasting time and money and limiting the ability to scale. A comprehensive approach such as the proposed industry campus would serve to accelerate research and development, foster more innovation, promote a rapid time to market, and save overall cost to the customer, all helping create a resilient space industrial ecosystem to the benefit of the nation’s space industry and security.

Director, Rice University Space Institute

Executive Board Member, Texas Aerospace Research and Space Economy Consortium

John Burer outlines how the American Center for Manufacturing & Innovation (ACMI) is using an innovative approach to solve an age-old problem that has stifled innovation—how can small businesses go from prototype scale to production when there is a very large monetary barrier to doing so?

The Department of Defense has particularly struggled with this issue, as the infamous “valley of death” has halted the progress of many programs due to lack of government or company funding to take the technology to the next step. This leaves DOD in a position where it may not have access to the most advanced capabilities at a time when the United States is facing multiple challenges from peer competitors.

How can small businesses go from prototype scale to production when there is a very large monetary barrier to doing so?

ACMI is providing a unique solution set that not only tackles this issue but creates an entire ecosystem in which companies can join forces with other companies in the same industrial base sector in a campus-like setting. Each campus focuses on a critical sector of the defense supply chain (critical chemicals, munitions, and space systems) and connects government, industry, and academia together, providing shared access to state-of-the-art machinery and capabilities and creating environments that support companies through the scaling process.

For many small businesses and start-ups, this can be a lifeline. Oftentimes, small companies can’t afford to have personnel with the business acumen to raise capital and build infrastructure and are forced to have their technical experts try to fill these roles—which is not the best model for success. ACMI takes on these roles for those companies, and as Burer states, “lets rocket scientists be rocket scientists”—a much more efficient and cost-effective use of their talent.

One of the most important aspects of the ACMI model is that the government is providing only a small amount of the funding for each campus to get things started, and then ACMI is leveraging private capital—up to a 25 to 1 investment ratio—for the remainder. If this isn’t a fantastic use of taxpayer money, I don’t know what is. At a time when the United States is struggling to regain industrial capability and restore its position as a technology leader, and where it is competing against countries whose governments subsidize their industries, the ACMI model is exactly the kind of innovative solution the nation needs to keep charging ahead and provide its industry partners and warfighters with an advantage.

Founder and CEO, MMR Defense Solutions

Former Chief Technology Officer, Office of the Secretary of Defense, Industrial Base Policy

Given the global competition for leading-edge technology, innovation in electronics-based manufacturing is critical. John Burer describes the US innovation ecosystem as a “vibrant cauldron” and offers an industry campus model that can possibly harness the ecosystem’s energy and mitigate its risks. However, the barriers for an electronics hardware start-up company to participate in the innovation ecosystem are high and potentially costly. While Burer’s model is a great one and can prove effective—witness Florida’s NeoCity and Arizona State University’s SkySong, among others—it does require some expansion in thought.

To build an electronics production facility, start-up costs can run $50 million to $20 billion over the first few years for printed circuit boards and semiconductors, respectively. It can take 18 to 48 months before the first production run can generate revenue. For electronics design organizations, electronics CAD software can range from $10,000 to $150,000 per annual license depending on capability needs. Start-up companies in the defense sector must additionally account for costs where customers have rigorous requirements, need only low-volume production, and expect manufacturing availability for decades. This boils down to a foundational question: How does an electronics hardware start-up with a “rocket scientist” innovative idea ensure viability given the high cost and long road ahead?

How does an electronics hardware start-up with a “rocket scientist” innovative idea ensure viability given the high cost and long road ahead?

One solution for electronics start-ups is to use the campus model, but it may be slightly different from what Burer describes. Rather than a campus, I see a need for what I call a “playground community.” The two are similar in that both provide a place for people to interact and use shared resources. But as an innovator, I like the idea of a playground that promotes vibrant interactions between individuals or organizations with a common goal, be it discovery or play. Along with this version of an expanded campus, electronics companies will require community and agility to achieve success.

Expanded campus. A virtual campus concept can be valuable given the high capital expenditure costs for electronics manufacturing. This idea partners companies that have complementary capabilities or manufacturing, regardless of geolocation proximity. Additional considerations in logistics, packaging, or custody for national security are also needed.

Community. Scientists of all types need a community of supporting companies and partners that have common values and goals and capitalize on each other’s strengths. This cross-organizational teaming will allow them to move fast and overcome any challenge together.

Agility. Given the rapid pace of the electronics industry, agility is vitally important. This will require the company and its team community to be able to shift and move together, considering multiple uses of the technology, dual markets, adoption of rapid prototyping and demonstration, modular systems design and reuse, and significant use of automation in all aspects of the business.

Fostering innovative communities in technology development, prototyping, manufacturing, and business partnerships will be required for the United States to maintain competitiveness in the electronics industry as well as other science and technology sectors. As the leader of an electronics hardware security start-up, I am fortunate to have played a role as the allegorical rocket scientist with good ideas, but I am even more glad to be surrounded by a community of businesses and production partners in my playground.

CEO and Founder

Rapid Innovation & Security Experts

Having founded, operated, and advised hardware start-ups for more than 25 years, I applaud the American Center for Manufacturing & Innovation and similar initiatives that aim to bring greater efficiency and effectiveness to one of the most important and challenging of all human activities: the development and dissemination of useful technology. The ACMI model, designed to support hardware start-ups, particularly those in critical industries, offers several noteworthy benefits.

First, the validation by the US government of the problems being solved by campus participants is invaluable. Showing the market that such a significant customer cares about these companies provides credibility and encourages other stakeholders to invest in and engage with them.

Second, the “densification” of resources on an industry-focused campus can yield significant cost benefits. Too often, I have seen early-stage hardware companies fail when key people and vital equipment were too expensive or inconveniently located.

Third, the finance arm of the operation, ACMI Capital, can leverage initial government funding and mitigate the “valley of death” that hardware start-ups typically face. This support should offer a smoother transition from government backing to broader engagement with the investment community, a perennial challenge for companies funded by the Small Business Innovation Research program and similar federal sources. Such funding ensures that promising technologies can scale and be efficiently handed off to government customers.

The “densification” of resources on an industry-focused campus can yield significant cost benefits.

However, while the ACMI model offers significant benefits, it also has potential limitations when applied to industries without the halo effect provided by government funding and customers. When the government seeks to solve a problem, it can move mountains. It is much more challenging to coordinate problem validation and investment in early-stage innovation by multiple nongovernment market participants, with their widely varying priorities, resources, and timelines.

Another potential issue is the insufficient overlap in resource and infrastructure needs that may occur among campus innovators in any given industry. If the needs of these start-ups diverge too widely, the benefits of colocation may diminish, reducing the overall efficiency of the campus model.

Finally, there is the challenge of securing enough capital to fund start-ups through the hardware development valley of death. Despite ACMI’s efforts, the financial demands of scaling hardware technologies are substantial, and without a compelling financial story and the enthusiastic support of key customers, securing sustained investment throughout development remains a critical hurdle.

Given these concerns, some care will be needed when selecting which industries, problems, customers, and start-ups will most benefit from this approach. In this vein, I cannot emphasize enough the need for additional experimentation and “speciation” of entities seeking to commercialize technology.

Still, the ACMI model has already demonstrated success and achieved important progress in enhancing the nation’s defense posture. And the lessons learned will undoubtedly inform future efforts, with successful strategies being replicated and scaled, thus enriching the nation’s technology commercialization toolbox.

I look forward to seeing the continued evolution and impact of this and other such models, as they are vital in bridging the gap between innovation and practical application, ultimately driving technological progress and economic growth.

Managing Director

Interface Ventures

The American Center for Manufacturing & Innovation’s (ACMI) industry campus-based model, as John Burer details in his Issues essay, is an innovative approach to addressing the critical challenges faced by hardware start-up companies in scaling production and establishing secure supply chains. At Energy Technology Center, we feel that the model is particularly timely and essential given the current munitions production crisis confronting the US Department of Defense and the challenges traditionally associated with spurring innovation in a mature technical field. As global tensions rise and the need for advanced defense technologies intensifies, the ability to rapidly scale up production of critical materials and systems becomes a national security imperative. This model has the potential to diversify, expand, and make more dynamic the manufacturing base for energetic materials and the systems that depend on them. By fostering a collaborative environment, these campuses can accelerate innovation, reduce production bottlenecks, and enhance the resilience of the defense industrial base.

From a taxpayer’s perspective, the value of ACMI’s model is immense. By attracting private capital to complement government funding, the model maximizes the impact of public investment. As Burer points out, ACMI’s Critical Chemical Pilot Program, funded through the Defense Production Act Title III Program, has already achieved a private-to-public funding ratio of 16 to 1, demonstrating the efficacy of leveraging different pools of investment capital. Such a strategy not only accelerates the development of critical technologies but also ensures that public funds are used more efficiently than ever, fostering a culture of innovation and modernization within the defense sector.

By fostering a collaborative environment, these campuses can accelerate innovation, reduce production bottlenecks, and enhance the resilience of the defense industrial base.

However, to fully realize the potential of this model, we must be mindful of the risks and pitfalls in the concept. Private investment follows the promise of a return. Challenges that must be addressed include the requirement for steady capital investment, dependency on government support, bureaucratic hurdles, market volatility, intellectual property concerns, scalability issues, and the need for effective collaboration. Ensuring sustained financial support from diverse sources, streamlining the bureaucratic processes in which DOD procurement is mired, developing robust and adaptable infrastructure, maintaining strong government-industry partnerships, protecting intellectual property, diversifying market applications, and fostering a collaborative ecosystem are all essential steps toward overcoming these challenges.

Challenges notwithstanding, ACMI’s industry campus-based model is a timely and innovative solution to the current dilemmas of the US defense manufacturing sector. By creating specialized campuses that foster collaboration and leverage both private and public investments, this model can significantly enhance the scalability, resilience, and dynamism of the manufacturing base for energetic materials and defense systems. Burer is to be applauded for bringing a healthy dose of old-fashioned American ingenuity and entrepreneurship to the nation’s defense requirements.

Founder and CEO

Energy Technology Center

As John Burer observes, start-up companies working on hardware, especially those with applications for national security, face substantial challenges and competing demands. These include not only developing and scaling their technology, but also simultaneously addressing the needs of their growing business, such as developing supply chains, securing manufacturing space that can meet their growing needs, and navigating the intricate maze of government regulations, budget cycles, contracting processes, and the like. This combination of challenges and demands requires a diverse and differentiated set of skills, which early-stage hardware companies especially struggle to obtain, given their limited resources and focus on developing their emerging technology. A better model is needed, and the one Burer identifies and is employing, with its emphasis on building regional manufacturing ecosystems through industry campuses, has significant merits.

Historically, the Department of Defense was the primary source of funding for those working on defense-related technologies. That is no longer the case. As recently noted by the Defense Innovation Unit, of the 14 critical technology areas identified by the Pentagon as vital to maintaining the United States’ national security, 11 are “primarily led by commercial entities.” While this dynamic certainly brings several challenges, there are also important opportunities to be had if the federal government can adapt its way of doing business in the commercial marketplace.

Of the 14 critical technology areas identified by the Pentagon as vital to maintaining the United States’ national security, 11 are “primarily led by commercial entities.”

The commercial market operates under three defining characteristics, and there is opportunity to leverage these characteristics to benefit national security. First, success in the commercial sector is defined by speed to market, and the commercial market is optimized to accelerate the transition from research to production, successfully traversing the infamous “valley of death.” Second, market penetration is a fundamental element of any commercial business strategy, with significant financial rewards for those who succeed; consequently, the commercial market is especially suited to rapidly scale emerging technologies. And third, the size of the commercial market dwarfs the defense market; leveraging this size not only offers a force-multiplier to federal funding, but also creates economies of scale that enable the United States and its allies to compete against adversarial nations that defy the norms of free trade and the rule of international law.

Industry campuses apply the proven model of innovation clusters to establish regional manufacturing ecosystems. These public-private partnerships bring together the diverse range of organizations and assets needed to build robust, resilient industrial capability and capacity. The several programs Burer identifies have already demonstrated the value of this model in harnessing the defining characteristics of the commercial market, including speed, scale, and funding. By incorporating this approach, the federal government is able to amplify the value of taxpayer dollars to improve national and economic security, creating jobs while accelerating the delivery of emerging technologies and enhancing industrial base resilience.

Pathfinder Portfolio Lead (contractor)

Manufacturing Capability Expansion and Investment Prioritization Directorate

US Department of Defense

John Burer presents an innovative approach to supporting manufacturing hardware start-ups. I ran the Oregon Manufacturing Innovation Center for the first six years of its life, and have firsthand experience with hundreds of these types of companies. I can attest: hardware start-ups face distinct challenges with few ready-made avenues to address them.

Expecting “rocket scientists” to navigate these challenges without specialized business support can hinder a start-up’s core technical work and jeopardize its overall success. It is rare, indeed, to find the unicorn that is a researcher, inventor, entrepreneur, negotiator, businessperson, logistician, marketer, and evangelist. Yet the likelihood of success for a start-up often depends on those abilities being present in one or a handful of individuals.

To ensure that the United States can maintain a technological advantage in an increasingly adversarial geopolitical landscape, it is imperative to improve hardware innovation and start-up company success rates. In addition to the ideas that Burer presents, my experience in manufacturing research and innovation suggests the need for open collaboration and comprehensive workforce development. These elements are critical to ensure a cross-pollination of ideas and the availability of trained technicians to scale these businesses.

Hardware start-ups face distinct challenges with few ready-made avenues to address them.

The American Center for Manufacturing and Innovation’s (ACMI) model represents a very promising solution. Burer’s emphasis on colocating start-ups within a shared infrastructure is a significant step forward. Incorporating spaces for cross-discipline and cross-company collaborative working groups and project teams, along with providing regular networking opportunities, will allow them to share knowledge, resources, and expertise and to cultivate a culture of cooperation. This is best enabled through a nonprofit applied research facility that can address the intellectual property-sharing issue, making problem-solving more efficient and empowering those involved to do what they do best. It not only allows scientists to be scientists, but also helps the investor, the government customer, the corporate development professional, and other critical participants understand their importance within a shared outcome.

The shared infrastructure within ACMI campuses can be further expanded by developing shared research and development labs, prototyping facilities, and testing environments. By pooling resources and with government support, start-ups can access high-end technology and equipment that might otherwise be beyond their reach, thus reducing costs and barriers to innovation. Additionally, open innovation platforms can allow companies to post challenges and solicit solutions from other campus members or external experts, harnessing a broader pool of talent and ideas. Think of this as a training ground to head-start companies that would scale more independently within this ecosystem, while allowing corporate and government stakeholders to more effectively scout for solutions. Such an approach can accelerate the development of new technologies and products, benefiting all stakeholders involved.

The ACMI model thus offers a potent opportunity. It can be applied to any sector where hardware innovation is needed to advance the nation’s capabilities. Incorporating open collaboration will be crucial to enable the best outcomes for technology leadership and economic growth. By incorporating these additional elements, the ACMI model can become an even more powerful engine for driving the success of hardware start-ups, ultimately benefiting the broader economy and national security.

Advisor to the President on Manufacturing Innovation

Oregon Institute of Technology

Former Executive Director of the Oregon Manufacturing Innovation Center, Research & Development

John Burer highlights the challenges facing start-ups providing products to the defense and space sectors. More specifically, he lays out the challenges for companies building complex physical objects to obtain the appropriate infrastructure for research, development, and manufacturing. Additionally, he notes the importance of small businesses in accelerating the deployment of new and innovative technologies for the nation’s defense. The article comes on the heels of a Pentagon report that found the US defense industrial base “does not possess the capacity, capability, responsiveness, or resilience required to satisfy the full range of military production needs at speed and scale.”

The imperative is clear. Developing increased domestic research, development, prototyping, and manufacturing capabilities to build a more robust and diversified industrial base supporting the Department of Defense is one of the nation’s most critical national security challenges. Equally clear is that unleashing the power of nontraditional defense contractors and small businesses is a critical part of tackling the problem.

So how do we do it?

The US defense industrial base “does not possess the capacity, capability, responsiveness, or resilience required to satisfy the full range of military production needs at speed and scale.”

We increase collaboration between government, industry, and academia. Expanding the industrial base to deliver the technologies warfighters need is too large a task for any one of these groups to address alone. It will take the combined power, ingenuity, and know-how of the government, industry, and academia to build a more resilient defense industrial base that can rapidly develop, manufacture, and field the technologies required to maintain a decisive edge on the battlefield.

There is a proven way to increase such collaborative engagements: the use of consortia-based Other Transaction Authority (OTA), the mechanism the DOD uses to carry out certain research and prototype projects. OTA agreements are executed separately from the department’s customary procurement contracts, cooperative agreements, and grants, and provide a greater degree of flexibility.

Consortia bring to bear thousands of small, innovative businesses and academic institutions that are developing cutting-edge technologies in armaments, aviation, energetics, spectrum, and more for the DOD. They are particularly effective at recruiting nontraditional defense contractors into the industrial base, educating them on how to work with the DOD, and lowering the barriers to entry. This provides an established avenue to tap into innovative capabilities to solve the complex industrial base and supply chain challenges the nation faces.

A George Mason University study highlighted the impact that small businesses and nontraditional defense contractors are having on the DOD’s prototyping effort via consortia-based OTAs. Researchers found that more than 70% of prototyping awards made through consortia go to nontraditional defense contractors, providing a proven track record of effective industrial base expansion. Critically, the OTA statute also offers a path to follow-on production to help bridge the proverbial valley of death.

Consortia-based OTAs are an incredibly valuable tool for government, industry, and academia to increase collaboration, competition, and innovation. They should be fully utilized to drive even greater impact to build a more robust, innovative, and diverse defense industrial base and address critical challenges. Nothing less than the nation’s security is at stake.

Executive Committee Chair

National Armaments Consortium

The consortium, with 1,000-plus member organizations, works with the DOD to develop and transition armaments and energetics technology

I am known in the real estate world as The Real Estate Philosopher, and my law firm is one of the largest real estate law practices in New York City. John Burer’s brainchild, the American Center for Manufacturing & Innovation (ACMI), is one of our clients—and one of the most exciting.

To explain, let’s take a look at what Burer is doing. He looked at the US defense industry and saw a fragmented sector with major players and a large number of smaller players struggling to succeed. He also saw the defense industry in need of innovation and manufacturing capacity to stay ahead of the world. Burer then had an inspiration about how to bring it all together. As he explained to me early on, it would be kind of like creating miniature Silicon Valleys.

Silicon Valley started out as a think tank surrounding Stanford University. The thinkers, professors, and similar parties attracted more talented people—and ultimately turned into the finest aggregation of tech talent and successful organizations the world has ever seen.

Smaller players will benefit from being part of an ecosystem focused on a single industry.

Why not, mused Burer, do the same thing in the defense industry? In other words, create a campus (or multiple campuses) where the foregoing would come together: thinkers, at universities, as centers of creation; major industry stalwarts to anchor activities; and a swarm of smaller players to interact with the big players. Voila, a mini-Silicon Valley would be born on each campus.

It sounds simple, but this is a tricky thing to put together. Fortunately, Burer is not just a dreamer, but also solid on the nuts and bolts, so he proceeded with logical steps.

The first step was gaining governmental backing. In landing a $75 million contract from the Department of Defense, Burer picked up both dollars and credibility to jump-start his venture. This became ACMI Federal, the first prong of the ACMI business.

The second step was acquiring and building the campuses. These are real estate deals and, as real estate players know all too well, you don’t just snap your fingers and a campus appears. You need a viable location, permits, deals with anchor tenants, lenders and investors, and much more. So Burer created another prong for the business, called ACMI Properties.

In the third step, Burer realized that many of the smaller occupants of the campuses would be start-ups, which are routinely starved for cash. So he created yet another prong for the business, called ACMI Capital. This is essentially a venture capital fund to back the smaller players.

Now Burer had it all put together: a holistic solution for scaling manufacturing. The campuses will spearhead innovation, critical to US defense. Smaller players will benefit from being part of an ecosystem focused on a single industry. And investors will be pleased that their investments offer solid upside coupled with strong downside protection.

Adler & Stachenfeld

The author is a member of the ACMI Properties’ Advisory Board

Let’s be very clear: the US government, including the Department of Defense, does not manufacture anything. However, what the government does do is establish the regulatory frameworks that allow manufacturing to flourish or flounder.

In this regard, John Burer eloquently argues that the DOD needs new acquisition strategies to meet the logistical needs of the military services. Fortunately, at the insistence of Congress, the DOD is finally taking action to strengthen and secure the defense industrial base. In February 2021, President Biden signed an executive order (EO 14017) calling for a comprehensive review of all critical supply chains, including the defense industrial base. In February 2022, the DOD released its action plan titled Securing Defense-Critical Supply Chains.

At the insistence of Congress, the DOD is finally taking action to strengthen and secure the defense industrial base.

The American Center for Manufacturing & Innovation (ACMI) is working to address two of the critical recommendations in the action plan, focused on strengthening supply chain vulnerabilities in critical chemical supply, and growing the industrial base for developing and producing hypersonic missiles and other hypersonic weapons. As Burer describes, the center’s approach uses an industry campus model. The approach is not new to the DOD. It is being quite successfully used in two other DOD efforts that I am very familiar with: the Advanced Regenerative Manufacturing Institute, which is working to advance biotechnology, and AIM Photonics, which is devoted to advancing integrated photonic circuit manufacturing technology. Each is one of nine manufacturing innovation institutes established by the DOD to create an “industrial common” for manufacturing critical technologies.

A key to the success of ACMI and these other initiatives is that the DOD invests in critical infrastructure that allows shared use by small companies, innovators, and universities. This allows for collaboration across all members of the consortium, ensuring that best practices are shared, shortening development timelines, and ultimately driving down risk by having a common regulatory and safety environment. Anything that drives down program or product risk is a winner in the eyes of the DOD.

ACMI is still somewhat nascent as an organization. While it has been successful in securing DOD funding for its Critical Chemical Pilot Program and subsequently for its munitions campus, only time will tell if ACMI will be able to address the confounding supply chain issues surrounding explosive precursors, explosives, and propellants that are absolutely critical to the nation’s defense.

Department of Chemistry and Biochemistry, University of South Carolina

The author has 35 years of military and civilian service with the US Army, is a retired member of the Scientific and Professional cadre of the federal government’s Senior Executive Service, and served as the US Army Deputy Chief Scientist

Promethean Sparks

Inspired by the National Academy of Sciences (NAS) building, which turns 100 this year, sixth-grade students at the Alain Locke School in West Philadelphia created the Promethean Sparks mural. The students collaborated with artist and educator Ben Volta to imagine how scientific imagery in the NAS building’s Great Hall—from the Prometheus mural by Albert Herter to the golden dome by Hildreth Meière—might look if recreated in the twenty-first century. Their vibrant mural is exhibited alongside a timeline of the NAS building, which depicts the accomplishments of the Academy in the context of US and world events over the past century.

Working with Mural Arts Philadelphia, students merged diverse scientific symbols to create new imagery and ignite new insights. Embodying a collective exploration of scientific heritage, this project empowered the students as creators. The students’ collection of unique designs reflects a journey of experimentation, learning, and discovery. Embracing roles beyond their student identities, they engaged as artists, scientists, and innovators.

Embodying a collective exploration of scientific heritage, this project empowered the students as creators.

Ben Volta works at the intersection of education, restorative justice, and urban planning. He views art as a catalyst for positive change in individuals and the institutions surrounding them. After completing his studies at the University of Pennsylvania, Volta began collaborating with teachers and students in Philadelphia public schools to create participatory art that is both exploratory and educational. Over nearly two decades, he has developed this collaborative process with public schools, art organizations, and communities, receiving funding for hundreds of projects in over 50 schools.

Mural Arts Philadelphia, the nation’s largest public art program, is rooted in the belief that art ignites change. For 40 years, Mural Arts has brought together artists and communities through a collaborative process steeped in mural-making traditions, creating art that transforms public spaces and individual lives.

The Power of Space Art

One of the remarkable qualities of space art is its ability to amplify the mysterious intangibility of the cosmos (as with the late-nineteenth-century French artist Étienne Trouvelot) and at the same time make the unrealized technologies of the future and the worlds beyond our reach seem to be within our grasp (as with the mid-twentieth-century American artist Chesley Bonestell). As Carolyn Russo demonstrates in “How Space Art Shaped National Identity” (Issues, Spring 2024), art has played an important role in making space seem both meaningful and familiar.

Its appeal has not been limited to the United States. In the Soviet Union, the paintings of Andrei Sokolov and Alexei Leonov made the achievements of their nation visible to its citizens, while also showing them what a future in space could look like. The iconography developed by graphic designers for Soviet-era propaganda posters equated spaceflight with progress toward socialist utopia.

Outside of the US and Soviet contexts, space art from other nations didn’t necessarily align with either superpower’s vision. The Ghana-born Nigerian artist Adebisi Fabunmi, in his 1960s woodcut City in the Moon, offered a vision of community life on the moon shaped by the region’s Yoruba people. During an era of decolonization and civil war, the idea of home and community may have appealed to the artist more than utopian aspirations or futuristic technologies. Meanwhile, in Mexico, the artist Sofía Bassi composed surrealist dreamscapes that ponder the connection between outer space and the living world. Bassi’s Viaje Espacial includes neither flags nor space heroics.

Contemporary space art is as likely to question the human future in space as it is to celebrate it. The Los Angeles-based Brazilian artist Clarissa Tossin’s work is critical of plans for the moon and Mars that she worries continue colonial projects or threaten to despoil untouched worlds. Tossin’s digital jacquard tapestry The 8th Continent reproduces NASA images of the moon in a format associated with the Age of Exploration, reminding viewers that our medieval and Renaissance antecedents similarly sought new worlds to conquer and exploit.

Space is also a popular setting or subject matter in the works of Afrofuturist and Latino Futurist artists. These works often seek to recover and reclaim past connections as they chart new future paths. The American artist Manzel Bowman’s collages combine traditional African imagery and ideas with space motifs and high technology to produce a new cosmic imaginary unconstrained by the history of colonialism. The Salvadoran artist Simón Vega’s work reframes the Cold War space race from the perspective of Latin America. Vega reconstructs the space capsules and stations of the United States and the Soviet Union using found materials in ways that make visible the disparities between the nations that used space to stage technological spectacles and those that were left to follow these beacons of modernization.

The many forms that space art has taken over these past decades are surprising, but the persistence of space in art is not. From the moon’s phases represented in the network of prehistoric wall paintings in Lascaux Cave in southwestern France to the images of the heavenly spheres captured by medieval and later painters across many nations, art chronicles our impressions of the universe and our place within it perhaps better than any other cultural form.

Curator of Earth and Planetary Science

Smithsonian’s National Air and Space Museum

Lead curator of the museum’s new Futures in Space gallery

A Space Future Both Visionary and Grounded

In “Taking Aristotle to the Moon and Beyond” (Issues, Spring 2024), G. Ryan Faith argues that space exploration needs a philosophical foundation to reach its full potential and inspire humanity. He calls for NASA to embrace deeper questions of purpose, values, and meaning to guide its long-term strategy.

Some observers would argue that NASA, as a technically focused agency, already grapples with questions of purpose and meaning through its scientific pursuits and public outreach. Imposing a formal “philosopher corps” could be seen as redundant or even counterproductive, diverting scarce resources from more pressing needs. Additionally, if philosophical approaches become too academic or esoteric, they risk alienating key stakeholders and the broader public. There are also valid concerns about the potential for philosophical frameworks to be misused to justify unethical decisions or to shield space activities from public scrutiny.

Yet despite these challenges, there is a compelling case for why a more robust philosophical approach could benefit space exploration in the long run. By articulating a clear and inspiring vision, grounded in shared values and long-term thinking, space organizations can build a sturdier foundation for weathering political and economic vicissitudes. Philosophy can provide a moral compass for navigating thorny issues such as planetary protection, extraterrestrial resource utilization, and settling other celestial bodies. And it may not be a big lift if small steps are taken. For example, NASA could create an external advisory committee on the ethics of space and fund collaborative research grants—NASA’s Office of Technology Policy and Strategy is already examining ethical issues in the Artemis moon exploration program, and the office could serve as one place within NASA to take point. In addition, NASA could bring university-based scholars and philosophers to the agency on a rotating basis, expand public outreach to include philosophical discussions, and host international workshops and conferences on space ethics and philosophy.

By articulating a clear and inspiring vision, grounded in shared values and long-term thinking, space organizations can build a sturdier foundation for weathering political and economic vicissitudes.

Ultimately, the key is to strike a judicious balance between philosophical reflection and practical action. Space agencies should create space for pondering big-picture questions, while remaining laser-focused on scientific, technological, and operational imperatives. Philosophical thinking should be deployed strategically to inform and guide, not to dictate or obstruct. This means fostering a culture of openness, humility, and pragmatism, where philosophical insights are continually tested against real-world constraints and updated in light of new evidence.

As the United States approaches its return to the moon, we have a rare opportunity to shape a future that is both visionary and grounded. By thoughtfully harnessing the power of philosophy while staying anchored in practical realities, we can chart a wiser course for humanity’s journey into the cosmos. It will require striking a delicate balance, but the potential rewards are immense—not just for space exploration, but for our enduring quest to understand our place in the grand sweep of existence. The universe beckons us to ponder big questions, and to act with boldness and resolve.

Former Associate Administrator for Technology Policy and Strategy

Former (Acting) Chief Technologist

National Aeronautics and Space Administration

G. Ryan Faith’s emphasis on ethics in space exploration is well-taken given contemporary concerns regarding artificial intelligence and the recent NASA report on ethics in the Artemis program. As we know from decades of study, the very technologies we hope will be emancipatory more often carry our biases with them into the world. We should expect this to be the case in lunar and interplanetary exploration too. Without clear guidelines and mechanisms for ensuring adherence to an ethical polestar, humans will certainly reproduce the problems we had hoped to escape off-world.

Yet, as a social scientist, I find it strange to assume that embracing a single goal, or “telos,” might supersede political considerations, especially when it comes to funding mechanisms. NASA is a federal agency. The notion of exploration “for all humankind” certainly illuminates and inspires, but ultimately NASA’s mandate is more mundane: to further the United States’ civilian interests in space. The democratic process as practiced by Congress requires annual submission of budgets and priorities to be approved or denied by committee, invoking the classic time inconsistency problem. In such a context, telic and atelic virtues alike are destined to become embroiled and contested in the brouhaha of domestic politics. Until we agree to lower democratic barriers to long-term planning, the philosophers will not carry the day.

The notion of exploration “for all humankind” certainly illuminates and inspires, but ultimately NASA’s mandate is more mundane: to further the United States’ civilian interests in space.

Better grounding for a philosophy of space exploration, then, might arise from an ethical approach to political virtues, such as autonomy, voice, and the form of harmony that arises from good governance (what Aristotle calls eudaimonia). In my own work with spacecraft teams and among the planetary science community, I have witnessed many grounded debates as moments of statecraft, some better handled than others. All are replete with the recognizable tensions of democracy: from fights for the inclusion of minority constituents, to pushback against oligarchy, to the challenge of appropriately managing dissenting opinions. It is possible, then, to see these contestations at NASA over its ambitions not as compulsion “to act as philosophers on the spot,” in Faith’s words, but as examples of virtues playing out in the democratic polis. In this case, we should not leapfrog these essential debates, but ensure they give appropriate voice to their constituents to produce the greatest good for the greatest number.

Additionally, there is no need to assume an Aristotelian frame when there are so many philosophies to choose from. The dichotomies that animate Western philosophies are anathema to adherents of several classical, Indigenous, and contemporary philosophies, who find ready binaries far too reductive. We might instead imagine a philosophy of space exploration that enhances our responsibility to entanglements and interconnectivities: between Earth and moon, human and robotic explorers, environments terrestrial and beyond. Not only would this guiding philosophy be open to more people, cultures, and nations, and better hope to escape “terrestrial biases” by rejecting a ready distinction between Earth and space. It would also hold NASA accountable for maintaining an ethical approach to Earth-space relations throughout its exploration activities, regardless of the inevitable shifts in domestic politics.

Associate Professor of Sociology

Princeton University

G. Ryan Faith succinctly puts his finger on exactly what ails NASA’s human spaceflight program—a lack of telos, the Greek word for purpose. In this concept, you are either working toward a telos, or your efforts are atelic. In the case of the Apollo program, NASA had a very specific teleological goal: to land a man on the moon and return him safely to Earth (my personal favorite part of President Kennedy’s vision) by the end of 1969.

This marked a specific goal, or “final cause.” The Hubble Space Telescope, on the other hand, is very much atelic. That is, there is no defined endpoint; you could literally study the universe forever.

This philosophical concept is all well and good for the ivory tower, but it also has a very practical application at the US space agency.

NASA has gone through several iterations of its human moon exploration program since it was reincarnated during the George W. Bush administration as Project Constellation. I cannot tell you how many times someone has asked me, “Now, why are we going to the moon again? Haven’t we been there? Don’t we have enough problems here on Earth? And don’t we have a huge deficit?”

Why yes, we do have a huge deficit. And the world does feel fraught with peril these days, given the situations in Russia, China, and the Middle East. If NASA is to continue to receive significant federal funding for its relatively expensive human exploration program, it needs to have a crisp answer for why exactly we should borrow money to send people to the moon (again).

Faith brings up an interesting paradox of the Apollo program’s success, namely that “going to the moon eliminated the reason for going to the moon.” And he reminds us that “failing to deliberately engage philosophical debates about values and vision … risks foundering.”

If NASA is to continue to receive significant federal funding for its relatively expensive human exploration program, it needs to have a crisp answer for why exactly we should borrow money to send people to the moon (again).

There are certainly many technical issues the agency needs to grapple with. Do we build a single base on the moon or land in various locations? Do we continue with the Space Launch System rocket, built by Boeing, or switch to the Starship rocket or the much cheaper Falcon Heavy rocket, both built by SpaceX?

But the most important question NASA has to answer is why: why send humans to the moon, risking their lives? Should it be to “land the first woman and first person of color” on the moon, as NASA continuously promotes? Why not explore with robots that are much cheaper and don’t complain nearly as much as astronauts do?

I believe there are compelling answers to these questions. Humans can do things that robots cannot, and sending humans to space is in fact very inspirational. The moon can serve as an important testing ground for flying even deeper into the solar system. But first, the problematic question why demands an answer.

The author would say that JFK’s reasoning was compelling: “We choose to go to the moon and do the other things…not because they are easy, but because they are hard.” A great answer in the 1960s. But in the twenty-first century, NASA’s leadership would be well-served to consider Faith’s article and unleash, in the words of Tom Wolfe, the “power of clarity and vision.”

Senior Fellow, National Center for Energy Analytics

Colonel, USAF (retired)

Former F-16 pilot, test pilot, and NASA astronaut

G. Ryan Faith provides a thoughtful examination of the philosophical foundations for human space exploration—or rather the lack of such foundations. Human space exploration is where this lack is most acute. Commercial, scientific, and military missions have reasons grounded in economic, research, and national security imperatives. They are rooted in particular communities with shared values and discourse. Supporters of human space exploration are found in diffuse communities with many different motivations, interests, and philosophical approaches.

The end of the Apollo program was a shock to many advocates of human space exploration as they assumed, wrongly, that going to the moon was the beginning of a long-term visionary enterprise. It may yet be seen that way by history, but the Apollo landings resulted from US geopolitical needs during the Cold War. They were a means to a political end, not an end in themselves.

Former NASA administrator Mike Griffin gave an insightful speech in 2007 in which he described real reasons and acceptable reasons for exploring space. Real reasons are individual, matters of the heart and spirit. Acceptable reasons typically involve matters of state, geopolitics, diplomacy, and national power, among other more practical areas. Acceptable reasons are not a façade, but critical to large-scale collective action and the use of public resources. They are the common ground upon which diverse individuals come together to create something bigger than themselves.

Real reasons are individual, matters of the heart and spirit. Acceptable reasons typically involve matters of state, geopolitics, diplomacy, and national power, among other more practical areas.

We send more than our machines and even our astronauts into space; we send our values as well. As Faith’s article makes clear, there is value in thinking about philosophy as part of sustainable support for human space exploration. At the same time, the desire for a singular answer can be a temptation to tell others what to do or what to believe. The challenge in space is similar to that of the Founders of the United States: how to have a system of ordered liberty that allows for common purposes while preserving individual freedoms.

As humanity expands into space, I hope the philosophical foundations of that expansion include the values of the Enlightenment that inspired the Founders. In this vein, the National Space Council issued a report in 2020 titled A New Era for Deep Space Exploration and Development that concluded: “At the frontiers of exploration, the United States will continue to lead, as it has always done, in space. If humanity does have a future in space, it should be one in which space is the home of free people.”

Director, Space Policy Institute, Elliott School of International Affairs

George Washington University

Catalyzing Renewables

In “Harvesting Minnesota’s Wind Twice” (Issues, Spring 2024), Ariel Kagan and Mike Reese discuss their efforts targeting green ammonia production using water, air, and renewable electricity to highlight the role of community-led efforts in realizing a just energy transition. The effort showcases an innovative approach to spur research and demonstrations for low-carbon ammonia production and its use as a fertilizer or for other energy-intensive applications such as fuel for grain drying. Two themes stand out: the impact that novel technologies can have on business practices, communities, and, most importantly, the environment; and the critical policies needed to drive change.

The market penetration of renewables in the United States is anticipated to double by 2050, to 42% from 21% in 2020, according to the US Energy Information Administration. However, a report by the Lawrence Berkeley National Laboratory finds that rapid deployment of renewables has been severely impeded in recent years because it takes, on average, close to four years for new projects to connect to the grid. Technologies such as low-carbon ammonia production can therefore catalyze the deployment of renewables by creating value from “islanded” sources—that is, those that are not grid-connected. They also reduce the energy and carbon intensity of the agriculture sector since ammonia production is responsible for 1% of both the world’s energy consumption and greenhouse gas emissions.

US Department of Energy programs such as ARPA-E REFUEL and REFUEL+IT have been instrumental in developing and showcasing next-generation green ammonia production and utilization technologies. Pilot-scale demonstrations, such as the one developed by Kagan and Reese, significantly derisk new technology to help convince early adopters and end users to pursue commercial demonstration and deployment. These programs have also created public-private partnerships to ensure that new technologies have a rapid path to market. Other DOE programs have been driving performance enhancements of enabling technologies such as water electrolyzers to reduce the cost of zero-carbon hydrogen production and further expanding end uses to include sustainable aviation fuels and low-carbon chemicals.

Pilot-scale demonstrations, such as the one developed by Kagan and Reese, significantly derisk new technology to help convince early adopters and end users to pursue commercial demonstration and deployment.

The leap from a new technology demonstration to deployment and adoption is often driven by policy. In this case, the authors cite a tax credit that provides up to $3 per kilogram of clean hydrogen produced. But uncertainties remain: the US government has not provided full guidance on how this and other credits will be applied. Moreover, the production tax credit expires after 10 years, shorter than the typical amortization periods of capital-intensive projects. Our primary research with stakeholders suggests that long-term power purchase agreements with the renewable energy producer and an ammonia (or other product) producer could help overcome barriers to market entry.
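To give a rough sense of scale, here is a back-of-the-envelope sketch, written as a short Python calculation, of the annual value of a $3-per-kilogram hydrogen credit for a hypothetical small green ammonia plant. The plant size, capacity factor, and amortization period are illustrative assumptions, not figures reported by Kagan and Reese or the Department of Energy; only the credit amount and its 10-year duration come from the discussion above.

# Illustrative arithmetic only: rough value of a $3/kg clean hydrogen credit
# for a hypothetical small green ammonia plant. Plant size, capacity factor,
# and amortization period are assumptions made for this sketch.
H2_PER_KG_NH3 = 3.0 / 17.0        # ~0.18 kg of hydrogen per kg of ammonia (stoichiometry)
CREDIT_PER_KG_H2 = 3.00           # USD, the maximum credit cited above
PLANT_TONNES_NH3_PER_DAY = 30     # assumed distributed-scale plant
CAPACITY_FACTOR = 0.9             # assumed
CREDIT_YEARS = 10                 # credit duration cited above
AMORTIZATION_YEARS = 20           # assumed amortization period for comparison

h2_kg_per_day = PLANT_TONNES_NH3_PER_DAY * 1000 * H2_PER_KG_NH3 * CAPACITY_FACTOR
annual_credit_usd = h2_kg_per_day * 365 * CREDIT_PER_KG_H2
print(f"Hydrogen embedded in output: {h2_kg_per_day:,.0f} kg per day")
print(f"Maximum annual credit value: ${annual_credit_usd:,.0f}")
print(f"Credit runs for {CREDIT_YEARS} of an assumed {AMORTIZATION_YEARS}-year amortization period")

Under these assumptions the credit is worth roughly $5 million per year, but it expires halfway through the assumed amortization period, which is the financing gap that long-term purchase agreements would need to bridge.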

Although their article focuses on the United States, the lessons that Kagan and Reese are gaining might also prove deeply impactful worldwide. In sub-Saharan African countries such as Kenya and Ethiopia, crop productivity correlates directly with fertilizer application rates, which fall well below global averages. However, these countries have abundant renewable resources (geothermal, hydropower, wind, and solar) and favorable policy environments to encourage green hydrogen production and use. Capitalizing on the technology being demonstrated in Minnesota, as well as in DOE’s Regional Clean Hydrogen Hubs program, could enable domestic manufacturing, increase self-reliance, and improve food security in these regions and beyond.

Director, Renewable Energy

Technology Advancement and Commercialization

RTI International

How to Procure AI Systems That Respect Rights

In 2002, my colleague Steve Schooner published a seminal paper that enumerated the numerous goals and constraints underpinning government procurement systems: competition, integrity, transparency, efficiency, customer satisfaction, best value, wealth distribution, risk avoidance, and uniformity. Despite evolving nomenclature, much of the list remains relevant and reflects foundational principles for understanding government procurement systems.

Procurement specialists periodically discuss revising this list in light of evolving procurement systems and a changing global landscape. For example, many of us might agree that sustainability should be deemed a fundamental goal of a procurement system to reflect the increasing role of global government purchasing decisions in mitigating the harms of climate change.

In reading “Don’t Let Governments Buy AI Systems That Ignore Human Rights” by Merve Hickok and Evanna Hu (Issues, Spring 2024), I sense that they are basically advocating for the same kind of inclusion—to make human rights a foundational principle in modern government procurement systems. Taxpayer dollars should promote human rights and be used to make purchases with an eye toward processes and vendors that are transparent, ethical, unbiased, and fair. In theory, this sounds wonderful. But in practice … it’s not so simple.

Hickok and Hu offer a framework, including a series of requirements, designed to ensure human rights are considered in the purchase of AI. Unsurprisingly, much of the responsibility for implementing these requirements falls to contracting officers—a dwindling group, long overworked and under-resourced yet subject to ever-increasing requirements and compliance obligations that complicate procurement decisionmaking. A framework that imposes additional burdens on these individuals is doomed to fail, despite the best intentions.

The authors’ suggestions also would inadvertently erect substantial barriers to entry, dissuading new, innovative, and small companies from engaging in the federal marketplace. The industrial base has been shrinking for decades, and burdensome requirements not only cause existing contractors to forgo opportunities, but also deter new entrants from seeking to do business with the federal government.

A framework that imposes additional burdens on these individuals is doomed to fail, despite the best intentions.

Hickok and Hu brush aside these concerns without citing data to bolster their assumptions. Experience cautions against this cavalier approach. These concerns are real and present significant challenges to the authors’ aspirations.

Still, I sympathize with the authors, who are clearly and understandably frustrated with the apparent ossification of practices and the glacial pace of innovation. Which leads me to a simple, effective, yet oft-ignored suggestion: rather than railing against the existing procurement regime, talk to the procurement community about your concerns. Publish articles in industry publications. Attend and speak at the leading government procurement conferences. Develop a community of practice. Meet with procurement professionals and policymakers to help them understand the downstream consequences of buying AI without fully understanding its potential to undermine human rights. Most importantly, explain how their extensive knowledge and experience can transform not only which AI systems they procure, but how they buy them.

This small, modest step may not immediately generate the same buzz as calls for sweeping regulatory reform. But engaging with the primary stakeholders is the most effective way to create sustainable, long-term gains.

Associate Dean for Government Procurement Law Studies

The George Washington University Law School

Merve Hickok and Evanna Hu stage several important interventions in artificial intelligence antidiscrimination law and policy. Chiefly, they pose the question of whether and how it might be possible to enforce AI human rights through government procurement protocols. Through their careful research and analysis, they recommend a human rights-centered process for procurement. They conclude that the Office of Management and Budget (OMB) guidance on the federal government’s procurement and use of AI can effectively reflect these types of oversight principles to help combat discrimination in AI systems.

The authors invite a critical conversation in AI and the law: the line between hard law (e.g., statutory frameworks with enforceable consequences) and soft law (e.g., policies, rules, and procedures) and other executive and agency action that can be structured within the administrative state. Federal agencies, as the authors note, are now investigating how best to comply with the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (E.O. 14110), released by the Biden administration on October 30, 2023. Following the order’s directives, the OMB Policy to Advance Governance, Innovation, and Risk Management in Federal Agencies’ Use of Artificial Intelligence, published on March 28, 2024, directs federal agencies to focus on balancing AI risk mitigation with AI innovation and economic growth goals.

Both E.O. 14110 and the OMB Policy reflect soft law approaches to AI governance. What counts as hard law and what counts as soft law in the field of AI are moving targets. First, there is a distinction between human rights law and human rights as reflected in fundamental fairness principles. Similarly, there is a distinction between civil rights law and what is broadly understood to be the government’s pursuit of antidiscrimination objectives. The thesis that Hickok and Hu advance involves the latter in both instances: the need for the government to commit to fairness principles and antidiscrimination objectives under a rights-based framework.

What counts as hard law and what counts as soft law in the field of AI are moving targets.

AI human rights can be viewed as encompassing or intersecting with AI civil rights. The call to address antidiscrimination goals with government procurement protocols is critical. Past lessons on how to approach this are instructive. The Office of Federal Contract Compliance Programs (OFCCP) offers a historical perspective on how a federal agency can shape civil rights outcomes through federal procurement and contracting policies. OFCCP enforces several authorities to ensure equal employment opportunities, one of the cornerstones of the Civil Rights Act of 1964. OFCCP’s enforcement jurisdiction includes Executive Order 11246; the Rehabilitation Act of 1973, Section 503; and the Vietnam Era Veterans’ Readjustment Assistance Act of 1974. OFCCP, in other words, enforces a combination of soft and hard laws to execute civil rights goals through procurement. OFCCP is now engaged in multiple efforts to shape procurement guidance to mitigate AI discriminatory harms.

Finally, Senators Gary Peters (D-MI) and Thom Tillis (R-NC) recently introduced a bipartisan proposal to provide greater oversight of potential AI harms through the procurement process. The proposed Promoting Responsible Evaluation and Procurement to Advance Readiness for Enterprise-wide Deployment (PREPARED) for AI Act mandates several evaluative protocols before the federal government procures and deploys AI systems, underscoring the need to test AI premises, one of the key recommendations advanced by Hickok and Hu. Preempting AI discrimination through federal government procurement protocols demands both soft law, such as E.O. 14110 and the OMB Policy, and hard law, such as the bipartisan bill proposed by Senators Peters and Tillis.

Professor of Law

Director, Digital Democracy Lab

William & Mary Law School

Merve Hickok and Evanna Hu propose a partial regulatory patch for some artificial intelligence applications via government procurement policies and procedures. The reforms may be effective in the short term in specific environments. But a broader perspective, which the AI regulatory wave generally lacks, raises some questions about widespread application.

This is not to be wondered at, for AI raises considerations that make it especially difficult for society to respond effectively. Eight problems in particular stand out:

  1. The definition problem. Critical concepts such as “intelligence,” “agency,” “free will,” “cognition,” “consciousness,” and even “artificial intelligence” are not well understood, involve different technologies from neural networks to rule-based expert systems, and have no clear and accepted definitions.
  2. The cognitive technology problem. AI is part of a cognitive ecosystem that increasingly replicates, enhances, and integrates human cognition and psychology into metacognitive structures at scales from the relatively simple (e.g., Tesla and Google Maps) to the highly complex (e.g., weaponized narratives and China’s social credit system). It is thus uniquely challenging in its implications for everything from education to artistic creation to crime to warfare to geopolitical power.
  3. The cycle time problem. Today’s regulatory and legal frameworks lack any capability to match the rate at which AI is evolving. In this regard, Hickok and Hu’s suggestion to add additional layers of process onto bureaucratic systems that are already sclerotic, such as public procurement, would only exacerbate the decoupling of regulatory and technological cycle times.
  4. The knowledge problem. No one today has any idea of the myriad ways in which AI technologies are currently being used across global societies. Major innovators, including private firms, military and security institutions, and criminal enterprises, are not visible to regulators. Moreover, widely available tool sets have democratized AI in ways that simply couldn’t happen with older technologies.
  5. The scope of effective regulation problem. Potent technologies such as AI are most rapidly adopted by fringe elements of the global economy, especially the pornography industry and criminal enterprises. Such entities pay no attention to regulation anyway.
  6. The inertia problem. Laws and regulations once in place are difficult to modify or sunset. They are thus particularly inappropriate when the subject of their action is in its very early stages of evolution, and changing rapidly and unpredictably.
  7. The cyberspace governance problem. International agreements are unlikely because major players manage AI differently. For example, the United States relies primarily on private firms, China on the People’s Liberation Army, and Russia on criminal networks.
  8. The existential competition problem. AI is truly a transformative technology. Both governments and industry know they are in a “build it before your competitors, or die” environment—and thus will not be limited by heavy-handed regulation.

AI raises considerations that make it especially difficult for society to respond effectively.

This does not mean that society is powerless. What is required is not more regulation on an already failing base, but rather new mechanisms to gather and update information on AI use across all domains; enhance adaptability and agility of institutions rather than creating new procedural hurdles (for example, eschewing regulations in favor of “soft law” alternatives); and encourage creativity in responding to AI opportunities and challenges.

More specifically, two steps can be taken even in this chaotic environment. First, a broad informal network of AI observers should be tasked with monitoring the global AI landscape in near real time and reporting on a regular basis without any responsibility to recommend policies or actions. Second, even if broad regulatory initiatives are dysfunctional, there will undoubtedly be specific issues and abuses that can be addressed. Even here, however, care should be taken to remember the unique challenges posed by AI technologies, and to try to develop and retain agility, flexibility, and adaptability whenever possible.

President’s Professor of Engineering

Lincoln Professor of Engineering and Ethics

Arizona State University

Decolonize the Sciences!

In “Embracing the Social in Social Science” (Issues, Spring 2024), Rayvon Fouché covers the full range of racialized phenomena in science, from criminal use of Black bodies as experimental subjects to the renaissance he maps out for new anti-racism networks, programs, and fellowships. His call for “baking in” the social critique, rather than adding it as mere diversity sprinkles on top, could not be clearer and more compelling.

Yet I know from my experience on National Science Foundation review boards, at science and engineering conferences, and in conversations with all sorts of scientific professionals that this depth is almost always mistranslated, misidentified, and misunderstood. Fouché is calling for creating a transformation, but most organizations and individuals are hearing only the elimination of bias. What is the difference?

The distinction is perhaps most obvious in my own field of computing. For example, loan algorithms tend to produce higher interest rates for Black home buyers. Ethnicity is not an input variable: attributes that merely correlate with “being Black” can be inferred computationally, even without human directives to do so. This makes the bias difficult to challenge through the legal system, but tempting to treat as a purely algorithmic problem.
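To make that mechanism concrete, here is a minimal sketch in Python, using entirely fabricated data, of how a model that is never given a protected attribute can still reproduce a group disparity when a correlated feature (here, a synthetic neighborhood indicator) acts as a proxy. The variables and numbers are invented for illustration and do not describe any real lending system.

# Synthetic illustration of proxy inference: the model never sees "group",
# yet its predictions differ across groups because a correlated feature
# (neighborhood) carries that information. All data are fabricated.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                      # protected attribute, never used as a feature
neighborhood = (0.8 * group + rng.random(n) > 0.5).astype(float)  # correlates with group
credit_score = rng.normal(700, 50, n)              # same distribution for both groups
# Historical rates were higher in some neighborhoods regardless of creditworthiness.
rate = 7.0 - 0.004 * (credit_score - 700) + 0.5 * neighborhood + rng.normal(0, 0.1, n)

X = np.column_stack([credit_score, neighborhood])  # note: group is not included
model = LinearRegression().fit(X, rate)
pred = model.predict(X)
print(f"Mean predicted rate, group 0: {pred[group == 0].mean():.2f}%")
print(f"Mean predicted rate, group 1: {pred[group == 1].mean():.2f}%")

Even though the protected attribute is excluded, the model’s predictions differ between the two groups, which is why removing the variable does not by itself remove the bias.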

As important as the elimination of bias truly is, it creates the illusion that if we could only eliminate bias, the problem would be solved. Eliminating bias does not address the more significant problem: in this case, that homes and loans are extremely expensive to begin with. The costs of loans and dangers of defaulting have destroyed working-class communities of every color; and “too big to fail” means that our algorithmic banking system turns risk borne by the entire nation’s economy into profits for the banks. And that is not just the case for banking. In health, industry, agriculture, and science and technology in its many forms, eliminating bias merely creates equal exploitation for all, equally unsustainable lives, and forms of wealth inequality that “see no color.”

As important as the elimination of bias truly is, it creates the illusion that if we could only eliminate bias, the problem would be solved.

My colleagues will often conclude at this point that I am pointing toward capitalism, but I have spent my career trying to point out that communist nations generally show the same trends: wealth inequality, pollution, failure to support civil rights. And that is, from my point of view, largely because they use the same science and engineering, formulated around the principles of optimization for extracting value. Langdon Winner, the scholar known for his “artifacts have politics” thesis, was wrong, but only in that the destructive effects of technological artifacts occur no matter what the “politics” is. Communists extract value to the state, and capitalists extract value to corporations, but both alienate it from the cycles of regeneration that Indigenous societies were famously dedicated to. If we want a just and sustainable future, a good place to start is to decolonize our social sciences rather than merely critique science for failing to embrace them, and perhaps to develop that work as mutual inquiry across the divide.

What would it take to create a science and technology dedicated not to extracting value, but rather to nurturing its circulation in unalienated forms? Funding from NSF, the OpenAI Foundation, and others has kindly allowed our research network to explore these possibilities. We invite you to examine what regenerative forms of technoscience might look like at https://generativejustice.org.

Professor, School of Information

University of Michigan

Rayvon Fouché argues that social science, especially those branches that study inequity, must become integral to the practice of science if we want to both address and avoid egregious harms of our past and present. Indeed, methodologies and expertise from the social sciences are rarely included in the shaping and practice of scientific research, and when they are, they are only what Fouché likens to “sprinkles on a cupcake.”

Metaphors are essential to describing abstract processes, and every gifted science teacher I ever had excelled at creating them to help students understand how invisible forces can create such visible effects and govern the behavior of the things that we can feel and see. As the physicist Alan Lightman famously observed, “metaphor is critical to science.” The metaphors that we use matter, perhaps especially in regard to scientific understanding and education, and I find the metaphor of “science” as a cupcake and the social sciences as sprinkles very useful.

The many disasters and broken promises that have destroyed Black and other marginalized peoples’ trust in the medical establishment might have been averted had experts from other fields been empowered to make persuasive arguments against those practices beforehand. To move to another metaphor, Fouché describes the long-standing effects of “scientific inequity” as practiced upon Black populations as a “residue,” a sticky trace that persists across historical time. The image vividly explains why the mistrust of science among some people of color, and their unwillingness to engage with it as a career, can be understood as an empirically informed and rational decision.

The mistrust of science among some people of color, and their unwillingness to engage with it as a career, can be understood as an empirically informed and rational decision.

As Fouché shows, science becomes unethical and uncreative when it excludes considerations of the social and the very real residues of abuse and disregard that produce disaffection and disengagement. At the same time, the “social” has itself been the object of mistrust and cynicism, with some observers asserting that governments ought not be responsible for taking care of people, but rather that individuals and families need to rely upon themselves. Such ideas helped fuel the systematic defunding of public higher education and other “social” services. STEM fields and occupational fields such as business became more popular because they were seen as the best routes for students to pay off the sometimes life-long debt of a necessary college education. Correspondingly, the social sciences and the humanities have become luxury goods. The state’s unwillingness to support training in these fields is one reason that nonscientific expertise is viewed as a sprinkle, sometimes even to those of us who practice it and teach it to others.

At the same time, this expertise has never been needed more: the fascination, excitement, and horror of artificial intelligence’s breakneck and generally unregulated and unreflective adoption suggest that we greatly need experts in metaphor, meaning, inference, history, aesthetics, and style to “tune” these technologies and make them usable, or even to convincingly advocate for their abandonment. In a bit of good news, recent research on “algorithmic abandonment” demonstrates that users and institutions will stop using applications once they learn that they consistently produce harmful effects. At the same time, it’s hard to “embrace the social” when there is less of it to get our arms around. The scientific community still needs what Fouché calls a “moral gut check,” akin to Martin Luther King Jr.’s 1967 encouragement to “support the sustenance of human life.” For to care about the social is to care for each other rather than just for ourselves.

Gwendolyn Calvert Baker Collegiate Professor, Department of American Culture and Digital Studies Institute

University of Michigan, Ann Arbor

Rayvon Fouché’s call to “lean into the social” and to reckon with science’s “residues of inequity” must be answered if scientists are to help create more equitable and just societies. Achieving this goal, he admits, will require the difficult work of transforming the “traditions, beliefs, and institutions” of science. I concur.

Yet I want to clarify that held within this science that requires transformation are the social sciences themselves. After World War II, the natural, physical, and social sciences all were reconstructed from the same conceptual cloth, one that assumed that truth and justice depended upon the separation of science from society.

For Fouché, this separation must end. His reasons are compelling: without fundamental knowledge of and engagement with the communities and societies out of which sciences arise, scientists operate in moral and social vacuums that too often lead to harm when good is intended. Yet the idea that science should exist in a “pure” space apart from society is deeply baked into today’s scientific institutions.

It could have been otherwise. After the US bombings of Hiroshima and Nagasaki, some prominent politicians and scientists called for an end to the purposeful seclusion of science from society that the Manhattan Project embodied. However, a countervailing force emerged in a troubling form: pseudoscience. At the same time the United States funded physicists to create atom bombs, Germany and the Soviet Union—building on efforts begun in the United States—bluntly directed genetics into policies of racial purification. In response, most geneticists argued that the murderous force of resulting racial hygiene laws lay not in their science, but rather in its perversion by political power. As a result, many geneticists retreated from their political activism of the 1920s and ’30s.

The idea that science should exist in a “pure” space apart from society is deeply baked into today’s scientific institutions.

For their part, prominent social scientists, including the pioneering sociologist Robert K. Merton, argued that science was a wellspring of the ethos that democracies needed, and that to ensure this ethos survived, science should exist in an autonomous space. Just like the markets of classical political economy, science ought to be left alone. This argument expanded to become a central tenet of the West during the Cold War.

In a matter of a few short years, then, science transformed from a terrifying destructive force that needed to be held in check by democratic institutions to one that would itself protect democracies. The natural, physical, and social sciences all embraced this idea of being inherently good and democratic—and thus to be protected from abuse by the unjust concentration of government power. This historically and institutionally entrenched illusion has left contemporary sciences, including the social sciences, poorly equipped to recognize and respond to the many and consequential ways in which their questions inextricably entangle with questions of power.

I agree with Fouché that more trustworthy sciences require addressing these entanglements. The critical question is how. My colleagues and I are currently seeking answers through the Leadership in the Equitable and Ethical Design (LEED) of Science, Technology, Engineering, Mathematics, and Medicine initiative. After decades of building institutional practices and protocols designed to separate science from society, this task will not be easy. Those of us involved with LEED of STEMM look forward to working with Fouché and other visionary scientific leaders to rebuild scientific institutions not around Cold War visions of security and separation, but rather around contemporary critical needs to forge the common grounds of collaboration.

Professor of Sociology

Founding Director, Science and Justice Research Center

University of California, Santa Cruz

What Can Artificial Intelligence Learn From Nature?

Refik Anadol Studio, "Living Archive: Nature"
Living Archive: Nature showcases the output from the Large Nature Model (LNM) by Refik Anadol Studio.

Refik Anadol Studio in Los Angeles maintains a research practice centered around discovering and developing novel approaches to data narratives and machine intelligence. Since 2014, the studio has been working at the intersection of art, science, and technology to advance creativity and imagination using big data while also investigating the architecture of space and perception.

To explore how the merging of human intuition and machine precision can help reimagine and even restore environments, the studio’s generative artificial intelligence project, the Large Nature Model (LNM), gathered more than a half billion data points about the fauna, flora, fungi, and landscapes of the world’s rainforests. These data are ethically sourced from publicly available archives in collaboration with the Smithsonian Institution, National Geographic Society, and London’s Natural History Museum.

In addition to working with existing image and sound archives in public domains and collaborating with renowned institutions, studio director Refik Anadol and his team ventured into rainforests in Amazonia, Indonesia, and Australia. They employed technologies such as LiDAR and photogrammetry, and captured ambisonic audio and high-resolution visuals to represent the essence of these ecosystems. With support from Google Cloud and NVIDIA, the team is processing this vast amount of data and plans to visit thirteen more locations around the world, developing a new understanding of the natural world through the lens of artificial intelligence. 

The team envisions generative reality as a complete fusion of technology and art, where AI is used to create immersive environments that integrate real-world elements with digital data. “Our vision for the Large Nature Model goes beyond being a repository or a creative research initiative,” says Anadol. “It is a tool for insight, education, and advocacy for the shared environment of humanity.” The LNM seeks to promote awareness about environmental concerns and stimulate inventive solutions by blending art, technology, and nature. The team trains the models to produce realistic artificial sounds and images, and showcases these outputs in art installations, educational programs, and interactive experiences.

Anadol sees the LNM’s potential to enrich society’s understanding and appreciation of nature as well as to supplement existing art therapy methods. Making the calming effects of nature available to people, even when they are unable to access natural environments directly, can be particularly beneficial in urban settings or for people with limited mobility.

In the future, the intersection of technology, art, and nature will become increasingly vital. Projects like the LNM exemplify how artificial intelligence might serve as a powerful tool for environmental advocacy, education, and creative expression. As the integration of sensory experiences and generative realities continues to push the boundaries of what is possible, the studio hopes to inspire collective action and a deeper appreciation for the environment.

Refik Anadol Studio, Living Archive: Nature (image captions):
The LNM transforms more than 100 million images of the Earth’s diverse flora, fauna, and fungi into breathtaking visuals.
Processing extensive datasets from rainforest ecosystems, the LNM enables the creation of hyperrealistic environmental experiences.
The development of the LNM is grounded in extensive interdisciplinary research and collaboration.
Generative AI sets a new benchmark for how technology can be used to promote a deeper engagement with the planet’s ecosystems.

A Look at Differential Tuition

In “Tools That Would Make STEM Degrees More Affordable Remain Unexamined” (Issues, Spring 2024), Dominique J. Baker makes important points regarding the state of college affordability for students pursuing STEM majors. As a fellow scholar of higher education finance, I wish to elaborate on the importance of disaggregating data within the broad fields of STEM due to differences in tuition charges and operating costs based on individual majors.

First, Baker notes that differential tuition is prevalent at public research universities, citing data indicating that just over half of all institutions charged differential tuition for at least one field of study in the 2015–16 academic year. I collected data on differential tuition policies across all public universities for 20 years and found that 56% of research universities and 27% of non-research universities charged differential tuition in engineering in the 2022–23 academic year, up from 23% and 7%, respectively, in 2003–04.

Differential tuition policies primarily affect programs located within engineering departments or colleges, with computer science programs also being frequently subject to differential tuition. There are two likely reasons why these programs most often charge higher tuition. The first is that student demand for these majors is strong and the market will bear higher charges. This is often why business schools choose to adopt differential tuition, and likely contributes to decisions to charge differential tuition in engineering and computer science.

Differential tuition policies primarily affect programs located within engineering departments or colleges, with computer science programs also being frequently subject to differential tuition.

The other reason is that engineering is the field with the highest instructional costs per student credit hour, based on research by Steven W. Hemelt and colleagues. They have estimated that the costs for electrical engineering are approximately twice as much as for mathematics and approximately 50% more than for STEM fields such as biology and computer science. Add in high operating expenses for research equipment and facilities, and it is not surprising that engineering programs often operate at a loss even with differential tuition.

The higher education community has become accustomed to detailed data on the debt and earnings of graduates by field of study, which has shown substantial variations in student outcomes within the broad umbrella of STEM fields. Yet there is also substantial variation by major in both the prices that students pay and the costs that universities face to educate students. Both of these areas deserve further attention from policymakers and researchers alike.

Professor and Head, Department of Educational Leadership and Policy Studies

University of Tennessee, Knoxville

Enhancing Regional STEM Alliances

A 2011 report from the National Research Council, Successful K–12 STEM Education, identified characteristics of highly successful schools and programs. Key elements of effective STEM instruction included a rigorous and coherent curriculum, qualified and knowledgeable teachers, sufficient instructional time, assessment that supports instruction, and equal access to learning opportunities. What that report (which I led) did not say, however, was how to create highly effective schools and programs. A decade later, the National Academies’ 2021 Call to Action for Science Education: Building Opportunity for the Future helped answer that challenge.

In “Boost Opportunities for Science Learning With Regional Alliances” (Issues, Spring 2024), Susan Singer, Heidi Schweingruber, and Kerry Brenner elaborate on one of the key strategies for creating effective STEM learning opportunities. Regional STEM alliances—what the authors call “Alliances for STEM Opportunity”—can enhance learning conditions by increasing coordination among the different sectors with interests in STEM education, including K–12 and postsecondary schools, informal education, business and workforce development, research, and philanthropy.

Coordination is valuable because of the alignment it promotes. For example, aligning school experiences with workforce opportunities creates a better fit between schooling and jobs; aligning K–12 with postsecondary learning, including through dual enrollment, gives students a boost toward productive futures; and aligning research with practice means that research may actually make a difference for what happens in classrooms.

Working together on mutual aims helps us find common ground instead of highlighting divisions.

In calling for regional alliances, the authors are building on the recent expansion of education research-practice partnerships (RPPs), which are “long-term, mutually beneficial collaborations that promote the production and use of rigorous research about problems of practice.” In RPPs, research helps to strengthen practice because the investigations pursued are jointly determined and the findings are interpreted with a collaborative lens. The National Network of Education Research-Practice Partnerships now includes over 50 partnerships across the country. The Issues authors have expanded the partnership notion by embedding it in the full education ecosystem, including educational institutions, communities, and the workforce.

In these polarized times, alliances that surround STEM education are particularly important. Working together on mutual aims helps us find common ground instead of highlighting divisions. Allied activities help to build social capital, that is, relations of trust and shared expectations that serve as a resource to foster success. Regional alliances can help create both “bridging social capital,” in which members of different constituencies forge ties based on interdependent interests, and “bonding social capital,” in which connections among individuals within the same organizations are strengthened as they work together with outside groups. In these ways, regional alliances can help defuse the tensions that surround education so that educators can focus on the core work of teaching and learning.

While workforce development is a strong rationale for regional alliances, Singer, Schweingruber, and Brenner note that this is not their only goal. Effective STEM education is essential for all students, whatever their future trajectories. Once again reflecting the times we live in, young people need scientific literacy to understand the challenges and opportunities of daily life, whether in technology, health, nutrition, or the environment. Alliances for STEM Opportunity can promote a pathway to better living as much as an avenue to productive work.

President

William T. Grant Foundation

Building on the many salient points that Susan Singer, Heidi Schweingruber, and Kerry Brenner raise, I would like to emphasize the unique potential of community colleges to respond to the challenge of creating a robust twenty-first-century STEM workforce and science-literate citizenry. The authors rightfully point out how regional alliances can boost dual enrollment and improve the alignment of community college programs. And I applaud their mention of Valencia College in Orlando, Florida, a winner of the Aspen Prize for Community College Excellence that many others could continue to learn from.

I would add that embracing the “community” dimensions of community colleges would accelerate the nation on the path to the authors’ goals. A growing set of regional collective impact initiatives ask colleges to be community-serving partners in efforts to build thriving places to live for young people and their families. An emphasis on alleviating student barriers, exacerbated by the COVID-19 pandemic, has put pressure on these institutions to build out basic needs services (e.g., food supports, counseling, benefit navigation) for students and community members. Incidentally, I hope we don’t soon forget the thousands of lifesaving COVID shots delivered at these schools.

Many community colleges have mission statements that are community-oriented, such as Central Community College in Nebraska, whose mission is to maximize student and community success. Moreover, because students of color disproportionately enroll in community colleges, these institutions often play an outsize role in advancing racial equity, offering paths to upward mobility that must overcome longstanding structural barriers.

Despite these many roles, community colleges are judged—and funded—primarily based on enrollment and the academic success of their students. These measures miss key benefits that these colleges provide to communities and don’t encourage colleges to focus their efforts on community well-being, including the cultivation of science literacy.

Underneath this misalignment lies the opportunity. While open access schools typically can’t compete on traditional completion, earnings, and selectivity metrics that four-year colleges are often judged on, they can compete much better on community measures because their primary audience and dollars stay more local. By highlighting how valuable they truly are locally through regional alliances, these schools could secure more sustained public investment and support more students and community members in a virtuous cycle.

While open access schools typically can’t compete on traditional completion, earnings, and selectivity metrics that four-year colleges are often judged on, they can compete much better on community measures because their primary audience and dollars stay more local.

Additionally, emerging leaders of community colleges who have risen through the ranks during the student success movement of the past 20 years are eager for “next level” success measures to drive their institutions forward. Instead of prioritizing only enrollment and completion rates, institutional leaders could set goals with regional alliance partners for scaling science learning pathways from kindergarten through college, then work together to address unmet basic needs through partnerships with local community-based organizations, ultimately helping more BIPOC (Black, Indigenous, and People of Color) students obtain meaningful and family-sustaining careers—in STEM and other high-demand fields.

If we truly aspire to have a STEM workforce that is more representative of the country and equity in STEM education more broadly, regional alliances must intentionally engage and support the institutions where students of color are enrolling—and for many, that is community colleges.

Director, Building America’s Workforce

Urban Institute

It has long been observed that collaborations, alliances, and strategic partnerships can accomplish greater systemic change in science, technology, engineering, and mathematics (STEM) education and research than any single institution acting alone. There is an imperative for the nation’s competitiveness that we cultivate and harness the talent of individuals with a breadth of knowledge, backgrounds, and expertise.

The American Association for the Advancement of Science has spearheaded the development of a national strategy referred to as the STEMM Opportunity Alliance—the extra M refers to medicine—to increase access and enhance the inclusion of all the nation’s talent to accelerate scientific and medical innovations and discoveries. AAAS collaborates with the Doris Duke Foundation and the White House Office of Science and Technology Policy in this effort. The alliance’s stated goal, set for 2050, is to “bring together cross-sector partners in a strategic effort to achieve equity and excellence in STEMM.”

There is an imperative for the nation’s competitiveness that we cultivate and harness the talent of individuals with a breadth of knowledge, backgrounds, and expertise.

Susan Singer, Heidi Schweingruber, and Kerry Brenner offer a similar approach. What is compelling about their essay is not only the delineation of the positive impact of different cross-sector collaborations across the nation on outcomes for science teaching and learning, but also the focus on the local community or region. The authors advocate for “Alliances for STEM Opportunity” along with a coordinating hub to ensure strong connections, a clear (consistent) understanding of regional and local priorities, and a collaborative action plan for addressing the needs of the community through effective and integrated science education.

This recommendation is reminiscent of the National Science Foundation’s Math and Science Partnerships program, started in 2002 but now discontinued. One of its focal areas, “Community Enterprise for STEM Learning,” was designed to expand partnerships “in order to provide and integrate necessary supports for students.” Singer, Schweingruber, and Brenner make a strong case and provide evidence for why regional alliances could lead (and have led) to improvements, which include enhanced teacher preparation, increased scores on standardized tests, a more knowledgeable workforce with relevant skills for industry, and a stronger STEM infrastructure in the region. Not only does this approach make sense; it has also been shown to be effective. I know firsthand the significant benefits of alliances and partnerships from my former role as an NSF program officer, where I served as the co-lead of the Louis Stokes Alliances for Minority Participation Program and a member of the inaugural group of program officers that implemented the INCLUDES program, a comprehensive effort to enhance US leadership in STEM discovery and innovation.

As a member of the executive committee for the National Academies of Sciences, Engineering, and Medicine’s Roundtable on Systemic Change in Undergraduate STEM Education, I have engaged in wide-ranging discussions about the various factors that have been shown to contribute to the transformation of the STEM education ecosystem for the benefit of the students we are preparing to be STEM professionals, researchers, innovators, and leaders. Systemic change does not occur in silos; it occurs through intentional collaborations and a commitment from all stakeholders to transform infrastructure and culture.

Vice Provost for Research

Spelman College

It is a delight to see Alliances for STEM Opportunity highlighted by Susan Singer, Heidi Schweingruber, and Kerry Brenner. Over the past three years, serving as the executive director of one of the nation’s first STEM Learning Ecosystems (a term coined by the Teaching Institute for Excellence in STEM), in Tulsa, Oklahoma, I’ve witnessed the Tulsa Regional STEM Alliance address enduring challenges in STEM education—issues that surpass local reforms and political shifts.

The authors rightly highlight that alliances are uniquely positioned to address persistent problems, even as reforms, politics, and priorities fluctuate. Improving learning pathways, reducing teacher shortages, increasing access to teacher resources and evidence-based teaching, promoting internal accountability, and supporting continuous improvement are all issues that might be partially resolved at the local level. However, these solutions require an infrastructure that allows for their dissemination and scaling to achieve systemic equity.

This vision represents a shift from workforce-centric thinking toward holistic youth development thinking.

At the Tulsa Regional STEM Alliance—our iteration of the Alliances for STEM Opportunity—we agree that articulating a shared vision is the first step. Ours has evolved over the past decade, and we have found great alignment around our stated quest to “inspire and prepare all youth for their STEM-enabled future.” This vision represents a shift from workforce-centric thinking toward holistic youth development thinking.

To reach our goal, we collaborate with 300 partners to ensure all youth have access to excellent STEM experiences in school, out of school, and in professional settings. This entails numerous collaborations; funding and resourcing educators and partners; leading or hosting professional learning; supporting program planning and evaluation; and creating youth, family, and community events that ensure all stakeholders understand and truly feel connected to our motto: “STEM is Everywhere. STEM is Everyone. All are Welcome.”

By continually defining our shared work around excellent experiences and how they feed into our shared vision, we raise awareness and support an ambitious view of STEM education that advances learning in its individual and integrated disciplines. This enables us to advocate more effectively for funding, development, implementation, and improvement efforts from a position that is both principled and consistent—qualities that are increasingly needed in education.

With clarity on the value of STEM as a vehicle for ensuring foundational disciplinary understandings, we can carefully align stakeholders around a simple idea: STEM aims to address the issue of too few students graduating with competence in the STEM disciplines, confidence in themselves, and a pathway to the STEM workforce. STEM cannot meet this demand if the experiences in which we invest our time, talent, and resources do not advance our excellent experiences (shared work) and move us closer to inspired and prepared youth (our shared vision).

I echo the authors’ call for expanded funding and research into this evolving infrastructure and encourage others to connect with their local alliances by visiting https://stemecosystems.org/ecosystems.

Executive Director

Tulsa Regional STEM Alliance

Susan Singer, Heidi Schweingruber, and Kerry Brenner describe the importance of local collaborations among schools, postsecondary institutions, informal education, businesses, philanthropies, and community groups for improving science education from kindergarten through postsecondary education. Regional alliances bring together diverse stakeholders to improve science education in a local context, which is a powerful strategy for achieving both workforce development and goals for science literacy. As the authors also note, regional alliances contribute to the development of a better civic society. These alliances provide a venue for people to find common ground so that progress does not get lost to political polarization.

Opening pathways to STEM careers through alliances has broad societal benefits beyond just creating more scientists—it makes science more accessible and relevant to students’ lives, which is crucial for individual and societal well-being and effective participation in democracy. Science education emphasizes the importance of critical thinking, questioning assumptions, and evidence-based conclusions. These skills are essential for effective civic participation, as they enable individuals to evaluate claims, consider multiple perspectives, and engage in constructive dialogue.

Regional alliances can contribute to the development of a better civic society that fosters informed, engaged, and socially responsible citizens.

Regional alliances can promote the integration of these skills throughout a school’s science curriculum and in community-based learning experiences. They can engage students in authentic, community-based science projects that address local issues, such as environmental conservation, public health, or sustainable development. By participating in these projects, students can develop a sense of agency, empathy, and social responsibility, as well as practical skills in problem-solving, collaboration, and communication. I want to highlight three ways regional alliances can contribute to the development of a better civic society that fosters informed, engaged, and socially responsible citizens.

First, regional alliances can bring together schools, businesses, government agencies, and community organizations to collaborate on science-based initiatives that enhance community resilience. For example, alliances can work on projects related to disaster preparedness, climate change adaptation, or public health emergencies. These partnerships can strengthen social capital, trust, and collective problem-solving capacity, which are essential for a thriving civic society.

Second, regional alliances can demonstrate ways to engage in respectful, evidence-based dialogue around controversial issues. This can include providing professional learning for teachers on facilitating difficult conversations, hosting community forums that model constructive discourse, and encouraging students to practice active listening and perspective-taking.

Third, regional alliances can create opportunities for students to take on leadership roles, express their ideas, and advocate for change in their communities. For example, alliances can support student-led science communication campaigns, development of policy recommendations, or community service projects. By empowering youth to be active participants in shaping their communities, alliances can contribute to the development of a more vibrant and participatory civic society.

Regional alliances focused on all levels of science education can play a vital role in building a better civic society by fostering scientific literacy, critical thinking, community engagement, and lifelong learning. By preparing students to be informed, engaged, and socially responsible citizens, these alliances can contribute to a more resilient, inclusive, and democratic society.

Program Director, Education

Carnegie Corporation of New York

Susan Singer, Heidi Schweingruber, and Kerry Brenner’s essay and theory regarding regional alliances resonate within the funder community. In 2014, several STEM funders helped launch the STEM Learning Ecosystems Community of Practice (SLECoP). These leaders recognized the value of collective impact and the tenets of a regional model. Fast-forward to today, and philanthropic commitments to regionalized initiatives continue. No single funder can support all aspects of a regional alliance. However, hybrid investment portfolios or philanthropic collaboratives can illuminate the interdependencies throughout the continuum from kindergarten through career and collectively support various aspects of a centralized regional model.

The authors’ assessment offers a compelling response to the National Center for Science and Engineering Statistics’ 2019 data illustrating the status of the education-to-labor market pipelines throughout the country. The state-specific labor force data reflect exemplars and chasms in the continuum. The data indicate that 24 states lack a high concentration of STEM workers relative to total employment within their respective states. Concentration is measured by those in the skilled technical workforce or those in the STEM workforce with a bachelor’s degree or above. The data also reveal that only 13 states have a workforce in which 11.2% to 15% of participants hold STEM bachelor’s degrees. Such regional inequalities threaten the nation’s capacity to close education, opportunity, and poverty gaps; meet the demands of a technology-driven economy; ensure national security; and maintain preeminence in scientific research and technological innovation.

Regional inequalities threaten the nation’s capacity to close education, opportunity, and poverty gaps; meet the demands of a technology-driven economy; ensure national security; and maintain preeminence in scientific research and technological innovation.

Many socioeconomically disadvantaged communities lie within the lowest educational and workforce STEM concentrations. Implementing regionalized STEM pathway models would help close opportunity gaps. The labor force needed by 2030 dictates the need for collective impact, thought partners, and strategic alliances. Regional alliances would enable an inversion of the current STEM pathway status. Regional partnerships that begin with early education; ensure STEM teacher growth, support, and retention; guarantee equitable access; and end with industry engagement will ensure that the nation’s workforce supply outpaces its workforce demand.

As strategic partners, corporate and private philanthropy can fortify structural needs and build capacity for regional alliances. If the authors’ recommendations hold, consistent philanthropy can guarantee the sustainability of the principles of a regional model. I appreciate the authors’ emphasis on regional engagement. National centralization is always valued, but regional implementation has a greater propensity for viable execution. Regional activation allows local partners to tailor solutions and address the specific STEM workforce needs in their geography. Localized assessments will yield the best and wisest practices.

However, the key to bringing the authors’ recommendations to fruition is mutual interest and motivation among the constituents within a region to do so. Just as interests and constituencies are regionalized, most philanthropic investments are also regionalized. Regional funding partners can catalyze momentum for aligning their STEM ecosystem allies. Therefore, as we consider the fate of the nation, I hope regional leaders and philanthropists will continue to take stock of the value and promise of the authors’ well-justified argument.

Executive Director

STEM Funders Network

Susan Singer, Heidi Schweingruber, and Kerry Brenner provide current examples and evidence to support and advance the central theme of the National Academies’ 2021 report Call to Action for Science Education: Building Opportunity for the Future. In reading the essay, the familiar saying that “all politics is local” came to mind as I thought about how broad national priorities—such as the report’s push for “better, more equitable science education”—can be used in the development of systems, practices, and supports that are focused regionally and locally. It also made me think about classroom connections and some of the recent instructional changes that foreground locality.

Imagine how empowering it is to begin to answer questions that have personal and communal relevance and resonance.

Over the past few years, the science education community has continued to make shifts in teaching and learning to center students’ ideas, communities, and culture as means to reach that Call to Action goal. Many of the educational resources published lately offer students and teachers the opportunity to consider a phenomenon, an observable event or problem, to begin the science learning experience. Students are provided with current data and information in videos and articles, then given the opportunity to ask questions that can be investigated. In the process of answering the students’ questions, the science ideas that underlie the phenomenon are developed and explained. Imagine how empowering it is to begin to answer questions that have personal and communal relevance and resonance. This type of science teaching and learning connects with the kinds of partnerships and experiences essential to local and regional alliances, and serves to enrich and enliven the relevance and relatability of science as a career opportunity and civic necessity.

Additionally, it would be great to find ways to connect these local and regional alliances to make them even stronger and more common, by identifying ways to scale and sustain efforts, celebrate accomplishments, and share resources. One possibility might be some type of national convening that would provide the time and space where representatives from local and regional alliances could discuss what is working, seek support to solve challenges, and create other types of alliances through cooperation and collaboration. Science Alliance Opportunity Maps could be created to ensure that all students and their communities are being served and supported. The only competition would be the numerous and varied ways to make equitable science education a reality for every student, from kindergarten through the undergraduate years, in every region and locale of the nation. This would be a major step toward achieving Singer, Schweingruber, and Brenner’s hope for “not just a competitive workforce, but also a better civic society.”

Associate Director for Program Impact

Senior Science Educator

BSCS Science Learning

Marie Curie Visits the National Academy of Sciences Building

A photograph captures a historic moment on the back steps of the National Academy of Sciences building: Marie Curie, codiscoverer of radium and polonium, stands alongside President Herbert Hoover in the fall of 1929. The president had presented her with a gift of $50,000, earmarked for purchasing a gram of radium for her oncology institute in Warsaw, Poland. The gift was the result of a fundraising campaign led by American journalist Marie Meloney, after her article in The Delineator, a popular women’s magazine, reported that Curie could not continue her groundbreaking research without more of the expensive element.

Curie, a Polish-born physicist and chemist, is renowned for her work on radioactivity. Not only was she the first woman to win a Nobel Prize, but she was also the first person to win the Nobel Prize twice, receiving awards in two scientific fields—physics in 1903 and chemistry in 1911. Her research led to the development of nuclear energy and radiotherapy for cancer treatment. Five years after her visit to the National Academy of Sciences, Curie died from aplastic anemia, likely the direct result of her prolonged radiation exposure. Her life, marked by tragic irony, continues to inspire generations through her unwavering dedication to science.

Principles for Fostering Health Data Integrity

Almost every generation is confronted with the effects of its past and must adapt. In his 1962 “We choose to go to the Moon” speech, President Kennedy juxtaposed the challenges of his postwar era—intelligence vs. ignorance, good vs. evil, leadership vs. fear-fueled passivity—and harnessed the national will to achieve a lunar landing.

Today, our challenge categories are similar. We are confronted with the effects and portents of concurrent changes in medicine, science, and technology, which in turn change how we educate scientists, manage the implementation of new technology, and respond to the effects, both planned and unforeseen, of the application of our discoveries.

Computational and data science technologies, some rooted in JFK’s ’60s, have entered all facets of life at breakneck speed. Our understanding of the societal effects of emerging technologies is lagging. When data, data transfer, and artificial intelligence meet medicine, game-changing implementation effects—positive or negative—are imminent.

In “How Health Data Integrity Can Earn Trust and Advance Health” (Issues, Winter 2024), Jochen Lennerz, Nick Schneider, and Karl Lauterbach tackle this complex landscape and identify pivotal decisions needed to create a system that equitably benefits all stakeholders. They highlight a requisite culture shift: an international ethos of probity for everyone involved with health data at any level. They propose, in effect, a modern-day Hippocratic Oath for health data creation, utilization, and sharing—a framework that would simultaneously allow advancement in science and population health while adhering to moral and ethical standards that respect individuals, their privacy, and their medical needs.

Without this health data integrity framework, the promise of medical discovery through big data will be truncated.

When data, data transfer, and artificial intelligence meet medicine, game-changing implementation effects—positive or negative—are imminent.

Within this framework, we open new horizons for medical advancement, and we augment the safety of data and of tools such as artificial intelligence. AI is something of a misnomer: it is neither artificial nor intelligent. AI determinations derive from real data scrutinized algorithmically and, at least currently, they appear intelligent only because the data evaluation is iterative and cumulative—temporally updated evaluations of compounding data sets—which has heretofore served as a quasi-definition of intelligence. These data serve us all—patients, health care providers, researchers, epidemiologists, industry, developers, and regulators. With greater harmonization and data integrity, data utilization becomes globalized. Wider use of data sets can lead to more discoveries and reduce testing redundancies. Global data sharing can limit the biases of small numbers and identify low-prevalence populations (e.g., those with rare diseases), allowing the creation of larger, global cohorts.

Pathologists, like the article’s coauthor Jochen Lennerz, are physician specialists trained to understand data; we are responsible for the generation of roughly 70% of all medical data. Pathologists, along with ethicists, data scientists, data security specialists, and various other professionals, must be at the table when a health data integrity framework is being created.

Within this framework, we will benefit from a system of trust that recognizes and respects the rights of patients; understands and supports medical research; and ensures the safe, ethical transfer and sharing of interoperable, harmonized medical data.

We must ensure the steps we take with health data are not just for a few “men,” to borrow again from the lunar-landing lexicon. Rather, we must create a health data ecosystem of integrity—a giant step for humankind.

Vice President for Medical Affairs, Sysmex America

Governor, College of American Pathologists (CAP)

Chair, CAP Council on Informatics and Pathology Innovation

Jochen Lennerz, Nick Schneider, and Karl Lauterbach report how efforts to share health data across national borders snag on legal and regulatory barriers and suggest that data integrity will help advance health.

In today’s age of digital transformation, with our genomes fully sequenced and electronic health record systems widely deployed, addressing collaborative digital health data use presents a variety of challenges. There is, of course, the need to ensure data integrity, which will demand addressing such issues as the relative lack of well-defined data standards, poor implementation of and adherence to those standards, and the asymmetry of digital knowledge and innovation adoption in our society. But a more complex challenge arises from the propensity of humans to push major inventions beyond their benefits—and into the abyss. Therefore, we must engage together for human integrity in collaborative health data use.

Yet another challenge—one that the authors cite and I agree with—arises from deep-rooted conflicts of interest among all stakeholders (patients, health care professionals, the health management industry, payors, and governments) in health care. There also are generational differences between tech-savvy younger health care professionals, who are generally more open to structured data collection and documentation, and more senior ones, who struggle with technology and contribute health data that is more difficult to process.

There is, though, overall agreement among health care professionals that their foremost task is to serve as their patients’ advocate and go above and beyond to help them overcome or manage their medical problems using every available resource, which today would clearly include taking full advantage of digital health innovations, health data, and associated technologies such as artificial intelligence.

A more complex challenge arises from the propensity of humans to push major inventions beyond their benefits—and into the abyss. Therefore, we must engage together for human integrity in collaborative health data use.

However, since medicine has become such a complex profession, health professionals often practice in large care facilities embedded in organizations operated by corporations that seek profits, and where payors strictly regulate access to and extent of utilization of care on behalf of governments that struggle with expenditures. Unsurprisingly, the goals of nonpatients, administrators, and others outside of health care might not be what health professionals would view as ethical and responsible in terms of health data collection and use.

Among still other obstacles to the protection of health care data, cybercrime is a major threat, with hackers attacking our increasingly digital world either for personal gain or on behalf of third parties. And then there is the important matter of people’s individual freedom, which at least in most Western democracies includes the right to informational self-determination and privacy. Ensuring these rights needs to be balanced with the societal goal of fostering increasingly data-driven medical and scientific progress and health care delivery.

Once all stakeholders in medicine, health care, and biomedical research realize that our traditional approach to diagnosis, prognosis, and treatment can no longer process and transform the enormous volume of information into therapeutic success, innovative discovery, and health economic performance, we can join forces for precision health. For details, I’ve laid out a vision for collaborative health data use and artificial intelligence development in the Nature Portfolio journal Digital Medicine.

Put briefly, precision health is the right treatment, for the right person, at the right time, in the right place. It is enabled through a learning health system in which medicine and multidisciplinary science, economic viability, diverse culture, and empowered patients’ preferences are digitally integrated and conceptually aligned for continuous improvement and maintenance of health, well-being, and equity.

Professor of Medicine and Adjunct Professor of Computing Science

University of Alberta

Director, Collaborative Research and Training Experience “From Data to Decision”

Natural Sciences and Engineering Research Council of Canada

Needed: A Vision and Strategy for Biotech Education

It is consistently true that as new career fields and business centers arrive, a portion of the population is left on the sidelines. This holds especially true for the biotechnology, medical technology, genomics, and synthetic biology investments we see today. Urban centers, which often have a high concentration of university graduates, are primed for success in the emerging bioeconomy. But even there, career and educational opportunities are often out of reach for young women and people of color. In rural communities and in regions that have traditionally supported fishing, forestry, farming, and mining, residents are less likely to track into careers in science, technology, engineering, or mathematics.

In “A Great Bioeconomy for the Great Lakes” (Issues, Winter 2024), Devin Camenares, Sakti Subramanian, and Eric Petersen report on some targeted and hyperlocal interventions that stimulated a bioinnovation community in the Midwest and Great Lakes areas. They found that connecting students in regional high schools and local colleges with experts in industry and community labs increased the students’ appetites for further involvement. What a boon for the educators and young innovators who successfully discovered this opportunity.

In our work through the BioBuilder Educational Foundation, we can attest to the need for deliberate actions to overcome specific regional obstacles. Since 2019, BioBuilder has been engaged with high schools in East Tennessee. After several years of laying a foundation in this rural region, BioBuilder is now integrated every year into biology classes in secondary schools spanning several counties. It is also integrated into some of the region’s BioSTEM pathways that Tennessee uses to bring early-college access and relevant work experience into career and technical education classrooms statewide. BioBuilder has built partnerships with local and federal funders to expand this work, and the success has spurred a much larger set of activities in the region, including postsecondary tracks at East Tennessee State University and local business opportunities such as the development of the Valleybrook Research Campus.

It must be recognized, however, that such hyperlocal approaches to building bioeconomies are not an ultimate solution. Regional approaches must be complemented with systemic educational change if the nation is to achieve the “holistic, decentralized, and integrated bioeconomy” that Camenares, Subramanian, and Petersen aim for.

Regional approaches must be complemented with systemic educational change if the nation is to achieve the “holistic, decentralized, and integrated bioeconomy” that Camenares, Subramanian, and Petersen aim for.

The K–12 public school system in the United States is an underutilized lever of change in this regard. With over 3 million students graduating each year, the nation is failing our children and collective future by not offering an on-ramp to sophisticated job sectors without the need for higher education. Public schools fulfilled the nation’s workforce needs in the past, diversifying the talent pool with an equitable geographic and racial distribution. Public schools fully reflect the nation’s diversity, and high school is the last formal education received by between one-third and one-half of all residents. Public schools operate in every state and so provide an established infrastructure for engaging every community.

With respect to the emerging bioeconomy, a vision and strategy for public education is needed. And it could be simple: provide easy-to-implement content that modernizes the teaching of life science, so that millions of young people can graduate from high school with enough content knowledge and skills to join the workforce, spurring development of the bioeconomy everywhere.

Founder and Executive Director

BioBuilder Educational Foundation

National Program Coordinator

BioBuilder Educational Foundation

The “one-size-fits-all” curriculum common in many regions of the United States may fall short of capitalizing on local differences when building a successful bioeconomy, argue Devin Camenares, Sakti Subramanian, and Eric Petersen. The authors highlight the extent of programmatic structure that may or may not be helpful in seeding locally specialized educational initiatives. In this model, the authors propose that the uniqueness of a region is the key to unlocking local bioeconomic growth, turning current challenges into future opportunities.

This approach has proven fruitful in the Great Lakes region and beyond. For example, Beth Conerty at the University of Illinois Integrated Bioprocessing Research Laboratory takes advantage of its Midwest location to offer bioprocessing scale-up opportunities. Similar to the approach the authors propose, the facility couples science with educational opportunities for its students. Also, Scott Hamilton-Brehm of Southern Illinois University Carbondale founded a program called Research, Engagement, and Preparation for Students, which promotes accessibility, outreach, and communication in science, technology, engineering, and mathematics. The program’s strong student engagement grew into a company called Thermaquatica that converts biomass to value-added products including biostimulants and biofuels.

The uniqueness of a region is the key to unlocking local bioeconomic growth, turning current challenges into future opportunities.

Elsewhere, Ernesto Camilo Zuleta Suárez led several outreach and educational programs to prepare leaders for the future bioeconomy through the Consortium for Advanced Bioeconomy Leadership Education, based at Ohio State University. In Tennessee, the Oak Ridge Site Specific Advisory Board serves as a more policy-focused example, wherein student board members are strategically invited to take part in maintaining the local environment of the Oak Ridge Reservation, which still faces challenges from legacy wastes. Additionally, the Bredesen Center at the University of Tennessee established a strong program to teach students to incorporate outreach and public engagement into their scientific career.

Once established, these locally cultivated STEM programs can gain traction through science communication, which is an integral component in the field of synthetic biology (SynBio) and a determinative step of the scientific method. To highlight some examples, we have the International Genetically Engineered Machine (iGEM) and BioBuilder podcasts by Zeeshan Siddiqui and his team, the Mastering Science Communication course led by Larissa Markus, and the iGEM Digest authored by Hassnain Qasim Bokhari and Marissa Sumathipala. More recently, Tae Seok Moon has launched the SynBYSS: SynBio Young Speaker Series. And the Science for Georgia nonprofit hosts free science communication workshops and offers opportunities to share science with the community. Science communication not only educates the current generation but also transfers knowledge to future generations, thereby ensuring the sustainability of science.

Perhaps most important, these efforts are built on a student-centered approach designed to offer increasingly accessible means for students to participate in STEM education and related activities. The Global Open Genetic Engineering Competition and BioBuilder are already making participation more accessible. Spurring interest and engagement in STEM, even at the middle or high school levels, can accelerate the development of career interests, especially in a field as interdisciplinary as synthetic biology. Such experiences may even spark interests beyond typical STEM careers and help catalyze a scientifically literate society. This educational proposition invites a people-focused approach as opposed to a project-focused one—the former being the key ingredient that will make the difference.

Mentor

iGEM

Innovative, Opportunistic, Faster

It is safe to say that research into the production, distribution, and use of energy in the United States has emphasized the technological over the social. Let’s be clear: this focus has had its successes. We see physical improvements today in our homes and offices and in the growth of renewable sources in large part due to research and development investments begun in the 1970s. In some cases, these efforts were paired with inquiries into the economic, demographic, and behavioral contexts surrounding the technology in question. But this kind of comprehensive, multidisciplinary approach to our energy system has been rare—at least until recently.

As Evan Michelson and Isabella Gee demonstrate by example in “Lessons From a Decade of Philanthropy for Interdisciplinary Energy Research” (Issues, Winter 2024), the questions that social scientists, policymakers, the media, and consumers might have about the energy system extend far beyond resistors and wires. These questions are unwieldy. They are also challenging for researchers accustomed to working in their silos. For example, many energy scholars are unfamiliar with our complex housing, property, utility, and household practices and their regulatory history. Likewise, social scientists have been sidelined not just because of their disciplinary silos and limited engagement with engineers and scientists, but also because of the historical underinvestment in their methods.

Unfamiliarity has practical implications, such as not knowing which data are available, how to collect them, and whether indicators represented by these data are the most valid and aligned to the underlying concept in question. Put simply, humans—or more specifically, our understanding of humans and their energy use—are a missing link in energy research.

The questions that social scientists, policymakers, the media, and consumers might have about the energy system extend far beyond resistors and wires.

Enter philanthropy. Michelson and Gee rightfully point out the critical role of philanthropic funders based on their universal mission to improve social conditions. But they also note how philanthropy offers a unique vehicle compared with the public sector’s statutory restrictiveness and private sector’s profit motivation. Philanthropy can be innovative (funding risky propositions with potentially large societal benefit), opportunistic (targeting questions and researchers that have been excluded from methods and institutions), and, quite frankly, faster and nimbler, along with being more altruistic.

But philanthropy—and, in turn, philanthropy’s reach—is limited. In the broad and still-murky field of energy and its socioeconomic soup, there are few philanthropic energy R&D funders, and they often have very limited budgets that compete with foundations’ other pressing social program allocations. Federal funding’s crowding out of foundation contributions might convince some funders to simply stay out of the business altogether.

For the few funders that stay in the race, there can be real rewards. The subject matter and researcher pools supported by the two largest federal energy research funders—the National Science Foundation and US Department of Energy—have expanded. In some cases, this has been made explicit through interdisciplinary research calls as well as stated research questions that require collaboration across silos. Anecdotally, every energy conference I have attended in the last five years has consciously discussed the integration of social sciences as a fundamental component of energy research. While each philanthropic entity rightfully evaluates its impact—and, in the Alfred P. Sloan Foundation’s case, tracks quantitative indicators of those effects—we can see that these efforts have already had a massive qualitative effect.

Director of Remodeling Futures

Harvard Joint Center for Housing Studies

Inviting Civil Society Into the AI Conversation

Karine Gentelet’s proposals for fostering citizen contributions to the development of artificial intelligence, outlined in her essay, “Get Citizens’ Input on AI Deployments” (Issues, Winter 2024), are relevant to discussions on the legal framework for AI, and deserve to be examined. For my part, I’d like to broaden the discussion on ways of encouraging the contribution of civil society groups to the development of AI.

The amplification or emergence of new social inequalities is one of the fears of those calling for more effective supervision of AI. How can we prevent AI from having a negative impact on inequalities, and why not encourage a positive one instead?

Involvement of civil society groups, notably from the community sector, that work with impoverished, discriminated-against, or vulnerable populations in consultations or deliberations about AI and its governance is currently very marginal, at least in Quebec. The same holds true for the involvement of individuals within these populations. But civil society groups, just like people, can be affected by AI—and as drivers of social innovation, they can also make positive contributions to the evolution of AI.

Even more concretely, the expertise of civil society groups can be called upon at various stages in the development of AI systems. This may occur, for example, in analyzing development targets and possible biases in algorithm training data, in testing technological applications against the realities of marginalized populations, and in identifying priorities to help ensure that AI systems benefit society. In short, civil expertise can help identify issues that those guiding AI development at present fail to raise because they are far too remote from the realities of marginalized populations.

The expertise of civil society groups can be called upon at various stages in the development of AI systems.

Legal or ethical frameworks can certainly make more room for civil society expertise. But for these groups to play their full role, they must have the financial resources to develop their expertise and dedicate time to studying certain applications. Yet very often, these groups are asked to offer in-kind contributions before being allowed to participate in a research project!

And beyond financial challenges, some civil society groups remain out of the AI conversation. For example, the national charitable organization Imagine Canada found that 61% of respondents to a survey of charities indicated that they didn’t understand the potential applications of AI in their sector. The respondents also highlighted the importance of and need for training in AI.

Legislation and regulation are often necessary to provide a framework for working in or advancing an industry or sector. However, other mechanisms—including recourse to the courts, research, journalistic investigations, and collective action by social movements or whistleblowers—can also contribute significantly to the evolution of practices and respect for the social consensus that emerges from deliberative exercises. Initiatives of this kind concerning AI are still very fragmentary.

Executive Director

Observatoire Québécois des Inégalités

Montréal, Québec, Canada

Existing approaches to governance of artificial intelligence in the United States and beyond often fail to offer practical ways for the public to seek justice for AI and algorithmic harms. Karine Gentelet correctly observes that policymakers have prioritized developing “guardrails for anticipated threats” over redressing existing harms, especially those emanating from public-sector abuse of AI and algorithmic systems.

This dynamic plays out every day in the United States, where law enforcement agencies use AI-powered surveillance technologies to perpetuate social inequality and structural disadvantage for Black, brown, and Indigenous communities.

Police departments routinely use historically marginalized communities as testing grounds to experiment with controversial AI and big data surveillance technologies such as facial recognition, drone surveillance, and predictive policing. For example, reporters at WIRED magazine found that nearly 12 million Americans live in neighborhoods where police have installed AI audio sensors to detect gunshots and collect data on public conversations. They estimate that 70% of the people living in those surveilled neighborhoods are either Black or Hispanic.

As Gentelet notes, existing AI policy frameworks in the United States have largely failed to create accountability mechanisms that address real-world harms such as mass surveillance. In fact, recent federal AI regulations, including Executive Order 14110, have actually encouraged law enforcement agencies “to advance the presence of relevant technical experts and expertise [such] as machine learning engineers, software and infrastructure engineering, data privacy experts [and] data scientists.” Rather than redress existing harms, federal policymakers are staging the grounds for future injustice.

Police departments routinely use historically marginalized communities as testing grounds to experiment with controversial AI and big data surveillance technologies.

Without AI accountability mechanisms, advocates have turned to courts and other traditional forums for redress. For example, community leaders in Baltimore brought a successful federal lawsuit to end a controversial police drone surveillance program that recorded the movements of nearly 90% of the city’s 585,000 residents—a majority of whom identify as Black. Similarly, a coalition of advocates working in Pasco County, Florida, successfully petitioned the US Department of Justice to terminate federal grant funding for a local predictive policing program while holding school leaders accountable for sharing sensitive student data with police.

While both efforts successfully disrupted harmful algorithmic practices, they failed to achieve what Gentelet describes as “rightful reparations.” Existing law fails to provide the structural redress necessary for AI-scaled harms. Scholars such as Rashida Richardson of the Northeastern University School of Law have outlined what more expansive approaches could look like, including transformative justice and holistic restitution that address social and historical conditions.

The United States’ approach to AI governance desperately needs a reset that prioritizes existing harms rather than chasing after speculative ones. Directly impacted communities have insights essential to crafting just AI legal and policy frameworks. The wisdom of the civil rights icon Ella Baker remains steadfast in the age of AI: “oppressed people, whatever their level of formal education, have the ability to understand and interpret the world around them, to see the world for what it is, and move to transform it.”

Senior Policy Counsel & Just Tech Fellow

Center for Law and Social Policy

Drowning in a Mechanical Chorus

In her thoughtful essay, “How Generative AI Endangers Cultural Narratives” (Issues, Winter 2024), Jill Walker Rettberg writes about the potential loss of a beloved Norwegian children’s story alongside several “misaligned” search engine results. The examples are striking. They point also to even more significant challenges implicit in the framing of the discussion.

The fact that search results in English overwhelm those in Norwegian, which has far fewer global speakers, reflects the economic dominance of the American technology sector. Millions of people, from Moldova to Mumbai, study English in the hope of furthering their careers. English, despite, and perhaps because of, its willingness to borrow from other cultures, including the Norse, has become the de facto lingua franca in many fields, including software engineering, medicine, and science. The bias toward English in the search therefore reflects the socioeconomic realities of the world.

Search engines of the future will undoubtedly do a better job in localizing the query results. And the improvement might come exactly from the kind of tightly curated machine learning datasets that Rettberg encourages us to consider. A large language model “trained” on local Norwegian texts, including folk tales and children’s stories, will serve more relevant answers to a Norwegian-speaking audience. (In brief, large language models are trained, using massive textual datasets consisting of trillions of words, to recognize, translate, predict, or generate text or other content.) But—and here’s the crucial point—no amount of engineering can make a model more fair or more equitable than the world it is meant to represent. To improve it, we must improve ourselves. Technology encodes global politics (and economics) as they are, not as they should be. And we humans tend to be a quarrelsome bunch, rarely converging on the same shared vision of a better future.

No amount of engineering can make a model more fair or more equitable than the world it is meant to represent. To improve it, we must improve ourselves.

The author’s conclusions suggest we consider a further, more troubling, aspect of generative AI. In addition to the growing dominance of the English language, we have yet to contend with the increasing mass of machine-generated text. If the early large language models were trained on human input, we are likely soon to reach the point where generated output far exceeds any original input. That means the large language models of the future will be trained primarily on machine-generated inputs. In technical terms, this can lead to model collapse—a kind of overfitting in which the model follows too closely in its own footsteps, unable to respond to novel contexts. It is a difficult problem to solve, first because we can’t really tell human and machine-generated texts apart, and second, because any novel human contribution is likely to be overwhelmed by the zombie horde of machine outputs. The voices of any future George R. R. Martins or Toni Morrisons may simply drown in a mechanical chorus.

Will human creativity survive the onslaught? I have no doubt. The game of chess, for example, became more vibrant, not less, with the early advent of artificial intelligence. The same, I suspect, will hold true in other domains, including the literary—where humans and technology have long conspired to bring us, at worst, countless hours of formulaic entertainment, and, at their collaborative best, the incredible powers of near-instantaneous translation, grammar checking, and sentence completion—all scary and satisfying in any language.

Associate Professor of English and Comparative Literature

Columbia University

How to Build Less Biased Algorithms

In “Ground Truths Are Human Constructions” (Issues, Winter 2024), Florian Jaton succinctly captures the crucial, often-overlooked role of human intervention in the process of building new machine learning algorithms through operations of ground-truthing. His observations summarize and expand his previous systematic work on ground-truthing practices. They are fully aligned with the views I have developed while researching the development of diagnostic artificial intelligence algorithms for Alzheimer’s disease and other, more contested illnesses, such as functional neurological disorder.

Much of the current critical discourse on machine learning focuses on training data and their inherent biases. Jaton, however, fittingly foregrounds the significance of how new algorithms, both supervised and unsupervised, are evaluated by their human creators during the process of ground-truthing. As he explains, this is done by using ground-truth output targets to quantify the algorithms’ ability to perform the tasks for which they were developed with sufficient accuracy. Consequently, the accuracy assessed in this way is not an objective measure of the algorithms’ performance in real-world conditions but a relational and contingent product of tailor-made ground-truthing informed by human choices.

Even more importantly, shifting the focus to how computer scientists perform ground-truthing operations enables us to critically examine the processuality of data-driven evaluation as a context-specific sociocultural practice. In other words, to understand how the algorithms that are increasingly incorporated across various domains of daily life operate, we need to unpack not only how their specific underlying ground truths have been constructed but also how such ground truths have been operationally deployed from case to case.

We need to unpack not only how their specific underlying ground truths have been constructed but also how such ground truths have been operationally deployed from case to case.

I laud in particular Jaton’s idea that we humanities scholars and social scientists should not stop at analyzing the work of computer scientists who develop new AI algorithms but should instead actively build new transdisciplinary collaborations. Based on my research, I have concluded that many of computer scientists’ decisions on how to build and deploy ground-truth datasets are primarily driven by pragmatic goals of solving computational problems and are often informed by tacit assumptions. Broader sociocultural and ethical consequences of such decisions remain largely overlooked and unexplored in such constellations.

In future transdisciplinary collaborations, the role of humanities scholars could be to systematically examine and draw attention to the otherwise overlooked sociocultural and ethical implications of various stages of the ground-truthing process before their potentially deleterious consequences become implicitly built into new algorithms. Such collaborative practices require additional time investments and the willingness to work synergistically across disciplinary divides—and are not without their challenges. Yet my experience as a visual studies scholar integrated into a transdisciplinary team that explores how future medical applications of AI could be harnessed for knowledge production shows that such collaborations are possible. In fact, transdisciplinary collaborations may indeed be not just desirable but necessary if, as Jaton suggests, we want to build less biased and more accountable algorithms.

Postdoctoral Researcher, Institute for Implementation Science in Health Care, Faculty of Medicine, University of Zurich

Visiting Researcher, Department of Social Studies of Science and Technology, Institute of Philosophy, History of Literature, Science, and Technology, Technical University Berlin

Celebrating the Centennial of the National Academy of Sciences Building

This is a special year for the National Academy of Sciences (NAS) as its beautiful headquarters at 2101 Constitution Avenue, NW, in Washington, DC, turns 100 years old. Dedicated by President Calvin Coolidge in April 1924 and designed by architect Bertram Grosvenor Goodhue, the building’s architecture synthesizes classical elements with Goodhue’s preference for “irregular” forms. It harmoniously weaves together Hellenic, Byzantine, and Egyptian influences with hints of Art Deco, giving the building a modern aspect—which is consistent with Goodhue’s assertion that it was meant to be a “modern and scientific building, built with modern and scientific materials, by modern and scientific methods for a modern and scientific set of clients.”

Goodhue, celebrated for his Gothic Revival and Spanish Colonial Revival designs, developed a late-career interest in Egyptian Revival architecture around the time that King Tutankhamun’s tomb was discovered. The NAS building’s design references ancient Egypt with its battered, or inwardly sloping, façade, giving the building an air of monumentality. It depicts the Egyptian god Imhotep, the Great Pyramid of Giza, the Museum of Alexandria, the ancient lighthouse on the island of Pharos, and hieroglyphic decorations. The structure reflects Goodhue’s distinctive aesthetic, and it also harmonizes with the nearby neoclassical Lincoln Memorial, which was under construction when the NAS building was planned.

The Bondage of Data Tyranny

In “The Limits of Data” (Issues, Winter 2024), C. Thi Nguyen identifies key unspoken assumptions that pervade modern life. He skillfully illustrates the problems associated with reducing all phenomena to data and ignoring those realities that cannot be captured by data, especially when it comes to human beings. He identifies examples of how the focus on quantification frequently strips data of context and introduces bias in the name of objectivity. Here, I offer some thoughts that complement the essay’s essential points while approaching them from slightly different perspectives.

While forcing people into groups to enable better data collection may lead to unwanted outcomes, some social categorization is necessary. Society needs legal thresholds to enable the equal treatment of citizens under the law. Sure, there are responsible 15-year-old geniuses and immature 45-year-old fools, but society has to draw some reasonable, but ultimately arbitrary, dividing line for allowing people to vote, or drive, or drink, or serve in the army. The need to codify legal standards for society remains an imperative, but, as Nguyen argues, those standards need not be strictly quantitative.

The universal drive for quantification and for reducing phenomena to data stems from the architecture of the digital databases that process those data. Storing the data and analyzing them demands that all information inputs be in a format that must ultimately translate to 1s and 0s. This assumption itself, that all information is reducible to 1s and 0s, contains within it the conclusion that concepts, and by extension human thinking, can be reduced to binary terms. An attitude emerges that information that cannot be reduced to 1s and 0s is not worthy of attention. Holistic notions such as art, human emotion, and the soul must be either reduced to strict mathematical patterns or treated as a collection of examples from the internet or other databases.

The universal drive for quantification and for reducing phenomena to data stems from the architecture of the digital databases that process those data.

A further motivation for the universal embrace of data and the fixation with quantification lies deep in the roots of Anglo-Saxon, and particularly American, culture. Early in the eighteenth century, the ideas of the British philosopher John Locke initiated a tradition that placed far greater value on practical facts that can be sensed (i.e., measured) than on spiritual beliefs or cultural traditions that are the products of human reflection. By the end of the century, America’s founding fathers, including Benjamin Franklin and Thomas Jefferson, followed Locke’s tradition by emphasizing practicality and measurement. The advent of mass production and consumption—capitalism—only further sharpened the focus on the practical and obtainable. Entering the twentieth century, the great British physicist Lord Kelvin summed up his commitment to empiricism by declaring: “To measure is to know.”

Society leverages the power of current data processing technologies but is subject to their limits. An enduring fixation with data stems from modern beliefs about what type of knowledge is worthwhile. Freeing society from the bias and bondage of data tyranny will require responding to these deeply embedded technological and behavioral factors that keep society limited by contemporary data structures.

Senior Research Associate, Program for the Human Environment

The Rockefeller University

A Tool With Limitations

In the Winter 2024 Issues, the essays collectively titled “An AI Society” offer valuable insight into how artificial intelligence can benefit society—and also caution about potential harms. As many other observers have pointed out, AI is a tool, like so many that have come before, and humans use tools to increase their productivity. Here, I want to concentrate on generative AI, as do many of the essays. Generative AI is a special kind of tool designed to improve human productivity, but like all tools it has limitations. Growth, innovation, and progress in AI are inevitable, and the essays provide an opportunity to invite professionals in the social sciences and humanities to collaborate with computer scientists and AI developers to better understand and address the limitations of AI tools.

The rise and overall awareness of generative AI have been nothing short of remarkable. The generative AI-powered ChatGPT took only five days to reach 1 million users. Compare that with Instagram, which took about 2.5 months to reach that mark, or Netflix, which took about 3.5 years. Additionally, ChatGPT took only about two months to reach 100 million users, while Facebook took about 4.5 years and Twitter just under 5.5 years to hit that mark.

Generative AI is a special kind of tool designed to improve human productivity, but like all tools it has limitations.

Why has the uptake of generative AI been so explosive? Certainly one reason is that it helps productivity. There is of course plenty of anecdotal evidence to this effect, but there is a growing body of empirical evidence as well. To cite a few examples, in a study involving professional writing skills, people who used ChatGPT decreased writing time by 40% and increased writing quality by 18%. In a study of nearly 5,200 customer service representatives, generative AI increased productivity by 14% while also improving customer sentiment and employee retention. And in a study of software developers, those who were paired with a generative AI developer tool completed a coding task 55.8% faster than those who were not. With that said, we are also beginning to understand the kinds of tasks and people that benefit most from generative AI and those that don’t benefit or may even experience a loss of productivity. Knowing when and why it doesn’t work is as important as knowing when and why it does.

Unfortunately, one of the downsides of today’s class of generative AI tools is that they are prone to what are called “hallucinations”—they output information that is not always correct. The large language model technology upon which the systems are based is good at producing fluent and coherent text, but not necessarily factual text. While it is hard to know how frequently these hallucinations occur, one estimate puts the figure at between 3% and 27%. Indeed, currently there seems to be an inherent trade-off between creativity and accuracy.

So we have a situation today where generative AI tools are extremely popular and demonstrably effective. At the same time, they are far from perfect, with many problems identified. Just as we drive cars and use the internet, there are risks, but we use these tools anyway because we decide the benefits outweigh the risks. Apparently people are making a similar judgment in deciding to use generative AI tools. With that said, it is critically important that users be well informed about the potential risks of these tools. It is also critical that policymakers—with public input—work to ensure that AI safety and user protection are given the utmost priority.

Professor Emeritus

Department of Computer Science

Southern Methodist University

The writer chaired a National Academies of Sciences, Engineering, and Medicine workshop in 2019 on the implications of artificial intelligence for cybersecurity.

The essays on artificial intelligence provide interesting and informative insights into this emerging technology. All new technologies bring both positive and negative results—what I have called the yin and yang of new technologies. AI will be no exception. Advocates for a new technology usually emphasize its advantages and dismiss consideration of possible adverse effects. It is only later, when the technology has been allowed to operate widely, that actual positive and negative effects become apparent. As Emmanuel Didier points out in his essay, “Humanity is better at producing new technological tools than foreseeing their future consequences.” The more disruptive the new technology, the greater will be its effects of both kinds.

With AI, it’s not just bias and machine learning gone amok, which are the current criticisms leveled against the technology. AI’s influences can go far beyond what we envision at this time. For example, users of AI who rely on it to produce outputs reduce their opportunities for growth of creative abilities and development of social skills and other functional capabilities that we normally associate with well-adjusted human adults. A graphic example of what I mean can be seen in a recent entry in the comic strip Zits, in which a teenager named Jeremy is talking with his friend. He says, “If using a chatbot to do homework is cheating … but AI technology is something we should learn to use … how do we know what’s right or wrong?” And his buddy responds, “Let’s ask the chatbot!” By relying on the AI program to resolve their ethical quandary, they lost the opportunity to think through the issue at hand and develop their own ethos. It is not hard to imagine similar experiences for AI users in the real world who are otherwise expected to grow in wisdom and social abilities.

Advocates for a new technology usually emphasize its advantages and dismiss consideration of possible adverse effects.

It will probably not be the use of AI in individual circumstances that becomes problematic, but the overreliance on AI that is almost bound to develop. Similarly, social media are not, by themselves, a bad thing. But social media have now overtaken a whole generation of users and led to personal antisocial and asocial behaviors. The potential for similar negative outcomes when AI use becomes widespread is very strong.

Back when genetic modification was a new and potentially disruptive technology, it was foreseen as possibly dangerous to society and to the environment. In response, policymakers and concerned scientists put safeguards in place to prohibit the unfettered release of genetically modified organisms into the environment, as well as the modification of human germ cells that transfer genetic traits from one generation to the next. Most of these restrictions are still in effect. AI could possibly be just as disruptive as genetic modification, but there are no similar safeguards in place to allow us time to better understand the extent of AI influences. And it is not very likely that the do-nothing Congress we have now would be able to handle an issue as complex as this.

Professor Emeritus

Fischell Department of Bioengineering

University of Maryland

Bioliteracy, Bitter Greens, and the Bioeconomy

The success of biotechnology innovations is predicated not only on how well the technology itself works, but also on how society perceives it, as Christopher Gillespie eloquently highlights in “What Do Bitter Greens Mean to the Public?” (Issues, Winter 2024), paying particular attention to the importance of ensuring that diverse perspectives inform regulatory decisions.

To this end, the author calls on the Biden administration to establish a bioeconomy initiative coordination office (BICO) to coordinate between regulatory agencies and facilitate the collection and interpretation of public acceptance data. This would be a much-needed improvement to the current regulatory system, which is fragmented and opaque for nonexperts. For maximum efficiency, care should be taken to avoid redundancy between BICO and other proposals for interagency coordination. For example, in its interim report, the National Security Commission on Emerging Biotechnology formulated two relevant Farm Bill proposals: the Biotechnology Oversight Coordination Act and the Agriculture Biotechnology Coordination Act.

In addition to making regulations more responsive to public values, as Gillespie urges, I believe that increasing the general public’s bioliteracy is critical. This could involve improving K–12 science education and updating it to include contemporary topics such as gene editing, as well as amending civics curriculums to better explain the modern functions of regulatory agencies. Greater bioliteracy could help the public make more informed judgments about complex topics. Its value can be seen in what befell genetic use restriction technology (GURT), commonly referred to as terminator technology. GURTs offered solutions to challenges such as the efficient production of hybrid seeds and the prevention of pollen contamination from genetically modified plants. However, activists early on seized on the intellectual property protection aspect of GURT to turn public opinion against it, resulting in a long-standing moratorium on its commercialization. More informed public discourse could have paved a path toward leveraging the technology’s benefits while avoiding potential drawbacks.

Greater bioliteracy could help the public make more informed judgments about complex topics.

Gillespie began his essay by examining how some communities and their cultural values were missing from conversations during the development of a gene-edited mustard green. The biotech company Pairwise modified the vegetable to be less bitter—but bitterness, the author notes, is a feature, not a flaw, of a food that is culturally significant to his family.

This example resonated keenly with me. I have attended a company presentation on this very same de-bittered mustard green. Like Gillespie, I do not oppose the innovation itself. Indeed, I’m excited by how rapidly gene-edited food products have made it into the market, and by the general lack of public freakout over them. But like Gillespie, I was bemused by this product, though for a different reason. According to the company representative, Pairwise’s decision to focus on de-bittering mustard greens as its first product was informed by survey data indicating that American consumers wanted more diversity of choice in their leafy greens. My immediate thought was: just step inside an Asian grocery store, and you’ll find a panoply of leafy greens, many of which are not bitter.

Genetic engineering has opened the doors to new plant varieties with a dazzling array of traits—but developing a single product still takes extensive time and money. Going forward, it would be heartening to see companies focus more on traits such as nutrition, shelf stability, and climate resilience than on reinventing things that nature (plus millennia of human agriculture) has already made.

PhD Candidate, Stanford University

Policy Entrepreneurship Fellow, Federation of American Scientists

Christopher Gillespie notes that inclusive public engagement is needed to best advance innovation in agricultural biotechnology. As an immigrant daughter of a smallholder farmer at the receiving end of products stemming from biotechnology, I agree.

Growing up, I witnessed firsthand the challenges and opportunities that smallholder farmers face. So I am excited by the prospect that innovations in agricultural biotechnology can bring positive change for farming families like mine. At the same time, since farming practices have been passed down in my family for generations, I directly feel the importance of cultural traditions. Thus, the author’s emphasis on the importance of obtaining community input during the early development process resonates deeply.

Such public consultation, however, often gets overlooked—to everyone’s detriment. In the author’s example of gene-edited mustard greens, the company behind the innovation could have greatly benefited from a targeted stakeholder engagement process, soliciting input from the very communities whose lives would be impacted. Such a collaborative effort can not only enhance the relevance of an innovation but also address cultural concerns. I believe that many agricultural biotechnology companies are already doing public engagement, but how it is being done makes a difference.

Such a collaborative effort can not only enhance the relevance of an innovation but also address cultural concerns.

In this regard, while the participatory technology assessment methods that Gillespie describes represent an effective way to gather input from members of the public whose opinions are systemically overlooked, it is important to recognize that this approach may pose certain challenges. Companies might encounter roadblocks in getting communities to open up or welcome their innovation. This resistance could be due to historical reasons, past experiences, or a perceived lack of transparency. Public engagement programs should be created and facilitated through a decentralized approach, where a company chooses a member of a community to lead and engage in ways that resonate with the community’s values. Gillespie calls this person a “third party or external grantee.” This individual should ideally adopt the value-based communication approach of grassroots engagement, where stories are exchanged and both the company and the community connect on shared values and strategize ways forward to benefit equally from the innovation.

Another step that the author proposes—establishing a bioeconomy initiative coordination office within the White House Office of Science and Technology Policy, focusing on improved public engagement—would also be a step in the right direction. But here again, it is crucial that this office adopt a value-based inclusive and decentralized approach to public engagement.

Though challenges remain, I look forward to a future filled with advancements in agricultural biotechnology and their attendant benefits in areas such as improved crop nutrition, flavor, and yield, as well as in pest control and climate resilience. And I return to my belief that fostering a transparent dialogue among innovators, regulators, and communities is key to building and maintaining the trust needed to ensure this progress for all concerned.

PhD Candidate, Department of Horticultural Science

North Carolina State University

She is an AgBioFEWS Fellow of the National Science Foundation and a Global Leadership Fellow of the Alliance for Science at the Boyce Thompson Institute.

“Ghosts” Making the World a Better Place

In “Bring on the Policy Entrepreneurs” (Issues, Winter 2024), Erica Goldman proposes that “every graduate student in the hard sciences, social sciences, health, and engineering should be able to learn some of the basic tools and tactics of policy entrepreneurship as a way of contributing their knowledge to a democratic society.” I wholeheartedly support that vision.

When I produced my doctoral dissertation on policy entrepreneurs in the 1990s, only a handful of scholars, most notably the political scientist John Kingdon, mentioned these actors. I described them as “ghost-like” in the policy system. Today, researchers from across the social sciences are studying policy entrepreneurs and many new contributions are being published each year. Consequently, we can now discern regularities in what works to increase the likelihood that would-be policy entrepreneurs will meet with success. I summarized these regularities in an article in the journal Policy Design and Practice titled “So You Want to be a Policy Entrepreneur?”

When weighing the prospects of investing time to build the skills of policy entrepreneurship, many professionals in scientific, technological, and health fields might worry about the opportunity costs involved. If they work on these skills, what will they be giving up? It’s legitimate to worry about trade-offs. And, certainly, none of us want highly trained professionals migrating away from their core business to go bare knuckle in the capricious world of political influence.

But to a greater extent than has been acknowledged so far, building skills to influence policymaking can be consistent with becoming a more effective professional across a range of fields. The same skills it takes to be a policy entrepreneur are those that can make you a higher performer in your core work.

Building skills to influence policymaking can be consistent with becoming a more effective professional across a range of fields.

My studies of policy entrepreneurship show collaboration is a foundational skill for anyone wanting to have policy influence. Policy entrepreneurs do not have to become political advisers, lobbyists, or heads of think tanks. But they do need to be highly adept at participating in diverse teams. They need to find effective ways to connect and work with others who have different knowledge and skills and who come from different backgrounds than their own. Thinking along these lines, it doesn’t take much reflection to see that core skills attributed to policy entrepreneurs are of enormous value for all ambitious professionals, no matter what they do or where they work.

We can all improve our productivity—and that of others—by improving our teamwork skills. Likewise, it’s well established that strategic networking is crucial for acquiring valuable inside information. Skills in framing problems, resolving conflicts, making effective arguments, and shaping narratives are essential for ambitious people in every professional setting. And these are precisely the skills that, over and over, we see are foundational to the success of policy entrepreneurs.

So, yes, let’s bring on the policy entrepreneurs in the hard sciences, social sciences, health, and engineering. They’ll have a shot at making the world a better place through policy change. Just as crucially, they’ll also build the skills they need to become leaders in their chosen professional domains.

Professor of Public Policy

Monash University

Melbourne, Victoria, Australia

Erica Goldman makes the important case that we need to better enable scientists and technologists to seek to impact policy. She asserts that by providing targeted training, creating a community of practice, and raising awareness, experts can become better at translating their ideas into policy action. We should build an academic field around policy entrepreneurship as a logical next step to support this effort.

One key reason why people don’t pursue policy entrepreneurship is, as Goldman suggests, “they often have to pick up their skills on the job, through informal networks, or by serendipitously meeting someone who shows them the ropes.” This is in part because these skills are not regularly taught in the classroom. The academic field of policy analysis relies on a client-based model, which assumes that the student already has or will obtain sufficient connections or professional experience to work for policy clients. But how do you get a policy client without a degree or existing policy network?

How do you get a policy client without a degree or existing policy network?

Many experts in science, technology, engineering, and mathematics who have tremendous professional experience—precisely the people we should want to be informing policy—do not have the skills to take on client-based policy work. Take a Silicon Valley engineer who wants to change artificial intelligence policy, or a biochemist who wants to reform the pharmaceutical industry. Most such individuals will not enroll in a master’s degree program or move to Washington, DC, to build a policy network. As Goldman emphasizes, we instead need “a practical roadmap or curriculum” to “empower more people from diverse backgrounds and expertise to influence the policy conversation.”

What if we instead developed a field designed specifically to teach subject matter experts how to impact policy from the outside, how to secure a role that gives them leverage from within, or how to do both? At the Aspen Tech Policy Hub, we are working with partners such as the Federation of American Scientists to kick-start the development of this field. We focus on teaching the practical skills required to impact policy—such as how to identify key stakeholders, how to develop a policy campaign that speaks to those stakeholders, and how to communicate ideas to generalists. By investing in the field of policy entrepreneurship, we will make it more likely that the next generation of scientists and technologists have a stronger voice at the policy table.

Director, Tech Policy Hub

The Aspen Institute

To Fix Health Misinformation, Think Beyond Fact Checking

When tackling the problem of misinformation, people often think first of content and its accuracy. But countering misinformation by fact-checking every erroneous or misleading claim traps organizations in an endless game of whack-a-mole. A more effective approach may be to start by considering connections and communities. That is particularly important for public health, where different people are vulnerable in different ways.

On this episode, Issues editor Monya Baker talks with global health professionals Tina Purnat and Elisabeth Wilhelm about how public health workers, civil society organizations, and others can understand and meet communities’ information needs. Purnat led the World Health Organization’s team that strategized responses to misinformation during the coronavirus pandemic. She is also a coeditor of the book Managing Infodemics in the 21st Century. Wilhelm has worked in health communications at the US Centers for Disease Control and Prevention, UNICEF, and USAID.



Transcript

Monya Baker: Welcome to the Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academy of Sciences and by Arizona State University.

How many of these have you heard? “Put on a jacket or you’ll catch a cold.” “Don’t crack your joints or you’ll get arthritis.” “Reading in low light will ruin your eyes.” Health misinformation has long been a problem, but the rise of social media and the COVID-19 pandemic has escalated the speed and scale at which misinformation can spread and the harm that it can do. Countering this through fact-checking feels like an endless game of whack-a-mole. As soon as one thing gets debunked, five more appear. Is there a better way to defuse misinformation?

My name is Monya Baker, senior editor at Issues. On this episode, I’m joined by global health professionals, Tina Purnat and Elisabeth Wilhelm. We’ll discuss how to counter misinformation by building trust and by working with communities to understand their information needs and then deliver an effective response. Tina Purnat led the team at the World Health Organization that strategized responses to misinformation during the coronavirus pandemic. And Elisabeth Wilhelm has worked in health communications at the US Centers for Disease Control, UNICEF and USAID. Tina, Lis, welcome!

Tina Purnat: Hi.

Elisabeth Wilhelm: Thanks.

Baker: Could each of you tell me what you were doing during the pandemic and what did you see in terms of misinformation?

Wilhelm: So, during the pandemic, I was working at CDC and Tina was working at WHO. We’re going to talk a little bit about our experiences, but those don’t represent our former employers or our current ones. They’re just from our own personal experiences and our own personal war stories.

So, early on, I was sent as a responder to Indonesia to support the COVID-19 response in February of 2020. And at the time, there were officially no cases in Indonesia, but several colleagues in several different agencies were quite worried about this. And so, they asked for support. I saw huge challenges regarding COVID there, specifically about misinformation, lack of information, too much information. And all of this really affected the government’s ability to respond and build trust with a really anxious public because so little information was available.

Information overload that was causing this anxiety and panic. And that was paralyzing not just for the public, but for the government and public health institutions that were trying to respond to it.

And at the end of March, I had already ended up in quarantine and I was sent to my hotel because I was in a meeting with too many high-level officials in a very poorly ventilated room. And at the time, I reconnected with Tina because she had decided to set up a mass consultation from her dining room table through WHO to really understand and unpack this new phenomenon of misinformation. At the time, it was called the infodemic, and it was that information overload that was causing this anxiety and panic. And that was paralyzing not just for the public, but for the government and public health institutions that were trying to respond to it.

Purnat: I mean, we both saw the writing on the wall, how big of a problem this was going to become globally. So, in February 2020, I was actually pulled from my day job at WHO. I was working in artificial intelligence and digital health, and I was pulled into the surge support for the WHO emergency response team. And initially, the focus of my work was on how to quickly and effectively understand the different concerns and questions and narratives people were sharing online about COVID more broadly. It really looked at how the digitized, chaotic information environment is impacting people’s health.

So, I collaborated with Lis and many, many other people from practically all over the world, both on the science of infodemiology and building new public health tools, training more people, and also building collaborative networks. That later became known as infodemic management in health emergencies.

Baker: Just to sum up, Lis was in Indonesia before any cases of COVID had been reported publicly. And you, Tina, were called to manage a lot of things from your kitchen table as WHO tried to ramp up a response. What surprised both of you in terms of misinformation?

Wilhelm: Well, I could say that in Indonesia, it was really clear that everyone was caught flat-footed. But this was, of course, I think the story all over the world of how fast misinformation grew and spread and where people’s questions and concerns were not getting answered and then trying to understand who felt like they were responsible for trying to address this misinformation or who was in a position to do something about it.

There’s really no vaccine against misinformation, although people would like there to be. There isn’t one simple answer.

I learned it’s not just government policymakers who play a role in addressing this problem, but it’s also journalists. It’s working with community-based organizations. It’s working with doctors, with nurses, with other health workers and with digital tech experts. And actually, a lot of the lessons that we learned in Indonesia I would bring back to apply in the US context. And there are a lot of global lessons learned on addressing misinformation that we were able to bring back home.

And it’s just part of, I think, the larger story that misinformation is a complex phenomenon. The information environment is increasingly complex. No country is not affected by it. And health systems are just starting to understand and wrestle with how to deal with it, recognizing that there isn’t one silver bullet. There’s really no vaccine against misinformation, although people would like there to be. There isn’t one simple answer. And I think that became increasingly clear during the pandemic.

And a lot of it has to do with trust. You have to build trust and do that in the middle of a pandemic. And it’s really hard to do that when you’re trying to address misinformation where people have laid their trust in others and not necessarily those that are in front of a bank of cameras and are an official spokesperson speaking to the public every day during a press conference. And so, that to me was a big revelation.

Baker: And Tina, I think I heard this phrase from you first: that instead of taking this very content-focused approach to misinformation, a more effective way would be a public health approach to information. What does that mean?

If they find the information, the right information at the right time from the right person, then there’s much less opportunity or a chance that they would actually turn to a less credible source. So, we need to really be thinking much further upstream in this evolution of, well, what does actually create rumors and misinformation.

Purnat: One of the principles in public health, for example, is doing no harm. Another principle is really focusing on prevention instead of only mitigation or just treating disease, but actually preventing it. And I think actually what we’ve learned really most during the pandemic is the need to really understand how the information environment works, how misinformation actually takes hold, how it spreads, and what actually drives the creation and spread of it.

So, if you want to be really proactive, really what we’ve learned is that you need to be paying attention to what are actually people’s questions and concerns, or what is the health information that they cannot find, because those are basically the needs that they’re trying to address. If we meet them, if they find the information, the right information at the right time from the right person, then there’s much less opportunity or a chance that they would actually turn to a less credible source. So, we need to really be thinking much further upstream in this evolution of, well, what does actually create rumors and misinformation. And not only basically play whack-a-mole chasing different posts.

Baker: How do you go about figuring out what a community’s information needs are?

Wilhelm: Ask them. Just don’t assume that a survey is really going to fully encapsulate what people’s information needs are. The best way is to ask them directly. And there are ways of engaging with communities, understanding their needs, and then designing better health services to meet those needs. And that really is a community-centered approach that I hope becomes far more the norm than it has been. It’s the whole idea of not for us without us.

And so, recognizing that blasting messages at communities that we think are going to be important or relevant to their context and that they’re more likely to follow, that’s the way of doing public health from 50 years ago. And we’ve got to change how we understand and work with communities and involve them in the entire process in the business of getting people healthy.

Blasting messages at communities that we think are going to be important or relevant to their context and that they’re more likely to follow, that’s the way of doing public health from 50 years ago. And we’ve got to change how we understand and work with communities and involve them in the entire process.

Public health is about the fact that your individual decisions can have population level impacts. I like to think of it in this way that everybody should wash their hands after they use the bathroom, but there are policies that also encourage that in places where people eat food. When you go to a restaurant and you go to the bathroom, there’s a big sign on the side of the door that says, “Employees must wash their hands.” So, while there might be also social norms and healthcare providers recommending that people wash their hands after using the bathroom, there also are policies and regulations in place that encourage that and enforce that so that everybody stays healthy and you can get a burger without getting food poisoning.

One of the projects I worked on at Brown really tried to understand people’s experiences on a health topic through stories. We tell each other’s stories. We understand the world through stories. Stories are incredibly motivating and powerful, and they’re usually emotionally based. They’re not fact-based necessarily. My story is my experience. But if I share it with you, you might be convinced of a certain thing because I’ve had this experience. If you can look at stories like that in aggregate, you can start identifying, well, are there common experiences that people in this community have and what can that tell us about how they’re being bombarded by the information environment or the common kinds of misinformation they’re seeing or the concerns they have? Or what are some of the social norms here that might be helpful or harmful for people protecting their health? And what can we do to better design services to meet people’s needs? It’s not just understanding how people are affected by misinformation, but it’s the totality of the information environment and when they want to do the healthy thing, is it easy to do?

Misinformation is often spread by people successfully when their values align with what they’re saying.

Purnat: Misinformation is often spread by people successfully when their values align with what they’re saying, that narrative. So, if a person values autonomy and their own control over their health, then they’re much more likely to discuss and share misinformation or health information that is underpinned by protecting people’s freedoms and rights. Or if people have historically had bad experiences with their physicians or their health service, then they might discuss and share health information and misinformation that offers alternative treatments or remedies that don’t require a visit to the doctor’s office.

That’s literally where you could say vulnerabilities also come in. And this is where the challenge of addressing health misinformation lies, because it requires solutions that go beyond only communicating; you actually need to understand and address the underlying reasons and context and situations that people are in that lead them to share or believe in specific health information narratives.

So, in public health, we’re often organized around a particular disease, a specific health topic, et cetera, but that’s not how people or their communities actually experience it day-to-day. So, when we plan on meeting their information and service needs, we have to look at the big picture and then work with all the relevant services and organizations that may meet the community where they’re at.

Baker: I wonder if you could give examples of situations where a community’s information needs were met well and situations where they were not?

Purnat: What’s happening right now in the US is the H5N1 bird flu outbreak in cows. Just yesterday, I did a short search on what people are searching for on Google related to the bird flu. And there are plenty of questions that people have from their day-to-day life that are not being answered yet by any big credible source of health information. The first questions people have when Googling it are about the symptoms of the H5N1 infection. But then the next concern is how is this affecting their pets? And then there are various questions about food safety related to consuming milk and eggs and beef, and also questions about the risk of infection to farmers via handling animal manure.

And these are all information voids that the Googling public and affected workers have, but it’s likely just the tip of the iceberg. And the challenging part here is that it’s not only the public that isn’t getting the information; it’s also that public health authorities and other trusted messengers don’t know what’s going on either. They’re complaining about slow and incomplete access to data and lack of communication from animal and public health agencies. So, this is a very common situation in outbreaks. And, I don’t know, Lis, can you think of any successful examples?

Wilhelm: I really struggled to think of examples. And I don’t think there’s a single health topic where absolutely everyone’s information needs were met, because if that were true, then we would have 100% coverage of all the things your healthcare provider recommends for you. I mean, I think the example I gave, that there are 30,000 books on pregnancy and childbirth on Amazon yet more keep getting published, points to the fact that despite the thought that in the year 2024 every single question that could be asked about pregnancy and childbirth has probably been asked, apparently there’s still demand for more information. And that’s just books.

I don’t think there’s a single health topic where absolutely everyone’s information needs were met because if that were true, then we would have 100% coverage of all the things your healthcare provider recommends for you.

I mean, the most trusted source of information on health, regardless of the topic, is almost always going to be your healthcare provider. And so, it’s that relationship that people have with their healthcare providers that’s also really critically important, if you’re lucky enough to have a primary healthcare provider.

I think the other side of the coin here is what we are doing to support doctors and nurses and midwives and all kinds of health professionals, including pharmacists, who became increasingly important during the pandemic because they started vaccinating people for more than just the flu. These are people who are having direct one-on-one conversations with individuals who have questions and concerns. What are we doing to ensure that they’re getting the training they need to have those effective conversations on health topics, but also recognize that their patients are having all kinds of stuff show up on their Facebook and social media feeds? How do they address questions and concerns and misinformation that their patients are seeing on their screens, and how do we get health workers to recognize that that’s also part of their job? The information environment is starting to affect how doctors and nurses and other healthcare providers provide care.

And I don’t think even medical education has really caught up to the fact that the majority of people get their health information through a small screen. And that is also going to mediate how they understand and take that information on board, and that also might affect their health behavior. How many people do you know that you regularly see for a checkup or to discuss a medical topic who are digital natives or understand how to send out a tweet? We’re working in a space that’s increasingly digital, but sometimes the people who are in charge of our public health efforts and our healthcare systems are not digital natives.

Baker: Yeah. Lis, you had said sometimes in public health, we are our own worst enemies. And I wonder if each of you could tell me, what are the one or two things that you’ve seen that just frustrate you?

Purnat: There’s a long list actually.

Wilhelm: I want to take out a banjo and sing a song and tell you a story. I think the biggest challenge in public health is that science translation piece: what does the research tell us, how do we talk to the general public about it, how do we talk to patients about it and make sure that it’s understood. And sometimes things break down in that translation process.

There is a bible for people who are communicators, who do risk communication, who do crisis and emergency communication. And there are seven principles in this bible of how you’re supposed to communicate to the public. The first three are be first, be right, be credible. The problem is that if you spend all of your efforts trying to ascertain whether or not you’re right, you might not be first. You end up being second, third, fourth, or fifth.

We’re really bad at exploring complex information. And we tend to believe the first thing we hear.

And the problem is that we know from psychology and science that in emergencies and during outbreaks and crises, people’s brains operate differently. And the way it works differently is that we tend to seek out new sources of information. We’re really bad at exploring complex information. And we tend to believe the first thing we hear, which means if we’re not the first thing you heard, but the second, third or fourth or fifth, it’s really, really difficult to dislodge the first thing that you heard. So, that to me is shooting ourselves in the foot.

It’s really difficult to work as a communicator when you’re trying to balance a lack of evidence and science and being able to speak from a place of evidence. And when you’re trying to talk to an anxious public that has questions that we don’t have great answers to yet. And that is a problem that we’re experiencing every single time that there’s a new outbreak or a new disease or a new health threat, where we are racing against time to catch up.

Unfortunately, the internet will always move faster than that. And those questions and concerns will mushroom and turn into misinformation extremely quickly before someone with credibility could step in front of those cameras and deliver those remarks at a press conference. And at the end of the day, who actually listens to that press conference and who believes what is said by that spokesperson?

Purnat: I mean, just to build on what Lis said, one thing that I think we’re not yet appreciating is that this swirl of information and the conversations and reactions impacts our ability to promote public health. Literally, it cuts across individual people and communities, but it also impacts health systems and even health workers themselves, and we’re not fully appreciating that this is a systemic challenge.

So, think about the teen vaping epidemic that basically seems to have caught everyone by surprise. It’s been propagated by very, very effective lifestyle-based social marketing campaigns and attractive design of the vapes that specifically spoke to teens. And while we were working to understand the epidemiological picture and really putting in the effort to get reliable evidence around the teen vaping problem, the marketing that was targeting teens continued for many years, unaddressed.

Baker: One thing I have heard is that too often when planning a response, people focus on this—I think it’s Lis who called them magic messages. Tell me about that and why it’s not going to be the most effective thing.

Wilhelm: So, maybe to put it this way, when was the last time that you had a conflict or disagreement with someone and you were searching for the right words and you found the right words and you said your magic words and it solved the problem immediately? This doesn’t happen in real life. If messages were in fact magical and you just had to find them and identify them, the entire marketing industry would be out of a job and everyone would follow their healthcare provider’s advice on getting adequate exercise and protein in their diets, right?

If you want to understand what a person’s thinking or feeling, ask them. Just don’t make assumptions because that’s how poorly designed messages are developed and those can be actually harmful.

That’s just not how humans work. We’re not empty brains walking around waiting for messages to be filled in our brains that we then follow. We come with our own basket case of experiences, of biases, of our own literacies or lack thereof, our own perspectives on the world, our culture, our religious beliefs, our values. Those all color how we interact with the world and how we seek and get health services.

And so, there’s no magic messages that’s going to cut through that. People are different. Every community is different, and we have to recognize that in that diversity, trying to identify what people’s information needs are is going to look very different from place to place and from topic to topic, which goes back to if you want to understand what a person’s thinking or feeling, ask them. Just don’t make assumptions because that’s how poorly designed messages are developed and those can be actually harmful.

Purnat: And actually, this links also to how the media environment in general that we live in has changed. The days when people sat around the living room and listened to the nightly newscast, that’s like from a hundred years ago. Nowadays, we don’t receive information on health or other topics from a single source that we trust. We’re more like information omnivores. We consume information from different sources online and offline. We trust some more than others. So, when you attempt to blast out health messages into the world like a radio signal, and then you’re hoping that people are tuning in, that’s destined to fail.

When you attempt to blast out health messages into the world like a radio signal, and then you’re hoping that people are tuning in, that’s destined to fail.

But the problem there is that there’s no longer any one organization or person that has a monopoly on speaking about credible health information. And that challenges how we need to be dealing with or interacting with information environments. We wouldn’t recommend that you hire a beauty influencer to talk about vaccine safety. That’s because, while they may be credible to their audience because of their beauty know-how, they probably won’t really move the needle in terms of public health outcomes. But we could work with beauty influencers probably about things that relate to social media because they’re experts in that.

Baker: So, not just the message, also the messengers?

Wilhelm: It’s the medium. It’s the message. It’s the messengers. It’s everything. I mean, think about it. For example, when you get an alert on your phone saying that a tornado watch has just become a tornado warning and that you should go seek shelter or shelter in place, you’re getting the right information at the right time at the right place because geographically, the phone knows where you’re located and it overlaps with where this event is occurring. But also when we think about magic messages and we think about trust, we assume that people trust the messenger. What if people don’t trust the Weather Channel or the National Weather Service that provides those alerts to their phone?

And if we kind of extrapolate that to other areas of health, people’s trust in their doctor and the CDC and the National Pediatric Association might all be very different. We know that these are credible sources of information, but if these are not trusted, people will seek information from other alternative sources that better align with their values and their information needs. And that’s the real issue.

It’s not about needing to improve trust in these big institutions. It’s just recognizing that people have varying levels of trust in different groups of people, different voices, different messengers, different platforms, and so on, and recognizing that people get and work with trusted information from different spaces.

Baker: And Tina, you’ve been thinking about how it’s not just information that needs to be supplied, that it’s not just messages that need to be supplied. It’s important to also know how the services will be delivered or make sure that services are being delivered.

Purnat: In ways that actually meet the needs, yes. So, for example, during the pandemic, when the vaccine rollout started happening, many different countries used digital portals, digital tools that people could use to schedule their vaccine shot. But some communities either didn’t have internet access, didn’t have devices they could use to schedule an appointment, or they were just too far from locations that were providing the vaccine. That meant that actually, even though on paper the arrangement and the logistics sounded really well thought out, well, some people missed out because they weren’t able to actually take advantage of what the health system was asking them to do and offering.

Baker: Right. So, the message was delivered, but the services not really, not so much?

Purnat: And probably generated some frustration, which led to erosion of trust and frustration with the health authorities.

Wilhelm: A colleague of ours would say, “You want to make a health service fun, easy and accessible.” And so, just recognizing that if you want people to do something, you want to make it as easy as possible for them to do it. And so, the example that Tina gave is a really great one, where there’s a mismatch.

Or early in the pandemic, you are instructing people who might have family members that may have been exposed to the COVID virus, that they should isolate at home, that they should take these precautions so they don’t transmit the virus to other family members. But how exactly is that supposed to work if you are living in a multigenerational household in a slum somewhere where you don’t have access to running water? So, the public health guidance might be very nice, but completely incomprehensible and completely unactionable by the average person that’s living in that type of community.

You don’t want to set people up to fail. If you’re talking to the general public about what they should do, you really need to be specific.

And so, we also have to recognize that you don’t want to set people up to fail. If you’re talking to the general public about what they should do, you really need to be specific as to, “Well, what do I do if I have an elderly person that has accessibility issues or somebody who’s immunocompromised in my family,” or “What do I do if a family member has recovered from COVID? Are they eligible to receive the COVID vaccine?” I mean, these are common questions that people were asking, and the guidance wasn’t always really clear as to what people were supposed to do in those situations.

Baker: You said that just improving communications is not going to make everything better. So, what else could people be doing systematically?

Wilhelm: My pet peeve really is this rush to jump to solutions that tend to be coercive in nature, such as content takedowns, which I think can actually do more damage in the long run, versus the harder and more necessary work of building trust and improving the breadth and depth of how healthcare workers and health systems engage with communities and with patients. There’s no magic button you can push, just like there’s no magic message that increases trust. And there’s no magic button that you can push that can defeat all the underlying reasons why someone might believe misinformation instead of what you’re telling them.

Misinformation represents a failure—not of that individual or that community—but of a government and a health system that is not worthy of trust.

When people believe misinformation, and when communities act on misinformation, that represents a failure—not of that individual or that community—but of a government and a health system that is not worthy of trust. If people believe misinformation instead of their healthcare provider, that tells me that something has gone horribly wrong, and it isn’t on the individual.

We need to understand that this is a systemic public health problem. And we as public health professionals are on the hook to address these complex problems, just as we’ve addressed other complex societal problems such as drunk driving or smoking, where solutions require pulling a lot of different levers at a lot of different levels.

Baker: I’ve really enjoyed learning more about this. I guess I’ll just ask each of you for one thing that you think could be done or that must be understood to move from sort of a less effective narrow approach to a more effective, broader approach.

Wilhelm: You know, the power of the internet is in your hands. As a consumer, as an individual, what you say and what you do and how you interact with people in your online communities and your offline communities can be extremely powerful. And so, take advantage of that power. Have conversations with family members and friends when they have questions or concerns. Point people in the direction of credible information. Engage with people. Do so respectfully. Not everything has to be a shouting match on the internet.

And that can go a long way to creating a much healthier information environment where people feel like they can voice their questions and concerns without being shouted down or talked over or dismissed just because they have legitimate concerns. And so, if we can bring some of that into our online and offline interactions every day, I think that would make things a little bit healthier.

We do need public health leadership that understands the critical and integral role that the digital information environment has in health.

Purnat: We do need public health leadership that understands the critical and integral role that the digital information environment has in health. And we need to be able to deal with how technology might be misdirecting people to the wrong health advice. All too often, health authorities still treat their websites like digital magazines, but in reality they need to publish health information in ways that get picked up, disseminated automatically online, and used by people.

So, one thing that we need to recognize in public health is that this isn’t just the domain of one or two functions or offices in a CDC or a National Institute of Public Health or a Ministry of Health or a health department. This is actually something that is challenging every role within the health system. That means patient-facing and community-facing roles, as well as researchers, analysts, and even policy advisors.

And that means we need to invest in updating our tools and the way that we understand commercial and socioeconomic determinants of health, and that understanding needs to trickle into and be integrated into our tools, into the way that we support our health workforce, and into how it informs policy. It’s a bit of a tough nut to crack, but we can mobilize and use the expertise of practically every person who works in public health and beyond.

Wilhelm: This is a global problem. This affects every country from Afghanistan to the US to Greece to Zimbabwe. Everybody’s got the same issues trying to understand and address this complex information environment. And so, we can all learn from one another and recognize that this is a truly global new public health problem that we need to come up with better strategies to address. So, I think we should pay attention to this increasingly small planet that we live on: what happens in other countries affects what happens in ours, especially when it comes to how information is shared and amplified online.

Baker: I’d like to end by asking you about the Infodemic Manager training program that you worked on with the World Health Organization. You have called it a unicorn factory. Why do infodemic managers call themselves unicorns?

The perfect infodemic manager is someone who has public health experience that understands how the internet works, understands digital health, understands communication and social and behavioral science. They understand public health, epidemiology, outbreak response, emergency management. And there are very few humans on the planet who have all these skill sets.

Wilhelm: It’s the idea that the perfect infodemic manager is someone who has public health experience that understands how the internet works, understands digital health, understands communication and social and behavioral science. They understand public health, epidemiology, outbreak response, emergency management. And there are very few humans on the planet who have all these skill sets in one body.

And so, when we developed this training, we invited a very large group of humans from many different backgrounds to come together to learn some of these skills. And so, the joke became that the trainings were unicorn factories, where people went in with their existing skills and picked up a few new ones, and then they came out the other end with a little bit more sparkle and a little bit more ability to address health misinformation. And this took on a life of its own. These people decided to call themselves unicorns. They’re out there in the world, and you will see them with little unicorn buttons and stickers. And it’s kind of cool.

Purnat: And they were extremely committed and found this so valuable that we had people who still wanted to participate while their country had massive flooding and monsoons or, for example, while dealing with family tragedy. And this was a testament to the fact that people who worked in their communities, who worked in the COVID-19 response, were recognizing, when they talked to people from other countries, that they were seeing the same challenges. They were not alone in experiencing this. It was not specific to their country. And it was a big revelation to everyone that we can actually help each other a lot by talking to each other, supporting each other, sharing what we’re experiencing and what we’re doing, and trying things out to address these issues.

We’ve trained people from 132 countries over the course of several years throughout this process. And it’s a small moment of joy in what was otherwise a very difficult, complex, and horrifying outbreak response, because many of the people being trained were doing this at all hours of the night, in all parts of the world, on crappy internet connections, sitting together to try to solve this problem and learn together for four weeks in their off hours, while also responding in their day jobs to their country’s COVID outbreak.

Wilhelm: So, you would have the Canadian nurse talking to the polio worker in Afghanistan, talking to the behavioral scientist in Australia, talking to the journalist in Argentina, all of whom were taking the training and saying, “Let’s compare notes,” and then realizing how similar the challenges were that they were facing, but also finding a great way to come up with new solutions to some of those problems together.

Baker: Tina, Lis, thank you for this wonderful conversation. I hope it has inspired more people to become unicorns. Find out more about how to counter health misinformation by visiting our show notes.

Please subscribe to the Ongoing Transformation wherever you get your podcasts. And thanks to our podcast producer, Kimberly Quach, and our audio engineer, Shannon Lynch. My name is Monya Baker, Senior Editor at Issues in Science and Technology. Thank you for listening.

Missing Links for an Advanced Workforce

Recent investments in the US advanced manufacturing industry have generated national demand for workers. However, meeting this demand—particularly for technicians—is inhibited by a skills gap. In the sector of microelectronics manufacturing, it is critical that we not only pursue effective technician education but also minimize barriers that hinder quality of education and program completion. For example, there are limited accessible avenues for students to gain hands-on industry experience. Educational programs also face difficulties coordinating curriculum with local workforce needs. In “The Technologist” (Issues, Winter 2024), John Liu and William Bonvillian suggest an educational pathway targeting these challenges. Their proposals align with our efforts at the Micro Nano Technology Education Center (MNT-EC) to effectively train microelectronic industry technicians.

As the authors highlight, we must strengthen the connective tissue across the workforce education system. MNT-EC was founded with the understanding that there is strength in community bonds. We facilitate partnerships between students, educators, and industry groups to offer support, mentoring, and connections to grow the technician workforce. As part of our community of practice, we partner with over 40 community colleges in a coordinated national approach to advance microelectronic technician education. Our programs include an internship connector, which directs students toward hands-on laboratory education; a mentorship program supporting grant-seeking educators; and an undergraduate research program that backs students in two-year technical education programs.

In the sector of microelectronics manufacturing, it is critical that we not only pursue effective technician education but also minimize barriers that hinder quality of education and program completion.

These programs highlight community colleges’ critical partnership role within the advanced manufacturing ecosystem. As Liu and Bonvillian note, community colleges have unique attributes: connections to their local region, diverse student bodies, and workforce orientations. Ivy Tech Community College, one of MNT-EC’s partners, is featured in the article as an institution utilizing its strengths to educate new technologists. Ivy Tech, as well as other MNT-EC partners, understands that modern manufacturing technicians must develop innovative systems thinking alongside strong technical skills. To implement these goals, Ivy Tech participates in a partnership initiative funded by the Silicon Crossroads Microelectronics Commons Hub. Ivy Tech works with Purdue University and Synopsys to develop a pathway that provides community college technician graduates with a one-year program at Purdue, followed by employment at Synopsys. This program embodies the “technologist” education, bridging technical education content taught at community colleges with engineering content at Purdue.

As we collectively develop this educational pathway for producing technologists, I offer two critical questions for consideration. First, how can we recruit and retain the dedicated technicians who will evolve into technologists? MNT-EC has undertaken strategic outreach to boost awareness of the advanced manufacturing industry. However, recruitment and retention remain a national challenge. Second, how can we ensure adequate and sustained funding to support community colleges in this partnership? Investing in the nation’s manufacturing workforce by building effective educational programs that support future technologists capable of meeting industry needs will take a team and take funding.

Principal Investigator, Micro Nano Technology Education Center

Professor, Pasadena City College

Reports & Communications,
MNT-EC

Communications & Outreach,
MNT-EC

Anyone concerned about the state of US manufacturing should read with care John Liu and William B. Bonvillian’s essay. They propose a new occupational category that they maintain can both create opportunities for workers and position the United States to lead in advanced manufacturing.

Their newly coined job, “technologist,” requires “workers with a technician’s practical know-how and an engineer’s comprehension of processes and systems.” This effectively recognizes that without an intimate connection between innovation (where the United States leads) and manufacturing (where it lags), the lead will dissipate, as recent history has demonstrated. In this context, the authors lament the US underinvestment in workforce education and particularly the low funding for community colleges, which can serve as critical cogs in training skilled workers.

Indeed, the availability of a skilled workforce ready to support twenty-first century production is the most significant and immediate problem the United States faces in trying to restore its overall manufacturing capability. And semiconductors are on the front line in the struggle. A report released in December 2023 by the Commerce Department’s Bureau of Industry and Security, Assessment of the Status of the Microelectronics Industrial Base in the United States, which summarizes industry responses to a survey, found that respondents “consistently identified workforce-related challenges as the most crucial to their business,” most frequently citing workforce-related issues (e.g., labor availability, labor cost, and labor quality) as important to expansion or construction decisions.

Other data support this perception. A July 2023 report from the Semiconductor Industry Association, Chipping Away: Assessing and Addressing the Labor Market Gap Facing the U.S. Semiconductor Industry, projects that by 2030 the semiconductor industry’s workforce will grow to 460,000 jobs from 345,000 jobs, with 67,000 jobs at risk of going unfilled at current degree completion rates. And this problem is economywide: by the end of 2030, an estimated 3.85 million additional jobs requiring proficiency in technical fields will be created—with 1.4 million jobs at risk of going unfilled.

The availability of a skilled workforce ready to support twenty-first century production is the most significant and immediate problem the United States faces in trying to restore its overall manufacturing capability.

The US CHIPS and Science Act, passed in 2022, appropriated over $52 billion in grants, plus tens of billions more in tax credits and loan authorization, through new programs at the Department of Defense, the National Institute of Standards and Technology (NIST), and the National Science Foundation. Central to these new initiatives is workforce development. For example, all new CHIPS programs must include commitments to provide workforce training. In addition, NIST’s National Semiconductor Technology Center proposes establishing a Workforce Center of Excellence, a national “hub” to convene, coordinate, and set standards for the highly decentralized and fragmented workforce delivery system.

To rapidly scale up regionally structured programs to meet the demand, it is wise to examine existing initiatives that have demonstrated success and can serve as replicable models. Two examples with a national footprint are:

  • NIST’s Manufacturing Extension Partnership has built a sustained business model in all states by helping firms reconfigure their operations through lean manufacturing practices, including shop floor reorganization. And the market for this service is not just tiny machine shops, but also enterprises with up to 500 employees, which represent over 95% of all manufacturing entities and employ 50% of all workers.
  • The National Institute for Innovation and Technology, a nonprofit sponsored by the Department of Labor, has developed an innovative Registered Apprenticeship Program in collaboration with industry. Several leading semiconductor companies are using the system to attract unprecedented numbers of motivated workers.

Liu and Bonvillian have described a creative approach to the major impediment to restoring US manufacturing. Rapid national scale-up is essential to success.

Senior Advisor

American Manufacturing Communities Collaborative

Former NIST Associate Director for Innovation and Industry Services

Effective recruitment and training programs are often billed as the key to creating the deep and capable talent pool needed by the nation’s industrial base. The task of creating them, however, has proven Sisyphean for educators. Pathways nationwide are afflicted with the same trio of problems: lagging enrollment; high attrition; and disappointing problem solving, creative thinking, and critical reasoning skills in graduates.

In response to these anemic results, the government has increased funding for manufacturing programs, hoping educators can produce the desired talent through improved outreach and instruction. Looking at the causes of the key problems, however, reveals that even the best programs, such as the one at the Massachusetts Institute of Technology that John Liu and William B. Bonvillian describe, are limited in their potential to solve them.

Recruitment is primarily hamstrung by the sector’s low wages (particularly at the entry level for workers with less than a bachelor’s degree). In many markets, entry-level technician compensation is on par with that offered by burger chains and big box stores. Technologist salaries ring in higher, but many promising candidates (especially high schoolers) opt for a bachelor’s degree instead, because the return on investment is often better. Until that math changes, technician and technologist pathways will never outmatch the competition from other sectors or four-year degrees, which pay more, provide a more attractive job structure, or both.

Furthermore, educators cannot easily teach skills such as aptitude for innovation and technical agility in class: students master theory in school and practical application on the job. As a former Apple engineer explained to me, it is not until entering the workforce that people are routinely exposed to the conditions that develop diversity of thought: open-ended problems that require workers to engage with an infinite solution space to arrive at an answer. While approaches like project-based learning can help students acquire a foundation prior to graduation, companies must accept that the bulk of the learning that drives creativity and problem solving will take place on the factory floor, not in the classroom.

It is not until entering the workforce that people are routinely exposed to the conditions that develop diversity of thought: open-ended problems that require workers to engage with an infinite solution space to arrive at an answer.

This means that to address the nation’s manufacturing workforce shortcomings, we must turn to industry, not education. Compensation needs to be raised to reflect the complexity and effort demanded by manufacturing jobs when compared with other positions that pay similar wages. Companies also need to embrace their role as a critical learning environment. Translating classroom-based knowledge into real-world skill takes time and effort by both students and industry. Many European countries with strong manufacturing economies run multiyear apprenticeship programs in recognition of this fact. To date, the United States has resisted the investment and cooperation required to create a strong national apprenticeship program. Unless and until that changes, we should not expect our recent graduates to have the experience and skill of their European counterparts.

In sum, programs such as the one at MIT should be replicated in every manufacturing market across the nation. But in the absence of competitive compensation and scaled apprenticeships, educators cannot create a labor pool with the quantity of candidates or technical chops to shore up the country’s industrial sector.

Senior Fellow and Director of Workforce Policy

The Century Foundation

John Liu and William B. Bonvillian make a compelling case for bridging the gap between engineers and technicians to support the US government’s efforts for reshoring and reindustrialization. They call for new training programs to produce people with a skill level between technician and engineer—or “technologists,” in their coinage. But before creating new programs, we should examine how the authors’ vision might fit within the nation’s existing educational system.

It is surprising that Liu and Bonvillian don’t explain how their new field differs from one that already bridges the technician-engineer gap: engineering technology. Engineering technology programs offer degrees at the associate’s, bachelor’s, master’s, and even PhD levels. And the programs graduate substantial numbers of students. According to the US Department of Education, more than 50,000 associate’s and 18,000 bachelor’s degrees in engineering technology were awarded in 2021–22. The number of bachelor’s degrees represents about 15% of all engineering degrees awarded during that period. The field also has a strong institutional foothold. Programs are accredited by the Accreditation Board for Engineering and Technology and the field has an established Classification of Instructional Programs code (15.00).

Engineering and engineering technology programs have roots that go back to the late nineteenth century. They were not completely distinct from one another until the 1950s, when engineering schools adopted many of the curricular recommendations made in the American Society for Engineering Education’s 1955 report, commonly known as the Grinter Report, and made engineering education more “scientifically oriented.” Engineering technology programs tend to require less advanced mathematics and science but much more applied and implementation work with real-world equipment.

Engineering technology programs tend to require less advanced mathematics and science but much more applied and implementation work with real-world equipment.

A more recent report from the National Academies, Engineering Technology Education in the United States, published in 2017, describes the state of the field, its evolution, and the need to elevate its branding and visibility among students, workers, educators, and employers. The report describes graduates of engineering technology programs as technologists, the same job title Liu and Bonvillian use for their new type of worker who possesses skills that combine what they term “a technician’s practical know-how and an engineer’s comprehension of processes and systems.”

The preface of the National Academies report provides a warning to those taking a “build it and they will come” approach. It states that engineering technology, despite its importance, is “unfamiliar to most Americans and goes unmentioned in most policy discussions about the US technical workforce.” Liu and Bonvillian are advocating that a new, apparently similar, field be created. How do they ensure it won’t suffer the same fate?

The market gap that the authors identify, along with the lack of awareness about engineering technology, point to a deeper problem in the US workforce development system: employers are no longer viewed as being responsible for taking the lead role in guiding and investing in workforce development. Employers are the ones that can specify skills needs, and they profit from properly trained workers, yet we have come to expect too little from them. Until we shift the policy conversation by asking employers to do more, creating programs that develop technologists will fail to live up to Liu and Bonvillian’s hopeful vision.

Associate Professor

Department of Political Science

Howard University

John Liu and William Bonvillian put forth a thoughtful proposal that US manufacturing needs a new occupational category called “technologist,” a mid-level position sitting between technician and engineer. To produce more of this new breed, the authors encourage community colleges to deliver technologist education, particularly by adopting the curricular framework used in an online program in manufacturing run by the Massachusetts Institute of Technology. And in a bit of good news, the US Defense Department has started funding its adaptation for technologist education.

But more is needed. In scaling up technologist programs across community colleges, Liu and Bonvillian propose focusing first on new students, followed by programs for incumbent workers. I might suggest the inverse strategy to center job quality in the creation of technologist jobs. In this regard, the authors state something critically important: “to incentivize and enable workers to pursue educational advances in manufacturing, companies need to offer high-wage jobs to employees.” Here, the United States might take some lessons from Germany, where manufacturers pay their employees 60% more than US companies do, have a robust apprenticeship system, and generally prioritize investments in human capital over capital equipment purchases.

For too long, US workforce policy has primarily prioritized employer needs. It’s time to put workers back at the heart of workforce policy, as my colleague Mary Alice McCarthy recently argued in a coauthored article in DC Journal. Efforts by community colleges can be important here. By partnering with employers, labor unions, and manufacturing intermediaries such as federal Manufacturing Extension Partnerships to upskill incumbent technicians to become technologists, community colleges can expand upward mobility for workers who are part of the 40 million “some college, no degree” population and set the stage for discussing competitive wages and job quality with employers. Plus, they can ensure that these bold new programs are aligned with employers’ needs—especially critical for emerging jobs.

Community colleges can expand upward mobility for workers who are part of the 40 million “some college, no degree” population and set the stage for discussing competitive wages and job quality with employers.

Indeed, the million-plus workers already employed across 56,000 companies within the US industrial base represent an opportunity to recruit program enrollees and provide mobility in a critical sector of manufacturing that arguably ought to be at the forefront of technologist-enabled digital transformation. Then, with the technologist role cemented in manufacturing—with fair pay—community colleges can turn to recruiting new students for the new occupation.

Policymakers should also consider ways to promote competitive pay and job quality as they fund and promote technologist education. Renewing worker power in manufacturing is one such avenue. Here, labor unions can prove useful. The politics of unions have changed. An August 2023 Gallup poll found that 67% of respondents approved of labor unions on the heels of a summer when both President Biden and former President Trump made history by joining picket lines during the United Auto Workers strike.

The time is right for manufacturing technologists. New federal funding, such as through the National Science Foundation’s Enabling Partnerships to Increase Innovation Capacity program and the Experiential Learning for Emerging and Novel Technologies program, is optimally suited to boost technologist program creation at community colleges. But even with such added support, ensuring that technologist jobs are quality jobs ought to be an imperative for employers who will benefit by bringing the authors’ sensible and needed vision to fruition.

Senior Advisor on Education, Labor, and the Future of Work

Head, Initiative on the Future of Work and the Innovation Economy

New America

An Innovation Economy in Every Backyard

Grace J. Wang’s timely essay, “Revisiting the Connection Between Innovation, Education, and Regional Economic Growth” (Issues, Winter 2024), warrants further attention given the foundational impact of a vibrant innovation ecosystem—ideas, technologies, and human capital—on the nation’s $29 trillion economy. She aptly notes that regional innovation growth requires “a deliberate blend of ideas, talent, placemaking, partnerships, and investment.”

To that end, I would like to amplify Wang’s message by drawing attention to the efforts of three groups: the ongoing work of the Brookings Institution, the current focus of the US Council on Competitiveness, and the catalytic role of the National Academies Government-University-Industry Research Roundtable (GUIRR) in advancing the scientific and innovation enterprise.

First, Brookings has placed extensive emphasis on regional innovation, focusing on topics such as America’s advanced industries, clusters and competitiveness, urban research universities, and regional universities and local economies. Recently, Mark Muro at Brookings collaborated with Robert Atkinson at the Information Technology and Innovation Foundation to produce The Case for Growth Centers: How to Spread Tech Innovation Across America. The report identified 35 place-based metropolitan locations that are utilizing the right ingredients—population; growing employment; university spending on R&D in science, technology, engineering, and mathematics per capita; patents; STEM doctoral degree production; and innovation sector job share—to realize innovation growth centers driven by targeted, peer-reviewed federal R&D investments.

The US Council on Competitiveness has also focused on place-based innovation. In 2019, the council launched the National Commission on Innovation and Competitiveness Frontiers, which involves a call to action described in the report Competing in the Next Economy: The New Age of Innovation. The council also formed four working groups, including one called The Future of Place-Based Innovation: Broadening and Deepening the Innovation Ecosystem. From these and other efforts, the council has proposed new recommendations that call for “establishing regional and national strategies to coordinate and support specialized regional innovation hubs, investing in expansion and retention of the local talent base, promoting inclusive growth and innovation in regional hubs, and strengthening local innovation ecosystems by enhancing digital infrastructure and local financing.”

Finally, I want to emphasize the important role GUIRR plays in advancing innovation and the national science and technology agenda. Through the roundtable, leaders from federal science agencies, universities, and industry proactively collaborate to frame issues and conduct activities that advance the national enterprise. GUIRR workshops and reports have also historically included elements to advance the innovation enterprise, including regional innovation.

Leaders from federal science agencies, universities, and industry proactively collaborate to frame issues and conduct activities that advance the national enterprise.

To end with a personal anecdote, I’ve witnessed the success that results from such a nexus, especially from one that was recently highlighted by Brookings: the automotive advanced manufacturing industry in eastern Tennessee. In my former position as chief research administrator at the University of Tennessee, I was deeply involved in that regional innovation ecosystem, along with other participants at Oak Ridge National Laboratory and in the automotive industry, allowing me to experience firsthand just how impactful these ingredients can be when combined and maximized.

Moreover, as GUIRR celebrates 40 years of impact this year, I know it will continue to serve as a strong proponent of the nation’s R&D and innovation enterprise while continually refining and advancing the deep and critical collaboration between government, universities, and industry, as laid out in Wang’s article and amplified by Brookings and the US Council on Competitiveness.

President, The University of Texas at San Antonio

Council Member, National Academies Government-University-Industry Research Roundtable

National Commissioner, US Council on Competitiveness

As Grace J. Wang notes in her article, history has shown the transformative power of innovation clusters—the physical concentration of local resources, people brimming with creative ideas, and support from universities, the federal government, industry, investors, and state and local organizations.

In January 2024, the National Science Foundation made a groundbreaking announcement: the first Regional Innovation Engines awards, constituting the broadest and most significant investment in place-based science and technology research and development since the Morrill Land Grant Act over 160 years ago. Authorized in the bipartisan CHIPS and Science Act of 2022, the program’s initial two-year, $150 million investment will support 10 NSF Engines spanning 18 states, bringing together multisector coalitions to put these regions on the map as global leaders in topics of national, societal, and geostrategic importance. Subject to future appropriations and progress made, the teams will be eligible for $1.6 billion from NSF over the next decade.

NSF Engines have already unlocked another $350 million in matching commitments from state and local governments, other federal agencies, philanthropy, and private industry, enabling them to catalyze breakthrough technologies in areas as diverse as semiconductors, biotechnology, and advanced manufacturing while stimulating regional job growth and economic development. Places such as El Paso, Texas, and Greensboro, North Carolina, will see lasting impacts as they are transformed into inclusive, thriving hubs of innovation capable of evolving and sustaining themselves for decades to come.

Places such as El Paso, Texas, and Greensboro, North Carolina, will see lasting impacts as they are transformed into inclusive, thriving hubs of innovation capable of evolving and sustaining themselves for decades to come.

The NSF Engines program is led by NSF’s Directorate for Technology, Innovation, and Partnerships (TIP), which builds upon decades of NSF investments in foundational research to grow innovation and translation capacity. TIP recently invested another $20 million in 50 institutions of higher education—including historically Black colleges and universities, minority-serving institutions, and community colleges—to help them build new partnerships, secure future external funding, and tap into their regional innovation ecosystems. Similarly, NSF invested $100 million in 18 universities to expand their research translation capacity, build upon academic research with the potential for technology transfer and societal and economic impacts, and bolster technology transfer expertise to support entrepreneurial faculty and students.

NSF also works to meet people where they are. The Experiential Learning for Emerging and Novel Technologies (ExLENT) program opens access to quality education and hands-on experiences for people at all career stages nationwide, leading to a new generation of scientists, engineers, technicians, practitioners, entrepreneurs, and educators ready to pursue technological innovation in their own communities. NSF’s initial $20 million investment in 27 ExLENT teams is allowing individuals from diverse backgrounds and experiences to gain on-the-job training in technology fields critical to the nation’s long-term competitiveness, paving the way for good-quality, well-paying jobs.

NSF director Sethuraman Panchanathan has stated that we must create opportunities for everyone and harness innovation anywhere. These federal actions collectively acknowledge that American ingenuity starts locally and is stronger when there are more pathways for workers, startups, and aspiring entrepreneurs to participate in and shape the innovation economy in their own backyard.

Assistant Director for Technology, Innovation and Partnerships

National Science Foundation

Grace J. Wang does an excellent job of capturing the evolution of science and engineering research, technological innovation, and economic growth. She also connects these changes to science, technology, engineering, and mathematics education on the one hand and employment shifts on the other. And she implores us to seriously consider societal impacts in the process of research, translation, and innovation.

I believe developments over the past decade have made these issues far more urgent. Here, I will focus on three aspects of innovation: technological direction, geographic distribution, and societal impacts.

Can innovation be directed? Common belief in the scientific research community is that discovery and innovation are unpredictable. This supports the idea of letting hundreds of flowers bloom—fostered by broad support for all fields of science and engineering. Increasingly, however, the complexity and urgency of societal grand challenges are leading to a case for mission-oriented innovation. As Mariana Mazzucato pointed out in a report titled Mission-Oriented Research & Innovation in the European Union: “By harnessing the directionality of innovation, we also harness the power of research and innovation to achieve wider social and policy aims as well as economic goals. Therefore, we can have innovation-led growth that is also more sustainable and equitable.”

Increasingly, the complexity and urgency of societal grand challenges are leading to a case for mission-oriented innovation.

Can innovation be spread geographically? Technological innovations and their economic benefits have been far from uniformly distributed. Indeed, while some regions have prospered, many have been left behind, if not regressed. Scholars have offered several ways to address this distressing and polarizing situation. With modesty, I point to a 2021 workshop on regional innovation ecosystems, which Jim Kurose, Cheryl Martin, Susan Martinis, and I organized (and Grace Wang participated in). Funded by the National Science Foundation, the workshop led to the report National Networks of Research Institutes, which helped spur development of NSF’s Regional Innovation Engines program; that program recently made its first awards to 10 innovation clusters distributed across the nation, with up to $1.6 billion available from NSF over the next decade. Much, much more, of course, remains to be done.

Can the negative societal impacts of innovation be minimized, and the positive impacts maximized? As an example of the downside, consider some of the profound negative impacts of smartphones, social media, and mobile internet technologies. As Jaron Lanier, a technology pioneer, pointed out: “I think the short version is that a lot of idealistic people were unwilling to consider the dark side of what they were doing, and the dark side developed in a way that was unchecked and unfettered and unconsidered, and it eventually took over.” At a minimum, everyone in the science and engineering research community should become more knowledgeable about the fundamental economic, sociological, political, and institutional processes that govern the real-world implementation, diffusion, and adoption of technological innovations. We should also ensure that our STEM education programs expose undergraduate and graduate students to these processes, systems, their dynamics, and their driving forces.

Fundamentally, I believe that we need to get better at anticipatory technology ethics, especially for emerging technologies. The central question all researchers must attempt to answer is: what will the possible positive and negative consequences be if their technology becomes pervasive and is adopted at large scale? Admittedly, due to inherent uncertainties in all aspects of the socio-technological ecosystem, this is not an easy question. But that is not enough reason to not try.

Vice Chancellor for Research

University of California, Irvine

Technology innovation can be a major force behind regional economic growth, but as Grace J. Wang notes, it takes intentional coordination for research and development-based regional change to happen. Over the past year, as parties coalesced across regions to leverage large-scale, federally funded innovation and economic growth programs, UIDP, an organization devoted to strengthening university-industry partnerships, has held listening sessions to better understand the challenges these regional coalitions face.

In conversations with invested collaborators in diverse regions—from Atlanta, New York, and Washington, DC, to New Haven, Connecticut, and Olathe, Kansas—we’ve learned that universities can easily fulfill the academic research aspects of these projects. Creating the organizational glue that engages and keeps academic, industry, local and state government, and nonprofit partners collaborating as a whole is more challenging. One solution successful communities use is creating a new, impartial governing body; others rely on an impartial community connector as neutral convener.

But other program requirements remain a black box—specifically, recruiting and retaining talent and developing short- and long-term metrics. At least for National Science Foundation Regional Innovation Engines awardees, it is hoped that replicable approaches to address these issues will be developed in coordination with that effort’s MIT-led Builder Platform.

Creating the organizational glue that engages and keeps academic, industry, local and state government, and nonprofit partners collaborating as a whole is more challenging.

Data specific to a region’s innovation strengths and gaps can lend incredible insight into the ecosystem-building process. Every community has assets that uniquely contribute to regional development; a comprehensive, objective assessment can identify and determine their value. Companies such as Elsevier and Wellspring use proprietary data to tell a story about a community’s R&D strengths, revealing connections between partners and identifying key innovators who may not otherwise have high visibility within a region.

We often hear about California’s Silicon Valley and North Carolina’s Research Triangle as models for robust innovation ecosystems. Importantly, both those examples emphasized placemaking early in their development.

Innovation often has its genesis in face-to-face interactions. High-value research parks and innovation districts, along with co-located facilities, offer services beyond incubators and lab space. The exemplars create intentional opportunities for innovators to interact—what UIDP and others call engineered serendipity. Research has tracked the value of chance meetings—a conversation by the copy machine or a chat in a café—for sparking innovation and fruitful collaboration.

The changing landscape of research and innovation is having a profound impact on the academy, where researchers have traditionally focused on basic research and are now being asked to expand into use-inspired areas to solve societal problems more directly; this is where government and private funders are making more investments.

Finally, Wang noted the difficulty in making technology transfer offices financially self-sustainable, and NSF’s recently launched program Accelerating Research Translation (ART) seeks to address this challenge. But it may be time to reevaluate the role of these offices. Today’s increasing emphasis on research translation is an opportune time to reassess the transactional nature of university-based commercialization and licensing and return to a role that places greater emphasis on faculty support and service rather than revenue generation. Placing these activities within the context of long-term strategic partnerships could generate greater return on investment for all.

President and CEO

UIDP

Harvesting Insights From Crop Data

In “When Farmland Becomes the Front Line, Satellite Data and Analysis Can Fight Hunger” (Issues, Winter 2024), Inbal Becker-Reshef and Mary Mitkish outline how a standing facility using the latest satellite and machine learning technology could help to monitor the impacts of unexpected events on food supply around the world. They do an excellent job describing the current dearth of public real-time information and, through the example of Ukraine, demonstrating the potential power of such a monitoring system. I want to highlight three points the authors did not emphasize.

First, a standing facility of the type they describe would be incredibly low-cost relative to the benefit. A robust facility could likely be established for $10–20 million per year. This assumes that it would be based on a combination of public satellite data and commercial data accessed through larger government contracts that are now common. Given the potential national security benefits of having accurate information on production shortfalls around the world, the cost of the facility is extremely small, well below 0.1% of the national security spending of most developed countries.

Second, the benefits of the facility will likely grow quickly, because the number of unexpected events each year is very likely to increase. One well-understood reason is that climate change is making severe events such as droughts, heat waves, and flooding more common. Less appreciated is the continued drag that climate trends are having on global productivity, which puts upward pressure on prices of food staples. The impacts of geopolitical events such as the Ukraine invasion then occur on top of an already stressed food system, magnifying the impact of the event on global food markets and social stability. The ability to quickly assess and respond to shocks around the world should be viewed as an essential part of climate adaptation, even if every individual shock is not traceable to climate change. Again, even the facility’s upper-end price tag is small relative to the overall adaptation needs, which are estimated at over $200 billion for developing countries alone.

Third, a common refrain is that the private sector (e.g., food companies, commodity traders) and national security outfits are already monitoring the global food supply in real time. My experience is that they are not doing it with the sophistication and scope that a public facility would have. But even if they could, having estimates in the public domain is critical to achieving the public benefit. This is why the US Department of Agriculture regularly releases both its domestic and foreign production assessments.

The era of Earth observations arguably began roughly 50 years ago with the launch of the original Landsat satellite in 1972. That same year, the United States was caught by surprise by a large shortfall in Russian wheat production, a surprise that recurred five years later. By the end of the decade, the quest to monitor food supply was a key motivation for further investment in Earth observations. We are now awash in satellite observations of Earth’s surface, yet we have still not realized the vision of real-time, public insight on food supply around the world. The facility that Becker-Reshef and Mitkish propose would help to finally realize that vision, and it has never been more needed than now.

Professor, Department of Earth System Science

Director, Center on Food Security and the Environment

Stanford University

Member, National Academy of Sciences

Given the current global food situation, the importance of the work that Inbal Becker-Reshef and Mary Mitkish describe cannot be emphasized enough. In 2024, some 309 million people are estimated to be acutely food insecure in the 72 countries with World Food Program operations and where data are available. Though lower than the 2023 estimate of 333 million, this marks a massive increase from pre-pandemic levels. The number of acutely hungry people in the world has more than doubled in the last five years.

Conflict is one of the key drivers of food insecurity. State-based armed conflicts have increased sharply over the past decade, from 33 conflicts in 2012 to 55 conflicts in 2022. Seven out of 10 people who are acutely food insecure currently live in fragile or conflict-affected settings. Food production in these settings is usually disrupted, making it difficult to know how much food such areas are likely to produce. While Becker-Reshef and Mitkish focus on “crop production data aggregated from local to global levels,” having local-level data is critical for any groups trying to provide humanitarian aid. It is this close link between conflict and food insecurity that makes satellite-based techniques for estimating the extent of croplands and their production so vital.

This underpins the important potential of the facility the authors propose for monitoring the impacts of unexpected events on food supply around the world. Data collected by the facility could lead to a faster and more comprehensive assessment of crop production shortfalls in complex emergencies. Importantly, the facility should take a consensual, collaborative approach involving a variety of stakeholder institutions, such as the World Food Program, that not only have direct operational interest in the facility’s results, but also frequently possess critical ancillary datasets that can help analysts better understand the situation.

While satellite data is an indispensable component of modern agricultural assessments, estimation of cropland area (particularly by type) still faces considerable challenges, especially regarding smallholder farming systems that underpin the livelihoods of the most vulnerable rural populations. The preponderance of small fields with poorly defined boundaries, wide use of mixed cropping with local varieties, and shifting agricultural patterns make analyzing food production in these areas notoriously difficult. Research into approaches that can overcome these limitations will take on ever greater importance in helping the proposed facility’s output have the widest possible application.

In order to maximize the impact of the proposed facility and turn the evidence from rapid satellite-based assessments into actionable recommendations for humanitarians, close integration of its results with other streams of evidence and analysis is vital. Crop production alone does not determine whether people go hungry. Other important factors that can influence local food availability include a country’s stocks of basic foodstuffs or the availability of foreign exchange reserves to allow importation of food from international markets. And even when food is available, lack of access to food, for either economic or physical reasons, or inability to properly utilize it can push people into food insecurity. Only by combining evidence on a country’s capacity to handle production shortfalls with data on the various other factors that influence food security will rapid assessment of crop production realize its full potential.

Head, Market and Economic
Analysis Unit

Head, Climate and Earth
Observation Unit

World Food Program

Rome, Italy

Inbal Becker-Reshef and Mary Mitkish use Ukraine to reveal an often-overlooked impact of warfare on the environment. But it is important to remember that soil, particularly the topsoil of productive farmlands, can be lost or diminished in other equally devastating ways.

Globally, there are about 18,000 distinct types of soil. Soils have their own taxonomy, and the different soil types are sorted into one of 12 orders, with no two being the same. In the case of Ukraine, it has an agricultural belt that serves as a “breadbasket” for wheat and other crops. This region sustains its productivity in large part because of its particular soil base, called chernozem, which is rich in humus, contains high percentages of phosphorus and ammonia, and has a high moisture storage capacity—all factors that promote crop productivity.

Even as the world has so many types of soil, the pressures on soil are remarkably consistent across the globe. Among the major sources of pressure, urbanization is devouring farmland, as such areas are typically flat and easy to build on, making them widely marketable. Soil is lost from erosion, which can be gradual and almost unrecognized, or sudden, as following a natural disaster. And soil is lost or degraded from salinization and desertification.

So rather than waiting for a war to inflict damage to soils and flash warning signs about soil health, are there not things that can be done now? As Becker-Reshef and Mitkish mention, “severe climate-related events and armed conflicts are expected to increase.” And while managing such food disruptions is key to ensuring food security, forward-looking policies and enforcement to protect the planet’s base foundation for agriculture would seem to be an important part of food security planning.

In the United States, farmland is being lost at an alarming rate; one reported study found that 11 million acres were lost or paved over between 2001 and 2016. Based on those calculations, it is estimated that another 18.4 million acres could be lost between 2016 and 2040. As for topsoil, researchers agree that it can take 200 to 1,000 years to form an additional inch of depth, which means that topsoil is disappearing faster than it can be replenished.

While the authors clearly show the loss of cultivated acreage from warfare, to fully capture the story would require equivalent projections for agricultural land lost to urbanization and to erosion or runoff. This would then paint a fuller picture as to how one vital resource, that of topsoil, is faring during this time of farmland reduction, coupled with greater expectations for what each acre can produce.

Visiting Scholar, Nicholas School of the Environment

Duke University

Forks in the Road to Sustainable Chemistry

In “A Road Map for Sustainable Chemistry” (Issues, Winter 2024), Joel Tickner and Ben Dunham convincingly argue that coordinated government action involving all federal funding agencies is needed for realizing the goal of a sustainable chemical industry that eliminates adverse impacts on the environment and human health. But any road map should be examined to make sure it heads us in the right direction.

At the outset, it is important to clear up misinterpretations about the definition of sustainable chemistry stated in the Sustainable Chemistry Report the authors examine. They opine that the definition is “too permissive in failing to exclude activities that create risks to human health and environment.” On the contrary, the definition is quite clear in including only processes and products that “do not adversely impact human health and the environment” across the overall life cycle. Further, the report’s conclusions align with the United Nations Sustainable Development Goals, against which progress and impacts of sustainable chemistry and technologies are often assessed.

The nation’s planned transition in the energy sector toward net-zero emissions of carbon dioxide, spurred by the passage of several congressional acts during the Biden administration, is likely to cause major shifts in many industry sectors. While the exact nature of these shifts and their ramifications are difficult to predict, it is nevertheless vital to consider them in road-mapping efforts aimed at an effective transition to a sustainable chemical industry. Although some of these shifts could be detrimental to one industry sector, they could give rise to entirely new and sustainable industry sectors.

As an example, as consumers increasingly switch to electric cars, the government-subsidized bioethanol industry will face challenges as demand for ethanol as a fuel additive for combustion-engine vehicles erodes. But bioethanol may be repurposed as a renewable chemical feedstock to make a variety of platform chemicals with significantly more value than ethanol has as a fuel. Agricultural residues such as corn stover and corn cobs can also be harnessed as alternative feedstocks for renewable chemicals and materials, further boosting ethanol biorefinery economics. Such biorefineries can spur thriving agro-based economies.

Although some of these shifts could be detrimental to one industry sector, they could give rise to entirely new and sustainable industry sectors.

Another major development in decarbonizing the energy sector involves the government’s recent investments in hydrogen hubs. Hydrogen produced from carbon-free energy sources is expected to decarbonize fertilizer production, now a significant source of carbon emissions. The hydrogen can also find other outlets, including reaction with carbon dioxide captured in removal operations to produce green methanol as either a fuel or a platform chemical. Carbon-free oxygen, a byproduct of electrolytic hydrogen production in these hubs, can be a valuable reagent for processing biogenic feedstocks into renewable chemicals.

Another untapped and copious source of chemical feedstock is end-of-use plastics. For example, technologies are being developed to convert used polyolefin plastics into a hydrocarbon crude that can be processed as a chemical feedstock in conventional refineries. In other words, the capital assets in existing petroleum refineries may be repurposed to process recycled carbon sources into chemical feedstocks, thereby converting these facilities into circular refineries. There could well be other paradigm-shifting possibilities for a sustainable chemical industry that could emerge from a carefully coordinated road-mapping strategy involving essential stakeholders across the chemical value chain.

Dan F. Servey Distinguished Professor, Department of Chemical and Petroleum Engineering

Director, Center for Environmentally Beneficial Catalysis

University of Kansas

Joel Tickner and Ben Dunham describe the current opportunity “to better coordinate federal and private sector investments in sustainable chemistry research and development, commercialization, and scaling” through the forthcoming federal strategic plan to advance sustainable chemistry. They highlight the unfortunate separation in many federal efforts between “decarbonization” of the chemical industry (reducing and eliminating the sector’s massive contribution to climate change) and “detoxification” (ending the harm to people and the environment caused by the industry’s reliance on toxic chemistries).

The impacts and opportunities at stake are no small matters. The petrochemical industry produces almost one-fifth of industrial carbon dioxide emissions globally, and is on track to account for one-third of growth in oil demand by 2030, and almost half by 2050. Health, social, and economic costs due to chemical exposures worldwide may already exceed 10% of global gross domestic product.

As Tickner and Dunham note, transformative change is urgently needed, and will not result from voluntary industry measures or greenwashing efforts. So-called chemical recycling (a fancy name for incinerating plastic waste, with all the toxic emissions and climate harm that implies) and other false solutions, such as carbon capture and sequestration, do not change the underlying toxic chemistry and production models of the US chemical industry; they will fail to deliver real change or a sustainable industry that does not poison people and the planet.

Transformative change is urgently needed, and will not result from voluntary industry measures or greenwashing efforts.

Government and commercial efforts to advance sustainable chemistry must be guided by and accountable to the needs and priorities of the most impacted communities and workers, and measured against the vision and platform contained in The Louisville Charter for Safer Chemicals: A Platform for Creating a Safe and Healthy Environment Through Innovation.

The 125-plus diverse organizations that have endorsed the Louisville Charter would agree with Tickner and Dunham. As the Charter states: “Fundamental reform is possible. We can protect children, workers, communities, and the environment. We can shift market and government actions to phase out fossil fuels and the most dangerous chemicals. We can spur the economy by developing safer alternatives. By investing in safer chemicals, we will protect peoples’ health and create healthy, sustainable jobs.”

Among other essential policy directions to advance sustainable chemistry and transform the chemical industry so that it is no longer a source of harm, the Charter calls for:

  • preventing disproportionate and cumulative impacts that harm environmental justice communities;
  • addressing the significant impacts of chemical production and use on climate change;
  • acting quickly on early warnings of harm;
  • taking urgent action to stop the harms occurring now, and to protect and restore impacted communities;
  • ensuring that the public and workers have full rights to know, participate, and decide;
  • ending subsidies for toxic, polluting industries, and replacing them with incentives for safe and sustainable production; and
  • building an equitable and health-based economy.

Federal leadership on sustainable chemistry that advances the vision and policy recommendations of the Louisville Charter would be a welcome addition to ongoing efforts for chemical industry transformation.

Program Director

Coming Clean

Joel Tickner and Ben Dunham offer welcome and long-overdue support for sustainable chemistry, but their article only scratches the surface of the societal concerns we should have about exposure to toxicants from fossil fuel emissions, from plastics and other products derived from petrochemicals, and from toxic molds or algal blooms. Their proposals continue to rely on the current classical dose-response approach to regulating chemical exposures. But contemporary governmental standards and industrial policies built on this model are inadequate for protecting us from a variety of compounds that can disrupt the endocrine system or act epigenetically to modify specific genes or gene-associated proteins. And critically, present practices ignore a mechanism of toxicity called toxicant-induced loss of tolerance (TILT), which Claudia Miller and I first described a quarter-century ago.

TILT involves the alteration, likely epigenetic, of the immune system’s “first responders”: mast cells. Mast cells evolved 500 million years ago to protect the internal milieu from the external chemical environment. In contrast, our exposures to fossil fuels are new since the Industrial Revolution, a mere 300 years ago. Once mast cells have been altered and sensitized by substances foreign to our bodies, tiny quantities (parts per billion or less) of formerly tolerated chemicals, foods, and drugs trigger their degranulation, resulting in multisystem symptoms. TILT and mast cell sensitization offer an expanded understanding of toxicity occurring at far lower levels than those arrived at by customary dose-response estimates (usually in the parts per million range). Evidence is emerging that TILT modifications of mast cells explain seemingly unrelated health conditions such as autism, attention deficit hyperactivity disorder (ADHD), chronic fatigue syndrome, and long COVID, as well as chronic symptoms resulting from exposure to toxic molds, burn pits, breast implants, volatile organic compounds (VOCs) in indoor air, and pesticides.

Most concerning is evidence from a recent peer-reviewed study suggesting transgenerational transmission of epigenetic alterations in parents’ mast cells, which may lead to previously unexplained conditions such as autism and ADHD in their children and future generations. The two-stage TILT mechanism is illustrated in the figure below, drawn from the study cited. We cannot hope to make chemistry sustainable until we recognize the results of this and other recent studies, including work by our group, that go beyond classical dose-response models of harm and acknowledge the complexity of multistep causation.

Professor of Technology and Policy

Director, Technology and Law Program

Massachusetts Institute of Technology