What We Talk About When We Talk About Impact

When most people talk about “impact,” they often imagine one thing physically hitting another: the impact, for example, of the meteorite that scientists think was responsible for killing off the dinosaurs—and that left, as meteorites do, an impact crater on the edge of the Yucatán Peninsula.

This ballistic sensibility also informs a common understanding of the impact of things less corporeal than meteorites, including ideas and scholarship. Rarely, if ever, do ideas—academic or otherwise—blaze a trail in the sky and leave a clear mark where their impact has occurred. Yet people have an intuition of some series of collisions in which a new idea or new information changes people’s understandings, which changes people’s opinions, which changes people’s behaviors, which brings about different outcomes in the world.

Four decades ago, the Bayh-Dole Act enshrined the creation of intellectual property (IP) as part of the mission of research universities. Academic institutions responded by creating offices of technology transfer and including patents and other tokens of IP in their incentive systems. More recently, universities and their benefactors have sought to expand academia’s mission again, this time to include impact. For instance, my own institution, Arizona State University, wants to “enhance our local impact and social embeddedness” as one of its five high-level goals. On the funder side, in 2023, the Pew Charitable Trusts led a group of funders and research institutions in a “Scan of Promising Efforts to Broaden Faculty Reward Systems to Support Societally Impactful Research.” But the academy has a lot of work to do if impact is to take its place alongside IP in universities’ missions.


The search for impact has its own history in decades of jousting between pure versus applied research, curiosity-driven versus mission-driven research, the ivory tower versus the extension service, intellectual merit versus the “broader impacts” criterion at the National Science Foundation (NSF), and so on. But now that impact is a goal, we in the academic community need to elucidate a nuanced understanding of what we really mean by impact, how we imagine it happens, and what we as scholars might do individually and collectively to work toward it.

I approach these questions of impact as a social scientist, and particularly a political scientist concerned with public policy. And I am interested in the impact of scholarly ideas and analysis on legislation and policy, on politics and public discourse, and on people. As fellow political scientist Langdon Winner pointed out decades ago, legislation and technology have a shared identity: both are collective endeavors that authorize and provide infrastructures for how we as individuals and as a society pursue what we will. Winner reasons that if we have certain expectations of democratic practices and institutions for making legislation, then we should have similar expectations of democratic practices and institutions for making technology. I want to extend this reasoning to argue that if universities commit to practices and institutions for creating technological impact in the form of IP, then they should have similar structures for creating other kinds of impact.

A taxonomy of impact

What do we mean by impact? I posit four categories. First is what might be called “actual” impact: scholarship that affects the drafting or goals of legislation, budgets, or policy. An example is the language that directs the US Department of Energy (DOE) to facilitate and fund research, development, and deployment of direct air capture (DAC) of climate-warming carbon dioxide. I use scare quotes around the word “actual” because the concept of impact is often limited to substantive changes in, say, legislation. However, change happens in many other less formal ways. That is, policy change often follows political or social change.

Thus, the second category is impact on general thinking, which is roughly what some faculty aspire to as thought leaders or influencers. In the energy example, an impact on general thinking might be the concept of “overshoot,” which provided urgency to climate policymaking by clarifying how global temperatures are likely to surpass a predefined target (usually 1.5° Celsius above average preindustrial temperatures), thus making DAC a more interesting technology choice. Impact on general thinking can be associated both with the content of general thinking that might change (substantive) and with the agenda or vocabulary or framing with which things are considered that might change (procedural).


Finally, one might have an impact on people, either through the training of knowledgeable personnel (the third category) or through interaction with lay knowledge (the fourth category). Such impacts might lead to substantive changes in the content of what people believe and procedural changes in how they behave, but also to reflexive changes in how they approach problems in relationship to their changing knowledge of an evolving world. For these categories of impact, the Climate Overshoot Commission (for elites) and Earth Overshoot Day (for the lay public) might be helpful examples. Both convey substantive information to change the knowledge upon which elites or lay publics might act, and both attempt to influence the agenda of how society approaches climate change. And yet, especially as the idea of overshoot is deployed somewhat flexibly between expert and lay groups, each asks different things of its audience about that audience’s role in ongoing opportunities for change.

Looking at these categories makes it possible to imagine how universities might create structures to encourage faculty to consider and pursue specific types of impact.

Measuring without a crater

A major challenge for universities is how to attribute and measure impact, wherever it occurs. Here we enter a nebulous area, because ideas are different from meteorites or even technologies that can be patented, licensed, and sold. If an idea results in an actual impact on a law or a budget—for example, adding millions for a new research program—then there is perhaps some common monetary denominator for measurement. But the attribution of actual impact, even if an academic paper is cited in testimony, committee reports, and legislative histories, will be diffuse, unlike the disclosures required by patent applications. One promising technical avenue for measuring this type of influence is the Overton index, which aims to make the relationship between academic work and policy documents discoverable.

For the other categories of impact, the prospects of attribution and measurement are cloudier still, but glimmers of possibility exist. To assess the impact on people, we might borrow from education. Formal education uses structured measurements such as evaluation rubrics. But the impact of scholarly work often happens informally, outside of classrooms. Such informal learning is harder to measure, but some museums, for example, adopt proxy measures such as “dwell time,” or how long someone spends in an exhibition. It is possible, then, to get a ballpark sense of how intensive an impact is—that is, how much people might have learned. Almost all educational institutions also measure the size of their audience, which captures how extensive an impact is. A third dimension of this space of impact on people might be identity or specificity—the demographic, personal, and professional roles of audience members.

Similar proxy systems can be used to assess impact on general thinking. For example, a tool like Google Trends can help identify when and how often terms are used in web searches. But unless someone is coining an entirely new word or concept, it may not be possible to distinguish between a person who generates a brilliant idea and a person who succeeds at communicating it. Attribution and measurement do not go hand in hand. More complications arise from changes in behavior, protocol, or language internal to an organization; though often unobserved and undocumented, these are impacts nevertheless.
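
To make the Google Trends proxy concrete, here is a minimal sketch of how one might chart interest in a term over time. It uses the unofficial pytrends library, and the keyword and date range are illustrative assumptions, not drawn from the examples above.

```python
# A minimal sketch: tracking search interest in a term via Google Trends.
# Assumes the unofficial pytrends library (pip install pytrends); the
# keyword and timeframe below are illustrative placeholders.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(["climate overshoot"], timeframe="2015-01-01 2024-01-01")

# interest_over_time() returns a pandas DataFrame, scaled 0-100 by Google,
# with one column per keyword plus an "isPartial" flag.
interest = pytrends.interest_over_time()
if not interest.empty:
    yearly = interest.groupby(interest.index.year)["climate overshoot"].mean()
    print(yearly.round(1))  # rough year-over-year trend in search interest
```

Even a crude series like this can show when a term entered wider circulation, though, as noted above, it cannot say who put it there.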

Finally, there are confounding questions of time and space. Some ideas have an immediate impact: they are retweeted, celebrated in op-ed pages, and become part of a public agenda. Others, however, burn slowly over time but nevertheless instigate profound changes. And some ideas gain local credibility; for example, a community in Nepal figures out how to innovate around a shared resource—but their insights may take decades, or Nobel laureate Elinor Ostrom, to spread to other areas. Even if bibliometric and other analytic measures evolve, the full measure of impact is likely to remain lumpy and elusive.

Tracing the knowledge value collective path

A further complication is that impact is distinct from the ultimate goal—outcomes. New DAC legislation and technologies are well and good, but they require further interactions to effect the desired outcomes: reducing carbon dioxide in the atmosphere and mitigating global warming. Impacts are gateways to outcomes, but a cascade of interactions is required for those outcomes to manifest. Thinking about what connects academic or scholarly work to outcomes led to the idea of the “knowledge value collective” (KVC), articulated by (yet another!) political scientist, Barry Bozeman, and his colleague Juan Rogers.


The KVC refers to the set of actors who intermediate between an output, which could be an idea or product, and an outcome in the world. When a new DAC technology comes along, for example, the KVC includes not only potential investors and regulators, but also prospective neighbors of the sites where such technologies would be piloted and deployed, as well as potential buyers in a market for carbon dioxide that does not yet exist. If the people making the DAC technology understand the KVC well enough, they will better appreciate the constraints and opportunities and take those supposedly downstream concerns reflexively into consideration when they imagine and design the technology. Research informed by a better understanding of the KVC is better positioned for impact. When NSF’s Technology, Innovation, and Partnerships directorate emphasizes stakeholder partnerships, when DOE requires community benefits plans, and when the Pew report elaborates the socially engaged work necessary for societal impact, they implicitly endorse a vision of engaging portions of the KVC.

Successfully understanding and navigating the KVC, however, require a set of skills or talents that may be very different from those that have led to the initial technical discovery, invention, or analysis. Indeed, this task becomes the equivalent of an additional research project, complete with needs for new capacities and collaborations. For IP-based impact, omnipresent university-based tech transfer offices and proliferating entrepreneurship and innovation programs provide training for engaging for-profit aspects of the KVC. However, there are few formally organized, university-wide groups that teach their trainees to navigate community-based, not-for-profit, and public sector pathways through the KVC.

The KVC, in other words, provides another way to structure the very squishy concept of impact, helping the practitioner follow the many twists and turns that occur along the way to an outcome. It also moves the creation of impact away from the ballistic model of launching single missiles and hoping for an impact, toward a more practical model of trying multiple approaches and learning skills to navigate a complex sociotechnical landscape. Thus, rather than measuring the craters of impact (or the patents, publications, or earnings of IP), the KVC approach suggests that we might map out possible pathways to outcomes that enable others to discuss the plan and also, after the fact, determine whether these goals were met.

Enter the “impact catechism”

Adding impact to universities’ mission requires a framework that is different from tech transfer, but that is just as well institutionalized and supported. Universities also need to be able to tell a credible story of how they create change in the world using proxy measures, attributions, winding KVCs, and metrics not yet invented. Fortunately, there is help.


In the 1970s, George Heilmeier, legendary director of the Defense Advanced Research Projects Agency (DARPA), conceived of what is often called the Heilmeier Catechism, a series of eight questions designed to force assumptions about a proposed research project out into the open so they can be subjected to rigorous scrutiny. Heilmeier created the questions not just to guide prospective investigators in clarifying their research ideas, but also to protect the integrity and mission of DARPA so that it was not funding half-baked ideas—or worse, easy ones.

Creating and adopting an “impact catechism” could help academics envision how they can affect the world and guide them through the process. It could also help universities improve their ability to produce impact by beginning to understand the myriad ways their faculty, staff, and students can influence policy, politics, and people—by teaching those skills to their personnel; by choosing important areas of impact self-consciously; and by investing in and valorizing their work. Starting down this path means building the capacity to identify and categorize the types of impact various entities within the university aspire to. Then the university can support those entities’ performance toward those categories by facilitating their presence in the right kinds of networks, advancing their professional development with the right kinds of skills, and providing them with the right kinds of infrastructural support.

In an attempt to develop an impact catechism, I have begun to share the following eight questions informally with colleagues and students:

  1. What kind(s) of impacts (category/type) are you aiming at?
  2. What scope (extensivity) and depth (intensivity) of impact are you planning for?
  3. What specific audience(s) are you addressing or constructing?
  4. What (causal) model do you have in mind for creating impact?
  5. How are you creating opportunities for impact?
  6. Who or what (KVC) connects your outputs to impacts and outcomes?
  7. How are you participating in, researching, or keeping track of (intermediate) impacts along the way?
  8. How will you tell the story of the impact that you have with humility and accuracy?

The Heilmeier Catechism was refined over time, and different versions exist—including some that drop the “catechism” in favor of “questions.” My impact catechism may be a similar type of first draft. As a starting point for faculty members, research development staff, and central research offices, as well as for research sponsors, I hope this version can inspire new practice.


At the core of this new practice is a different imaginary—one in which faculty and students learn the skills to change the world not only through publishing or patenting or profit-seeking outputs, but also through the skills of social organizing and political communication; through rigorous policy design and implementation; and through public-interest technology development and knowledge mobilization for public purpose. Such knowledge today remains relegated to more explicitly political organizations, even if many—such as think tanks and civic organizations—are also not-for-profits like universities. If universities want to deliver on the goal of having a beneficial impact on their community, their state, their nation, or their world, they must find a way to inculcate these skills.

What better way to start than by asking questions?

How to Make Your Own Automaton

Devote yourself to the resurrection, build
a body that moves. Begin with the skin:

cast a bulwark, impermeable hardware
more malleable than meat.

Dispatch your lusus naturae
like a Trojan horse into factories

where they won’t understand what lurks
in the future. Someone’s just doing

their job: sure, the buildings remain. Yes, it will
self-destruct, wipe out its own hard drive.

Tell yourself your creations are
amusements, or machines made to do

the work and leave you to invention—as if
destruction were the sole work of others.

Tell yourself you have built something
that outlasts grief rather

than something that invents and repeats it.
Did not Daedalus grasp the danger

as he swaddled his son’s agile frame
with a cape of feathers?

And yet, what a wonder,
to see your creation launch

into flight and hover before the sun.

A Space Future Both Visionary and Grounded

In “Taking Aristotle to the Moon and Beyond” (Issues, Spring 2024), G. Ryan Faith argues that space exploration needs a philosophical foundation to reach its full potential and inspire humanity. He calls for NASA to embrace deeper questions of purpose, values, and meaning to guide its long-term strategy.

Some observers would argue that NASA, as a technically focused agency, already grapples with questions of purpose and meaning through its scientific pursuits and public outreach. Imposing a formal “philosopher corps” could be seen as redundant or even counterproductive, diverting scarce resources from more pressing needs. Additionally, if philosophical approaches become too academic or esoteric, they risk alienating key stakeholders and the broader public. There are also valid concerns about the potential for philosophical frameworks to be misused to justify unethical decisions or to shield space activities from public scrutiny.

Yet despite these challenges, there is a compelling case for why a more robust philosophical approach could benefit space exploration in the long run. By articulating a clear and inspiring vision, grounded in shared values and long-term thinking, space organizations can build a sturdier foundation for weathering political and economic vicissitudes. Philosophy can provide a moral compass for navigating thorny issues such as planetary protection, extraterrestrial resource utilization, and settling other celestial bodies. And it may not be a big lift if small steps are taken. For example, NASA could create an external advisory committee on the ethics of space and fund collaborative research grants—NASA’s Office of Technology Policy and Strategy is already examining ethical issues in the Artemis moon exploration program, and the office could serve as one place within NASA to take point. In addition, NASA could bring university-based scholars and philosophers to the agency on a rotating basis, expand public outreach to include philosophical discussions, and host international workshops and conferences on space ethics and philosophy.


Ultimately, the key is to strike a judicious balance between philosophical reflection and practical action. Space agencies should create space for pondering big-picture questions, while remaining laser-focused on scientific, technological, and operational imperatives. Philosophical thinking should be deployed strategically to inform and guide, not to dictate or obstruct. This means fostering a culture of openness, humility, and pragmatism, where philosophical insights are continually tested against real-world constraints and updated in light of new evidence.

As the United States approaches its return to the moon, we have a rare opportunity to shape a future that is both visionary and grounded. By thoughtfully harnessing the power of philosophy while staying anchored in practical realities, we can chart a wiser course for humanity’s journey into the cosmos. It will require striking a delicate balance, but the potential rewards are immense—not just for space exploration, but for our enduring quest to understand our place in the grand sweep of existence. The universe beckons us to ponder big questions, and to act with boldness and resolve.

Former Associate Administrator for Technology Policy and Strategy

Former (Acting) Chief Technologist

National Aeronautics and Space Administration

G. Ryan Faith’s emphasis on ethics in space exploration is welcome given contemporary concerns regarding artificial intelligence and the recent NASA report on ethics in the Artemis program. As we know from decades of study, the very technologies we hope will be emancipatory more often carry our biases with them into the world. We should expect this to be the case in lunar and interplanetary exploration too. Without clear guidelines and mechanisms for ensuring adherence to an ethical polestar, humans will certainly reproduce the problems we had hoped to escape off-world.

Yet, as a social scientist, I find it strange to assume that embracing a single goal, or “telos,” might supersede political considerations, especially when it comes to funding mechanisms. NASA is a federal agency. The notion of exploration “for all humankind” certainly illuminates and inspires, but ultimately NASA’s mandate is more mundane: to further the United States’ civilian interests in space. The democratic process as practiced by Congress requires annual submission of budgets and priorities to be approved or denied by committee, invoking the classic time inconsistency problem. In such a context, telic and atelic virtues alike are destined to become embroiled and contested in the brouhaha of domestic politics. Until we agree to lower democratic barriers to long-term planning, the philosophers will not carry the day.


Better grounding for a philosophy of space exploration, then, might arise from an ethical approach to political virtues, such as autonomy, voice, and the form of harmony that arises from good governance (what Aristotle calls eudaimonia). In my own work with spacecraft teams and among the planetary science community, I have witnessed many grounded debates as moments of statecraft, some better handled than others. All are replete with the recognizable tensions of democracy: from fights for the inclusion of minority constituents, to pushback against oligarchy, to the challenge of appropriately managing dissenting opinions. It is possible, then, to see these contestations at NASA over its ambitions not as compulsion “to act as philosophers on the spot,” in Faith’s words, but as examples of virtues playing out in the democratic polis. In this case, we should not leapfrog these essential debates, but ensure they give appropriate voice to their constituents to produce the greatest good for the greatest number.

Additionally, there is no need to assume an Aristotelian frame when there are so many philosophies to choose from. The dichotomies that animate Western philosophies are anathema to adherents of several classical, Indigenous, and contemporary philosophies, who find ready binaries far too reductive. We might instead imagine a philosophy of space exploration that enhances our responsibility to entanglements and interconnectivities: between Earth and moon, human and robotic explorers, environments terrestrial and beyond. Not only would this guiding philosophy be open to more people, cultures, and nations, and better hope to escape “terrestrial biases” by rejecting a ready distinction between Earth and space; it would also hold NASA accountable for maintaining an ethical approach to Earth-space relations throughout its exploration activities, regardless of the inevitable shifts in domestic politics.

Associate Professor of Sociology

Princeton University

G. Ryan Faith succinctly puts his finger on exactly what ails NASA’s human spaceflight program—a lack of telos, the Greek word for purpose. In this concept, you are either working toward a telos, or your efforts are atelic. In the case of the Apollo program, NASA had a very specific teleological goal: to land a man on the moon and return him safely to Earth (my personal favorite part of President Kennedy’s vision) by the end of 1969.

This marked a specific goal, or “final cause.” The Hubble Space Telescope, on the other hand, is very much atelic. That is, there is no defined endpoint; you could literally study the universe forever.

This philosophical concept is well and good for the Ivory Tower, but it also has a very practical application at the US space agency.

NASA has gone through several iterations of its human moon exploration program since it was reincarnated during the George W. Bush administration as Project Constellation. I cannot tell you how many times someone has asked me, “Now, why are we going to the moon again? Haven’t we been there? Don’t we have enough problems here on Earth? And don’t we have a huge deficit?”

Why yes, we do have a huge deficit. And the world does feel fraught with peril these days, given the situations in Russia, China, and the Middle East. If NASA is to continue to receive significant federal funding for its relatively expensive human exploration program, it needs to have a crisp answer for why exactly we should borrow money to send people to the moon (again).

Ryan brings up an interesting paradox of the Apollo program’s success, namely that “going to the moon eliminated the reason for going to the moon.” And he reminds us that “failing to deliberately engage philosophical debates about values and vision … risks foundering.”


There are certainly many technical issues the agency needs to grapple with. Do we build a single base on the moon or land in various locations? Do we continue with the Space Launch System rocket, built by Boeing, or switch to the Starship rocket or the much cheaper Falcon Heavy rocket, both built by SpaceX?

But the most important question NASA has to answer is why: why send humans to the moon, risking their lives? Should it be to “land the first woman and first person of color” on the moon, as NASA continuously promotes? Why not explore with robots that are much cheaper and don’t complain nearly as much as astronauts do?

I believe there are compelling answers to these questions. Humans can do things that robots cannot, and sending humans to space is in fact very inspirational. The moon can serve as an important testing ground for flying even deeper into the solar system. But first, the problematic question why demands an answer.

The author would say that JFK’s reasoning was compelling: “We choose to go to the moon and do the other things…not because they are easy, but because they are hard.” A great answer in the 1960s. But in the twenty-first century, NASA’s leadership would be well-served to consider Ryan’s article and unleash, in the words of Tom Wolfe, the “power of clarity and vision.”

Senior Fellow, National Center for Energy Analytics

Colonel, USAF (retired)

Former F-16 pilot, test pilot, and NASA astronaut

G. Ryan Faith provides a thoughtful examination of the philosophical foundations for human space exploration—or rather the lack of such foundations. Human space exploration is where this lack is most acute. Commercial, scientific, and military missions have rationales rooted in economic, research, and national security imperatives, and they are anchored in particular communities with shared values and discourse. Supporters of human space exploration, by contrast, are found in diffuse communities with many different motivations, interests, and philosophical approaches.

The end of the Apollo program was a shock to many advocates of human space exploration as they assumed, wrongly, that going to the moon was the beginning of a long-term visionary enterprise. It may yet be seen that way by history, but the Apollo landings resulted from US geopolitical needs during the Cold War. They were a means to a political end, not an end in themselves.

Former NASA administrator Mike Griffin gave an insightful speech in 2007 in which he described real reasons and acceptable reasons for exploring space. Real reasons are individual, matters of the heart and spirit. Acceptable reasons typically involve matters of state, geopolitics, diplomacy, and national power, among other more practical areas. Acceptable reasons are not a façade, but critical to large-scale collective action and the use of public resources. They are the common ground upon which diverse individuals come together to create something bigger than themselves.


We send more than our machines, or even our astronauts, into space; we send our values as well. As Faith’s article makes clear, there is value in thinking about philosophy as part of sustainable support for human space exploration. At the same time, the desire for a singular answer can be a temptation to tell others what to do or what to believe. The challenge in space is similar to that faced by the Founders of the United States: how to have a system of ordered liberty that allows for common purposes while preserving individual freedoms.

As humanity expands into space, I hope the philosophical foundations of that expansion include the values of the Enlightenment that inspired the Founders. In this vein, the National Space Council issued a report in 2020 titled A New Era for Deep Space Exploration and Development that concluded: “At the frontiers of exploration, the United States will continue to lead, as it has always done, in space. If humanity does have a future in space, it should be one in which space is the home of free people.”

Director, Space Policy Institute, Elliott School of International Affairs

George Washington University

A Binational Journey Toward Sustainability

The border between the United States and Mexico is both a political boundary and a demarcation of different ideological representations of a shared binational landscape. Both sides of the border share climate, geography, environment, resource bases, and increasing urbanization, but they are divided by culture, language, economy, law, politics, education, and infrastructure. The two sides grow even further apart when one considers demographic trends; degrees of political autonomy; the relative vigor of civil society; the diversity of institutions in the public, private, and civil sectors; and the ability to cope with environmental stress. These multifaceted challenges along the US-Mexico border require collaborative approaches that extend beyond immediate geographical boundaries and across scientific disciplines.

Today, as this binational region undergoes multiple interlinked social, political, and environmental transitions, collaboration around regional sustainability is urgently necessary. Climate change is just one of the factors contributing to deteriorating air and water quality, compromised health, and limited opportunities for sustainable development for those who live in the region. These problems are complex and cross not only international borders, but also interstate and local jurisdictions, impacting Native tribal entities’ relationships with both governments. Finding ways to make life in the area more sustainable requires a systemic understanding of the region through engagement with local residents, including Indigenous groups, decisionmakers at multiple levels of governance, and experts from many disciplines.


An additional context for the border must be considered as well. In the capitals of both countries, the boundary is cast as violent and unruly—a problem rather than an opportunity. An ongoing challenge for a collaborative partnership is to redress this misleading and unhelpful approach. The flow of economic migrants and refugees into the United States occurs both legally and illegally, dominating the political conversation in both countries. Illegal activities in the region—such as drug trafficking into the United States and weapons trafficking into Mexico—have generated violence, corruption, and political tensions, intensifying the continued vulnerability of the people and landscape in this area.

In response to the challenges facing the region, the US National Academies of Sciences, Engineering, and Medicine (NASEM), Mexican Academy of Sciences (Academia Mexicana de Ciencias, or AMC), Mexican Academy of Engineering (Academia de Ingeniería de México), and National Academy of Medicine of Mexico (Academia Nacional de Medicina de México) joined together to appoint a committee of experts from the United States and Mexico to conduct a consensus study in 2020. The committee’s report, Advancing United States–Mexico Binational Sustainability Partnerships (Avances de las Alianzas Binacionales para la Sostenibilidad entre Estados Unidos y México), addresses select sustainability challenges in the binational region and makes recommendations on how to build partnerships to advance shared sustainable development goals. Importantly, the study does not focus on border policy per se, but considers the complex relationship of such policies in the context of broader binational sustainability challenges.

The report and the process behind it represent a pioneering example of binational cooperation in which both countries’ national academies jointly identified drylands sustainability as a challenge. More importantly, the academies recognized that diagnosis, assessment, engagement, and solutions needed to be not just binational but also interdisciplinary, involving experts with varied training as well as transdisciplinary perspectives that build on expertise from civil society and the private sector.


The path to producing the consensus report took nine years, revealing the necessity of such binational work as well as its challenges. In particular, what began as a relatively focused study of climate change shifted and adapted to become a consensus study about the sustainability of the fragile, shifting cross-border drylands region. Along the way, the project scrambled for funding while navigating the two countries’ shifting politics and a global pandemic.

Lessons from this partnership extend beyond the specific challenges addressed in the consensus study. We would like to highlight the importance of flexibility and adaptability in the face of evolving circumstances, and the challenges of keeping such an initiative going in a turbulent political landscape. In addition, the experience demonstrates the importance of broadly considering sustainability challenges: this binational collaboration succeeded in part because leaders understood that environmental concerns are interrelated with social, economic, cultural, and political implications for border policies, trade, civil society, urbanization, and migration.

A long path to partnership

The sustainability partnership was new for both nations’ academies, but the two countries had many earlier shared frameworks for environmental policy. Past efforts to develop tailored responses that consider the nuances of both sides of the border have involved establishing institutions for binational governance, including the International Boundary and Water Commission, the North American Development Bank (NADBank), the Border Environment Cooperation Commission (now merged with NADBank), and the Commission for Environmental Cooperation (based in Montreal to address trinational environmental policy). However, the latter three institutions were developed in response to environmental concerns in the context of the North American Free Trade Agreement, or NAFTA, which was supplanted by the United States-Mexico-Canada Agreement in 2020.

Another long-standing element of cross-border collaboration involves the myriad civil society movements, initiatives, and projects addressing social, environmental, and other challenges that play critically important roles in this region. On the Mexican side of the border, political and economic conditions have often meant inadequate environmental stewardship and weak enforcement of conservation protections; however, recent policy changes include modifications to land and water law. The evolving social-environmental landscape demands adaptive approaches to address the complex issues facing the region.


From our work, it is clear that building shared understanding of problems and sustainability goals creates opportunities for collective responses and solutions, potentially combining expertise and coordinating action across both countries. In other words, the border is a good place for science diplomacy. With that in mind, several precursor collaborations between NASEM and AMC and the academies of other countries helped pave the way for the 2021 sustainability partnership initiative.

The AMC became interested in engaging with climate change and related sustainability challenges in 2014, after Climate Change: Evidence and Causes, a report by NASEM and the Royal Society in the United Kingdom, was translated into Spanish. Subsequently, AMC and NASEM, along with other academies, established the office of the Inter-American Network of Academies of Sciences (IANAS) at the AMC premises in Mexico City. IANAS led workshops and published books on water and energy. In June 2014, the New Horizons in Science Symposium—a three-academy initiative of AMC, NASEM, and the Royal Society of Canada (Academies of Arts, Humanities, and Sciences of Canada)—followed in Mexico City. This further solidified the collaborative spirit by bringing together young Canadian, Mexican, and US scientists on topics including astrophysics, biotechnology, green chemistry, hazards and disasters, oceanography, and marine biology. These invigorating collaborations highlighted the role of science in addressing regional challenges and sparked enthusiasm among the younger generation of scholars.

In 2015, the community shifted to focus on binational partnerships, with NASEM and AMC beginning to work on the challenges and opportunities of climate and development. By February 2016, when the first organizational meeting was held in Mexico City, the initial framing centered on climate change, ecological dynamics, use of natural resources, and societal vulnerability to climate stressors in the transboundary drylands.

A critical point in the budding collaboration occurred at the 2016 meeting held in Washington, DC. Participants included representatives from key US federal agencies as well as the Mexican embassy’s science attaché, who all demonstrated a keen interest in shared challenges for public policy around resilience in the drylands of the border area. This meeting, which was one of the last formal acts of NASEM president Ralph Cicerone, laid the groundwork for the subsequent efforts of the binational committee. However, even though there was consensus among participants that the study was needed, there was agreement that it might not lead to any programmatic change. At the time, no funders could be secured, given the challenging political environment presented by the upcoming US presidential election. That the initiative succeeded despite these hurdles is a testament to the dedication of stakeholders in both countries. 

Without funding, the efforts languished until NASEM’s Board on Environmental Change and Society became involved in finding a solution. Conversations with binational experts explored the factors that influence social-ecological resilience in the border region, with an eye toward considering the effects of a changing climate. Themes that came up included exploring applied research, setting priorities for actionable solutions, and identifying pathways forward by highlighting promising collaborative efforts, all of which reflected a commitment to addressing real-world challenges faced in the region.


In mid-2017, however, the priorities of the project began to crystallize around sustainability science. This framing enabled researchers to view the climate challenge in the context of exploring environmental, economic, political, cultural, and social challenges that characterize the broader transboundary region. Institutional and financial support was secured through NASEM’s sustainability office and the Cynthia and George Mitchell Foundation. The new framing also provided an opportunity to advance the underexplored academic subject of sustainability science. In describing the future work, José Franco, past AMC president, said, “The aim has always been to examine and deepen our analyses and point to initiatives that will help address sustainability issues in the transboundary region in both the public and private sectors.”

But as the focus on sustainability evolved, it came to line up with an emerging shared institutional emphasis on the Sustainable Development Goals (SDGs) at both NASEM and AMC. Although the SDGs have more widespread institutional support in Mexico than in the United States, NASEM has long been a strong proponent of them. Ongoing changes in climate, land degradation, social instability, and other binational challenges make achieving the SDGs in the US-Mexican transboundary region both daunting and urgent. Recognizing that the transboundary collaboration would be key to reaching the SDGs in the region, the group also began to focus on SDG17, which pinpoints the importance of global partnerships in sustainability.

The collaboration gained further momentum in November 2017, when the partnership was able to facilitate engagement between experts and the highest levels of policymaking in Mexico. The AMC skillfully opened doors for the committee to make a presentation at the Senate office in Mexico City to three Senate subcommittees (water, climate change, and the Mexico-US border) on the ongoing process of the study and efforts aimed at policymaking. This and a subsequent event showcased the potential policy and political impacts that could accrue from the study.


As the collaboration developed, the vision expanded to acknowledge land beyond what might strictly be considered the border region. A May 2018 workshop in San Luis Potosí, Mexico, held at an interdisciplinary research center, raised the question of whether the border can be considered in isolation. At this meeting, the focus of the project extended to encompass the larger dryland region of Mexico and the United States. This change also proved to be significant later, when the consensus committee drew its members from this wider geographic area.

This workshop pioneered a design approach that laid the foundation for the subsequent consensus study. At the meeting, the participants steered clear of doing analysis strictly by sector or geographic region. Instead, they began to look at complex interactions among the economic, environmental, and social dimensions of the broader region. Rather than considering each sector in turn, the committee’s approach emphasized understanding the transboundary region through its multiple systemic interconnections.

Constructing a consensus study

In August 2019, members of the committee reconvened at Biosphere 2 in Arizona to plan next steps, including the proposal for a first-of-its-kind formal consensus study between NASEM and AMC. With funding in place from the Mitchell Foundation, and a sustainability focus that coalesced around SDG17, the way was paved for the official approval of the consensus study. Initial meetings were held virtually, with a March 2020 in-person meeting scheduled at AMC headquarters in Mexico City—just as COVID-19 was declared a global pandemic. The ensuing shutdown prevented several committee members from traveling to the meeting, while others had to rush home before their flights were canceled. Overall, the experience demonstrated the resilience and adaptability of the effort. The July 2020 meeting with stakeholders, designed to serve as the primary data collection process to augment background documents and committee members’ expertise, was held entirely in virtual mode. Subsequently, writing teams met virtually to draft their chapters, with the full committee convening periodically to cross-pollinate and take stock of overall progress.


When it was completed in 2021, the consensus report argued for more global partnerships structured around social science theory, and for applied research to explore potential strategies and mechanisms for improving coordination between institutions on both sides of the border. The report’s completion also exposed the limitations of the process. For example, the SDGs offer a particular way of dealing with complex issues, but they do not lend themselves well to constructing a persuasive account that fits all desired goals. Native American participants in the process voiced concern that any more compelling narrative for accomplishing the SDGs would need to take into account Indigenous communities, who have deep and long-standing precolonial ties across the border and remain committed to those relationships in spite of the imposition of arbitrarily divisive nation-state boundaries. Future partnerships can build on this realization and work to address it.

A road map for partnership

Over the last nine years, the path forward has not always been clear, requiring constant readjustments and adaptations. Stakeholders found themselves navigating through an ever-evolving landscape of challenges and priorities. Despite the uncertainties, the joint efforts of NASEM, AMC, and the consensus study team members, along with the commitment of other stakeholders, have yielded significant progress in addressing sustainability challenges in the US-Mexico border region. Today, in the context of rapid global change, we see an unprecedented opportunity to create new partnerships that collaboratively address shared binational sustainability challenges and inform the development of national policies and management capacity to promote sustainable development.


The principles synthesized by the joint academies study provide a road map for building effective partnerships in achieving broader SDG targets. From identifying context-specific partnerships to ensuring coproduction of agendas and strategies, the principles emphasize the importance of collective involvement, trust building, resilience, and adaptability. What holds partnerships and the binational region together are the interrelationships among stakeholders and the environment that supports them.

Central to these principles is leadership that ensures partners and the stakeholders they represent are collectively involved in pursuing common goals. Coproduction of knowledge, activities, and assessments helps to ensure effective relationship building and capacity sharing among partners. This is essential not only to the immediate socio-environmental challenge at hand, but also to sustaining partnerships beyond the scope of a particular initiative. The capacity of partnerships for planning and decisionmaking is linked to external policymaking and must ensure flexibility, adaptability, and responsiveness to changing conditions.

The collaborative journey stands as a model for addressing complex and interconnected challenges through sustained partnerships. Moving forward, it can serve as a beacon for other regions facing similar challenges, highlighting the power of international collaboration, adaptability, and a shared commitment to sustainability. In sum, what holds partnerships—indeed, the binational region—together are the relationships we build with each other.

Like the Web Is Part of the Spider

"The Maniac" by Benjamin Labatut

Applying too much presentism—interpreting past events in terms of modern values and concepts—can ruin fiction. The power of literature, after all, is in revealing and exploring the universal qualities that make us human, regardless of time or place.

Yet it’s hard not to view Benjamín Labatut’s The Maniac through the lens of today’s rapid expansion of artificial intelligence. And at first blush, the book’s primary subject—the Hungarian polymath and all-around genius John von Neumann (1903–1957)—doesn’t have many qualities in common with the rest of us. A child prodigy, he became a young star in the mathematics community and, after immigrating to the United States, a major figure in the Manhattan Project and a pioneer across many fields of science, particularly computing.

I label von Neumann the “primary subject” instead of the “protagonist” because The Maniac isn’t a novel, strictly speaking. Rather, Labatut calls it “a work of fiction based on fact.” The fiction comes in the way most of the book is structured: narrative fragments written in first-person point of view by von Neumann’s colleagues, collaborators, and loved ones. (Richard Feynman, for one, is a lot more fun than I would have anticipated.) The facts, then, are von Neumann’s life, times, and achievements.

Seeing von Neumann through the eyes of others helps paint a holistic picture of him as mathematician, thinker, and man. Strikingly, many descriptions emphasize his extreme rationality combined with, as fellow mathematician Theodore von Kármán puts it, “almost childlike moral blindness.” Von Neumann’s tutor, Gabor Szego, describes him as possessing “a sinister, machinelike intelligence that lacked the restraints that bind the rest of us.” If those depictions sound like an AI to you, you’re in good company. It’s telling that the one first-person perspective Labatut never gives the reader is von Neumann’s own, almost as if his brain were as impenetrable as a computer.

But something else emerges from these perspectives, something quite human after all. What drove von Neumann, the reader learns, was a deep desire for order and rationality. In this, he’s not alone. Labatut visited this theme in his first book, the aptly titled When We Cease to Understand the World, another quasi-fictional exploration of scientists and mathematicians grappling with chaos in their own minds and in the universe.


The world von Neumann inhabited was one of disorder and disconcerting change, a place where the creeping forces of fascism and nationalism grew ever stronger. And, under the weight of new scientific discoveries, the universe seemed to echo humanity’s irrationality, terrifying mathematicians and even lay people. A passage attributed to von Kármán, for example—“inexplicable shapes of non-Euclidean space, populated with bizarre objects that suggested the impossible”—could have come straight out of the cosmic horror fiction of the contemporaneous H. P. Lovecraft.

In his doctoral thesis, von Neumann set out to combat these forces, “to find the purest and most basic truths of mathematics, and to express them as unquestionable axioms, statements that could not be denied, disproven, or contradicted, certainties that would never fade or become distorted and so would remain—like a deity—timeless, unchangeable, and eternal.” But his faith in eternal logic was shattered when he encountered Kurt Gödel’s incompleteness theorems.

Von Neumann was part of a cohort of mathematicians and scientists who emigrated from Europe to the United States before World War II—many as refugees—and became involved in the Manhattan Project. Building nuclear weapons required calculations, and lots of them. Von Neumann worked on the mathematics of the Fat Man plutonium bomb and became familiar with the machines used for speeding such calculations. After the war, interest in enhanced computing power continued, particularly for defense applications. The hydrogen bomb was much more powerful and complex than Fat Man, and von Neumann offered to build the US government a new, digital computer that could handle the needed calculations. In return, he requested the unused computing time for his own purposes. His plan was to explore the power of computing across many fields.


He called his machine MANIAC, for Mathematical Analyzer, Numerical Integrator and Computer. (Spoiler: it’s not the only maniac in the book!) As Feynman puts it in one of the novel’s passages: “It’s scary how science works. Just think about this for a second: the most creative and most destructive human inventions arose at exactly the same time. So much of the high-tech world we live in today, with its conquest of space and extraordinary advances in biology and medicine, were spurred on by one man’s monomania and the need to develop electronic computers to calculate whether an H-bomb could be built or not.”

The last quarter of The Maniac jumps ahead nearly 60 years after von Neumann’s death, to the 2016 match between AlphaGo, a computer program, and human Go champion Lee Sedol. As relayed in exciting play-by-play action, the computer wins all games but one. Some of its moves are inspired and original, totally defying convention: rather than using brute force, the machine seems to be thinking. Yet at one point it also appears to become delusional, making nonsense moves. It’s tempting to conclude that such “hallucination” accompanies any intelligence, human or machine. But Labatut warns that computer intelligence continues to grow. The newer AlphaGo Zero, developed by Google’s DeepMind only a year after the Lee Sedol match, defeated AlphaGo 100 games to 0.


What motivated von Neumann was certainly nothing like what drives today’s tech titans. He never spent a single moment trying to monetize your personal data or optimize your feed’s algorithm to sell you handbags or sway your vote. But it remains an open question for me whether he abandoned his quest for order and rationality after accepting Gödel’s theorems, or if he ultimately came to view machine intelligence as a means to overcome the limits of the human psyche and find the ultimate order he still believed existed in the universe.

In the present, it’s safe to say that AI has so far amplified chaos rather than quelled it. Which gets to von Neumann’s views about technology, laid out in a marvelous 1955 essay entitled “Can We Survive Technology?” There’s no avoiding presentism here. The essay touches on AI and automation, nuclear power, climate change, and the extreme difficulty of disentangling the useful and dangerous implications of the same technologies (a theme addressed in Issues). Seventy years later, von Neumann’s insights resonate with many current policy challenges.

No techno-optimist, he nonetheless viewed technological advances as inevitable. “For progress there is no cure,” he wrote in the essay. “Technology—like science—is neutral all through, providing only means of control applicable to any purpose, indifferent to all.” He urged his readers to consider technology not as separate from humanity, but “part of us, just like the web is part of the spider.” But if technology is part of us, then who are we? Will we use AI to help people, or to subdue, manipulate, and control them?

The path forward, he concluded, requires circling back to what humans have that the machines do not, the universals that make reading thought-provoking, frightening, and challenging books like The Maniac worthwhile. “To ask in advance for a complete recipe would be unreasonable,” von Neumann wrote. “We can specify only the human qualities required: patience, flexibility, intelligence.”

Catalyzing Renewables

In “Harvesting Minnesota’s Wind Twice” (Issues, Spring 2024), Ariel Kagan and Mike Reese discuss their efforts to produce green ammonia from water, air, and renewable electricity, highlighting the role of community-led efforts in realizing a just energy transition. The project showcases an innovative approach to spurring research and demonstrations for low-carbon ammonia production and its use as a fertilizer or in other energy-intensive applications such as fuel for grain drying. Several themes stand out: the impact that novel technologies can have on business practices, communities, and, most importantly, the environment; and the critical policies needed to drive change.

The market penetration of renewables in the United States is anticipated to double by 2050, to 42% from 21% in 2020, according to the US Energy Information Administration. However, a report by the Lawrence Berkeley National Laboratory finds that rapid deployment of renewables has been severely impeded in recent years because it takes, on average, close to four years for new projects to connect to the grid. Technologies such as low-carbon ammonia production can therefore catalyze the deployment of renewables by creating value from “islanded” sources—that is, those that are not grid-connected. They can also reduce the energy and carbon intensity of the agriculture sector, since ammonia production is responsible for 1% of both the world’s energy consumption and greenhouse gas emissions.

US Department of Energy programs such as ARPA-E REFUEL and REFUEL+IT have been instrumental in developing and showcasing next-generation green ammonia production and utilization technologies. Pilot-scale demonstrations, such as the one developed by Kagan and Reese, significantly derisk new technology to help convince early adopters and end users to pursue commercial demonstration and deployment. These programs have also created public-private partnerships to ensure that new technologies have a rapid path to market. Other DOE programs have been driving performance enhancements of enabling technologies such as water electrolyzers to reduce the cost of zero-carbon hydrogen production, while expanding end uses to include sustainable aviation fuels and low-carbon chemicals.

Pilot-scale demonstrations, such as the one developed by Kagan and Reese, significantly derisk new technology to help convince early adopters and end users to pursue commercial demonstration and deployment.

The leap from a new technology demonstration to deployment and adoption is often driven by policy. In this case, the authors cite a tax credit that provides up to $3 per kilogram of clean hydrogen produced. But uncertainties remain: the US government has not provided full guidance on how this and other credits will be applied. Moreover, the production tax credit expires after 10 years, a period shorter than the typical amortization timelines of capital-intensive projects. Our primary research with stakeholders suggests that long-term power purchase agreements between the renewable energy producer and an ammonia (or other product) producer could help overcome barriers to market entry.

Although their article focuses on the United States, the lessons that Kagan and Reese are gaining might also prove deeply impactful worldwide. In sub-Saharan African countries such as Kenya and Ethiopia, low crop productivity correlates directly with fertilizer application rates that fall well below global averages. However, these countries have abundant renewable resources (geothermal, hydropower, wind, and solar) and favorable policy environments to encourage green hydrogen production and use. Capitalizing on the technology being demonstrated in Minnesota, as well as in DOE’s Regional Clean Hydrogen Hubs program, could enable domestic manufacturing, increase self-reliance, and improve food security in these regions and beyond.

Director, Renewable Energy

Technology Advancement and Commercialization

RTI International

Industrial Terroir Takes on the Yuck Factor

"Industrial Terroir Takes on the Yuck Factor," by Christy Spackman, explores direct potable reuse of wastewater.
Illustration by Shonagh Rae

Would you drink water you knew was made from your own poop or pee? What if that water showed up in a glass of award-winning beer? Perhaps you’ve never asked yourself these questions. For attendees of Scottsdale, Arizona’s annual Canal Convergence arts festival, the One Water Brewing Showcase offered an opportunity to try limited-edition craft beers made from highly purified wastewater. With each sip, drinkers found themselves tasting a speculative future not yet legally possible in Arizona—one where wastewater is immediately cleaned and returned to the municipal water supply system. Participating brewers donned their “water hero” capes, helping the municipal water utility’s efforts to get the public to accept—and possibly even embrace—purified recycled water.

If you’ve been paying attention to the news about the dwindling Colorado River or seen the spectacular bathtub ring in Lake Powell demarcating the reservoir’s peak years, you’re well aware that Arizona needs “water heroes.” Since the 1980s, Arizona’s water managers have proposed and deployed a range of techniques to mitigate dwindling water supplies. Some are fantastical, like a contested proposal to geoengineer the atmosphere. Other techniques seem more practical, including water augmentation (e.g., desalination of brackish groundwater, reducing the number of trees in forests, groundwater recharge); improving current water use (e.g., conservation, reusing wastewater); and efforts to innovate in water augmentation (e.g., cloud seeding). Though all these water futures rely on technological intervention, reuse is the one that involves people’s intimate, embodied experiences of smelling, tasting, and consuming water.

This question of water quantity and quality certainly hits close to home for me, as an inhabitant of Maricopa County, Arizona—one of the fastest growing metropolitan regions in the United States. My hometown has an average rainfall of only 8 inches per year. (Washington, DC, by comparison, receives more than 40 inches per year.) Meeting the water needs of the region’s 4.95 million inhabitants is a struggle. Recent record-breaking heat, multiple dry monsoon seasons, significant migration to the area, and the enduring trend of planting lawns into the desert mean regulators, policymakers, and others responsible for supplying water to water-stressed regions are constantly searching for “new” sources of water. Water recycling, also known as reuse (utility managers are conscious of the positive associations of terms such as recycled or reclaimed), sits at the heart of these possible water futures.

Treat, treat, and treat again

Water reuse offers providers a pathway for transforming water, no matter its source, into whatever type of water is needed. Many people who live in water-scarce regions practice informal forms of reuse, such as using water from rinsing vegetables to water house plants. Using water containing discards is not new.

With each sip, drinkers found themselves tasting a speculative future not yet legally possible in Arizona—one where wastewater is immediately cleaned and returned to the municipal water supply system.

At the municipal scale, reuse falls into two categories: indirect and direct. In indirect reuse, wastewater treatment facilities return treated water to aquifers or other natural buffers. Indirect reuse has proved a politically palatable approach for introducing reuse to communities. For example, due to a 1989 Scottsdale mandate that golf courses use reclaimed water, courses partnered with the city to fund the building of an advanced wastewater treatment plant. Now all courses in Scottsdale are watered with reclaimed water. The city then uses any excess water left over by the golf courses to recharge its aquifers—a move that the city points to as helping it reach “safe yield” levels (the rate at which groundwater can be withdrawn without affecting long-term water levels) nearly 20 years before mandated by Arizona law.

In contrast, direct potable reuse (DPR) sends the treated wastewater directly back into the water delivery system. It does this either by putting the treated wastewater into raw water headed for a drinking-water treatment plant, or by blending the treated wastewater with finished water ready for distribution. For technologically minded folks who think of water in terms of its molecules, it’s not a big leap to go from purifying wastewater enough for safely watering golf courses or recharging aquifers to purifying wastewater to the point that it is considered safe for human consumption. The transformation of wastewater back into drinking water relies on a combination of different advanced treatment techniques grounded in the philosophy of, as Scottsdale Water representatives explain it, “treat, treat, and treat again.”

Potable reuse depends on proven technologies. Despite this, potable reuse remains in a liminal state, teetering at the inflection point of widespread adoption. This is in large part due to public resistance to the idea of using wastewater as a source for drinking water. Technological treatments and scientific analyses alone cannot completely transform wastewater into drinking water. Laws, regulations, and permits govern whether wastewater can be reclassified as drinking water in the absence of an environmental buffer. Only a small number of cities in the United States are set up for direct potable reuse. Yet a growing number of cities in states including Colorado, Texas, California, and Arizona are actively exploring a future with the technology. Advocates for potable reuse are still working out how to help the technology diffuse into legal and social realms using a variety of approaches, including development of master plans; early efforts to seek stakeholder input; public-facing education and outreach; and demonstration treatment systems.

One of the main hurdles to DPR water is what researchers call the “yuck” factor. Anthropologists and theorists such as Mary Douglas and Julia Kristeva have long been curious about what makes something disgusting or distasteful. They point out that moments when people think “yuck!” are often tied to socially and culturally defined risks. Sometimes the risk assessment activated by yuck is physiological, such as when someone recoils from a bitter substance. Sometimes it’s psychological: when a caregiver physically recoils from an object a child picked up out of a garbage can, for example. As demonstrated by the range of fermented-food lovers and active communities of dumpster divers, yuck is not only innate—it’s also learned, and can potentially be unlearned.

The transformation of wastewater back into drinking water relies on a combination of different advanced treatment techniques grounded in the philosophy of, as Scottsdale Water representatives explain it, “treat, treat, and treat again.”

People who find their tap water tastes or smells “yucky” often opt out of using it as their drinking water by buying bottled water or using filtration systems. To water providers, the voiced and silent opting-out of people in their districts can appear irrational: it prioritizes an aesthetic reaction to the tastes, smells, textures, or temperature of water over trust in authorities, monitoring agencies, or science. My research collaborator, Marisa Manheim, and I suggest that rather than approaching yuck as an irrational aesthetic quirk to be educated away, policymakers, water providers, and others consider this opting-out of using municipal water as drinking water as a rational choice based in personal and subjective experience that matters as much as water’s quantifiable aspects. And this subjectivity is, in important ways, the result of more than a century of municipal water engineering—both liquid and social.

Making water taste like nothing

Over the twentieth century, the people in charge of producing municipal water worked very hard to make water’s tastes and smells fade into the background so that consumers could ignore or overlook its flavor. Making water taste like nothing is still one of their core goals.

Water used to be a very different beverage. Sanitarian George Whipple, writing about the value of pure water in 1907, characterized drinking waters found in New England as having a moderate amount of color and significant cloudiness. In contrast, people from the Midwest, “where all the streams are muddy,” Whipple noted, most often objected to unknown colors rather than color in general. Overall, he pointed out, most people could accept a small amount of cloudiness produced by small particles of clay. But the majority rejected water containing coarse sediment.

Municipal waterworkers in the early twentieth century worked not just to remove colors and particles; they also had to mitigate industrial contamination. Their success in making an acceptable municipal water depended on the development of new forms of sensory and technical expertise. Erasing the distinctive flavors from raw water that tasted or smelled like phenol from iron works, for example, or that smelled musty, fishy, or sulfurous due to natural processes (or overgrowth of microorganisms caused by agricultural run-off), happened slowly. In fits and starts, twentieth-century waterworkers got better at communicating with each other about how to identify, treat, and manage unwanted tastes and smells in the water they produced, alongside their more pressing work of making water safe to drink. In the early to mid-twentieth century, waterworkers tested new treatments and developed systems for quantifying how well these treatments reduced tastes and odors in water. They created shared vocabularies for describing tastes and odors. The introduction of new analytical methods in the 1960s allowed them to begin characterizing the molecules that cause tastes and smells in water. And the development of international collaborative networks, especially with researchers in France in the 1980s, resulted in additional tools for identifying and treating the molecular causes of unwanted tastes and odors.

Rather than approaching yuck as an irrational aesthetic quirk to be educated away, policymakers, water providers, and others consider this opting-out of using municipal water as drinking water as a rational choice based in personal and subjective experience.

With each improvement of their skills, waterworkers made it increasingly easy for drinkers to ignore the relationship between the water they drank and the natural and man-made environments it came from. In a sense, drinkers lost their awareness of the particular places their water came from. Instead, they came to expect that their water should taste of nothing and come from an idealized, pristine “somewhere” that is more of a nowhere in its lack of specificity. (There are exceptions: many New Yorkers, for example, proudly tout the Catskills watershed as the source of their drinking water.) I think that erasing the sensory connection between water and where that water came from put a wedge between how individuals experience and understand the world around them and the actual state of that environment.

Ironically, current efforts to make recycled water acceptable must face head-on where that water comes from and what it has come into contact with. Today, in trying to sell citizens on recycled water, advocates must grapple with consumer awareness that this water comes from their toilets, taps, and other sources often deemed “yucky.” 

Brewing support for wastewater reuse

Beer, promoters of DPR have realized, is a useful tool for transforming public perception of DPR from that of a liminal technology to that of an established technology. By partnering with beer brewers to produce tasty beverages, DPR proponents aim to create new, positive associations for consumers—consumers who may be soon asked to support infrastructural or legislative retooling of water provisioning. Organizers of tastings seek to sever the affective connections between past experiences and expectations around wastewater so that DPR can finally transition out of its liminal state.

For people attentive to numbers, the choice to use beer and its brewers as ambassadors for DPR may seem odd: it can take between 8 and 24 gallons of water to produce one pint of finished beer. On the other hand, beer is 90–95% water. In fact, beer quality and style depend in part on water quality. A water’s mineral content, pH, and hardness have historically shaped the differing regional flavors and characteristics of beers. In the past, brewers relied on ground and surface waters located near their breweries, but contemporary beer brewers in the United States largely draw on municipal water. Brewers trying to produce a consistent product are most successful when they know what is in the water they are brewing with and can adjust for variations. For brewers interested in brewing different types of beer from the same municipal water source, and especially for brewers from regions where water sources vary seasonally, installing a water purification system such as reverse osmosis significantly improves their ability to consistently make any style of beer.

Current efforts to make recycled water acceptable must face head-on where that water comes from and what it has come into contact with.

The idea that good water makes good beer explains the choice of beer as a launching point for making an ingestible argument to drinkers about the quality of DPR water. Organized efforts in the United States to use beer made with recycled water first started in 2014. That year Clean Water Services, a wastewater treatment organization that primarily serves Washington County in Oregon, partnered with the Oregon Brew Crew homebrew club to invite homebrewers to make beers out of recycled water. The initial water offered to brewers was 30% effluent—essentially, the organization obtained water from the river downstream of its discharge point, purified it, and sent it off to homebrewers. The following year, Clean Water Services officially started the Pure Water Brewing Challenge. Media outlets responded: National Public Radio, the Guardian, and Food & Wine all covered the effort. Clean Water Services successfully demonstrated that going beyond a simple education campaign could open new avenues for talking about water recycling.

Scottsdale’s One Water Brewing Showcase in 2019 was the first competition to be widely open to the public—previous beer brewing competitions had remained accessible only to people associated with municipal water production. In designing the event to engage a public audience, Scottsdale Water shifted the scale of the conversation about water reuse. When Marisa Manheim, then a graduate research assistant, and I interviewed brewers and Scottsdale Water officials, the utility’s public information officer pointed out, “I’m not going to get 30,000 people to show up to drink water. But I can get 30,000 people to show up to drink beer.” Scottsdale Water’s public-facing approach is catching on: more recently, in collaboration with filtration membrane manufacturer Xylem, beers brewed from recycled water have appeared in Berlin (2019) and Calgary (2020). Good beer, especially good beer made from what was recently wastewater, makes for good press. 

Just straight water

The aesthetic characteristics of the DPR water delivered to brewers told its own story about the water quality. “When you get [the water] right from the truck it tastes like nothing. It tastes like absolutely nothing,” one brewer told Manheim and me. A brewing team that participated in the 2017 AZ Pure Water Brew Challenge recalled pulling a sample glass of water from the tank when it was delivered. “This was pre-COVID,” one of the brewers noted, “so we just stood around and passed this glass around, everybody drinking the same water. We don’t do a whole lot of sensory on our water typically, but everybody was like … ‘Wow, this is water.’” The communal tasting of the delivered water reinforced producers’ technological claims about the purity and quality of DPR water. The water’s visible clarity further highlighted the quality: “It would have been a really good picture because it was just so crystal clear,” another brewer recalled. “Like, not what people associate with reclaimed water—at least, maybe what I didn’t associate with reclaimed water.” Aesthetic characteristics, analysis sheets, and, for those who attended, tours of demonstration facilities combined to persuade brewers that they had received what water providers had promised: high-quality water ready to brew with. All that remained was producing and sharing with the public the ingestible evidence in cups, cans, and bottles.

Good beer, especially good beer made from what was recently wastewater, makes for good press. 

Making certain that consumers knew the beer was brewed with DPR water was central to the Arizona Department of Environmental Quality’s regulatory restrictions. As Manheim and I found in our interview work, for those organizing Arizona brewers as DPR water ambassadors, the legal requirement to tell people the beer contained DPR water carried an added bonus: it meant that brewers were becoming outreach collaborators and were being pushed by regulatory restrictions to comply with that educational mission. By keeping the laminated analysis sheet of the DPR water either at the bar or visibly posted, taproom staff were able to immediately provide information. One brew team explained, “People might have been a little hesitant at first, but if you had the conversation—if you had the minute or two to sit and explain how good the water was, how clean it was, the whole process of it, why we chose to do the project—it was pretty easy to convert people.”

The success of this approach relied on the DPR water’s purity. That purity showed up in the row of zeros on the analysis sheets provided to brewers upon the water’s delivery. As one brewer told Manheim and me: “We looked at the printout and it basically was H2O, everything had been stripped down. There was no chemicals. There were no minerals. Everything was like 0.000 parts per million. So, it was just straight water.” By using advanced water treatment processes, which in Scottsdale’s case include reverse osmosis, the DPR water delivered to brewers allowed them to precisely dial in the water’s mineral makeup to match the naturally occurring mineral levels of any desired (and chemically characterized) location.
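To see what “dialing in” means in practice, consider a back-of-the-envelope calculation of brewing-salt additions. The sketch below, in Python, uses standard salt-contribution arithmetic derived from molar masses; the target profile is hypothetical, and none of the numbers come from Scottsdale Water or the brewers interviewed—it is a minimal illustration of the blank-slate logic, not any brewer’s actual recipe.

```python
# Hypothetical sketch: rebuilding a target brewing-water profile on top of
# an essentially zero-mineral (reverse osmosis / DPR) base.
# Salt contributions (mg/L added per gram of salt per liter of water) follow
# from molar masses; the target profile below is illustrative only.

GYPSUM = {"Ca": 232.8, "SO4": 557.9}           # CaSO4·2H2O
CALCIUM_CHLORIDE = {"Ca": 272.6, "Cl": 482.3}  # CaCl2·2H2O
EPSOM = {"Mg": 98.6, "SO4": 389.8}             # MgSO4·7H2O

# Assumed target: a sulfate-forward profile of the kind used for hoppy ales
target = {"Ca": 110.0, "Mg": 10.0, "SO4": 250.0, "Cl": 50.0}

# Because the base water is all zeros, each ion can be set additively:
epsom_g = target["Mg"] / EPSOM["Mg"]                 # Epsom is the only Mg source
cacl2_g = target["Cl"] / CALCIUM_CHLORIDE["Cl"]      # CaCl2 is the only Cl source
so4_left = target["SO4"] - epsom_g * EPSOM["SO4"]    # sulfate Epsom already supplied
gypsum_g = max(so4_left, 0.0) / GYPSUM["SO4"]        # gypsum covers the remainder

calcium = gypsum_g * GYPSUM["Ca"] + cacl2_g * CALCIUM_CHLORIDE["Ca"]

print(f"Per liter: {gypsum_g:.2f} g gypsum, {cacl2_g:.2f} g CaCl2, {epsom_g:.2f} g Epsom")
print(f"Calcium achieved: {calcium:.0f} mg/L (target {target['Ca']:.0f} mg/L)")
```

The point of the sketch is that the calculation is purely additive: with ordinary municipal water, a brewer must first measure and subtract whatever minerals are already present, whereas a row of zeros makes every profile equally reachable—exactly the leveling effect described next.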

At the same time, the complete removal of characteristic minerals normally found in municipal water challenged some brewers: it showed that for many small brewers, brewing is a constant give-and-take between the beer producer and the local environment shaping the characteristics of the water a municipality delivers. The variations brought on by that fluctuation can be valuable. But variations can also threaten the long-term viability of a business unless accompanied by a narrative that specifically valorizes variation. When it comes to producing consistent beer, DPR’s blank-slate nature could potentially level the playing field—at least in terms of water supply—between cash-strapped microbrewers and larger, more established breweries.

“People might have been a little hesitant at first, but if you had the conversation … it was pretty easy to convert people.”

Foods with “industrial terroir”—a riff on the French concept of terroir, which links food, taste, and place—have had the perceptible traces of the tastes of place minimized and managed. DPR takes this premise further: DPR water, of the type that Scottsdale Water delivered to brewers, promises to transform the perceptible and imperceptible cues of a specific time and place into nothing. The water produced lacks the molecular marks made by any place, plant, or animal life. It is a blank slate, ready to be reinscribed with whatever locally or nationally identified flavor profile a beer brewer or utility wishes (or can afford) to recreate. DPR’s industrial terroir, its ability to become whatever users need it to be, adds another layer of technological distance between everyday consumers and the systems and infrastructures shaping environmental quality. It says, “Everything is okay here. Put your attention elsewhere.”

A radical reorganization

Proponents of DPR draw on a technocentric argument that “water should be judged by its quality, not its history.” Their argument falls within an integrated water management movement called One Water, summarized in the idea that “water in all its forms has value,” and as such “should be managed in a sustainable, inclusive, integrated way.” Stakeholders within the movement aim to reorganize water governance. For example, rather than having one utility manage drinking water and another wastewater, a region adopting a One Water approach would integrate drinking water and wastewater management. The One Water approach is calling, in some senses, for a radical reorganization of deeply entrenched nineteenth- and twentieth-century ways of thinking about and managing water in the environment. In this framing, wastewater is no longer seen as separate and in need of being directed away from a community. Through interventions including water recycling for potable reuse and green infrastructures that direct water from storms to underground aquifers instead of sewers, people working in a One Water framework are trying to undo some of the consequences of earlier water management and governance.

The visceral nature of “yuck” threatens the goal of reorganizing water management. For proponents of DPR, finding a way to invite the consuming public to go from “yuck” to “yum” is the water utility equivalent of making gold from dross. In contrast to the yuck factor, what Manheim and I think of as the yum factor happens when something tastes good enough that one wants to consume it again. The yum factor is more than just an enjoyable taste; it connects the molecules that make up flavors with positive social, cultural, and aesthetic experiences to create new memories. Indeed, efforts to activate the yum factor rely heavily on not just a single moment of tasting, but also on the context around tastings. Beers made from recycled water and served at art festivals, scientific expo floors, or water tastings at the end of a plant tour are designed to help drinkers create new, positive associations between bodily experiences and municipal water that move beyond what researchers call “pre-cognitive affective reactions.” By working to activate the yum factor, organizers hope that participants will be able to set aside their hesitancies or suspicions around municipal water and recycled water.

The One Water approach is calling, in some senses, for a radical reorganization of deeply entrenched nineteenth- and twentieth-century ways of thinking about and managing water in the environment.

Despite the significant buy-in from brewers who participated in the Arizona brew fests, hesitancy and suspicion still occasionally emerged to counteract the technological optimism and charisma of beer brewed with DPR. Though participating brewers generally embraced the overall project, not everyone was as easily persuaded. One brew team recalled their malt vendor’s reaction: “He poured [the beer], and he’s like, ‘Oh, man it looks great,’ and we’re kind of talking about it a little bit. And, before he tried it, he was like, ‘You did what?’ And then he tried it. He was like, ‘Wow, that’s really clean. It’s really good.’ But then some other brewers came in, and he was like, ‘Oh, you gotta try this poop beer!’”

The malt vendor’s comment highlights both an appreciation of the shock factor associated with feces and a desire to share the experience with others. It also hints at the difficulty in relying on a tasting experience to undo culturally situated hesitancies. One brewery owner reported that he had a regular customer who refused to drink the beer brewed with DPR: “He just said he just couldn’t because of working in sewage [treatment] for 30 years.” (Anecdotally, people involved in promoting DPR in Arizona have reported that those working in the water industry have been some of the most challenging people to get to try DPR.) Hesitancy, suspicion, and even the little moments of humor—like “You gotta try this poop beer”—all point to how past experiences intersect with present and even future moments of sensing.

Efforts to activate the yum factor, playful as they may be, are political acts embedded within larger processes of decisionmaking. By engaging inhabitants, policymakers, and members of the press in using their bodies to “taste” the future, proponents of DPR are asking different publics to actively support legislative, regulatory, and infrastructural changes to the status quo. As participants taste, they accept a physiological invitation to rewrite the connections between taste and memory, to erase past concerns not just about the quality of a single glass of water, but also about the capacity of technologies, regulators, experts, and governments to provide all people with access to safe and good water.

This essay has been updated to better reflect Marisa Manheim’s contribution to the research.

Supporting Scientific Citizens

What do nuclear fusion power plants, artificial intelligence, hydrogen infrastructure, and drinking water recycled from human waste have in common? Aside from being featured in this edition of Issues, they all require intense public engagement to choose among technological tradeoffs, safety profiles, and economic configurations. Reaching these understandings requires researchers, engineers, and decisionmakers who are adept at working with the public. It also requires citizens who want to engage with such questions and can articulate what they want from science and technology.

This issue offers a glimpse into what these future collaborations might look like. To train engineers with the “deep appreciation of the social, cultural, and ethical priorities and implications of the technological solutions engineers are tasked with designing and deploying,” University of Michigan nuclear engineer Aditi Verma and coauthors Katie Snyder and Shanna Daly asked their first-year engineering students to codesign nuclear power plants in collaboration with local community members. Although traditional nuclear engineering classes avoid “getting messy,” Verma and colleagues wanted students to engage honestly with the uncertainties of the profession. In the process of working with communities, the students’ vocabulary changed; they spoke of trust, respect, and “love” for community—even when considering deep geological waste repositories. 

To achieve larger goals of decarbonizing energy systems and becoming a more just society, the energy transition needs deep citizen input.

To previous generations, the idea that nuclear engineers would be comfortable applying words like “love” to their work would have seemed mind-blowing. Today’s nuclear power plants were designed and sited with minimal citizen involvement, and feelings would have just gotten in the way. The same goes for the rest of the energy system—the electrical grid and global web of tankers, pipelines, coal mines, and oil wells—built to varying government, industry, and economic imperatives. But to achieve larger goals of decarbonizing energy systems and becoming a more just society, the energy transition needs deep citizen input. In part, this is because the trust between communities and power plant operators must be rebuilt to reflect today’s values of egalitarianism and transparency. It’s also a practical matter: the deployment of next-generation nuclear technologies requires social support.

And this process is necessary well beyond nuclear power; technologies as disparate as AI and sewage treatment also need citizen input to develop public trust and balance tradeoffs that meet larger social goals such as reliable information and drinkable water.

Many communities in the United States—transcending political and geographic barriers—are eager to participate in sociotechnical transformation. When the National Science Foundation announced the Regional Innovation Engines program to spur innovation ecosystems across the United States, it received 679 submissions from 520 organizations—for just 10 slots. Similarly, 378 communities applied to the US Department of Commerce’s solicitation for its Regional Technology and Innovation Hubs competition. And 79 applications were submitted to the Department of Energy’s Regional Clean Hydrogen Hubs program to develop hydrogen energy infrastructure—together requesting roughly nine times the funding available for the program.

The scientific enterprise has been slow to recognize that the characteristics of emerging technologies—and public desire to participate in their deployment—require a shift in emphasis from discovery and technical innovation to social transformation. The top-down innovations of the postwar period prioritized skilled researchers and engineers, rather than social scientists and public participation. These priorities remain baked into federal funding. Of the nearly 83,000 STEM graduate students the federal government supported in 2021, only 6,271 were social scientists. Those differences in funding mean social scientists are more likely to graduate with debt than other scientists.

The enterprise is investing in the scientific minds required for previous technological revolutions but not yet in the broad base of scientific citizens needed for the future.

Similarly, the enterprise successfully raised the number of postsecondary STEM graduates at all levels by a third between 2012 and 2021—no small feat. But meanwhile K–12 scores in math and science drifted downward, with a growing gap between the highest and lowest scores. The enterprise is investing in the scientific minds required for previous technological revolutions but not yet in the broad base of scientific citizens needed for the future.

It’s helpful to look back at the period after the Civil War, when agriculture—particularly, new crops and new methods of farming—was something like semiconductors today: a route to national security and regional economic transformation. Within a few decades, after the creation of the US Department of Agriculture (USDA), the establishment of land grant universities, and the founding of historically black colleges and universities, it became clear that this wasn’t a technocratic process, but one that required every farmer to become a science-based, industrially attuned, constantly innovating participant.

The next step was a system of hundreds of local, semiautonomous agricultural research and demonstration stations, funded by the Hatch Act of 1887. The map of these stations from 1900 resembles that of the hopeful candidates for today’s innovation hubs. But still more outreach was needed to get farmers involved, so county extension agents were enlisted to spread information personally.

Finally, “corn-growing clubs,” an innovation from Ohio, gave boys seed corn and awarded prizes to those who produced the highest yield. Girls joined “tomato clubs,” where they canned tomatoes with the latest scientific guidance. As these clubs became USDA’s 4-H program, they connected the universities and experiment stations to kids, teaching them record keeping, data gathering, and reliance on expert knowledge.

In the early days, the clubs were a trick to get to the parents. “The farmers were reached through their children, and the interest thus aroused will be handed to their children’s children,” a USDA official wrote in 1905. But it was the “children’s children” who drove an agricultural revolution through decades of continuous innovation as farmers incorporated mechanization, hybrid seed, fertilizer, pest control, and irrigation, not to mention financial incentives like crop subsidies.

This earlier construction of the scientific enterprise committed significant resources not only to research, but directly to community problem-solving, outreach, knowledge generation, and dissemination.

This earlier construction of the scientific enterprise committed significant resources not only to research, but directly to community problem-solving, outreach, knowledge generation, and dissemination. More importantly, the agricultural stations, extension service, and 4-H enabled citizens to gather data and employ scientific insights in their daily lives—making them participants and beneficiaries in sweeping social and technological changes. (Full disclosure: I was a 4-H kid, furiously competing for blue ribbons in the County Calf Scramble and science-informed muffin-baking. But I’m not advocating for 4-H, so much as a scientific enterprise that deliberately and persistently engages with all of society, particularly young people.)

Another opportunity to engage might be found in reimagining citizen science as a platform to enable scientific citizens. Citizen science projects aim to engage the public in research—gathering data, sorting it, or decoding, say, protein folding. But these efforts have a tendency to employ citizens as helpers rather than full participants. In a 2024 report on federal prizes and citizen science, the White House Office of Science and Technology Policy noted that 46 of 82 federal citizen science projects cited getting citizens to hoover up data as “cost-effective.”

Some projects, however, empower the public to gather data and use it to advocate for themselves and their communities. One example is the National Oceanic and Atmospheric Administration’s campaign to engage cities in mapping urban heat islands, which helps communities execute a volunteer-led field campaign to document and understand the effects of heat on residents’ lives. Participants then become invested in taking action.

As much as developing and deploying emerging technologies requires scientific citizens as active participants, with support these same technologies could enable and empower community decisionmaking.

As much as developing and deploying emerging technologies requires scientific citizens as active participants, with support these same technologies could enable and empower community decisionmaking by making it easy to collect data and use artificial intelligence and other resources now confined to researchers. For example, iNaturalist’s smartphone app helps users identify and map the position of invasive plants, which is essential for finding strategies to control them. With its AI interface, the app gives users an immediate (though tentative) identification of plants and animals. Later, experts weigh in—taking advantage of the synergy between their knowledge and the efficiency of the platform.

In my community, where invasive plants are infiltrating sensitive wetlands, the app makes us better—better naturalists, better neighbors, better marsh stewards, and better all-around citizens. As the enterprise’s mission shifts to support social transformation, it should search for deeper public engagement in these rarely recognized intersections of science and love.

How to Procure AI Systems That Respect Rights

In 2002, my colleague Steve Schooner published a seminal paper that enumerated the numerous goals and constraints underpinning government procurement systems: competition, integrity, transparency, efficiency, customer satisfaction, best value, wealth distribution, risk avoidance, and uniformity. Despite evolving nomenclature, much of the list remains relevant and reflects foundational principles for understanding government procurement systems.

Procurement specialists periodically discuss revising this list in light of evolving procurement systems and a changing global landscape. For example, many of us might agree that sustainability should be deemed a fundamental goal of a procurement system to reflect the increasing role of global government purchasing decisions in mitigating the harms of climate change.

In reading “Don’t Let Governments Buy AI Systems That Ignore Human Rights” by Merve Hickok and Evanna Hu (Issues, Spring 2024), I sense that they are basically advocating for the same kind of inclusion—to make human rights a foundational principle in modern government procurement systems. Taxpayer dollars should promote human rights and be used to make purchases with an eye toward processes and vendors that are transparent, ethical, unbiased, and fair. In theory, this sounds wonderful. But in practice … it’s not so simple.

Hickok and Hu offer a framework, including a series of requirements, designed to ensure human rights are considered in the purchase of AI. Unsurprisingly, much of the responsibility for implementing these requirements falls to contracting officers—a dwindling group, long overworked and under-resourced yet subject to ever-increasing requirements and compliance obligations that complicate procurement decisionmaking. A framework that imposes additional burdens on these individuals is doomed to fail, despite the best intentions.

The authors’ suggestions also would inadvertently erect substantial barriers to entry, dissuading new, innovative, and small companies from engaging in the federal marketplace. The industrial base has been shrinking for decades, and burdensome requirements not only cause existing contractors to forgo opportunities, but also deter new entrants from seeking to do business with the federal government.

A framework that imposes additional burdens on these individuals is doomed to fail, despite the best intentions.

Hickok and Hu brush aside these concerns without citing data to bolster their assumptions. Experience cautions against this cavalier approach. These concerns are real and present significant challenges to the authors’ aspirations.

Still, I sympathize with the authors, who are clearly and understandably frustrated with the apparent ossification of practices and the glacial pace of innovation. Which leads me to a simple, effective, yet oft-ignored suggestion: rather than railing against the existing procurement regime, talk to the procurement community about your concerns. Publish articles in industry publications. Attend and speak at the leading government procurement conferences. Develop a community of practice. Meet with procurement professionals and policymakers to help them understand the downstream consequences of buying AI without fully understanding its potential to undermine human rights. Most importantly, explain how their extensive knowledge and experience can transform not only which AI systems they procure, but how they buy them.

This small, modest step may not immediately generate the same buzz as calls for sweeping regulatory reform. But engaging with the primary stakeholders is the most effective way to create sustainable, long-term gains.

Associate Dean for Government Procurement Law Studies

The George Washington University Law School

Merve Hickok and Evanna Hu stage several important interventions in artificial intelligence antidiscrimination law and policy. Chiefly, they pose the question of whether and how it might be possible to enforce AI human rights through government procurement protocols. Through their careful research and analysis, they recommend a human rights-centered process for procurement. They conclude that the Office of Management and Budget (OMB) guidance on the federal government’s procurement and use of AI can effectively reflect these types of oversight principles to help combat discrimination in AI systems.

The authors invite a critical conversation in AI and the law: the line between hard law (e.g., statutory frameworks with enforceable consequences) and soft law (e.g., policies, rules, and procedures) and other executive and agency action that can be structured within the administrative state. Federal agencies, as the authors note, are now investigating how best to comply with the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (E.O. 14110), released by the Biden administration on October 30, 2023. Following the order’s directives, the OMB Policy to Advance Governance, Innovation, and Risk Management in Federal Agencies’ Use of Artificial Intelligence, published on March 28, 2024, directs federal agencies to focus on balancing AI risk mitigation with AI innovation and economic growth goals.

Both E.O. 14110 and the OMB Policy reflect soft law approaches to AI governance. What is hard law and soft law in the field of AI and the law are moving targets. First, there is a distinction between human rights law and human rights as reflected in fundamental fairness principles. Similarly, there is a distinction between civil rights law and what is broadly understood to be the government’s pursuit of antidiscrimination objectives. The thesis that Hickok and Hu advance involves the latter in both instances: the need for the government to commit to fairness principles and antidiscrimination objectives under a rights-based framework.

What is hard law and soft law in the field of AI and the law are moving targets.

AI human rights can be viewed as encompassing or intersecting with AI civil rights. The call to address antidiscrimination goals with government procurement protocols is critical. Past lessons on how to approach this are instructive. The Office of Federal Contract Compliance Programs (OFCCP) offers a historical perspective on how a federal agency can shape civil rights outcomes through federal procurement and contracting policies. OFCCP enforces several authorities to ensure equal employment opportunities, one of the cornerstones of the Civil Rights Act of 1964. OFCCP’s enforcement jurisdiction includes Executive Order 11246; the Rehabilitation Act of 1973, Section 503; and the Vietnam Era Veterans’ Readjustment Assistance Act of 1974. OFCCP, in other words, enforces a combination of soft and hard laws to execute civil rights goals through procurement. OFCCP is now engaged in multiple efforts to shape procurement guidance to mitigate AI discriminatory harms.

Finally, Senators Gary Peters (D-MI) and Thom Tillis (R-NC) recently introduced a bipartisan proposal to provide greater oversight of potential AI harms through the procurement process. The proposed Promoting Responsible Evaluation and Procurement to Advance Readiness for Enterprise-wide Deployment (PREPARED) for AI Act mandates several evaluative protocols before the federal government procures and deploys AI systems, underscoring the need to test AI premises, one of the key recommendations advanced by Hickok and Hu. Preempting AI discrimination through federal government procurement protocols demands both soft law, such as E.O. 14110 and the OMB Policy, as well as hard law, such as the bipartisan bill proposed by Senators Peters and Tillis.

Professor of Law

Director, Digital Democracy Lab

William & Mary Law School

Merve Hickok and Evanna Hu propose a partial regulatory patch for some artificial intelligence applications via government procurement policies and procedures. The reforms may be effective in the short term in specific environments. But a broader perspective, which the AI regulatory wave generally lacks, raises some questions about widespread application.

This is not to be wondered at, for AI raises considerations that make it especially difficult for society to respond effectively. Eight problems in particular stand out:

  1. The definition problem. Critical concepts such as “intelligence,” “agency,” “free will,” “cognition,” “consciousness,” and even “artificial intelligence” are not well understood, involve different technologies from neural networks to rule-based expert systems, and have no clear and accepted definitions.
  2. The cognitive technology problem. AI is part of a cognitive ecosystem that increasingly replicates, enhances, and integrates human cognition and psychology into metacognitive structures at scales from the relatively simple (e.g., Tesla and Google Maps) to the highly complex (e.g., weaponized narratives and China’s social credit system). It is thus uniquely challenging in its implications for everything from education to artistic creation to crime to warfare to geopolitical power.
  3. The cycle time problem. Today’s regulatory and legal frameworks lack any capability to match the rate at which AI is evolving. In this regard, Hickok and Hu’s suggestion to add additional layers of process onto bureaucratic systems that are already sclerotic, such as public procurement, would only exacerbate the decoupling of regulatory and technological cycle times.
  4. The knowledge problem. No one today has any idea of the myriad ways in which AI technologies are currently being used across global societies. Major innovators, including private firms, military and security institutions, and criminal enterprises, are not visible to regulators. Moreover, widely available tool sets have democratized AI in ways that simply couldn’t happen with older technologies.
  5. The scope of effective regulation problem. Potent technologies such as AI are most rapidly adopted by fringe elements of the global economy, especially the pornography industry and criminal enterprises. Such entities pay no attention to regulation anyway.
  6. The inertia problem. Laws and regulations once in place are difficult to modify or sunset. They are thus particularly inappropriate when the subject of their action is in its very early stages of evolution, and changing rapidly and unpredictably.
  7. The cyberspace governance problem. International agreements are unlikely because major players manage AI differently. For example, the United States relies primarily on private firms, China on the People’s Liberation Army, and Russia on criminal networks.
  8. The existential competition problem. AI is truly a transformative technology. Both governments and industry know they are in a “build it before your competitors, or die” environment—and thus will not be limited by heavy-handed regulation.

AI raises considerations that make it especially difficult for society to respond effectively.

This does not mean that society is powerless. What is required is not more regulation on an already failing base, but rather new mechanisms to gather and update information on AI use across all domains; enhance adaptability and agility of institutions rather than creating new procedural hurdles (for example, eschewing regulations in favor of “soft law” alternatives); and encourage creativity in responding to AI opportunities and challenges.

More specifically, two steps can be taken even in this chaotic environment. First, a broad informal network of AI observers should be tasked with monitoring the global AI landscape in near real time and reporting on a regular basis without any responsibility to recommend policies or actions. Second, even if broad regulatory initiatives are dysfunctional, there will undoubtedly be specific issues and abuses that can be addressed. Even here, however, care should be taken to remember the unique challenges posed by AI technologies, and to try to develop and retain agility, flexibility, and adaptability whenever possible.

President’s Professor of Engineering

Lincoln Professor of Engineering and Ethics

Arizona State University

Unleashing Synergy or Accelerating Fragmentation?

Over the last 40 years, international collaboration has become an essential strategy for the global science and technology (S&T) enterprise. Collaboration enables countries to leverage foreign expertise and equipment and distributes the economic and industrial burdens associated with research and development, resulting in a lighter load for each nation involved. Collaboration also supports the diffusion of technological innovation, promoting cooperative international efforts to tackle global grand challenges. In comparison to isolated national endeavors, unified efforts reduce financial strain, mitigate risk, and enhance collective global security.

Today, however, escalating technological competition is changing the value proposition of international S&T cooperation. Tensions between China and the United States have complicated multilateral cooperation and open innovation, leaving South Korea—as well as many emerging countries—feeling pressure to align with one side or the other. 

As the competition for technological supremacy takes on new dimensions, governments must carefully evaluate the risks of collaboration against the risks of falling behind. Thus, for middle-power and emerging countries, it’s crucial to strategically distinguish between areas of technological advantage in competition and areas where they should pursue collaboration. The course that these countries take—and each will be different—will profoundly remodel the landscape of global science. 

An abrupt shift

South Korea, where I have served on government advisory committees for bodies including the Ministry of Science and ICT (Information and Communication Technology) and the Ministry of Foreign Affairs, exemplifies how the map of global R&D is shifting. Although the country has long been renowned for its substantial investment in science, last August the government unexpectedly proposed funding cuts for the first time in three decades. About six months later, the country announced that it was joining the European Union’s research and innovation program, Horizon Europe. With this initiative, South Korea is making a bold shift from an inward-looking, domestic R&D system to an international one that positions the country as what the government has termed a “global pivotal state” in the realms of science and technology.

Tensions between China and the United States have complicated multilateral cooperation and open innovation, leaving South Korea—as well as many emerging countries—feeling pressure to align with one side or the other. 

Over the past half century, South Korea’s investment in R&D grew from just $20 million in 1964 to $83.7 billion in 2022. By 2022, the country’s total R&D expenditure as a percentage of gross domestic product was 5.21%, ranking second in the world, behind only Israel. So the 2024 cuts marked a significant departure from this history, decreasing government R&D spending by 16.6% compared to the previous fiscal year.

South Korean scientists and researchers immediately expressed concern that this unexpected shift could jeopardize the nation’s scientific progress and innovation. The collective anxiety within the scientific community was a response to the government’s top-down switch from funding basic research toward applied work and international collaboration without consulting researchers. The international community also spoke apprehensively about how the cuts might affect South Korea’s capabilities in artificial intelligence and other cutting-edge technologies, leading some observers to question the nation’s future position in global technology. 

Shortly after, amid assurances from South Korean president Yoon Suk Yeol that his administration was committed to supporting R&D, a new policy direction became clear. The Ministry of Science and ICT adopted “the Global R&D Strategy,” which tripled the international cooperation budget year-on-year to $1.31 billion, signaling a shift away from collaborations driven by researchers and toward targeted collaborations aimed at developing relationships with leading global research institutions, enhancing joint research initiatives, and emphasizing outcomes. This showed an intent to reorient South Korea’s engagement with global R&D to reflect the nation’s strategic priorities, moving away from sporadic international cooperation efforts that were often fragmented across ministries. All of this provided the rationale for joining Horizon Europe, which was accomplished in about nine months.

South Korea is making a bold shift from an inward-looking, domestic R&D system to an international one that positions the country as what the government has termed a “global pivotal state” in the realms of science and technology. 

In articulating this vision and putting it into practice, South Korea has embarked upon a policy experiment aimed at ensuring that international research collaboration focuses on the production of tangible research outcomes. Within the country, this shift stemmed from the realization that research quality had plateaued over the past decade. The global ranking for South Korean research papers falling within the top 1% of citation indices has stagnated, showing only a marginal progression from fifteenth place in 2012 to fourteenth in 2022. I think this is due to a failure to emphasize basic research and the development of next-generation technology in the context of an R&D ecosystem with a risk-averse culture. A change in direction of national R&D policy is clearly necessary, but transforming the weaknesses of the current system into future strengths will require careful attention and detailed strategies. 

Joining Horizon Europe in search of outcomes

On March 25, 2024, South Korea officially completed negotiations to join Horizon Europe, making it the first Asian country to join. This allows Korean researchers to apply for grants on an equal footing with EU researchers in Pillar II of the program, which emphasizes global challenges. The very act of creating a channel for cooperation outside of domestic government funds is a strong signal for researchers and institutes to consider changing the way they collaborate. 

Joining Horizon Europe is part of the Korean government’s broader strategy to open up the country’s research and innovation system and reduce its dependence on domestic innovation. It is also a strategic diversification to create diplomatic ties that can be expanded to other areas of cooperation. If this is successful, Korea may even consider mimicking the European Union’s role in Horizon Europe and create a similar program of its own in Asia, further broadening global R&D’s collaborative portfolio.

However, joining Horizon Europe is not without risk. South Korea is not joining Horizon Europe on the same footing as EU members and may find that, unlike in sports, home games and away games are played according to different rules. Even though joining Horizon Europe is expected to enhance Korea’s global scientific collaborations and leverage, it does not include equal access to the European Research Council. Furthermore, the European Union is not a singular entity, but a community of nations with their own agendas, which may not always be in line with the policy directions of EU leadership in Brussels. Smaller EU nations may face fiercer competition as non-EU countries join and take a share of the research funds. As countries establish different directions for industrial policy, new fault lines may emerge.

Joining Horizon Europe is part of the Korean government’s broader strategy to open up the country’s research and innovation system and reduce its dependence on domestic innovation. 

Additionally, although the initiative’s focus is not on facilitating cooperation only between “likeminded” nations, Chinese participation in Horizon Europe has fallen recently. If South Korea’s inclusion in Horizon Europe is perceived as signaling increased barriers to collaboration with China, this may accelerate the fragmenting of the international scientific community. And as South Korea attempts to ameliorate isolation by joining the European Union’s efforts, that may exacerbate tensions with China in unanticipated ways. 

Given these many opportunities and uncertainties, a business-as-usual approach to S&T policy in South Korea will fail. The government must demonstrate that it has a clear agenda or risk appearing to be drifting in the wake of larger international currents. So far, from the perspective of the Korean scientific community, recent policy changes are difficult to interpret. When the strategy is unsettled, it is challenging for individual scientists—never mind whole institutions—to initiate truly transformative changes.

Conditions for success 

South Korea’s policy experiment is abrupt and bold—but in order to be successful, the government must now change the way it administers, guides, funds, and evaluates international cooperation in science and technology. This will be new territory for an S&T bureaucracy that is distributed across 25 government-funded research institutes and has historically been narrowly focused on national R&D. It must now create a coherent strategy to manage uncertainties across multiple layers of the country’s scientific enterprise. For example, is the new openness calibrated toward inbound or outbound collaborations? And will the new strategy focus on national competitiveness, or on increasing competition within the S&T enterprise, or on science diplomacy? Now that the general direction has been set, it’s time to address the details. 

The government must demonstrate that it has a clear agenda or risk appearing to be drifting in the wake of larger international currents.

The first step will be to develop guidance for decisionmaking about which S&T fields to pursue, and how they contribute to the overall strategy. Specifically, the coordinating agencies need to understand the Technology Readiness Level of each domestic research field so they can engineer research collaborations to enhance the country’s industrial and strategic positioning in the global value chain. In particular, decisionmakers must determine which fields should focus on technology acquisition through cooperation with advanced institutes; which fields should prioritize technology transfer with developing countries to establish long-term cooperative relationships; and which fields are ready to participate in multilateral collaboration and lead international joint initiatives to address global challenges. For example, sectors where South Korea may lead subfields in the future—such as secondary batteries and displays, lunar exploration, and small modular reactors—require carefully customized strategies. On the other hand, a different set of considerations is required in fields where the country currently lags in fundamental research but has manufacturing capacity that could assist in bringing new products to market—such as quantum technologies, pharmaceuticals, and hydrogen. Investments must be guided by frameworks that clarify fundamental questions around collaboration: why to engage in S&T cooperation, in which technological fields, with whom, and how.

Second, the dynamic nature of global R&D necessitates creating flexible budget practices that can adapt and respond to the way partner countries work. But to do this, decisionmakers need to clearly understand what their mission is. Is it to welcome researchers from overseas, or to facilitate long-term international projects? Should adaptable R&D budgets be reciprocal with partner countries? If R&D projects have budgetary flexibility, shouldn’t other budgets earmarked for international collaboration be managed in the same way? Answering such questions will set the mid- and long-term direction of R&D for years to come. And even after these policies are fine-tuned, the Herculean task of producing specific guidelines for institutions and researchers will remain. The ministry will need to create a comprehensive guide covering joint intellectual property ownership, contracts, and research security, to name just a few areas.

Investments must be guided by frameworks that clarify fundamental questions around collaboration: why to engage in S&T cooperation, in which technological fields, with whom, and how.

Furthermore, policymakers need to be able to determine whether collaborative strategies are working, and if so, which ones work better than others. This requires new ways to evaluate the effects of international research collaboration on scientific and technological progress and on research productivity. So far, bibliometric studies based on coauthorship analysis have been the dominant methodology. However, coauthorship data are a partial and imperfect proxy for the complex, multifaceted character of international collaboration in S&T. An alternative approach should therefore include analyzing the causal relationships among R&D expenditure, intellectual property rights, and marketization; assessing the contextual effects of national innovation systems; addressing how gender disparities in international research collaboration are managed; and examining the long-term socioeconomic effects of South Korea’s S&T-driven official development assistance (ODA) to developing countries. 

Finally, a comprehensive strategy for coordination within ministries and across the government will be critical. Multilateral cooperation in S&T often intersects significantly with S&T ODA. Although the Korean government increased the ODA budget by more than 31% in 2024, it has not modified the governance structure. With 46 aid agencies involved, fragmentation remains a major challenge. Moreover, as global R&D expands, the country’s 25 government-funded research institutes are performing overlapping tasks. Without effective cross-institutional coordination, even a significant budget increase can result in inefficiencies.

An evolving ethos of collaboration

In 2024, the global S&T community stands at a crossroads, confronted by unprecedented challenges that necessitate concerted and collective strategies. These challenges extend far beyond the walls of laboratories and publications to include the uncertainties of the US presidential election, the destabilizing conflict between Ukraine and Russia, escalating geopolitical tensions in the Middle East, and the techno-ethical quandaries engendered by the rapid proliferation of generative artificial intelligence technologies. 

In the face of these complex currents, countries that adopt isolationist or narrowly focused strategies are likely to find themselves poised for decline. Even as research collaboration between the United States and China has declined, other collaborative efforts across a spectrum of disciplines are proving resilient. In this evolving ecosystem, countries such as South Korea, which are actively interacting with the global S&T community, have an important role to play in fostering innovation, shaping norms, and maintaining multilateral agreements across domains as diverse as sustainable development, industrial standardization, and the economy. 

Although the Korean experience is rooted in a unique sociohistorical context, it provides insights into the complexity of formulating national strategies for international collaboration in science and technology in this new age. As the global system of S&T research collaboration that was established over the past half century shifts, the process of reconfiguring collaborations will present substantial complexities and challenges, necessitating extensive data to inform effective policymaking. South Korea’s transition could provide lessons for how emerging countries can enhance the value of global public goods while promoting innovation. Amid global struggles for technological hegemony, balancing competition and collaboration within national innovation strategies is the most critical issue. 

AI’s Wave

"The Coming Wave: Technology, Power, and the Twenty-First Century’s Greatest Dilemma" by Mustafa Suleyman, with Michael Bhaskar.

Although informative and bold—not to mention much endorsed and promoted—Mustafa Suleyman’s new book, The Coming Wave, is ultimately unsatisfying. Suleyman, cofounder of the Google-acquired artificial intelligence company DeepMind and now CEO of Microsoft AI, wrote the book with assistance from technology journalist Michael Bhaskar. They attempt four interlocking tasks: to call out the existential threat of uncontained artificial intelligence, admonish readers not to ignore the dangers, situate the warning within a historical context of ever-increasing waves of techno-societal transformation, and make concrete policy proposals for achieving containment. The policy proposals are the most provocative and problematic aspect of the book.

The arc of Suleyman’s argument is given by the titles of the initial, penultimate, and ultimate chapters: “Containment Is Not Possible,” “Containment Must Be Possible,” and “Ten Steps Toward Containment.” Three-fourths of the book is dedicated to compelling arguments supporting the “not possible” thesis, which is nevertheless salted with “must be possible” counterpoints. With impassioned seriousness, Suleyman’s rhetoric becomes an urgent plea to confront a unique threat. “If this book feels contradictory in its attitude toward technology, part positive and part foreboding, that’s because such a contradictory view is the most honest assessment of where we are.” Suleyman might be likened to the concerned founders of the Bulletin of the Atomic Scientists in 1945, scientists who feared the new weapons they’d helped create.

At least three arguments differentiate Suleyman’s alarm from other jeremiads about AI. One is the way he places AI in the longer history of technological change by employing a popular science-technology-society boilerplate about how waves of agricultural, mechanical, chemical, and electrical innovations have challenged people to either catch and ride them or be dragged under and swept away. Against this waveform of evolving techno-reality, he posits that any societal effort to restrict or somehow contain new technologies will be fighting the tide. For all its emotional resonance, the wave metaphor is one that professional historians of technology would criticize as simplistic.

Suleyman’s core analytical contribution is to conceive artificial intelligence as an omni-use, hyperevolving technology that is transforming a broad spectrum of other technologies, much as electricity transformed manufacturing, communication, urban life, and more. “Technologies of the coming wave are highly powerful, precisely because they are fundamentally general.” Deploying AI in chemical engineering and synthetic biology ups the ante on creating new materials and organisms, posing potential environmental and societal disruptions of unprecedented speed and catastrophic magnitude.

For all its emotional resonance, the wave metaphor is one that professional historians of technology would criticize as simplistic.

Without denying the possible benefits of AI and synthetic biology, Suleyman simply argues that too much attention is given to benefits at the expense of risks and threats. He attributes this tendency to what he calls “pessimism aversion”: motivated reasoning makes humans too optimistic. In his telling, worry about the coming wave is warranted because of AI’s “on-demand utility that permeates and powers almost every aspect of daily life.” AI is being adopted and tested in a wide variety of contexts, propelling development, decreasing costs, and spreading use. The technology is hyperevolving (through fast, iterative learning processes), developing with increasing autonomy (with AI systems, according to Suleyman, “conducting their own R&D cycles”), and can have asymmetric impact (by design or by hacking). As Suleyman predicts, “Containing something like this is always going to be much harder than containing a constrained, single-task technology, stuck in a tiny niche with few dependencies.”

Low-cost, widespread adoption constitutes an especially critical threat. Global competitors, rogue nonstate actors, or millenarian fanatics now possess the tools—or will soon—to disrupt global infrastructure, challenge established power structures, and threaten public health. In the past, such challenges would demand a massive build-up of military weapons, industrial capacity, or social organization; with artificial intelligence and synthetic biology, disruption of the global order might come from a local lab or laptop computer.

Suleyman’s third distinctive contribution is an argument for containment as the necessary precondition for managing the oncoming AI wave. He is a resolute critic of fellow Silicon Valley techno-philosophers under the spell of libertarian antistate sentiments. With an innovator’s can-do spirit, he outlines 10 concrete steps that could open the door to containment or prudent management. Suleyman rejects freeze-and-flight responses in favor of fight—or at least inventorying all possible tools that he can imagine to fight with.

His first recommendation is to “encourage, incentivize, and directly fund much more work” on safety engineering. “It’s time for an Apollo program on AI safety and biosafety.” Second, safety measures must be audited; such measures “will struggle to be effective if you can’t verify that they are working as intended.” Third, he wants to slow down AI development, perhaps with national export controls. “The wave can be slowed, at least for some period of time and in some areas,” he writes, and “buying time in an era of hyperevolution is invaluable.”

Suleyman rejects freeze-and-flight responses in favor of fight—or at least inventorying all possible tools that he can imagine to fight with.

Fourth and fifth, Suleyman argues that critics need to become makers (“credible critics must become practitioners”), and corporations must integrate high purpose into the pursuit of profit. Critics too often “fall into the pessimism-aversion trap that is hardwired into techno/political/business elites.” Unwilling to recognize their own impotence, they have too much faith in “writing theoretical oversight frameworks or op-eds calling for regulation.” Suleyman presents himself as a model here. He recalls the emphasis he placed on factoring in ethics and safety alongside profit in founding DeepMind.

Proposals six and seven address the state. Democratic governments, he writes, must “get way more involved, back to building real technology, setting standards, and nurturing in-house capability.” States can better steer AI toward the public interest if they are involved in creating it. Additionally, states should pursue moderating international agreements. “We need our generation’s equivalent of the nuclear treaty to shape a common worldwide approach … setting limits and building frameworks for management and mitigation that, like the wave, cross borders.”

Proposals eight and nine shift to individuals. Specific policies must be generally supported by national and international technoscientific cultures—as Suleyman writes, they’ll need “real, gut-level buy-in from everyone involved in frontier technologies.” And the public must also be on board. Throughout this section, Suleyman discusses what “we” need to do. This “we” refers variously to the author and coauthor, AI researchers and entrepreneurs, scientists and engineers generally, the global West, or all humanity. “When people talk about technology—myself included—they often make an argument like the following. Because we build technology, we can fix the problems it creates. This is true in the broadest sense. But the problem is there is no functional ‘we’ here.” Insofar as the invocation of the grand “we” is at present meaningless, it prompts an obvious follow-up: let’s build one. Recommendation nine is to create social or “we” movements for containment.

Finally, the tenth step is “coherence, ensuring that each element works in harmony with the others, that containment is a virtuous circle of mutual reinforcing measures and not a gap-filled cacophony of competing programs.”

Democratic governments, he writes, must “get way more involved, back to building real technology, setting standards, and nurturing in-house capability.”

Despite Suleyman’s awareness of danger, sincere effort at a response, and appreciation of current fragilities in liberal democracy, there is something deeply naive and unrealistic about many of his proposals. Take the idea of an Apollo program for AI safety. Suleyman ignores the difference, as economist Richard R. Nelson once framed it, between “the moon and the ghetto”—the difference between putting a man on the moon and raising people out of poverty. Apollo may have been a daunting problem, but an international AI safety program, even if funding were available, would be a wicked problem of the highest order.

Still, not all of his proposals are so crazy. Recent actions by both the European Parliament and the US Congress to regulate AI can be read as efforts to operationalize proposals six and seven. But can anyone genuinely imagine the European Union or United States as models for a global commonwealth? Is EU or US leadership sufficient to institute global rules? Is there a conceivable nation or multilateral body capable of detecting and preventing uncontained AI developments from posing global dangers? Is it possible that Suleyman is practicing the pessimism aversion he otherwise warns against?

In an epilogue, Suleyman makes a final appeal: he presents a vision for technology as a beneficial, progressive force that the elusive “we” must “never lose sight of.” “Too many visions of the future start with what technology can or might do and work from there.” Instead, society should first imagine how technology can “amplify the best of us, open new pathways for creativity and cooperation…. It should make us happier and healthier, the ultimate complement to human endeavor and life well lived—but always on our terms, democratically decided, publicly debated, with benefits widely distributed.” Alas, this sounds like the kind of bland cliché that ChatGPT would write.

Rashada Alexander Prepares the Next Generation of Science Policy Leaders

Since 1973, the American Association for the Advancement of Science’s (AAAS) Science and Technology Policy Fellowship (STPF) has brought thousands of scientists and engineers into the policy world. The fellowship is a very popular pathway into science policy, and AAAS fellows have featured in several episodes of our Science Policy IRL series. 

In this episode, we talk with the STPF fellowship director, Rashada Alexander. After completing a chemistry PhD and postdoc, she applied for an STPF fellowship that placed her inside the National Institutes of Health, where she worked for 10 years. 

Alexander talks to us about how her fellowship experience helped her look up from the lab bench and find meaning in her life. In particular, she found ways to build relationships, learn how to read a room, and navigate organizational structures—skills that are not always valued in scientific labs. She explains why scientists and engineers should apply for this transformational experience.

Transcript

Lisa Margonelli: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academy of Sciences and Arizona State University.

If you are even the tiniest bit curious about science policy, you’ve probably heard about AAAS, or the American Association for the Advancement of Science. AAAS’s Science and Technology Policy Fellowship is legendary. It’s how many scientists and engineers have moved from the research world to the policy world, and AAAS fellows have featured in several episodes of our Science Policy IRL series.

I’m Lisa Margonelli, Editor-in-Chief at Issues. In this episode, we’re going behind the scenes of the STPF Fellowship with the Fellowship Director, Rashada Alexander. She talks to us about how her fellowship experience helped her look up from the bench and find meaning in her life, and why you should apply for this transformative experience.

Rashada Alexander: Thank you so much. I’m excited to get to spend some time with you as well as the listeners. Hi, everybody out there.

Margonelli: Hi. So, we always start with this first question of how do you define science policy, but I want to start with something a little bit different this time. How do you define AAAS? It’s totally legendary in the science world, but is it an advocacy organization? Is it a news organization? Is it a membership organization? What is AAAS?

Alexander: AAAS is a lot of those things to many different people. So on the surface, we are a multidisciplinary scientific society. So you might have the Society for Neuroscience or you might have the American Psychological Association. We are looking across STEMM fields: science, technology, engineering, mathematics, medicine. So we’re looking across all of those. And the thing that we are most well known for is publishing the journal Science, which is one of the largest and most well-known and quite prestigious scientific journals in the game. And so we are often known for that, but there are a number of other things that AAAS does.

It is a membership-based organization, to your question, Lisa. There is also a way in which we keep our members, and folks who come into our orbit, informed about what is happening on the federal R&D budget side. What is happening in terms of policy that could affect the research enterprise? So in some ways we are news, in some ways we are publishing, and we are programs, like the Science and Technology Policy Fellowships, like the AAAS Mass Media Fellowships, like the Scientific Responsibility and Justice Division. So we do a lot of things. I think we have a lot of hats that we wear. Is that helpful?

Margonelli: Yeah, that is. So let’s start with the usual question. How do you define science policy?

I think about what is the intersection between STEMM as I have known it and experienced it, and policy, and that’s what I think about when I think about science policy.

Alexander: There is a pretty typical definition that a lot of folks give, which is there is science for policy, so scientific information, STEMM-related expertise that goes into and supports the making of policy and the implementation of it, and then there’s also policy for science. So how do we do stem-cell research? How do we think about the use of what are potentially illegal drugs in research to understand more about them? How do we do that? What is the policy for it?

The way that I think about it—it might be just as broad, but it resonates with me—is when I’m working, I think about what facet of the STEMM enterprise, be it research, be it workforce, be it education, be it clinical research, what facet of that could be useful in this work that I’m doing? So that might mean critical thinking skills that you are keen to develop when you’re in scientific technological training spaces. That might mean what is the newest advance that should affect how we implement this policy about groundwater management? What is our understanding about the workforce of folks that come out of the STEMM enterprise and where do they go and what do they do and how do they put that expertise to use? So I think about what is the intersection between STEMM as I have known it and experienced it, and policy, and that’s what I think about when I think about science policy: how are they just coming into contact with each other and needing to influence one another?

Margonelli: You’re the director of AAAS’s Science and Technology Policy Fellowship. So what does a day in your life look like?

Alexander: Well, a day in my life looks like consistently thinking about the whole of the program. There are multiple facets of the program to think about, but I’m thinking about the whole of it. So to give you a sense of what does that mean, so we have somewhere between 150 to 200 new fellows every year, so people who are just now starting in the program, and that means that they are just now embarking on this fellowship journey where they are going to be embedded in the federal government for that full year. They are going to contribute their STEMM expertise to policymaking and implementation, and they’re also going to learn how policy can affect some of the work of the STEMM enterprise. We end up having about 300 fellows in every class and we have 4,000-plus alumni, like myself. I was once a fellow as well. So what does it mean for that whole mass of folks? And that’s not counting the agencies that we engage with.

They are going to contribute their STEMM expertise to policymaking and implementation, and they’re also going to learn how policy can affect some of the work of the STEMM enterprise.

Now, the day-to-day, what does that mean? That might mean getting to talk to you about the program. That might mean getting to go somewhere with some undergraduates or graduate students who are trying to figure out, as I once was, where are they going to go? What are they going to do? What’s it going to look like? It might mean engaging with some federal partners to see, okay, so what kind of expertise are you seeking? Right? We have a number of different folks that come in to us from different fields. What are you seeking? What are the next areas we need to be thinking about?

One of the things that we’re working on right now is getting ready for our second rapid-response cohort in artificial intelligence. AI is a huge space right now where we’re thinking about policy, about legislation, about implementation, about its impact day to day on our lives. And we saw an opportunity last fall to say, “Hey, this is happening right now in real time. Let’s be responsive and let’s put some additional expertise on Capitol Hill with legislators right now as they’re trying to grapple with these issues.” And so that is what we did, and we are doing it a second time. So we did it last fall, had six fellows placed. We are going to do it again this year, hoping to build upon that. What does that look like? How do you do that? You got to fundraise. You got to think about the placement opportunities. What kinds of offices could these folks work in? Helping them understand what the opportunity might be, doing all of that and supporting the people who do even more of the day-to-day, boots-on-the-ground work of that.

Margonelli: So you’re constantly figuring out how to innovate or change within this space and how to activate more stuff.

Alexander: Yes.

Margonelli: That’s really interesting. I read a quote from one of the early… I don’t know if he was really a founder of the S&T Policy Fellowship. His last name was Golden, and he said something like, “Good ideas are contagious,” back in 1973. And he said basically that he saw this all as a site for innovation, which I think is a really interesting twist for scientists who may see innovation happening in the lab, but this is a big space for social innovation.

This program does something unique in terms of supporting federal service. So about 40% to 50% of us who come through the program end up staying in the government.

Alexander: It is. It is. I think that might’ve been William T. Golden, who was a treasurer of AAAS at one point. I agree completely with that. When I came into the fellowship, I found it transformative, for myself as a human being and as a professional. One of the reasons why was because I got to experience that contagiousness of good ideas. I walked into a space where some of the abilities and skills that were not always appreciated during my graduate and postdoc work, like relationship-building, actually situational cognizance, learning to read a room, figuring out how you actually navigate organization structures, how you navigate the realities of people, those were not always appreciated in the lab, in research environments. And when I came into the fellowship, it was like, “Oh no, we’re all thinking about that. We’re all trying to figure out how to do things, how to be effective, how to take what we learned at the bench and give it meaning beyond that.”

And so those good ideas, that being contagious, it’s completely true, and I think that this program does something unique in terms of supporting federal service. So about 40% to 50% of us who come through the program end up staying in the government, be it local, regional, federal. That’s pretty significant to expose people to policy in that way, to expose them to the inner workings of something so significant to civic life and then have that passion ignited enough that a lot of them stay. That means something, and that energizes, I think, governments. Bringing in those different perspectives, it energizes it, and so we get to be a part of that.

Margonelli: I think it’s really interesting because that 40% to 50% of people who stay behind, those are people who’ve devoted many years to their higher education and many years to following a vision of biology or geology or chemistry or psychology, and now they’re switching to a very different stream. Let’s go back to you and your career to get a sense… This is the other question we always ask. What was your career path into science policy? So you started, I think, as a chemist.

Alexander: Yeah.

Margonelli: And how did you decide on chemistry?

Alexander: My high-school chemistry teacher, Mrs. Canipe, one day, I think it was 10th grade, she cut a big chunk of sodium off of the sodium that she kept in a jar in the classroom and she put it into a very large beaker of water. Now, for anybody who might’ve forgotten or doesn’t know, sodium in its just plain old state is very reactive with water. So it blew up, and I was hooked. I was like, “Whatever this is, I want in. I want in. You just blew up a big chunk of stuff in front of me. I want to know, how does that work? What was that? What went into it? How does that start?” And so that was the beginning of it for me, and then from there, I just kept enjoying learning about how the world around us operated beyond what we can see and how that influences what we can see. The idea of genotype to phenotype, right? There’s a way that the underlying changes, and now that changes what you see, so it was always fascinating to me.

Margonelli: And you are in Kentucky?

Alexander: No, Alabama. So I’m originally from Florence, Alabama, born in Birmingham. Florence is a small town in the northwest corner of Alabama, near Huntsville. It’s often something that people hear in the defense space, air and space. And so that’s where I was raised, and then I went to college at Youngstown State University in Youngstown, Northeastern Ohio, always a good spot in my heart, and then I went to graduate school at the University of Kentucky in Lexington.

Margonelli: Oh, okay. And what did you think you were going to do as a chemist?

Alexander: I did not know. I definitely wanted to keep learning, but I honestly did not know. I feel like a lot of the career evolution that I’ve experienced came because I was trying to learn more. I wanted to understand more. So I did not know exactly what I was going to do, and that’s actually why I ended up going into graduate school. I got towards the end of my undergraduate work and I was like, “I don’t know if I have enough knowledge to obtain a job that I will like doing each day more often than I do not.” I was like, “I don’t know if I do have the knowledge for that.” And I was like, “Well, how do you get more knowledge after undergraduate? Graduate school!” So that was where I went, and wanted to just keep learning about how I could enjoy science the most and contribute to it the most, I think.

Margonelli: So you saw it also as a community?

Alexander: Yeah. Yeah, because you think about it, you go into graduate school and when you first get there… you’re over the moon, idolizing your advisor and the professors in the department, and by the time you leave, the people that you’re often going to for advice and that you’re learning the most from are your peers. And it’s not that you disregard the advice from those folks who have gone before you in a different way, but you see it in a more comprehensive way and you build community and you build understanding from the people who are working right alongside you at the same level.

Margonelli: All right. That’s really an interesting thing. That’s an interesting aspect of science that a lot of times looking backwards, people don’t see that those communities are so important, because they focus on the hierarchy, which becomes super important in all the tenure and all the things that happen later on, but that community is super important.

Alexander: Well, very important, because you’re also developing your professional identity in a lot of ways, whether you realize it or not. You’re starting to learn about the lines, right? In a lab, you might be the person making all of the ordering decisions for the whole lab, managing radioactive safety for the whole lab, setting the protocols up for the whole lab, especially if your advisor is new, as new as you might be when you start out in graduate school. So you are starting to form this professional identity and this set of skills, but often nobody says that to you in graduate school, right? It’s not seen that way, but that is what you are developing. And I don’t know if I would’ve gotten through graduate school sane without some of the fellow graduate students and postdocs I worked with. I might’ve gotten out sane without my advisor.

I didn’t understand some of the nuances that you find when you step outside of the bench, meaning everything is not always determined by the science at hand.

Margonelli: (laughs) So how did you end up getting an S&T fellowship in 2009? What led you to apply?

Alexander: It was the same thing you asked me earlier, right? What did I think I was going to do? I started thinking about what I could do. What would it feel like? How would it look? And I applied in 2006, I think, for the Fellowship. 2005, 2006. And I did not get in, and then I was quite naïve in a number of ways. I did not know what science policy could be. I did not know what an opportunity like this would look like, and I didn’t understand some of the nuances that you find when you step outside of the bench, meaning everything is not always determined by the science at hand. There are other decisions, other factors, other considerations. And it’s not quite like, “Oh, I’m going to work and become my advisor and then I’ll have the life they had and I’ll do the things they did.” That’s not how it works, nor should it be.

And so I started thinking about what else could I do, and that was how I found out about the fellowship the first time around. And then after it, I went and I did a second postdoc and I was happy about that, because I felt like if I was going to transition away from a bench-based career, I was going to do it no matter what and it would just be a matter of the timing. And so I wanted to do a postdoc, though, that allowed me to really focus on research in an environment that felt more supportive than my first postdoc had been. And so I went into a lab that was established, where I was very clear with folks up front, “Hey, I’m not going to become an academic researcher. That’s not what I’m doing. I don’t know what I’m going to do, but I’m not doing that.” And they were very welcoming and appreciative of that clarity and that candor. I think sometimes when you feel less worried about the immediate stuff, you can think about the higher-level things.

So then I started going, “What do I want to do? What do I want to be engaged in?” And what I had been doing all along but didn’t realize was that I clicked: I was smarter, stronger, faster when I was doing more than just my research. So if I was involved with the Graduate Student Association, the Black Graduate Professional Student Association, the Postdoc Association, all those things, it enriched me in a particular way, including in the work I did. I was happier doing the work. I was a little more resilient in certain spaces, which in research is really important.

I realized I want to benefit science. I just know it’s not going to be at the bench. It’s not my ministry, so let me go and make that easier for somebody else.

So I started thinking, “Okay, well what does it look like for you to engage more in those spaces to benefit science?” And I realized I want to benefit science. I just know it’s not going to be at the bench. It’s not my ministry, so let me go and make that easier for somebody else. And so that was how I started to think about what else could I do, and then at the same time, I started thinking about policy more. How does an institution decide what kind of benefits its postdocs do or don’t get? How is it decided that as a training grant recipient, I can’t set up an individual retirement account? Well, I can’t, because my income’s not taxable, so it’s not considered income that I can do that with. Who decides that? Where does that sit? What does that look like? And so that led me to just trying to figure out how those pieces come together, and so that was, I think, what seeded my path into policy.

Margonelli: Wow. I think it’s really interesting that you saw this as your ministry.

Alexander: I didn’t think about the language I was using until you said that, Lisa. Yeah, let’s go with it. Let’s go with it.

Margonelli: Yeah. So tell me about the ministry.

Alexander: I like to do multiple things in service of something. Throughout my career, people have often been like, “Don’t you just want to do one thing?” And I’m like, “No.” Who wants to do one thing? I want to do multiple things. I want to do them in service of something, preferably bigger than me, but I want to do multiple things, and I like being able to turn on a dime. And there are more places in the world that benefit from people who can talk across boundaries and who can talk across orgs and silos and sectors, because I don’t think we struggle because there’s not enough smart people. I think sometimes smart people, we don’t always know how to talk to each other and we don’t always know how to share information in ways. Sometimes our systems don’t facilitate that. So I think I’ve always liked the idea of making things better, and making things better seemed more likely if I expanded the impact that I had. So I guess maybe that’s my ministry. Maybe.

Margonelli: And so let’s talk a little bit about what happened when you got the S&T policy fellowship. You ended up at the NIH Office of Extramural Research, which also is a place that in some ways set some of that policy about the experience of grad students and postdocs all across the country at a big meta level. Did you get to choose the NIH or were you assigned to that office?

We work very hard to ensure whatever the funding source, whatever the fellowship is, that folks are not told what they’re going to work on. That is not touched. They get to decide that.

Alexander: So I got to choose it. A little bit of background about the S&T policy fellowship. STPF is our acronym for it. We have a very strong free-agency principle, so when fellows come into the program, we give them as much information as possible so that they can make the most informed choice for themselves. We work very hard to ensure whatever the funding source, whatever the fellowship is, that folks are not told what they’re going to work on. That is not touched. They get to decide that. We do finalist interview week, so we bring folks into the DC area so that they can be here for about a week, they can go to interviews at the different agencies that they might actually place at, they can engage with those folks, and see where they might want to be.

And so I did that. I didn’t know exactly where I wanted to go at the time. We encourage people in their candidate statements for the program to talk about where they could see themselves potentially working, because it helps us to understand how they’re thinking about this opportunity. And so I knew that the National Institutes of Health was the one I’d gotten funding from already as a graduate student and postdoc, and then I thought, “Well, wouldn’t it be interesting to see what those policies look like?”

So when I went in to talk with… It was Della Han who was my mentor during the Fellowship, and she’s still someone that I try to keep in touch with. I sat down with her and I asked her, why did she want to consider me for a placement spot there? Because I like to know why do you want to talk to me? What is it I did? Especially if it’s good, because then I can do it twice. So she started talking to me and I talked about some of the same things I just mentioned to you with regard to my ministry: what I really like to do, what jazzes me up.

It was transformative. It also punched me in the face a couple of times. It was not always fun, but it was always a pleasure.

And she was like, “Exactly that, Rashada. We are the office.” Because the Office of Extramural Research sets the policy and implements policy and figures out, how is this thing that came through legislation or came through an HHS—Department of Health and Human Services—mandate or something broader that came from the Office of Science and Technology Policy, how do you actually operationalize that? How do you make it a real thing? And that is a lot of problem solving. That’s relationship building. That’s critical thinking, for sure. And it’s also a little reactive. All those things really moved me. I liked the idea that I was having to adapt to things and I was having to figure stuff out. And so I got all of that and more. I loved that fellowship. I loved that experience. It was transformative. It also punched me in the face a couple of times. It was not always fun, but it was always a pleasure. Does that make sense?

Margonelli: It does, and transformative things are not always the nicest thing to experience at the time.

Alexander: Man, they are not.

Margonelli: So what do you mean when you say sometimes it punched you in the face?

Alexander: I had to learn some things. One of the biggest lessons I learned was know your audience. Read the room. I remember giving a presentation, and I had been informed that I needed to do it, but I think there was important context that was not shared with me as more of a newbie to the federal government, because every agency also has its own history.

If you don’t know much about the National Institutes of Health, NIH, let me tell you. There are 27 different institutes and centers, and we call those ICs. People often refer to them as ICs, institutes and centers, and every single one of them has its own culture. So if somebody’s told you that they know about one NIH IC, they know about one NIH IC. 27 different fiefdoms, all with their own rules. They were established at different times in history, different contexts. So they have their own way of doing things. If you’re going to operate effectively at NIH, you better know that. I had some learning to do. And so I went into that meeting and those people ate my face, Lisa. They ate it right off. And well they should have, because I needed to know that in order to be successful and in order to give them the information that they needed to be a part of solutions.

Sometimes you go into a space and people aren’t going to like what you say. That’s just life. But think about how you’re presenting it to them, who you’re presenting it to, so that when you walk out of the room, you still added some value, even if the value is just knowing, yeah, these people don’t like that and this is the conversation we had. Even that is valuable, because you may not have known it when you came in the room, but that doesn’t mean that being in the room is going to be pleasant.

Sometimes it’s not just how you mess up or that you mess up; it’s how you recover on the other side. Sometimes people will remember that more than the mess-up.

So they asked me all kinds of questions that I had no answer for, and I was just like, “I didn’t know this was going to happen.” And when I left it, I’ll be really candid, I went back to my desk and I cried. Oh, I cried. My little face was so sad. But, I knew I had to go back into my fellowship work and I had to work with some of those same people. I had to figure out how to recover. Hence, resilience. That was really important to understand, because sometimes it’s not just how you mess up or that you mess up; it’s how you recover on the other side. Sometimes people will remember that more than the mess-up. So that was a really good lesson for me, and sometimes you’re going to take some licks. That’s just life. You’re going to be the bearer of bad news.

Margonelli: I think that’s such an interesting lesson about science policy, about leaving the bench, because you might not realize that the bench itself is a culture. You think that it’s, I don’t know, access to truth or something, sometimes. That’s one of the myths. And then each agency has its own culture. Each branch of government has its own culture. The world has many, many cultures, and you’re pulling all of these things together and you’re also connecting one culture to another. So when you went into that IC, you were connecting it to whatever other thing.

It’s so interesting. I mean, at Issues, we have scientists and we have policy people writing and trying to communicate across disciplines with each other, but we also find that policymakers talk about things in a completely different way. They’re very emotional, down to earth, in a way that scientists sometimes aren’t. Scientists come to us with a five-point plan and we’re like, “Well, you’ve got to have these people in it, because the people are who the policymakers respond to.” And so it’s this whole interesting, fascinating process of translation and meaning-making. What you said about the ministry really resonates with me.

Alexander: Yes. The way you’ve laid that out is so eloquent. Thank you for that. I think that opportunities like STPF… because it is not the only one, it is one of the most well known, but it’s not the only one. Anything that, as my boss puts it, gets you to look up from the bench so you begin to think about, “How does what I did relate to other places? Who will care? Why will they care? What will it mean to them?” I think is so powerful because just realizing that you have to look up from it, right? Like you said, we don’t realize that the culture of research, it is a culture. It certainly is. And one of the best things that came from this program and from these kinds of opportunities for me was that chance to then look at the bigger picture and understand where I fit in it. And I’ll tell you, it has benefited me in multiple ways outside of my work.

One of the best things that came from this program and from these kinds of opportunities for me was that chance to then look at the bigger picture and understand where I fit in it. And I’ll tell you, it has benefited me in multiple ways outside of my work.

One of the things—I don’t know if this is appropriate, but I’ll tell you—my baby brother died last year unexpectedly. I’ve thought a lot about how can I honor him? How can I keep loving him? Which seems quite separate from science policy, but I’ve been thinking about could I set up a foundation for him? Could I think about something that is important? Some way I wanted to support him or love him or enable his excellence that I didn’t get to do that I could do for somebody else. And part of me thinking about that involves the knowledge I learned from this fellowship and this work now.

If you want to have a program that helps people, that does something in a community, where do you go to do that? Well, you want to look at the local and municipal government. You want to have conversations. You want to see what other initiatives are they doing that you can build upon? You want to think about who could be early adopters, supporters, ambassadors of what you’re doing. Are there other places where there’s already something like what you want to do? You don’t need to start something new. You can build on it. These are all lessons, smart lessons, in the science policy ecosystem that I don’t know if I would’ve had had I not come through something like this.

Margonelli: That’s very moving. Thank you for that, and I think what you’re speaking to is that science policy can be… it sounds like a very technical thing, science policy, and when I tell people I edit this science policy magazine, it sounds, wow, really technical, because science and policy are both highly technical words, but it can be deeply, deeply personal and satisfying and a way to connect to the world and make it a better place.

Alexander: Especially when you’re in the midst of things like election years, pandemics, environmental disasters, ever-going climate shifts. I think of it as connecting the expertise to action. We have the expertise. There’s action that we need to take. How do we connect those things? How do we enable it? I’m really happy to be in a space where I feel like I can do that and I can be contributing in some strong ways and enabling others to do something similar.

Margonelli: So I want to just ask you about what it was like to work for a foundation. You worked for the NIH as a fellow, and then you became an employee of the NIH, and you were there for some time, and then you made a decision to go off and, I think, work for a foundation. Can you tell us a little bit about what that was like and the opportunities in that other mode?

Alexander: Yeah. So I had been at NIH, probably I was getting close to eight, nine years, and I liked the work. The last position I had was as a program director in the National Institute of General Medical Sciences where I got to work on programs that build research capacity and infrastructure in those places that don’t get a lot of NIH money, so often places that are unfortunately called flyover states like Nebraska, Oklahoma, the Dakotas. I got to make sure that people in Montana and Mississippi got to do really strong research just like folks in California and Boston. That’s the way I thought about it, and that was good work. I enjoyed that.

I got the question about, “Don’t you want to just do one thing?” I started to get that more, and I realized I wanted to be creative in different ways. Subject matter expertise is often the currency of certain spaces, and I was just like, “I don’t want to do that.” And I started thinking, “Well, what if I left the federal government?” Scary thing. My mom was freaked out. She did not like that. She was like, “Don’t you leave that good government job.”

All of the things that are facing us, these challenges, the federal government can’t solve them all. We need to be working with folks in other sectors. How can we do that? How do they solve problems?

But I started thinking about what if somebody with my insight into the federal government left and worked in another sector? Because I could bring some of that. I could make connections to folks in government and also all of the things that are facing us, these challenges, the federal government can’t solve them all. We need to be working with folks in other sectors. How can we do that? How do they solve problems? I had all these questions and thinkings about it.

And so I found out about a position through actually the fellows network, the alumni network, at the Foundation for Food and Agriculture Research. People call it F-F-A-R, but most of the people who work there just call it FFAR. And it was founded in the 2014 Farm Bill as a complement to the US Department of Agriculture, USDA, the folks who inspect our food and look at our food and figure out how we’re going to feed all these people. A complement to them in developing public-private partnerships.

And so I was interested for a couple of reasons. One, it was connected still to the federal government, but it was a nonprofit. Two, FFAR has a really cool model where in order for them to use any dollar that they have, to spend one dollar, it has to be matched by a non-federal dollar, which means that before you even begin, you have to make sure people care about what you’re doing. You have to have people engaged as partners, which is a really cool thing and a little bit different, I think, from a lot of research where you just do it and then later you’re like, “Oh, nobody cared about that. Whoopsie.” In this case, you had to care right up front, and I liked that and I liked the potential impact that that could make.

So I said, “Well, let’s try it,” and I interviewed for a position there and was fortunate enough to get hired on, and that was really cool because it was a small organization, so less than 50 folks, and as my friend put it, “If you want to wear a hat, you can be the sheriff.” And that’s a cool space to be in, because I wanted to be more creative. I wanted to be able to flip around and try different stuff, and I could do that there. I got to get an HR function established, which is important for any organization, science-based or otherwise, because you got to deal with your people. I got to put out the first ever impact report for the organization to talk about what it was doing and its real value proposition. That was really cool. So it was awesome.

Margonelli: That’s great. So you’ve come to AAAS and you bring an intimate knowledge of what it’s like to have grown up in a flyover state, as you say. You bring a knowledge of what it’s like to live in the body of a Black woman.

Alexander: Yes.

Margonelli: You bring all this embedded lived knowledge. You know what it’s like to work at NIH. You know what it’s like to be outside of the government. You know what it’s like to be at a public-private foundation. And so where are you seeing opportunities to develop the vision of the AAAS S&T Policy Fellowship?

Alexander: I think about it in, I guess, three phases. So we celebrated our 50th anniversary last year. I’m thinking about what we are going to be talking about at the 60th anniversary. What’s going to be next? The 70th. The 80th. So this program’s been in existence for fifty-plus years now. There’s a lot of knowledge that we have about the training space, how it can be optimized, the kinds of skills that folks come out having, as well as the kinds of things that it’s useful for folks to be learning about and thinking about in their matriculation. And so I’m really thinking about how can STPF and the lessons it has learned, how can that be imparted towards the training space, towards the development space, before people get to the fellowship? What could that look like? So that’s one space.

Then I’m thinking about what happens post-fellowship. We have 4,000-plus alums. A lot of passion and commitment around public service, around using STEMM to make things better, using that expertise and those skills that we get to make the world better. So then how do we activate that? And so we are doing a lot of engagement with our alumni and thinking about what is useful to y’all after your fellowships. Are there opportunities for you to maybe come back into the federal government and bring a heightened level of expertise and insight? Are there spaces where we can be facilitating connections that have nothing to do with our specific programs but that we are uniquely situated to do? So I’m thinking about it. What can we give back and how can we pave the way forward?

And then the third place is right there in the middle. We have a core fellowship every year that ends up putting a lot of people in the federal government, and the AI cohort is an example. So I talked about us launching that rapid-response cohort. Part of what that has done for us, which I’m really excited about, is that we’re not going to have rapid-response cohorts in a particular subject forever. It doesn’t make sense, because at some point, it’s not rapid. But what we can do during that time is almost use it as a wedge to expand our understanding, and that of the core fellowship and alumni as well. And so we’ve been doing that with some very targeted artificial intelligence-relevant programming around policy, around equity, around the tools that we use. We’re actually looking to facilitate movement and useful connections.

I think that we talk about AI and it started to become like talking about Bitcoin or cryptocurrency where everybody says a word, but nobody knows what it means until somebody gets in trouble. And then we go, “Oh my goodness, it’s that.” And so I want us to move into a space where we see this tool, the sets of capabilities for what they can bring to us, we work intentionally with them, and we are also resilient in how we think about them, planning for potential failures, planning for challenges. How can we do that without freaking ourselves out so much that we don’t do anything?

How can we do that without freaking ourselves out so much that we don’t do anything?

I’m really interested in that, and so this AI cohort has allowed us to really think about that for the broader core cohort. What kind of programming can we put in front of them so that they don’t say, “Oh, that’s not my thing,” but they start to say, “Oh, this is where I need to be thinking about it. This might be the opportunity, this might be the risk.” So that is a space where I really keep looking at how can we keep optimizing that core cohort? How can we keep ensuring that they are as well set up for public service as possible?

Another thing right now is human beings. We come out of this pandemic. We got all this stuff in our little hearts. We’re all lonely and freaked out, and everything’s different, and that affects how you engage with other people. We’re putting all these people in the federal government. The federal government’s not always an easy place to be in, but it is a place where resilience and mindfulness are particularly powerful. How do we give them those skills too, in addition to some of the technical pieces? How do we ensure that we’re putting reasonable human beings in these spaces who are willing to act reasonably with other human beings? It’s all a part of that. So I’m thinking in those three buckets.

Margonelli: That’s really interesting. We’ve gotten a really strong sense of the questions that motivate you to do this work. That’s normally our last question here, so I just want to ask you a different question, which is: when people listen to this, if anyone could possibly listen to this conversation and still be on the fence about applying, what are your last guiding words in thinking about the application? You applied more than once, so resilience in application is important.

Alexander: Definitely.

Margonelli: And not feeling that it’s a system for rejection; it is a system for deeper understanding and engagement. So tell us a little bit about that in light of the application process.

Alexander: First off, when you apply for this program, it becomes really clear, and if it isn’t already to you, let me make it clear to you. This is not a research position. So you need to think about the things you’ve done beyond your research, and that alone can be so useful. I used to do chemistry demos for kids at public libraries and schools and stuff, and I never thought about that as science communication, but it totally is. You’re explaining what’s happening. You’re helping them understand a scientific phenomenon in this space and maybe igniting some curiosity. That’s a big part of science communication, and sometimes you can forget what you’ve done. And so a process like this, be it this or be it any other program, can help you remember what else you’ve done and what else energized you. I think a lot of times figuring out what makes you tick, what kind of things do you like doing, will often give you insight.

It’s also a place where you really learn the value of the saying, “Don’t let perfect be the enemy of good.” There is much to do. There is much to be approached, much that challenges us. Perfection is never really an option.

So I think the process itself is going to grow you. If you want to be challenged, if you want to grow, if you want to be sharpened… I think of iron sharpens iron. If you want that, this is a heck of a place to come. You want to do something interesting and challenging, this is a heck of a place to be, and it’s also a place where you really learn the value of the saying, “Don’t let perfect be the enemy of good.” There is much to do. There is much to be approached, much that challenges us. Perfection is never really an option.

You think about perfection, it’s always a subjective term, right? What’s perfect? Well, what’s the purpose, right? This is a space that made me really think about something being fit for purpose. Can it be fit for what you need? Is it good enough for what you need? Can it move us forward? If I do this today, even if it’s hard and not what I want, will it set the stage for something better later? Often, the answer is yes, and this program, I think, helped me understand the value of that, because that is building resilience too.

Margonelli: Thank you so much.

Alexander: Thank you.

Margonelli: This was just a really great conversation, not just about this fellowship, but about creating meaning in your life.

Has this conversation inspired you to become an AAAS Science and Technology Policy fellow? Check out our show notes to find out how to apply. Applications close November 1st.

Is there something about science policy you’d like to know? Let us know by emailing us at podcast@issues.org or by tagging us on social media using the hashtag #SciencePolicyIRL.

If you’re listening to this, you’re probably pretty passionate about science policy. Go to issues.org/survey to participate in our survey of the science-policy community. That survey closes on August 15th. We’d love to hear from you.

Please subscribe to The Ongoing Transformation wherever you get your podcasts. Thanks to our podcast producer, Kimberly Quach, and our audio engineer, Shannon Lynch. I’m Lisa Margonelli, Editor-in-Chief at Issues in Science and Technology. Thank you for listening.

Decolonize the Sciences!

In “Embracing the Social in Social Science” (Issues, Spring 2024), Rayvon Fouché covers the full range of racialized phenomena in science, from criminal use of Black bodies as experimental subjects to the renaissance he maps out for new anti-racism networks, programs, and fellowships. His call for “baking in” the social critique, rather than adding it as mere diversity sprinkles on top, could not be clearer and more compelling.

Yet I know from my experience on National Science Foundation review boards, at science and engineering conferences, and in conversations with all sorts of scientific professionals that this depth is almost always mistranslated, misidentified, and misunderstood. Fouché is calling for transformation, but most organizations and individuals are hearing only a call to eliminate bias. What is the difference?

The distinction is perhaps most obvious in my own field of computing. For example, loan algorithms tend to produce higher interest rates for Black home buyers. Ethnicity is not an explicit variable: data that merely correlate with “being Black” can be inferred computationally, even without any human directive to do so. That makes the problem difficult to attack through the legal system, but tempting to treat as a purely algorithmic one.
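To make the mechanism concrete, consider a minimal, synthetic sketch (fabricated data and hypothetical variable names, not any actual lending system) of how a model that is never shown race can still reproduce a racial gap in loan rates through a correlated proxy such as zip code:

```python
# Synthetic illustration of proxy discrimination: the model never sees race,
# but a correlated feature (zip code, standing in for residential segregation)
# lets the racial gap in historical rates pass straight into its predictions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000

race = rng.integers(0, 2, n)             # 0/1 group label; never a model input
zip_code = race + rng.normal(0, 0.3, n)  # proxy strongly correlated with race
income = rng.normal(50, 10, n)           # independent of race in this toy world

# Historical rates embed bias through the proxy, not through race itself.
historical_rate = 5.0 + 1.5 * zip_code - 0.02 * income + rng.normal(0, 0.1, n)

X = np.column_stack([zip_code, income])  # a "race-blind" feature set
model = LinearRegression().fit(X, historical_rate)
predicted = model.predict(X)

# The race-blind model still reproduces the gap between the two groups.
print("mean predicted rate, group 0:", round(predicted[race == 0].mean(), 2))
print("mean predicted rate, group 1:", round(predicted[race == 1].mean(), 2))
```

Dropping the protected attribute changes nothing here, which is why prohibitions on using race as an explicit variable are so easy for an algorithm to satisfy while leaving the disparity intact.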

As important as the elimination of bias truly is, it creates the illusion that if we could only eliminate bias, the problem would be solved. Eliminating bias does not address the more significant problem: in this case, that homes and loans are extremely expensive to begin with. The costs of loans and dangers of defaulting have destroyed working-class communities of every color; and “too big to fail” means that our algorithmic banking system turns risk for the entire nation’s economy into profits of the banks’ own making. And that is not just the case for banking. In health, industry, agriculture, and science and technology in its many forms, eliminating bias merely creates equal exploitation for all, equally unsustainable lives, and forms of wealth inequality that “see no color.”

As important as the elimination of bias truly is, it creates the illusion that if we could only eliminate bias, the problem would be solved.

My colleagues will often conclude at this point that I am pointing toward capitalism, but I have spent my career trying to point out that communist nations generally show the same trends: wealth inequality, pollution, failure to support civil rights. And that is, from my point of view, largely because they use the same science and engineering, formulated around the principles of optimization for extracting value. Langdon Winner, the scholar known for his “artifacts have politics” thesis, was wrong, but only in that the destructive effects of technological artifacts occur no matter what the “politics” is. Communists extract value for the state, and capitalists extract value for corporations, but both alienate it from the cycles of regeneration that Indigenous societies were famously dedicated to. If we want a just and sustainable future, a good place to start is to decolonize our social sciences, not just to critique science for failing to embrace them, and perhaps to develop that work as a mutual inquiry across the divide.

What would it take to create a science and technology dedicated not to extracting value, but rather to nurturing its circulation in unalienated forms? Funding from NSF, the OpenAI Foundation, and others has kindly allowed our research network to explore these possibilities. We invite you to examine what regenerative forms of technoscience might look like at https://generativejustice.org.

Ron Eglash

Professor, School of Information

University of Michigan

Rayvon Fouché argues that social science, especially those branches that study inequity, must become integral to the practice of science if we want to both address and avoid egregious harms of our past and present. Indeed, methodologies and expertise from the social sciences are rarely included in the shaping and practice of scientific research, and when they are, they are only what Fouché likens to “sprinkles on a cupcake.”

Metaphors are essential to describing abstract processes, and every gifted science teacher I ever had excelled at creating them to help students understand how invisible forces can create such visible effects and govern the behavior of the things that we can feel and see. As the physicist Alan Lightman has observed, “metaphor is critical to science.” The metaphors that we use matter, perhaps especially in regard to scientific understanding and education, and I find the metaphor of “science” as a cupcake and the social sciences as sprinkles very useful.

The many disasters and broken promises that have destroyed Black and other marginalized peoples’ trust in the medical establishment might have been averted had experts from other fields been empowered to make persuasive arguments against those practices beforehand. To move to another metaphor, Fouché describes the long-standing effects of “scientific inequity” as practiced upon Black populations as a “residue,” a sticky trace that persists across historical time. The image vividly explains why some people of color’s mistrust of science and unwillingness to engage with it as a career can be understood as an empirically informed and rational decision.

Some people of color’s mistrust of science and unwillingness to engage with it as a career can be understood as an empirically informed and rational decision.

As Fouché shows, science becomes unethical and uncreative when it excludes considerations of the social and the very real residues of abuse and disregard that produce disaffection and disengagement. At the same time, the “social” has itself been the object of mistrust and cynicism, with some observers asserting that governments ought not be responsible for taking care of people, but rather that individuals and families need to rely upon themselves. Such ideas helped fuel the systematic defunding of public higher education and other “social” services. STEM fields and occupational fields such as business became more popular because they were seen as the best routes for students to pay off the sometimes lifelong debt of a necessary college education. Correspondingly, the social sciences and the humanities have become luxury goods. The state’s unwillingness to support training in these fields is one reason that nonscientific expertise is viewed as a sprinkle, sometimes even by those of us who practice it and teach it to others.

At the same time, this expertise has never been needed more: the fascination, excitement, and horror of artificial intelligence’s breakneck and generally unregulated and unreflective adoption suggest that we greatly need experts in metaphor, meaning, inference, history, aesthetics, and style to “tune” these technologies and make them usable, or even to convincingly advocate for their abandonment. In a bit of good news, recent research on “algorithmic abandonment” demonstrates that users and institutions will stop using applications once they learn that they consistently produce harmful effects. Yet it’s hard to “embrace the social” when there is less of it to get our arms around. The scientific community still needs what Fouché calls a “moral gut check,” akin to Martin Luther King Jr.’s 1967 encouragement to “support the sustenance of human life.” For to care about the social is to care for each other rather than just for ourselves.

Lisa Nakamura

Gwendolyn Calvert Baker Collegiate Professor, Department of American Culture and Digital Studies Institute

University of Michigan, Ann Arbor

Rayvon Fouché’s call to “lean into the social” and to reckon with science’s “residues of inequity” must be answered if scientists are to help create more equitable and just societies. Achieving this goal, he admits, will require the difficult work of transforming the “traditions, beliefs, and institutions” of science. I concur.

Yet I want to make clear that the science requiring transformation includes the social sciences themselves. After World War II, the natural, physical, and social sciences were all cut from the same conceptual cloth, one that assumed that truth and justice depended upon the separation of science from society.

For Fouché, this separation must end. His reasons are compelling: without fundamental knowledge of and engagement with the communities and societies out of which sciences arise, scientists operate in moral and social vacuums that too often lead to harm, even when good is intended. Yet the idea that science should exist in a “pure” space apart from society is deeply baked into today’s scientific institutions.

It could have been otherwise. After the US bombing of Hiroshima and Nagasaki, some prominent politicians and scientists called for an end to the purposeful seclusion of science from society that the Manhattan Project embodied. However, a countervailing force emerged in a troubling form: pseudoscience. At the same time the United States funded physicists to create atom bombs, Germany and the Soviet Union—building on efforts begun in the United States—bluntly directed genetics into policies of racial purification. In response, most geneticists argued that the murderous force of the resulting racial hygiene laws lay not in their science, but rather in its perversion by political power. As a result, many geneticists retreated from their political activism of the 1920s and ’30s.

The idea that science should exist in a “pure” space apart from society is deeply baked into today’s scientific institutions.

For their part, prominent social scientists, including the pioneering sociologist Robert K. Merton, argued that science was a wellspring of the ethos that democracies needed, and that to ensure this ethos survived, science should exist in an autonomous space. Just like the markets of classical political economy, science ought to be left alone. This argument expanded to become a central tenet of the West during the Cold War.

In a matter of a few short years, then, science transformed from a terrifying destructive force that needed to be held in check by democratic institutions to one that would itself protect democracies. The natural, physical, and social sciences all embraced this idea of being inherently good and democratic—and thus to be protected from abuse by the unjust concentration of government power. This historically and institutionally entrenched illusion has left contemporary sciences, including the social sciences, poorly equipped to recognize and respond to the many and consequential ways in which their questions inextricably entangle with questions of power.

I agree with Fouché that more trustworthy sciences require addressing these entanglements. The critical question is how. My colleagues and I are currently seeking answers through the Leadership in the Equitable and Ethical Design (LEED) of Science, Technology, Engineering, Mathematics, and Medicine initiative. After decades of building institutional practices and protocols designed to separate science from society, this task will not be easy. Those of us involved with LEED of STEMM look forward to working with Fouché and other visionary scientific leaders to rebuild scientific institutions not around Cold War visions of security and separation, but rather around contemporary critical needs to forge the common grounds of collaboration.

Jenny Reardon

Professor of Sociology

Founding Director, Science and Justice Research Center

University of California, Santa Cruz

Living in Viele’s World

For generations, New York City engineers have consulted a hand-colored map of Manhattan’s waterways from 1865, a masterwork that depicts creeks and canals, marshes and meadows in vivid shades of blue, green, and pink. If you want to know whether a property might flood, where ground might be unstable, or whether a basement is likely to fill up with water, the map has your answers. “A swell lithograph, long as a Buick,” one admirer enthused. It was drawn by Egbert Ludovicus Viele, a West Point–trained engineer who wanted to pictorially explain the health risks of filling up the city’s natural watercourses and diminishing natural drainage in the name of urban development. To that end, the Viele Map, as it’s now known, showed the streams, swamps, pig sties, and shanties.

Before New York City became a metropolis, with its underground web of subways and utilities running under the bristles of high rises, there was Viele. His map is the informational substrate—the foundation—that made today’s city possible. But outside the offices of urban planners and civil engineers, very few people know Viele’s name. They don’t know that he proposed the first subway for New York—the Arcade Under-Ground Railway—and supported the creation of the Board of Public Health. Or that he was deeply involved in building both Central Park and Prospect Park. If they do know of a foundational New York architect, it is Viele’s rival, Frederick Law Olmsted, who is given full credit for designing Central Park. Olmsted’s New York freezes the heart of the city in a bucolic pastoral, a kind of never-never land. By contrast, Viele’s city is still growing—up, down, betwixt, and beyond. Viele’s story is as much about how New York became New York as it is about the politics of recognition—how credit is assigned to engineers.

Viele became interested in the connection between sanitation, engineering, and health when he was a lieutenant during the Mexican-American War, sitting frustrated on the north bank of the Rio Grande while cholera killed more of his men than actual combat. At the time, miasma theory explained cholera as a product of offensive odors from rotting refuse. The British crusader Edwin Chadwick even envisioned a “pure air” solution, with tall tubes stretching into the heavens to draw fresh oxygen down for piped delivery to dwellings at a price. Viele was convinced that good sanitation could protect people from disease, and he later recounted his sense that topography, particularly places where water pooled, was driving infection.

Viele envisioned a city that might grow willy-nilly but in which people’s lives could be improved by sanitation, by subways and elevated trains, and by canals. He lived in the real world—a working city with unruly aims and desires.

Viele had a chance to test his ideas when he returned to New York in the mid-1850s. A few years earlier, New York City had acquired a parcel of land, around 800 acres, through eminent domain. The space, to be turned into a park, would serve as the “city’s lungs,” reflecting the belief that urban progress could enable social and moral growth. The task of converting that “cheerless waste into a scene of rural beauty,” in the words of one historian, fell to Viele. He meticulously mapped the muddy land, pooling patterns, and underground streams of the purchased plots, which he described as “perhaps the most unpropitious that could have been selected for such a purpose on the whole continent.”

As Central Park’s first chief engineer, Viele emphasized drainage measures to ensure the park didn’t turn into “a pestilential spot, where rank vegetation and miasmic odors taint every breath of air.” He aimed to transform the low swamp into a grassy meadow with a new reservoir, a driving loop for carriages, a sports field, a military parade ground, a botanical garden, and winding trails. Viele’s survey of both natural and installed drainage was detailed. He believed that his modern masterwork would rival the grand gardens of Berlin, Paris, and London, and still be completed under the allotted budget of $1.5 million. Viele’s design, historian Jon Scott Logel notes, followed the maxim that “‘the greatest art is to conceal art’ through a mixture of ‘the natural’ and the ‘artificial.’”

Months into construction, in September 1857, a dapper Frederick Law Olmsted walked past the job seekers lined up outside Viele’s shack office. Olmsted, an architect, carried an influential political endorsement as his letter of recommendation. Olmsted described himself as an “unpractical man” who valued “townsite consciousness,” an approach to urban design that privileged public parks as a way of democratizing healthful air and light. In contrast, Viele believed that combining the tools of organized water-waste sewerage and systematic sanitary surveys could promote hygienic environments. Their contrasting visions would both prove crucial to comprehensive city planning.

The two men were very different from each other. Olmsted was charming; Viele was crotchety. Olmsted’s vision was cheery, while Viele’s was weary. Olmsted was a Republican, Viele a Democrat—leading Viele to suspect that the city Republicans had planted Olmsted to derail his visionary proposal for Central Park. Viele silently glanced at Olmsted’s letter in his office and ignored him for the rest of the day, dismissing him because he wanted a more “practical man.” Yet Olmsted was persistent. Several visits later, Viele gave him his first assignment. Olmsted would remember the moment when Viele “exhibited his practical ability by leading me through the midst of a number of vile sloughs in the black and unctuous slime of which I sometimes sank nearly half-leg deep.”

The city’s political divisions were sharp, and Republicans wanted to curb the Democrats’ influence over the project. The English-born architect Calvert Vaux convinced the park commissioners that Viele’s design was mediocre and mundane, something one would expect from a mere engineer. Viele was ousted, his design dropped. Instigated by Vaux, the city sponsored a design competition for the layout of the park. Vaux paired up with Olmsted, and in April 1858, they won the contest and a $2,000 prize for their naturalistic Greensward plan, now referred to as the “Sistine Chapel of landscape design.” Olmsted was appointed Central Park’s chief architect, a post that blended Viele’s former duties as chief engineer with Olmsted’s as superintendent. Viele contended that Vaux and Olmsted stole his plan, and a court later ruled in Viele’s favor and awarded him some $8,000. (Viele and Olmsted clashed again over the design of Prospect Park in Brooklyn, whose initial planning began under Viele in late 1860.)

Viele and Olmsted were both agents of reform and in constant competition. The contrast in their design philosophies, however, alludes to a status game that still prevails in engineering today.

Viele’s activism was motivated by his belief that the developers shouldn’t ignore Manhattan’s natural topography. In 1865, when the Citizens Association of New York campaigned for the creation of a Board of Health, Viele was a strong proponent. One of his maps even appeared in a report, described as “medical topography.”

In the 1870s, Viele gained more influential positions, which he used to develop the Upper West Side and propose mass transit options using elevated railroads. And in 1883, to Olmsted’s dismay, Viele became president of the Parks Commission that had once fired him. Olmsted grumbled that for 25 years, it had been Viele’s “principal public business to mutilate and damn the park.”

The design philosophies Viele and Olmsted championed, both vital for public health, offer insights into the politics of recognition evident across engineering—and how we as a public and as professionals assign prestige to one line of work over another. Viele and Olmsted were both agents of reform and in constant competition. The contrast in their design philosophies, however, alludes to a status game that still prevails in engineering today. An innovator who, for example, installs a “sheep meadow” in the middle of a metropolis receives many laurels and much praise—while those who do operations and maintenance fly under the radar of cultural recognition. Prestige is often mistaken for excellence, but it lands somewhat indiscriminately. As scholar Lewis Leopold wrote in 1913, “prestige … throws its cold electric glitter on strong and weak, useful and useless, good and bad, true and false, beautiful and ugly alike.”

The politics of prestige determine how society assigns fame, manufactures eminence, propagates popularity, and ultimately judges one individual over another. “Due recognition is not just a courtesy we owe people. It is a vital human need,” notes scholar Charles Taylor, pointing to how identity is partly shaped or misshaped by the presence or absence of recognition. But this dynamic also subtly shifts how and what gets remembered about the world, causing society to overlook promising opportunities for the future. In engineering, the status accrued by high prestige may end up depriving society of visionaries who see the potential in building sewers and draining swamps to create better lives for all.

But there is more to be gained from the contrast between Viele and Olmsted. Viele envisioned a city that might grow willy-nilly but in which people’s lives could be improved by sanitation, by subways and elevated trains, and by canals. He lived in the real world—a working city with unruly aims and desires. Olmsted wanted to create a planned city with an orderly hierarchy of nature and commerce. It was a fantasy of pastoral hygiene—a contradiction that was at once idealistic and unrealistic for a polyglot city. In essence, Olmsted imagined a city crystallized around a park where time stopped before the Industrial Revolution. Viele’s New York was a protean space, relentlessly renovating and reconfiguring around its inhabitants and their needs. Today, the world may admire and amplify Olmsted’s vision, but New Yorkers live in the city that Viele imagined.

Today, the world may admire and amplify Olmsted’s vision, but New Yorkers live in the city that Viele imagined.

“It is a difficult matter to persuade people to look forward to the comfort of generations to come after them, when they have to furnish the means for it,” Viele wrote in 1860. “And nothing is so essential to the success of a system of sewerage, as to make it sufficiently extensive and comprehensive in the beginning.” Viele’s forward-thinking work connected the built environment—improved drainage and public works—with promoting health-conscious behavior. He also urged a kind of self-sacrifice for future generations: investment in the lives of those who come after. “In this sense,” Logel observes in his history of New York City’s design, “Viele was a precursor to the progressives who emerged in the last two decades of the nineteenth century.” The persistent usefulness of Viele’s map should inspire investment in the kind of foundational, life-sustaining engineering that benefits everyone.

Between 1861 and 1863, Viele served in the Union army as a military governor in Virginia and commanded Civil War campaigns in Georgia and South Carolina. Although his service seems to have been unexceptional, in later years he waxed poetic about his experience accompanying Abraham Lincoln to a battlefield. Viele described the president as “kind, genial, thoughtful, tender-hearted, magnanimous,” recalled enjoying the “very closest intimacy” with him, and marveled at his “wonderful fund of reminiscence and anecdote.”

Viele later became a congressman, but he felt he never got the public recognition he deserved. He ordered for himself a 31-foot-tall pyramid tomb with columns and sphinxes—then the largest in the West Point cemetery. Paranoid about the prospect of being buried alive in his marble coffin, he had a buzzer installed so he could get help if needed. It was never used, except by some pranksters.

A Look at Differential Tuition

In “Tools That Would Make STEM Degrees More Affordable Remain Unexamined” (Issues, Spring 2024), Dominique J. Baker makes important points regarding the state of college affordability for students pursuing STEM majors. As a fellow scholar of higher education finance, I wish to elaborate on the importance of disaggregating data within the broad fields of STEM due to differences in tuition charges and operating costs based on individual majors.

First, Baker notes that differential tuition is prevalent at public research universities, citing data indicating that just over half of all institutions charged differential tuition for at least one field of study in the 2015–16 academic year. I collected data on differential tuition policies across all public universities for 20 years and found that 56% of research universities and 27% of non-research universities charged differential tuition in engineering in the 2022–23 academic year, up from 23% and 7%, respectively, in 2003–04.

Differential tuition policies primarily affect programs located within engineering departments or colleges, with computer science programs also being frequently subject to differential tuition. There are two likely reasons why these programs most often charge higher tuition. The first is that student demand for these majors is strong and the market will bear higher charges. This is often why business schools choose to adopt differential tuition, and it likely contributes to decisions to charge differential tuition in engineering and computer science.

Differential tuition policies primarily affect programs located within engineering departments or colleges, with computer science programs also being frequently subject to differential tuition.

The other reason is that engineering is the field with the highest instructional costs per student credit hour, based on research by Steven W. Hemelt and colleagues. They have estimated that the costs for electrical engineering are approximately twice as much as for mathematics and approximately 50% more than for STEM fields such as biology and computer science. Add in high operating expenses for research equipment and facilities, and it is not surprising that engineering programs often operate at a loss even with differential tuition.

The higher education community has become accustomed to detailed data on the debt and earnings of graduates by field of study, which has shown substantial variations in student outcomes within the broad umbrella of STEM fields. Yet there is also substantial variation by major in both the prices that students pay and the costs that universities face to educate students. Both of these areas deserve further attention from policymakers and researchers alike.

Robert Kelchen

Professor and Head, Department of Educational Leadership and Policy Studies

University of Tennessee, Knoxville

What Can Artificial Intelligence Learn From Nature?

Refik Anadol Studio, "Living Archive: Nature"
Living Archive: Nature showcases the output from the Large Nature Model (LNM) by Refik Anadol Studio.

Refik Anadol Studio in Los Angeles maintains a research practice centered around discovering and developing novel approaches to data narratives and machine intelligence. Since 2014, the studio has been working at the intersection of art, science, and technology to advance creativity and imagination using big data while also investigating the architecture of space and perception.

To explore how the merging of human intuition and machine precision can help reimagine and even restore environments, the studio’s generative artificial intelligence project, the Large Nature Model (LNM), gathered more than a half billion data points about the fauna, flora, fungi, and landscapes of the world’s rainforests. These data are ethically sourced from publicly available archives in collaboration with the Smithsonian Institution, National Geographic Society, and London’s Natural History Museum.

In addition to working with existing image and sound archives in public domains and collaborating with renowned institutions, studio director Refik Anadol and his team ventured into rainforests in Amazonia, Indonesia, and Australia. They employed technologies such as LiDAR and photogrammetry, and captured ambisonic audio and high-resolution visuals to represent the essence of these ecosystems. With support from Google Cloud and NVIDIA, the team is processing this vast amount of data and plans to visit thirteen more locations around the world, developing a new understanding of the natural world through the lens of artificial intelligence. 

The team envisions generative reality as a complete fusion of technology and art, where AI is used to create immersive environments that integrate real-world elements with digital data. “Our vision for the Large Nature Model goes beyond being a repository or a creative research initiative,” says Anadol. “It is a tool for insight, education, and advocacy for the shared environment of humanity.” The LNM seeks to promote awareness about environmental concerns and stimulate inventive solutions by blending art, technology, and nature. The team trains the models to produce realistic artificial sounds and images, and showcases these outputs in art installations, educational programs, and interactive experiences.

Anadol sees the LNM’s potential to enrich society’s understanding and appreciation of nature as well as to supplement existing art therapy methods. Making the calming effects of nature available to people, even when they are unable to access natural environments directly, can be particularly beneficial in urban settings or for people with limited mobility.

In the future, the intersection of technology, art, and nature will become increasingly vital. Projects like the LNM exemplify how artificial intelligence might serve as a powerful tool for environmental advocacy, education, and creative expression. As the integration of sensory experiences and generative realities continues to push the boundaries of what is possible, the studio hopes to inspire collective action and a deeper appreciation for the environment.

Refik Anadol Studio, "Living Archive: Nature"
The LNM transforms more than 100 million images of the Earth’s diverse flora, fauna, and fungi into breathtaking visuals. Processing extensive datasets from rainforest ecosystems, the LNM enables the creation of hyperrealistic environmental experiences. The development of the LNM is grounded in extensive interdisciplinary research and collaboration. Generative AI sets a new benchmark for how technology can be used to promote a deeper engagement with the planet’s ecosystems.

Bringing Communities In, Achieving AI for All

Hopes for the future of artificial intelligence would seem to be bright. There is, after all, a great deal of utility in AI already. AI tools can provide high-quality real-time translation for well-resourced languages like English—a major technological breakthrough. AI systems also are poised to enhance the accuracy of cancer screening and improve other areas of health care delivery.

Yet much of the discourse surrounding AI is rather gloomy. It’s not just that people worry about possible effects of generative AI on their livelihoods; innovation-driven employment disruptions are neither unusual nor insurmountable. More concerning is the mounting evidence showing that the output of AI models exacerbates social inequity and injustice.

Facial-recognition technology, famously, is proving to be a tool of oppression, as many have feared. Reports of AI triggering false arrests of Black people are becoming routine. Municipalities are using facial-recognition cameras to aggressively surveil and police residents in public housing, many of whom are Black. Against hopes that AI would reduce bias in criminal justice, its use so far has magnified the system’s structural inequalities. Meanwhile, major AI firms like OpenAI are exploiting overseas sweatshop labor to train algorithms. And AI tools meant to benefit people with disabilities are having the opposite effect. This situation is creating real harm for people who are already disadvantaged, while also amplifying distrust in science and government.

Proposed responses to these equity and justice concerns typically amount to small tweaks, often of a technical nature. The thinking seems to be that policymakers, academics, and the technical community can solve AI’s problems by identifying statistical biases in datasets, designing systems to be more transparent and explainable in their decisionmaking, and exercising oversight. For instance, experts ask how government agencies might evaluate the safety and efficacy of algorithms. In parallel, the technology industry has tried to educate developers about the impact of social biases on AI algorithms and has suggested minimal “fairness solutions” also focused on bias.

We must ask ourselves if we really believe that marginalized people should be content to leave their fates to the tinkering of governments and corporations when such measures have had little impact in the past. Where is the input, in this equation, of marginalized people themselves? If we are concerned about equity in the age of AI, shouldn’t those with the most at stake have an important role in shaping the governance agenda? Then too, relying only on governance by state authorities and commercial operatives means ratifying and reinforcing the concentrations of economic and political power that already accrue to a small number of well-connected businesses. Their technical revisions to the mechanics of AI may address some harms built into the technology so far but will always be behind the curve of inequities that emerge as AI makers exercise, and strive to protect, profit-seeking prerogatives that inevitably displace their stated commitments to just outcomes. And simply extending regulatory oversight does not encourage developers to design AI that promotes the welfare of the disadvantaged.

Technical revisions to the mechanics of AI may address some harms built into the technology so far but will always be behind the curve of inequities that emerge.

Better solutions lie in a more inclusive innovation ecosystem in which all players—not just regulators and lobbyists—take responsibility for creating equitable and just AI. It is important not only that AI not discriminate but also that it be proactively marshaled for the benefit of all of society. In other words, we should be thinking about how to ensure that AI is not just here for profit but also to serve those who, in being served, don’t generate financial returns.

To this end, public and philanthropic research funders, universities, and the tech industry should be seeking out partnerships with struggling communities, to learn what they need from AI and build it. Regulators, too, should have their ears to the ground, not just the C-suite. Typical members of a marginalized community—or, indeed, any nonexpert community—may not know the technical details of AI, but they understand better than anyone else the power imbalances at the root of concerns surrounding AI bias and discrimination. And so it is from communities marginalized by AI, and from scholars and organizations focused on understanding and ameliorating social disadvantage, that AI designers and regulators most need to hear.

A Community Agenda for AI: Design from the Bottom Up

Progress toward AI equity begins at the agenda-setting stage, when funders, engineers, and corporate leaders make decisions about research and development priorities. This is usually seen as a technical or management task, to be carried out by experts who understand the state of scientific play and the unmet needs of the market.

But do these experts really understand which needs are unmet? When experts steer innovation, they are deciding which problems are important and how they should be understood and solved. Often, the problems deemed important are those that, in being addressed, yield profit. But sometimes developers try to solve social problems, usually with minimal input from the populations most affected. This leads to misdiagnosis. Such is the story of the One Laptop per Child program, for instance. Developed by the MIT Media Lab and funded by international donors, the initiative was supposed to improve education for children from low-income families around the world by ensuring that the children had access to internet-connected computers. But the project failed because the computers were not easy for the children to use, broke frequently and were difficult to repair, and relied on electricity that was at best intermittently available. Even when the computers worked, the content built into them contributed little to the realization of local educational goals.

When experts steer innovation, they are deciding which problems are important and how they should be understood and solved.

Centering marginalized communities in AI agenda-setting would help to avoid such outcomes by increasing the probability that the design and deployment of new technologies reflects grassroots knowledge and concerns. This approach may be slower and harder to scale than technology development based solely on expert opinion, but it is more likely to produce social benefits.

A heartening example comes from Carnegie Mellon University, where computer scientists worked with residents in the institution’s home city of Pittsburgh to build a technology that monitored and visualized local air quality. The collaboration began when researchers attended community meetings where they heard from residents who were suffering the effects of air pollution from a nearby factory. The residents had struggled to get the attention of local and national officials because they were unable to provide the sort of data that would motivate interest in their case. The researchers got to work on prototype systems that could produce the needed data and refined their technology in response to community input. Eventually their system brought together heterogeneous information, including crowdsourced smell reports, video footage of factory smokestacks, and air-quality and wind data, which the residents then submitted to government entities. After reviewing the data, administrators at the Environmental Protection Agency agreed to review the factory’s compliance, and within a year the factory’s parent company announced that the facility would close.

Bottom-up design, in Pittsburgh and elsewhere, requires openness and humility from researchers, recognition of community expertise, and a desire to empower marginalized people. It means that the technical interests of engineers and the profit motives of corporations take a backseat to public interest: the needs of communities determine what, if anything, gets built. Researchers must be willing to cede authority to others who might be able to better serve community concerns. This approach not only helps to meet real needs but also fosters trust in science and technology among populations subject to mistreatment and neglect.

This approach may be slower and harder to scale than technology development based solely on expert opinion, but it is more likely to produce social benefits.

Processes like the Pittsburgh collaboration are unusual, typically initiated by technologists committed to community-driven research practices and by interdisciplinary teams of technical and social-science experts. But the institutions that support innovation can take steps to encourage bottom-up design. Research funders could provide special incentives for community-driven projects and endow programs dedicated to them. Universities could hire more researchers with community relationships and experience in bottom-up design and could provide these researchers additional support. Perhaps most importantly, required university coursework could train budding AI researchers in the methods, ethics, and power dynamics of community-engaged research. We believe that such training must start early—it should be foundational to the education of the next generation of AI innovators, so that they will not be content to simply follow in the footsteps of others but will instead be partners in transforming the AI innovation ecosystem.

Nurturing Socially Committed AI Research

As presently constituted, the AI industry is ill-equipped to respect the knowledge of marginalized communities. AI leaders, in business and academia, are a demographically narrow group, little influenced by public interest. Programs dedicated to social responsibility are treated as auxiliary rather than mission-critical and are the first to go when companies and universities cut budgets.

How to ensure that the current generation of technologists is the last to operate in this way? How to inculcate in engineers—and their leaders—a genuine interest in AI’s humanitarian benefits and greater sensitivity to its potential harms?

Required university coursework could train budding AI researchers in the methods, ethics, and power dynamics of community-engaged research.

One answer—though not the only one—is education. Universities can revamp the way they teach engineering, integrating the humanities and social sciences in the core curriculum. Today universities enforce a clear separation between engineering and the social good; they may require that STEM students take a single course on professional ethics, while the rest of the curriculum teaches students that technology is politically and morally neutral. More useful would be to design introductory science and engineering courses that help students understand the social and political assumptions underlying seemingly technical choices, as well as the consequences of these assumptions. For instance, when computer science students learn how to build datasets that inform an AI, they should learn at the same time that the contents of datasets—because they are based on historical records—could reflect racist practices.

Because it is essential that future AI researchers be empowered to reckon with the true social complexities of their work, humanizing education should be treated as no less important than technical education: it cannot be ghettoized in elective courses, where it is easily dismissed. Students must learn that technical decisions about research problems, datasets, and code are always value-laden. And humanists and social scientists must teach this lesson: it is they who offer deep knowledge of how technology works in society. Students need this knowledge, and they need intellectual models who disrupt the idea that only scientists and technologists have valuable expertise to offer in the development of AI. Accreditation bodies can play a crucial role in fostering change by predicating their approval on successful adoption of this educational approach.

The project of changing AI innovation through education should begin as soon as possible, but its rewards will take many years to realize. In the short term, there are opportunities for improvement through policy. As suggested by the case of Timnit Gebru, who says she was fired from Google for exposing deleterious ethical and equity consequences of the company’s software, AI development would benefit if researchers had stronger whistleblower protections. And funding agencies can support more socially conscious approaches right now, even requiring researchers to write proposals that champion the equity benefits of their projects. Doing so would encourage the most promising kind of AI development—development that does real good for all of society, including and especially underserved communities.

Building Capacity in Communities

Community participation is important not only because it can enable codesigned technologies like the Pittsburgh air-pollution system, but also because it fosters democratic engagement in decisionmaking surrounding emerging technologies. Whether these are decisions made by private companies or public officials, the people affected should have a say in them. Civil society organizations have a crucial role to play here, by amplifying voices drowned out by the industry din. Tech companies have effectively unlimited resources, as well as access to political power. To be heard, ordinary people need civic organizations to be their advocates, critically evaluating industry claims, anticipating the social effects of AI that corporations ignore, and using their own lobbying capacities to focus the attention of policymakers and regulators on the public good.

Humanizing education should be treated as no less important than technical education: it cannot be ghettoized in elective courses, where it is easily dismissed.

The Ford Foundation has been exemplary in this regard. The philanthropy funds multiple organizations that seek to improve the public conversation about AI and generate policy action. These include Fight for the Future and the Detroit Community Technology Project, which advocated for regulation of facial-recognition tools after Joy Buolamwini, a Black computer scientist, identified systemic biases in existing technologies. Other philanthropies have joined, and could still join, Ford in supporting nonprofits dedicated to the independent assessment and democratic control of AI.

Civic organizations have a further important role in providing bridges between communities and technologists by helping engineers and regulators understand AI through a social lens. Civil society, in other words, can do in the wider world what educators can do in the classroom, explaining how design can inadvertently exacerbate social problems and how it can be used with the goal of improving people’s lives. The University of Michigan’s Science, Technology, and Public Policy Program, for example, has established a Community Partnerships Initiative that serves this bridging function by working with advocacy organizations to develop proposals for technology policy in the public interest. For instance, the initiative helped We the People Michigan challenge the city of Detroit’s investment in acoustic gun detection, an unreliable AI-powered technology that threatened to exacerbate overpolicing of poor communities of color. Like bottom-up design, this partnership approach values communities as experts rather than simply consumers, nurturing a technological future that reflects public needs and democratic vision.

Limiting Inequity and Injustice through Regulation

Equitable design, driven by the needs of marginalized communities, will do much to promote beneficial adoption of AI and prevent harms associated with the technology. However, we also know that developers will pursue profits and the technical challenges that interest them, without much concern for equity. With this in mind, AI technologies must be subject to regulation, and this regulation should occur before technologies come to market. In particular, technologies that disproportionately harm marginalized communities should be prohibited. The Biden administration has already taken some steps in this direction with a 2023 executive order that, among other things, calls on federal agencies to issue guidance on the use of AI in law enforcement, hiring, housing, and health care and directs the Federal Trade Commission to specify that algorithmic discrimination in access to credit is illegal. But a systematic regulatory process could do more to disincentivize the creation of unjust and inequitable AI.

Regulation could be accomplished through impact assessments, inspired by the National Environmental Policy Act of 1970, which requires development projects to undergo an environmental assessment and demands more extensive review of higher-risk interventions. In particular, technology impact assessments should focus on equity, which would involve auditing the datasets and algorithms underlying AI tools to determine whether the outputs might discriminate against or otherwise harm marginalized communities.
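To make the auditing idea concrete, here is a minimal sketch of one widely used fairness check, the disparate impact ratio, which compares each group’s rate of favorable outcomes against the most favored group’s rate. The 0.8 threshold borrows the “four-fifths rule” from US employment law; the data, group labels, and threshold are illustrative assumptions, not a prescribed regulatory standard.

```python
# Illustrative sketch only: a minimal disparate impact check on model outputs.
# The 0.8 cutoff echoes the "four-fifths rule" from US employment law; the
# data and group labels below are hypothetical.
from collections import defaultdict

def selection_rates(outcomes, groups):
    """Fraction of favorable outcomes (1 = favorable) per demographic group."""
    favorable, total = defaultdict(int), defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        favorable[group] += outcome
        total[group] += 1
    return {g: favorable[g] / total[g] for g in total}

def disparate_impact_ratios(outcomes, groups):
    """Each group's selection rate relative to the most favored group's rate."""
    rates = selection_rates(outcomes, groups)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical loan decisions: 1 = approved, 0 = denied.
outcomes = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]
for group, ratio in disparate_impact_ratios(outcomes, groups).items():
    print(f"group {group}: ratio {ratio:.2f}" + (" <- review" if ratio < 0.8 else ""))
```

A check like this covers only one technical facet; as the next paragraph argues, a meaningful assessment must also examine the social context in which a system is deployed.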

Such evaluation also must extend beyond technical components of AI, because even a well-functioning system designed to pursue a worthwhile goal like crime reduction can perpetuate structural inequalities. Thus an equity-focused impact assessment should consider the social context in which the technology will be used: regulation should attend to characteristics of AI users and ensure that they are adequately trained to mitigate bias.

Such sociotechnical evaluation will require the work of experts across disciplines, including science, law, the humanities, and the social sciences. And regulation, like design, will be incomplete without participation from marginalized communities. The input of minoritized people in regulating AI is essential; that input should be solicited and valued, and participation should be voluntary and compensated. This is a means of building trust while alleviating structural inequities in AI and innovation generally.

The input of minoritized people in regulating AI is essential; that input should be solicited and valued, and participation should be voluntary and compensated.

What we propose is a serious commitment, but we know that thoughtful sociotechnical assessment is productive. Consider the response of New York officials to concerns about the use of facial recognition in K–12 schools. The state’s Office of Information Technology Services analyzed benefits and harms of this particular deployment of the technology, with an eye toward both technical accuracy and the likelihood that the technology would exacerbate bias. Drawing on evidence from legal cases and scientific and social scientific research, the office found that even accurate systems would violate civil rights. In response, the state legislature banned the use of facial recognition in schools.

Harnessing Benefits of AI through Intellectual and Moral Change

Ensuring that AI advances, rather than harms, progress toward social equity and justice entails intellectual and moral change, not just new rules. Educators and research funders must promote equitable design, so that developers want to work with marginalized communities to learn about their needs and together build technologies that provide meaningful benefit. With this in mind, engineers must be sensitized not only to systematic biases in datasets and algorithms but also to methodologies that promote community partnerships capable of correcting inequities resulting from discrimination. And policymakers must be prepared to think creatively, attuning regulations not only to technical characteristics of AI products but also to those products’ equity impacts in real-life scenarios.

Bottom-up knowledge and the humility to keep learning from those in need: these are tools for ensuring responsible AI but also for realizing the immense potential of this emerging technology. AI can exacerbate social problems, but it can also be used to solve them. Alongside their obligation to prevent harm, policymakers, research funders, tech and university leaders, and STEM professionals have an opportunity to foster equity through innovation. That is where the true promise of AI lies.

Medicine Means More Than Molecules

The Doris Duke Foundation, where we work, has a long track record of supporting early-career fellowships that allow physicians to pursue research to improve patient care. For the past 25 years, we’ve invested $204 million in these clinician-scientists. An assessment we conducted in 2019 found that four or more years after their initial grant funding, 73% of our fellows had received subsequent National Institutes of Health (NIH) research grants, and the majority remained in research as their main professional activity.

We’re also proud that our fellows seem to be meaningfully changing health practices for the better. To gain insight into this difficult-to-measure outcome, we drew on an analysis that found only 25% of all biomedical research articles are cited in clinical trials or guidelines within two decades of publication. When we applied the same methodology to our fellows’ publications, their citation rate over a comparable period was twice as high.

Recently, however, we have become concerned that we have been teaching to the wrong test, that our support is reinforcing an existing system of recognition and prestige tied to circumscribed paths of scientific inquiry. Specifically, our criteria for assessing fellowship applications are calibrated to a narrow set of markers for mainstream success and, in effect, exclude some research that may have incredible potential to improve human health.

Biased training paths

Here’s what we’ve seen in our programs, which are explicitly intended to help people with MD or equivalent degrees carve out time from clinical duties to conduct research and thus to help physician-scientists launch research careers.

A decade ago, we noticed that our fellowships favored individuals with specific credentials who were from well-resourced research environments. For example, 44% of those awarded Doris Duke fellowships held joint MD-PhD degrees versus 30% of unsuccessful applicants. And 81% of successful applicants hailed from an institution in the ninety-fifth percentile of NIH funding, compared to 54% of unsuccessful ones.

We have become concerned that we have been teaching to the wrong test, that our support is reinforcing an existing system of recognition and prestige tied to circumscribed paths of scientific inquiry.

More recently, we detected a potential unintended effect of our fellowship criteria. By supporting applicants and projects deemed most likely to draw future NIH funding, we also seemed to favor research questions that sought to tackle diseases through improved molecular understanding of underlying disease mechanisms rather than exploring interventions that might, say, prevent clinical encounters in the first place. Also apparently disfavored was research on practices that could make health care visits more effective or ways to treat disease that would result in more equitable outcomes.

When we analyzed our funding patterns for the last decade (2013–2023), we found that 40% of applicants proposed research to improve care, reduce disease, or boost the impact of proven interventions, but this group received fewer than 30% of our grants. Proposals in categories such as outcomes research, treatment, and prevention had success rates of 7%, while those focused on basic discovery or mechanisms of disease had a success rate of 11%—more than half again as high.

While applicants submitting etiologically focused proposals are doubtless pursuing profound and valuable science, our approach is likely sidelining other kinds of vital innovation—particularly innovation in how to prevent disease, care for patients, and implement health services and disease treatments.

When we reached out to individuals who subscribe to Doris Duke Foundation’s emails for anonymous input, their feedback affirmed our concerns: scientists reported that what they think will win funding and advance their careers shapes their research. One respondent said, “Much research activity is guided by available funding mechanisms rather than the most impactful ideas.” Another said, “The funding climate shapes the scientific community as a whole; it is really beyond individual decisionmaking.”

Our approach is likely sidelining other kinds of vital innovation—particularly innovation in how to prevent disease, care for patients, and implement health services and disease treatments.

Resources and status always matter. But for physician-scientists, they can play a decisive role. The median age at which researchers receive their first NIH R01 grant—the award that signifies a researcher is able to lead independent projects—is around 42. Becoming a physician-scientist is an arduous path, requiring extraordinary talent, training, and dedication. In the face of these intrinsic difficulties, the vast majority of researchers will follow the material, psychic, and social rewards that are most readily available. If research on prevention, care, and implementation does not receive commensurate resources or provide the same professional traction as molecular research does, then those fields will struggle to garner the energy and attention necessary to deliver on their potential.

In some ways, the modern marvel that is the medical research enterprise seems to have succeeded too well. At a forum we convened last year, Christopher Austin, a pharmaceutical executive and former director of the NIH’s National Center for Advancing Translational Sciences, encapsulated our worry. First he pointed out how enormous investments had brought technologies and understanding that opened incredible medical and scientific opportunities. Then he emphasized the need to apply that ingenuity and effort to later stages of translation: “I fear sometimes that we have gotten so enamored of our abilities, of the kinds of experiments that we do in model organisms. Models for what? Humans are the target species, and we now have the opportunity and the obligation to focus more on whole humans.”

What falls to the side

Part of the problem with the molecular preoccupation of medical research funding is that it too often impedes or precludes the effective investigation of questions that could yield more immediate improvements in health and health care than are possible through the extended process required for drug development and approval. Questions like “What genes do cancer cells need to survive?” fit the current mold and might, eventually, lead to effective new drugs and diagnostic tests. But other important questions, such as “What eating habits reduce weight loss and nausea during chemoradiation?” or “Can Zoom calls or in-person visits shorten hospital stays?”, tend to receive less enthusiasm within the medical research community.

In some ways, the modern marvel that is the medical research enterprise seems to have succeeded too well.

Consider research efforts such as those led by schools of nursing to improve care for people at risk of neglect, such as caregivers of patients with dementia, or people who experience health disparities across illnesses from heart disease to HIV. These clinicians’ frontline experience brings keen insight into what really matters for improving patient outcomes, but funding mechanisms—especially when disparate schools and departments are involved—can be scarce. This is exactly the kind of research that we need more of, but our funding and training systems effectively serve to discourage it.

This pattern of preference for molecular research extends beyond the Doris Duke Foundation. We are part of a subgroup of 33 nonprofit health research funders in the Health Research Alliance that have shared their 2012–2022 grant data to enable aggregate analyses. Overall, support for research on population and health services within this subgroup amounted to only 8% of the total over the 10-year span—compared to 77% for biomedical research and 15% for clinical research. A similar comparison of what NIH supports cannot be found in publicly available data. However, it is widely known that funding for biomedical research makes up a much larger share of NIH’s portfolio than population and health services research.

Unquestionably, many distinguished scientists are working to advance implementation, care, and prevention. The NIH and a raft of (mostly young) agencies are helping them: the Agency for Healthcare Research and Quality (AHRQ, founded in 1989), the Patient-Centered Outcomes Research Institute (PCORI, 2010), the National Center for Advancing Translational Sciences (NCATS, 2011), and even the Advanced Research Projects Agency for Health (ARPA-H, 2022). Still, in 2022, the research and development budgets for the Centers for Disease Control and Prevention, PCORI, and AHRQ were under $1.5 billion combined. (The fledgling ARPA-H, which had a $1.5 billion budget for 2023, is a welcome addition to the mix.)

These clinicians’ frontline experience brings keen insight into what really matters for improving patient outcomes, but funding mechanisms can be scarce.

In an editorial in Science this June, the NIH director recognized that biomedical research innovation alone is insufficient to improve population-level health. Director Monica Bertagnolli described two new initiatives explicitly intended to connect bench research to the clinic and to communities. She explained, “These initiatives will help translate scientific discoveries into effective health care,” but acknowledged that, to succeed, they “will require not only support from NIH but commitment from the biomedical research community, other governmental agencies, health care systems, and private citizens who participate in research.” 

What could be done

While we have been reflecting on our own funding priorities in light of what we’ve discovered, broader soul-searching within the medical research establishment is necessary. After all, the Doris Duke Foundation’s resources are less than 1% of federal research funding for health. Here are a few steps the research community might take to funnel more effort into research questions primed to boost health and well-being.

First, funders can rethink career awards by organizing funding around pressing health problems rather than career trajectories. Doing so would incentivize researchers to think less about which molecular questions they can “own” and instead direct their energies toward what afflicts human health—whether the needed knowledge falls under basic discovery or implementation of care. Such a shift would also encourage researchers to consider their work as part of a collective and collaborative endeavor across an array of disciplines. (Right now, researchers are overly incentivized to prioritize work that earns them individual recognition.)

Increased budgets for AHRQ, PCORI, and NCATS would certainly help advance this goal. But perhaps just as valuable would be greater coordination among these entities and the broader NIH in facilitating research that is at the intersections of agencies’ missions.

Second, there should be more prizes, fellowships, major awards, and other honors attached to research on care, implementation, and prevention. A 2021 analysis of over 400 scientific prizes and thousands of awards spanning four decades found that topics associated with prizes showed “unexpected and significant” growth in new knowledge and entrants. In other words, prizes really do influence the kinds of questions that whole generations of researchers pursue.

Funders can rethink career awards by organizing funding around pressing health problems rather than career trajectories.

The bad news, however, is that if prizes reflect a narrow understanding of what counts as valuable research, then they could further entrench that myopic worldview. And it’s clear that in medicine the dominant paradigm is an overwhelming focus on molecular science. Physician-scientists have received the Nobel Prize in Physiology or Medicine for discovering that nitric oxide acts as a signaling molecule in blood vessels (an incredible achievement), but not for showing how home health workers can ensure seniors take the right medicines.

Third, academic institutions should more actively facilitate innovations that improve health outcomes. One possible approach is promotion and assessment programs that credit broader social benefit alongside conventional markers of career progression. Another is helping different parts of a university work together. During the height of the pandemic, for example, federal funding sparked overdue realignment in some academic medical centers to mobilize all-hands-on-deck problem-solving. In one example, this process brought together separate New York University research centers, faculty in global public health, and social workers, nurses, and community health workers to plan and organize visits to families in public housing to offer and administer COVID-19 testing and flu vaccinations. But these kinds of innovations have been the exception rather than the rule.

Fourth, we need to valorize work to improve care and disease prevention. The innovative new drugs and diagnostics coming from biomedical research in the last few decades happened in part because established researchers celebrated new methods, materials, and technologies. They can bring similar advocacy to research on implementation and outcomes to accelerate our progress toward this next frontier in innovation.

Our faith in the power of high-tech scientific research has brought vaccines, medicines, and precise diagnoses; investments in such research have yielded huge dividends across society. Basic, molecular science should continue to receive our society’s support. But it is also time to extend our belief in the power of science a little further—to the researchers who prioritize how to deliver care and improve health.

Enhancing Regional STEM Alliances

A 2011 report from the National Research Council, Successful K–12 STEM Education, identified characteristics of highly successful schools and programs. Key elements of effective STEM instruction included a rigorous and coherent curriculum, qualified and knowledgeable teachers, sufficient instructional time, assessment that supports instruction, and equal access to learning opportunities. What that report (which I led) did not say, however, was how to create highly effective schools and programs. A decade later, the National Academies’ 2021 Call to Action for Science Education: Building Opportunity for the Future helped answer that challenge.

In “Boost Opportunities for Science Learning With Regional Alliances” (Issues, Spring 2024), Susan Singer, Heidi Schweingruber, and Kerry Brenner elaborate on one of the key strategies for creating effective STEM learning opportunities. Regional STEM alliances—what the authors call “Alliances for STEM Opportunity”—can enhance learning conditions by increasing coordination among the different sectors with interests in STEM education, including K–12 and postsecondary schools, informal education, business and workforce development, research, and philanthropy.

Coordination is valuable because of the alignment it promotes. For example, aligning school experiences with workforce opportunities creates a better fit between schooling and jobs; aligning K–12 with postsecondary learning, including through dual enrollment, gives students a boost toward productive futures; and aligning research with practice means that research may actually make a difference for what happens in classrooms.

Working together on mutual aims helps us find common ground instead of highlighting divisions.

In calling for regional alliances, the authors are building on the recent expansion of education research-practice partnerships (RPPs), which are “long-term, mutually beneficial collaborations that promote the production and use of rigorous research about problems of practice.” In RPPs, research helps to strengthen practice because the investigations pursued are jointly determined and the findings are interpreted with a collaborative lens. The National Network of Education Research-Practice Partnerships now includes over 50 partnerships across the country. The Issues authors have expanded the partnership notion by embedding it in the full education ecosystem, including educational institutions, communities, and the workforce.

In these polarized times, alliances that surround STEM education are particularly important. Working together on mutual aims helps us find common ground instead of highlighting divisions. Allied activities help to build social capital, that is, relations of trust and shared expectations that serve as a resource to foster success. Regional alliances can help create both “bridging social capital,” in which members of different constituencies forge ties based on interdependent interests, and “bonding social capital,” in which connections among individuals within the same organizations are strengthened as they work together with outside groups. In these ways, regional alliances can help defuse the tensions that surround education so that educators can focus on the core work of teaching and learning.

While workforce development is a strong rationale for regional alliances, Singer, Schweingruber, and Brenner note that this is not their only goal. Effective STEM education is essential for all students, whatever their future trajectories. And, in another reflection of the times we live in, young people need scientific literacy to understand the challenges and opportunities of daily life, whether in technology, health, nutrition, or the environment. Alliances for STEM Opportunity can promote a pathway to better living as much as an avenue to productive work.

President

William T. Grant Foundation

Building on the many salient points that Susan Singer, Heidi Schweingruber, and Kerry Brenner raise, I would like to emphasize the unique potential of community colleges to respond to the challenge of creating a robust twenty-first-century STEM workforce and science-literate citizenry. The authors rightfully point out how regional alliances can boost dual enrollment and improve the alignment of community college programs. And I applaud their mention of Valencia College in Orlando, Florida, a winner of the Aspen Prize for Community College Excellence that many other institutions could continue to learn from.

I would add that embracing the “community” dimensions of community colleges would accelerate the nation on the path to the authors’ goals. A growing set of regional collective impact initiatives ask colleges to be community-serving partners in efforts to build thriving places to live for young people and their families. An emphasis on alleviating student barriers, exacerbated by the COVID-19 pandemic, has put pressure on these institutions to build out basic needs services (e.g., food supports, counseling, benefit navigation) for students and community members. Incidentally, I hope we don’t soon forget the thousands of lifesaving COVID shots delivered at these schools.

Many community colleges have mission statements that are community-oriented, such as Central Community College in Nebraska, whose mission is to maximize student and community success. Moreover, because students of color disproportionately enroll in community colleges, these institutions often play an outsize role in advancing racial equity, offering paths to upward mobility that must overcome longstanding structural barriers.

Despite these many roles, community colleges are judged—and funded—primarily based on enrollment and the academic success of their students. These measures miss key benefits that these colleges provide to communities and don’t encourage colleges to focus their efforts on community well-being, including the cultivation of science literacy.

Underneath this misalignment lies the opportunity. While open-access schools typically can’t compete on the traditional completion, earnings, and selectivity metrics that four-year colleges are often judged on, they can compete much better on community measures because their primary audience and dollars stay more local. By highlighting through regional alliances how valuable they truly are locally, these schools could secure more sustained public investment and support more students and community members in a virtuous cycle.

While open-access schools typically can’t compete on the traditional completion, earnings, and selectivity metrics that four-year colleges are often judged on, they can compete much better on community measures because their primary audience and dollars stay more local.

Additionally, emerging leaders of community colleges who have risen through the ranks during the student success movement of the past 20 years are eager for “next level” success measures to drive their institutions forward. Instead of prioritizing only enrollment and completion rates, institutional leaders could set goals with regional alliance partners for scaling science learning pathways from kindergarten through college, then work together to address unmet basic needs through partnerships with local community-based organizations, ultimately helping more BIPOC (Black, Indigenous, and People of Color) students obtain meaningful and family-sustaining careers—in STEM and other high-demand fields.

If we truly aspire to have a STEM workforce that is more representative of the country and equity in STEM education more broadly, regional alliances must intentionally engage and support the institutions where students of color are enrolling—and for many, that is community colleges.

Director, Building America’s Workforce

Urban Institute

It has long been observed that collaborations, alliances, and strategic partnerships can accomplish greater systemic change related to science, technology, engineering, and mathematics (STEM) education and research. For the nation’s competitiveness, it is imperative that we cultivate and harness the talent of individuals with a breadth of knowledge, backgrounds, and expertise.

The American Association for the Advancement of Science has spearheaded the development of a national strategy referred to as the STEMM Opportunity Alliance—the extra M refers to medicine—to increase access and enhance the inclusion of all the nation’s talent to accelerate scientific and medical innovations and discoveries. AAAS collaborates with the Doris Duke Foundation and the White House Office of Science and Technology Policy in this effort. The alliance’s stated goal, set for 2050, is to “bring together cross-sector partners in a strategic effort to achieve equity and excellence in STEMM.”

For the nation’s competitiveness, it is imperative that we cultivate and harness the talent of individuals with a breadth of knowledge, backgrounds, and expertise.

Susan Singer, Heidi Schweingruber, and Kerry Brenner offer a similar approach. What is compelling about their essay is not only the delineation of the positive impact of different cross-sector collaborations across the nation on outcomes for science teaching and learning, but also the focus on the local community or region. The authors advocate for “Alliances for STEM Opportunity” along with a coordinating hub to ensure strong connections, a clear (consistent) understanding of regional and local priorities, and a collaborative action plan for addressing the needs of the community through effective and integrated science education.

This recommendation is reminiscent of the National Science Foundation’s Math and Science Partnerships program, started in 2002 but now discontinued. One of its focal areas, “Community Enterprise for STEM Learning,” was designed to expand partnerships “in order to provide and integrate necessary supports for students.” Singer, Schweingruber, and Brenner make a strong case and provide evidence for why regional alliances could lead (and have led) to improvements, including enhanced teacher preparation, increased scores on standardized tests, a more knowledgeable workforce with relevant skills for industry, and a stronger STEM infrastructure in the region. Not only does this approach make sense; it has also been shown to be effective. I know firsthand the significant benefits of alliances and partnerships from my former role as an NSF program officer, where I served as the co-lead of the Louis Stokes Alliances for Minority Participation Program and a member of the inaugural group of program officers that implemented the INCLUDES program, a comprehensive effort to enhance US leadership in STEM discovery and innovation.

As a member of the executive committee for the National Academies of Sciences, Engineering, and Medicine’s Roundtable on Systemic Change in Undergraduate STEM Education, I have engaged in wide-ranging discussions about the various factors shown to contribute to the transformation of the STEM education ecosystem, for the benefit of the students we are preparing to be STEM professionals, researchers, innovators, and leaders. Systemic change does not occur in silos; it occurs through intentional collaborations and a commitment from all stakeholders to transform infrastructure and culture.

Vice Provost for Research

Spelman College

It is a delight to see Alliances for STEM Opportunity highlighted by Susan Singer, Heidi Schweingruber, and Kerry Brenner. Over the past three years, serving as the executive director of one of the nation’s first STEM Learning Ecosystems (a term coined by the Teaching Institute for Excellence in STEM), in Tulsa, Oklahoma, I’ve witnessed the Tulsa Regional STEM Alliance address enduring challenges in STEM education—issues that outlast local reforms and political shifts.

The authors rightly highlight that alliances are uniquely positioned to address persistent problems, even as reforms, politics, and priorities fluctuate. Improving learning pathways, reducing teacher shortages, increasing access to teacher resources and evidence-based teaching, promoting internal accountability, and supporting continuous improvement are all issues that might be partially resolved at the local level. However, these solutions require an infrastructure that allows for their dissemination and scaling to achieve systemic equity.

This vision represents a shift from workforce-centric thinking toward holistic youth development thinking.

At the Tulsa Regional STEM Alliance—our iteration of the Alliances for STEM Opportunity—we agree that articulating a shared vision is the first step. Ours has evolved over the past decade, and we have found great alignment around our stated quest to “inspire and prepare all youth for their STEM-enabled future.” This vision represents a shift from workforce-centric thinking toward holistic youth development thinking.

To reach our goal, we collaborate with 300 partners to ensure all youth have access to excellent STEM experiences in school, out of school, and in professional settings. This entails numerous collaborations; funding and resourcing educators and partners; leading or hosting professional learning; supporting program planning and evaluation; and creating youth, family, and community events that ensure all stakeholders understand and truly feel connected to our motto: “STEM is Everywhere. STEM is Everyone. All are Welcome.”

By continually defining our shared work around excellent experiences and how they feed into our shared vision, we raise awareness and support an ambitious view of STEM education that advances learning in its individual and integrated disciplines. This enables us to advocate more effectively for funding, development, implementation, and improvement efforts from a principled and consistent position—qualities increasingly needed in education.

With clarity on the value of STEM as a vehicle for ensuring foundational disciplinary understandings, we can carefully align stakeholders around a simple idea: STEM aims to address the issue of too few students graduating with competence in the STEM disciplines, confidence in themselves, and a pathway to the STEM workforce. STEM cannot meet this demand if the experiences in which we invest our time, talent, and resources do not advance our excellent experiences (shared work) and move us closer to inspired and prepared youth (our shared vision).

I echo the authors’ call for expanded funding and research into this evolving infrastructure and encourage others to connect with their local alliances by visiting https://stemecosystems.org/ecosystems.

Executive Director

Tulsa Regional STEM Alliance

Susan Singer, Heidi Schweingruber, and Kerry Brenner describe the importance of local collaborations among schools, postsecondary institutions, informal education, businesses, philanthropies, and community groups for improving science education from kindergarten through postsecondary education. Regional alliances bring together diverse stakeholders to improve science education in a local context, which is a powerful strategy for achieving both workforce development and goals for science literacy. As the authors also note, regional alliances contribute to the development of a better civic society. These alliances provide a venue for people to find common ground so that progress does not get lost to political polarization.

Opening pathways to STEM careers through alliances has broad societal benefits beyond just creating more scientists—it makes science more accessible and relevant to students’ lives, which is crucial for individual and societal well-being and effective participation in democracy. Science education emphasizes the importance of critical thinking, questioning assumptions, and evidence-based conclusions. These skills are essential for effective civic participation, as they enable individuals to evaluate claims, consider multiple perspectives, and engage in constructive dialogue.

Regional alliances can contribute to the development of a better civic society that fosters informed, engaged, and socially responsible citizens.

Regional alliances can promote the integration of these skills throughout a school’s science curriculum and in community-based learning experiences. They can engage students in authentic, community-based science projects that address local issues, such as environmental conservation, public health, or sustainable development. By participating in these projects, students can develop a sense of agency, empathy, and social responsibility, as well as practical skills in problem-solving, collaboration, and communication. I want to highlight three ways regional alliances can contribute to the development of a better civic society that fosters informed, engaged, and socially responsible citizens.

First, regional alliances can bring together schools, businesses, government agencies, and community organizations to collaborate on science-based initiatives that enhance community resilience. For example, alliances can work on projects related to disaster preparedness, climate change adaptation, or public health emergencies. These partnerships can strengthen social capital, trust, and collective problem-solving capacity, which are essential for a thriving civic society.

Second, regional alliances can demonstrate ways to engage in respectful, evidence-based dialogue around controversial issues. This can include providing professional learning for teachers on facilitating difficult conversations, hosting community forums that model constructive discourse, and encouraging students to practice active listening and perspective-taking.

Third, regional alliances can create opportunities for students to take on leadership roles, express their ideas, and advocate for change in their communities. For example, alliances can support student-led science communication campaigns, development of policy recommendations, or community service projects. By empowering youth to be active participants in shaping their communities, alliances can contribute to the development of a more vibrant and participatory civic society.

Regional alliances focused on all levels of science education can play a vital role in building a better civic society by fostering scientific literacy, critical thinking, community engagement, and lifelong learning. By preparing students to be informed, engaged, and socially responsible citizens, these alliances can contribute to a more resilient, inclusive, and democratic society.

Program Director, Education

Carnegie Corporation of New York

Susan Singer, Heidi Schweingruber, and Kerry Brenner’s essay and theory regarding regional alliances resonate within the funder community. In 2014, several STEM funders helped launch the STEM Learning Ecosystems Community of Practice (SLECoP). These leaders recognized the value of collective impact and the tenets of a regional model. Fast-forward to today: philanthropic commitments to regionalized initiatives continue. Individually, funders cannot support all aspects of a regional alliance. However, hybrid investment portfolios or philanthropic collaboratives can illuminate the interdependencies throughout the continuum from kindergarten through career and collectively support various aspects of a centralized regional model.

The authors’ assessment offers a compelling response to 2019 data from the National Center for Science and Engineering Statistics illustrating the status of education-to-labor-market pipelines throughout the country. The state-specific labor force data reflect exemplars and chasms in the continuum. The data indicate that 24 states lack a high concentration of STEM workers relative to total employment within their respective states. Concentration is measured by those in the skilled technical workforce or those in the STEM workforce with a bachelor’s degree or above. The data also reveal that only 13 states have workforces in which 11.2% to 15% of participants hold STEM bachelor’s degrees. Such regional inequalities threaten the nation’s capacity to close education, opportunity, and poverty gaps; meet the demands of a technology-driven economy; ensure national security; and maintain preeminence in scientific research and technological innovation.

Regional inequalities threaten the nation’s capacity to close education, opportunity, and poverty gaps; meet the demands of a technology-driven economy; ensure national security; and maintain preeminence in scientific research and technological innovation.

Many socioeconomically disadvantaged communities lie within the lowest educational and workforce STEM concentrations. Implementing regionalized STEM pathway models would help close these opportunity gaps. The labor force needed by 2030 dictates the need for collective impact, thought partners, and strategic alliances. Regional alliances would enable an inversion of the current STEM pathway status. Regional partnerships that begin with early education; ensure STEM teacher growth, support, and retention; guarantee equitable access; and end with industry engagement will help ensure that the nation’s workforce supply keeps pace with its workforce demand.

As strategic partners, corporate and private philanthropy can fortify structural needs and build capacity for regional alliances. If the authors’ recommendations hold, consistent philanthropy can help sustain the principles of a regional model. I appreciate the authors’ emphasis on regional engagement. National centralization is always valued, but regional implementation has a greater propensity for viable execution. Regional activation allows local partners to tailor solutions and address the specific STEM workforce needs in their geography. Localized assessments will yield the best and wisest practices.

However, the key to bringing the authors’ recommendations to fruition is mutual interest and motivation among the constituents within a region. Like regional interests and constituencies, most philanthropic investments are also regionalized. Regional funding partners can catalyze the impetus for synergizing their STEM ecosystem allies. Therefore, as we consider the fate of the nation, I hope regional leaders and philanthropists will continue to take stock of the value and promise of the authors’ well-justified theory.

Executive Director

STEM Funders Network

Susan Singer, Heidi Schweingruber, and Kerry Brenner provide current examples and evidence to support and advance the central theme of the National Academies’ 2021 report Call to Action for Science Education: Building Opportunity for the Future. In reading the essay, the familiar saying that “all politics is local” came to mind as I thought about how broad national priorities—such as the report’s push for “better, more equitable science education”—can be used in the development of systems, practices, and supports that are focused regionally and locally. It also made me think about classroom connections and some of the recent instructional changes that foreground locality.

Imagine how empowering it is to begin to answer questions that have personal and communal relevance and resonance.

Over the past few years, the science education community has continued to make shifts in teaching and learning to center students’ ideas, communities, and culture as means to reach that Call to Action goal. Many of the educational resources published lately offer students and teachers the opportunity to begin the science learning experience by considering a phenomenon—an observable event or problem. Students are provided with current data and information in videos and articles, then given the opportunity to ask questions that can be investigated. In the process of answering the students’ questions, the science ideas that underlie the phenomenon are developed and explained. Imagine how empowering it is to begin to answer questions that have personal and communal relevance and resonance. This type of science teaching and learning connects with the types of partnerships and experiences essential in the local and regional alliances, and serves to enrich and enliven the relevance and relatability of science as a career opportunity and civic necessity.

Additionally, it would be great to find ways to connect these local and regional alliances to make them even stronger and more common, by identifying ways to scale and sustain efforts, celebrate accomplishments, and share resources. One possibility might be some type of national convening that would provide the time and space where representatives from local and regional alliances could discuss what is working, seek support to solve challenges, and create other types of alliances through cooperation and collaboration. Science Alliance Opportunity Maps could be created to ensure that all students and their communities are being served and supported. The only competition would be the numerous and varied ways to make equitable science education a reality for every student, from kindergarten through the undergraduate years, in every region and locale of the nation. This would be a major step toward achieving Singer, Schweingruber, and Brenner’s hope for “not just a competitive workforce, but also a better civic society.”

Associate Director for Program Impact

Senior Science Educator

BSCS Science Learning

A Road Map for a New Era in Biology and Medicine

Most people are familiar with DNA, but its cousin, RNA, has become widely known only recently. In 2020, of course, RNA was in the news all the time: the COVID-19 virus is made of RNA, as are the vaccines to combat it. Technologies based on RNA could lead to innovations in biology, medicine, agriculture, and beyond, but researchers have only scratched the surface of understanding what RNA is capable of. 

A new report from the National Academies, Charting a Future for Sequencing RNA and Its Modifications: A New Era for Biology and Medicine, proposes an ambitious road map for coordinated projects to understand RNA. This large-scale effort is inspired by what was achieved for DNA two decades ago by the Human Genome Project. 

On this episode, host Monya Baker is joined by Lydia Contreras, professor of chemical engineering at the University of Texas at Austin and one of the authors of the report. Contreras talks about what RNA is, the challenges and potential of this effort, and what lessons can be drawn from the Human Genome Project.

Transcript

Monya Baker: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine and Arizona State University.

If you’re like most people, you’ve heard a whole lot more about DNA than about RNA, DNA’s more dynamic, less stable cousin. But in 2020, RNA was in the news all the time. The COVID-19 virus was made of RNA. So were the vaccines that taught our bodies how to combat it. Technologies based on RNA could bring more innovation in medicine, agriculture, and beyond. Still, science has barely begun to catalog which RNAs occur in which tissues and under which conditions.

Earlier this year, the National Academies put out a report called Charting a Future for Sequencing RNA and Its Modifications: A New Era for Biology and Medicine. The report proposes an ambitious road map for coordinated efforts to understand RNA.

My name is Monya Baker. I’m joined by Lydia Contreras, a professor at the University of Texas at Austin and one of the authors of the report. Lydia tells us what RNA is, the challenges and potential of this effort, and what lessons could come from the Human Genome Project, the pioneering large-scale effort that was completed two decades ago.

Lydia, welcome.

Lydia Contreras: Thank you, Monya. It’s exciting to be here today.

Baker: Tell me a little bit about yourself. How did you come to study RNA?

This must be the hottest and coolest area of biology to study.

Contreras: Science was always my favorite subject, since I was in middle school. I loved science—loved biology, chemistry. My background is in chemical engineering with a concentration in biochemistry, so I always had a fascination with how biological molecules function, how they work together. And I think it really hit on RNA during a summer I spent abroad. It was 2009, the year the Nobel Prize in Chemistry was awarded for studies of the structure and function of the ribosome—to Dr. Ada Yonath, Thomas Steitz, and Venkatraman Ramakrishnan. That year I was lucky enough to hear a talk by Dr. Yonath on ribosomes, and just to see the molecules—these are some 3,000 nucleotides of RNA intertwined with proteins, producing something so useful to the cell—and to hear the story was fascinating for me. I realized how wide open the field of RNA was, that I was just looking at one out of many, many RNA molecules. And I think that was the summer I said, “This must be the hottest and coolest area of biology to study.”

Baker: That’s great. Yeah. RNA is such a complex machine. But before we get into talking about RNA, one thing that I found really interesting and inspiring is that you’re really committed to outreach. Tell me more about that.

Contreras: So I grew up in an environment where—thinking back—there were many young students who would have been great scientists, but those were not careers we were aware of in general. We were not told about science; we did not have open discussions about discovery and how it could really make an impact. And I think science is harder to see relative to other professions, where you see people all the time and they are visibly community leaders and helpers. Other professions have done a better job of personifying what they do—giving it emotions and bodies and faces.

So a big commitment for me, once I started having my own lab and doing my own research, was community awareness. I really get excited when I get opportunities to bring a little awareness about scientists—how many different things scientists do, what the profession is about—and to use that excitement to draw new people into the field. I think we need a new generation to get excited about these new things. And of course, we’re going to need a whole new labor force that is really trained on these issues, and creative ways to build our capacity in these areas so we remain technologically competitive around the world. So that’s a big part of what drives my excitement about being on committees like this.

Baker: Tell me just a little bit about this committee that you were a part of. What were you called to do?

Contreras: So this was a committee of about 16 scientists from different fields, put together by the National Academies of Sciences, Engineering, and Medicine. The goal of the committee—what we were called to do—was to discuss the opportunities and challenges emerging from initial discoveries about the chemical varieties of RNAs that actually exist in nature. This committee was actually a follow-up to a 2022 discussion led at the National Institutes of Health by Dr. Fred Tyson, which identified the need to expand on questions such as: “How do you capture this vast set of RNA sequences that we now know are there, and the diversity of all these sequences? How do we study them? What do they mean?”

How do we integrate this into the classroom, change the curriculum, build a workforce? And ultimately, how does this change the world?

The committee was called by the National Academies to do a more in-depth study of those questions, involving, again, people from academia and industry, as well as a great staff that led a lot of these discussions and interactions. So the questions we were called to think about were: “What really is this chemical diversity of RNAs that we see? How are we finally going to get at it? How is this data going to be stored and maintained, and who’s responsible? How do we inform the public and create awareness? How do we integrate this into the classroom, change the curriculum, build a workforce? And ultimately, how does this change the world in terms of technology, agriculture, medicine, health, et cetera?” There were a series of very in-depth studies in each of those areas. It was fascinating to be part of this group.

Baker: It reminds me of what I understand about the Human Genome Project, where there were years of meetings before the thing actually started. Let’s back up a little and talk about RNA. What does “chemical diversity of RNA” mean? RNA and DNA are very different molecules, right?

Contreras: Absolutely. Most people recognize DNA as a very stable blueprint that is in every single organism and dictates how things go: what molecules get produced, what proteins are made, et cetera. RNA and DNA are both nucleic acids, but there are key differences that get at why RNA is so versatile, so easy to manipulate, and has attracted so much excitement lately. RNA is usually one chain—typically single-stranded—whereas DNA is typically double-stranded: two chains intertwined, which makes the molecule pretty stable. For RNA, that means there are large spaces, or grooves, that form, which give it a lot of dynamics in terms of molecules interacting with it, being able to degrade it, being able to make it more stable, et cetera. So as I said, DNA is really about storing genetic information, but RNA can convert that information and make it dynamic.

What’s important is that RNA then becomes a lot more reactive than DNA, easier to manipulate, and you can change it around without necessarily making permanent changes to the cell.

So that’s what’s really exciting: RNA can read DNA, but then, depending on what’s going on in the cell—environmental factors, the type of cell—it can act dynamically, and it moves around in the cell. It’s not confined to the nucleus like DNA but travels to different locations outside the nucleus. Add to this that RNAs tend to be shorter molecules. DNA, when you stretch it out, can be several centimeters long, but RNAs come in a variety of sizes—some as small as tens of nucleotides, which is tiny, up to a few thousand—so they are much smaller. What’s important is that RNA then becomes a lot more reactive than DNA, easier to manipulate, and you can change it around without necessarily making permanent changes to the cell. So you’re not affecting that blueprint.

So when we talk about this versatility of RNAs, we’re talking about the types of RNA molecules, because RNAs get formed by this process called transcription. It makes many different molecules—excitingly, in many different flavors that can have many different chemistries depending on environment, cell type, and organism. And that’s what we refer to as this repertoire of RNAs in the cell.

Baker: I’m old enough that I remember the Human Genome Project, and I remember how surprised people were at how few genes humans actually had. It’s not necessarily the DNA—it’s all these things that the RNA can do—and that makes this project to understand RNA, I think, a lot more complex. So in terms of getting a comprehensive understanding of RNA, what are the biggest differences between that goal and the Human Genome Project?

Contreras: I think it’s important to first highlight the ways the two projects and their goals are similar. Both aim to get a more complete understanding of the repertoire of sequences. And in both projects, there was a recognition that this was underexplored and would be critical for advancing basic knowledge of every living system—for improving the health of humans, plants, and animals, preventing disease, improving crop yields, stimulating economies.

Where this could be a lot more complex is that for every DNA sequence that encodes a gene—information in this blueprint—there can be multiple and multiple RNAs synthesized, of different sizes and different chemistries. And that is also dynamic, depending on the cell type, on where the RNA goes in the cell, on the timing of when it gets synthesized, on age, on the level of stress. So this is hugely more complex: as opposed to the genome, even for one organism there is a vast number of RNA sets that can be present depending on the time. This is where the complexity really arises—in the vast repertoire of RNA types and chemistries that can be derived from even one piece of DNA. Not to say that DNA cannot be chemically changed, but, for example, we know of on the order of 17 or so DNA modifications; for RNA, it’s almost 200.

Baker: There is so much more to understand about RNA than DNA. It seems to me—are there even experimental techniques to understand all those chemistries and all those shapes and all those different lengths of sequences?

We don’t have the tools or the infrastructure, the technology to even get at this question of characterizing this vast repertoire of RNAs that we know exist.

Contreras: The answer is no. And the other answer is that most of the techniques that are out there require you to extract RNA from the cell, which raises the question: What do the real biological molecules look like? So a major conclusion of this committee’s project was that developing technologies and infrastructure to enable exactly that—understanding all these types of RNAs in the cell: their functions, who they are, where they are, and their chemistries—is probably the most impactful goal that we can have in the near future. So the answer to your question is really no. We don’t have the tools or the infrastructure, the technology to even get at this question of characterizing this vast repertoire of RNAs that we know exist.

Baker: And I know there are a lot of people working on that. Assuming there are techniques to characterize the RNA molecules, is there a good way to store and share the information?

Contreras: We spent so much time discussing data. We spent so much time discussing the fact that clear guidelines are needed to deposit data, to store data, to exchange robust and sustainable data, and to create platforms that are maintained, up-to-date, and curated, where RNAs are indexed. There is no such coordinated infrastructure yet, and the committee spent a lot of time drafting recommendations calling for a coordinated effort to build it. It boils down to needing something sustainable, funded, and centrally managed, where you can access that information and where it is maintained. Those resources are not currently available.

Baker: Somebody told me a story that when the Human Genome Project started, people would need to get DNA, run a gel, keep the gel in a freezer for three days, and then be lucky to get 100 base pairs, which is not the way to get a sequence of 3 billion base pairs. So there’s precedent for starting a project before the necessary technologies exist, because you’ll learn on the way.

The publication pipeline and timeline is so long that we want to access information quickly.

Contreras: A major issue we also have in the field is developing standards. Standards are the key guidelines for how experiments are run, so that we can make apples-to-apples comparisons when results are obtained by different labs using different techniques and methods, and so that results can actually be validated and benchmarked. These efforts really should be highly coordinated and standardized so that, as we decipher all this information, there is less risk of being misled. The publication pipeline and timeline is so long that we want to access information quickly. So I think it’s definitely worth discussing data and datasets separately, rather than waiting out the timeline it takes to have completely published stories and results, as long as we really work on strict guidelines and standardized ways to report and share.
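To make the idea of standardized, machine-readable reporting concrete, here is a minimal sketch of what a shared record for a single observed RNA species might look like. The RNARecord class and every field name in it are hypothetical illustrations, not a schema from the committee’s report or from any existing database.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical record structure for one observed RNA species; the field
# names here are illustrative assumptions, not an existing standard.
@dataclass
class RNARecord:
    sequence: str       # nucleotide sequence, 5' to 3'
    modifications: list # e.g., [("m6A", 4)]: modification type and position
    cell_type: str      # biological context in which the RNA was observed
    method: str         # technique used, so results can be benchmarked
    lab_id: str         # provenance, enabling cross-lab validation

record = RNARecord(
    sequence="AUGGCUACG",
    modifications=[("m6A", 4)],
    cell_type="HEK293",
    method="direct RNA nanopore sequencing",
    lab_id="example-lab-001",
)

# Serializing to a common format is what would let different labs make
# the apples-to-apples comparisons described above.
print(json.dumps(asdict(record), indent=2))
```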

Baker: I also wanted to ask you about the norms of data sharing. My understanding is that when the Human Genome Project started, people wanted to keep their data until they had a publication, and there was a very hard-fought agreement to release data almost daily. Because they came to this agreement in Bermuda in the off season, these are called the Bermuda Principles. As an aside, there are all these hilarious pictures of people in khakis and closed shoes lying on pool chairs. So it was a bunch of people getting together, talking about stuff, and changing the norms to advance science. I wonder what has to happen in RNA for that kind of progress to occur?

Contreras: Well, I can tell you that a lot of discussions about this happen, but not in Bermuda, not on any fancy island, not in khakis and Hawaiian T-shirts. There has been a big push in science overall for more openness and more rapid publication of results. Before, most papers would wait on peer review: you would polish your results, reviewers would come back with questions, you would do more work or experiments and resubmit, and only then would the paper and the results see the light. More and more, there is a trend of putting results on public archives immediately, before they are even published, so that the community can have access to the data, the information, the things being done, and hopefully the exchange of information speeds up.

So these general trends in science toward sharing data more openly and more quickly might really benefit these efforts on RNA. The concern is the same: making sure that in that model, we still have standardized ways of sharing information. One of the issues being raised is this idea of abandoning carefully curated databases and the risk that doing so could ultimately limit growth, real understanding, and effort. So balancing those things is going to be important: the speed at which data is shared, and openness with the community, in a way that retains some control for standards.

Baker: It just seems like such a tall order. And making it even taller and more complicated, industry is going to need to be involved in order to have equipment that can get at this information about RNA chemistries and sequences. Industry is also going to need to be involved for some of the applications of using RNA in agriculture and medicine, and it will need to feel there is a pathway to develop and sell products. I’m wondering how all of that plays into the need for standards and openness?

Contreras: The nice thing here is that there is a huge incentive for industry to jump in. There is a huge need to develop reagents so that we can go into the lab and study these processes at a fundamental level. There is a need to create ways to synthetically construct many of these molecules so that we can exploit them in the development of medicines and drugs. One example is the way we were able to synthesize mRNAs during COVID-19, which saved the day by making an mRNA vaccine possible. That is a classic example. If you think about the industry emerging around mRNAs in health, as biomolecules for medical purposes, there is a huge incentive. There is a huge incentive for developing technologies because there is so much we are going to have to do and pay for these services, just as we pay for sequencing now, thanks to the results of the human genome sequence.

Industry will need a trained workforce: new people who, starting in the classroom, can learn the value of RNA science, learn how to use these technologies, and get the interdisciplinary training to strengthen our workforce in this area.

So I think there is an incentive there in terms of commercializing a lot of this, but what is important about your question is that there is really a need for everyone to participate to make this growth happen. One of the conclusions of the report is that to enable the level of innovation we need, we have to completely understand this repertoire, its synthesis, and its chemistries, and this needs to be a broadly coordinated effort. As I said, we need reagents from industry. We need technologies from collaborations between industry, government, and the academy, and those efforts will be really successful, especially if they can be aligned by federal agencies.

So I think these partnerships will be key when it comes to supporting fundamental research in labs, prioritizing gaps in technology, and setting standards. We are going to have to synthesize these molecules and use them to interpret results. But another piece is training the workforce. Industry will need a trained workforce: new people who, starting in the classroom, can learn the value of RNA science, learn how to use these technologies, and get the interdisciplinary training to strengthen our workforce in this area. So several interests make this a partnership with mutual excitement about working together.

Baker: I am reminded of how you became inspired to study RNA because of the work on ribosomes, and I’m thinking there’s a whole generation of scientists who have very recently seen the world shut down because of an RNA virus and become safer because of an RNA-based vaccine. So there should be a lot of people who really want to work on RNA.

Contreras: It’s a huge opportunity for the field, and the committee spent a lot of time talking about how we capitalize on it. How do we take the tragedy of the pandemic and the scientific successes we’ve seen, successes that came from people working on these technologies for years, and bring awareness? How do we get a whole new generation of people excited about these questions, change classroom curricula, and incorporate a lot more RNA science?

There’s definitely public interest, Monya, and I think it’s a great time for us to capitalize on that awareness and to collaborate with our entire community to get excited about the potential impacts of this type of science.

Baker: I want to read another quote from the report. It has a lot of technical words in it, but the vision of the goal is eye-popping. Tell me about the situation now and what’s envisioned.

“If the recommendations are followed, the committee envisions that within 15 years, affordable oligonucleotides of any custom order sequence, length, modification, stoichiometry, and structure could be readily available for research and technology development.”

Contreras: This definitely speaks to the huge, grand vision that with increased technology and an understanding of what the vast repertoire of RNAs looks like in the cell, we can achieve additional capabilities that would allow us not just to understand these molecules but to build them synthetically, to explore their potential in medicine, technology, and synthetic biology, and to allow much more detailed studies of the functionality of this vast set of molecules. So that is what this passage is really speaking to: the potential of what this field can achieve.

This could be a really powerful technology that can emerge from this level of understanding.

And specifically, there is this custom order idea, that you could have the same ability we now have for DNA. For DNA, you can envision a sequence, and we have figured out how to make it synthetically, and not just make it, but put it into a living system so that the living system has a new synthetic blueprint for what we want that organism to do. If you can imagine doing that with many more diverse molecules, with many more diverse chemistries and combinations of chemistry and length and destination, this could be a really powerful technology that can emerge from this level of understanding.

Baker: This report is coming out after the completion of the Human Genome Project and a lot of other big science projects. How has the completion of those projects changed how people can go about grappling with RNA?

Contreras: I think what we learned from the Human Genome Project was the potential of coordinating efforts across different disciplines and sectors to build infrastructure and technology that can quickly be picked up to ask questions we are not even imagining right now. We are still capitalizing on the Human Genome Project. It has spun off into sequencing microorganisms, so now we are talking about the organisms that live within humans and their sequences, and about microbiomes in every part of the earth around us.

So I think that project has had major ramifications for problems we couldn’t even imagine we would be able to solve when it started. I envision something really similar happening with the level of impact this RNA sequencing project can have. What we have really learned is what it takes to pull off such a complex project: again, the coordination, the technology, the databases, the openness with data, the standards. I hope those lessons can be translated so that we can develop the technology that allows us to do this.

Baker: What would you like to see happen over the next year and over the next decade?

Contreras: I think it would be really nice to have some high-level coordination of how this is going to happen. I’d like to see some of our governmental agencies champion these efforts, bringing together academia, industry, and government, and coordinating the umbrella of efforts we’re going to need. We have discussed, again, the need for data organization and sharing. We have discussed the need for technology, which is a major one. We have discussed the need for organizing all these resources, training a new workforce, motivating people, and making the public aware. I think we’re going to need people from all walks of life to come together and realize the potential and the excitement, but also how they can contribute. So yes, we’re talking about CEOs and companies, and directors of agencies, but we’re also talking about schoolteachers and parents at home, and about your neighbor, who can understand the impact these types of technologies can have even from the little we’ve learned so far.

And again, I go back to the examples of the pandemic, but there are drugs in the pipeline right now that use chemically modified RNAs. This is a huge frontier for our generation, one that everybody can contribute to. So what I’d like to see is effort that is highly coordinated and organized, so that we can really move this field forward. Small, isolated, single-lab efforts, or big companies working on this in isolation without much communication with the rest of the sectors, I don’t think that’s how we’re going to get to the answers as quickly as we can.

Baker: So it’s all about organizing people and effort.

Contreras: Absolutely. And I think that’s one of the big lessons of the Human Genome Project.

Baker: Thank you so much for talking with me. And thank you for all the work you do that’s not strict science, but that helps broaden the community.

Contreras: Well, I think that if we can inspire more young people to dream about putting themselves in the shoes of a scientist and building something that can change the world, we’ll have a lot more people putting effort, time, and energy into these types of careers. It’s super rewarding. And I thank you for giving more people the opportunity to hear about what we really do behind the scenes, and how exciting every day can be with any new little thing you learn, something you know nobody else in the world has figured out, but that one day will be part of a puzzle with a huge impact around the world. I think that message really needs to get out to all of our students.

Baker: Check out our show notes to find links to other resources, including the report, Charting a Future for Sequencing RNA and Its Modifications.

If you’re listening to this, you’re probably passionate about science policy. Please visit issues.org/survey to participate in our survey of the science policy community. And please subscribe to The Ongoing Transformation wherever you get your podcasts. Thanks to our podcast producer Kimberly Quach and our audio engineer Shannon Lynch. My name is Monya Baker. Thank you for listening.

“This Is Also a Time of Great Possibility and Great Capability.”

Astrophysicist Saul Perlmutter is best known for his groundbreaking discovery that the expansion of the universe is accelerating—for which he shared the 2011 Nobel Prize in Physics. But Perlmutter, a professor of physics at the University of California, Berkeley, and senior scientist at Lawrence Berkeley National Laboratory, has also thought deeply about the nature of science and how it can be employed to advance society. A new book he coauthored with philosopher John Campbell and social psychologist Robert MacCoun, Third Millennium Thinking: Creating Sense in a World of Nonsense, explores how the tools and frameworks that scientists use “to stop us from fooling ourselves” can help improve decisionmaking and problem-solving more broadly. In addition, as a member of the National Academy of Sciences and the President’s Council of Advisors on Science and Technology (PCAST), he has a unique view into how science policy is shaped. In an interview with Issues contributing editor Molly Galvin, he discusses how physics and music inform each other, how the culture of science encourages sticking with problems, and the sources of his optimism.

Saul Perlmutter
Illustration by Shonagh Rae.

What has playing the violin taught you about science?

Perlmutter: Well, science is such a social activity. I was always interested in chamber music, not just playing music by myself, and I was looking for that interaction in science. I’ve tended to gravitate toward working with people. Nowadays, that’s a lot of what experimental science, especially physics, has become. I think music played a big role in teaching me what happens when a group’s working really well together.

When I first started at Berkeley as a faculty member, I was asked to teach the Physics and Music course. At first I thought it would be the most boring parts of physics meeting the most boring parts of music, but by the end of planning the course I realized that you can teach a whole lot of really fundamental aspects of how you think about the world, using music as the way in. On the last day of the course, I gave a lecture presenting what we now know about cosmology using all the tools of thinking we had developed over the course of studying music. And I realized that you get a much more sophisticated understanding of what we’re doing in cosmology by using the physics concepts taught to understand music.

You spent a decade doing the research that led to your Nobel Prize. What was that like for you?

Perlmutter: We had set out to find out how much the expansion of the universe was slowing down due to gravity, because that was the big key question. Do we live in a universe that will last forever, and do we live in a space that’s infinite? Or is it curved in on itself? These are fun, almost philosophical questions.

You can teach a whole lot of really fundamental aspects of how you think about the world, using music as the way in.

We knew it was going to be a hard project. We thought it would take three years. Three years in, we appeared to have gotten nowhere. We had only learned pieces of how we would solve the problem. But every step along the way, you could see how what we’d done so far was actually starting to consolidate understanding of what it was going to take to get where you wanted to go. We had a sense that this was a solvable problem, and that it was so important that it was really worth sticking with.

About five to six years in, we started figuring out how to turn this difficult problem into a repeatable solution. Over the next three or more years, we were just doing the operations we’d figured out how to do—collecting the dataset—and the last year was analyzing the data. It wasn’t until nine years in that we started seeing results that were shocking. Discovering that the expansion of the universe was, in fact, accelerating was the opposite of what we expected to be measuring—and that’s in some sense even better, because now there’s something new about physics that we hadn’t appreciated.

One of the things I learned as a graduate student was how the culture of science allows for sticking with problems much longer than most humans would ordinarily. It encourages you to ask, “Is this problem, in principle, solvable? Are we getting closer to solving it?”

You have said that the life of a scientist revolves around making mistakes and trying to fix them. How do you think that experience has shaped your worldview?

Perlmutter: Having that bit of diabolical contrariness is a weird pleasure of being a scientist. You’re always trying to figure out, “OK, how could I be fooling myself into a wrong conclusion?” Because the more you get those things right, the more chances you have of catching the universe doing something that our brains never would’ve imagined.

Scientists build out of what seems like a stance of weakness. So, one might think it’s terrible that scientists are always discovering new ways that they’re wrong, or it’s terrible that they’re only probabilistically sure of facts. But that’s really where scientists’ superpower has come from. We have been able to figure out amazing solutions to problems or surprises about the world. Much of that can be traced back to being willing to be wrong and being comfortable with finding the ways you’re wrong. And for this purpose, you want to build strong relationships with people who are going to tell you when you’re wrong, who will disagree with you, or who compete with you. They’re your best bet at figuring out where you’re making a mistake.

You work on theories of expansion of the universe and dark energy. Working in this community of cosmologists, do you have a theory of how people’s minds change?

Perlmutter: I don’t think I have an articulated theory of change. But I will say that I’ve been really interested to watch fairly dramatic changes happen in my own field. When I started, physicists were seen almost like carpetbaggers coming into the astronomy world. Now, for many of the big projects, the astrophysicists from the physics department and those from the astronomy department are seamlessly integrated.

One might think it’s terrible that scientists are always discovering new ways that they’re wrong, or it’s terrible that they’re only probabilistically sure of facts. But that’s really where scientists’ superpower has come from.

Individuals and small groups were always building their own analyses, and some open-source advocates were arguing that we needed to be able to share things more. I was pushing for that very strongly too—and then recently I find myself in the funny position of realizing that as a community we’ve been so successful at this that we’ve ended up in a world where sometimes everybody’s all in the same group, and we aren’t getting enough voices pushing against each other. We always said we should make sure the software is seamless and open so everybody can use it. But once you get to the point that there is a dominant software that everybody’s using, it’s much harder to check to make sure that it doesn’t have bugs in it. You can, but it’s dramatically more difficult because you don’t have several competing codes that have to be in agreement.

Third Millennium Thinking

One of the lessons from teaching the Sense & Sensibility & Science class at Berkeley and writing our new book, which came out of that curriculum, is that we keep learning new ways in which we fool ourselves and we keep learning new ways to do better. Maybe that is what science is—that constant ability to keep watching ourselves and improving our approaches to understanding the world.

For example, it’s only in recent decades that particle physicists started seeing evidence that a form of confirmation bias was affecting their measurement results, when scientists would stop looking for additional sources of error or additional computer bugs when they got the results that they expected to see. This has led to a new practice (called “blind analysis”) of hiding the results while hunting for errors and bugs. It’s now becoming a standard approach in cosmology measurements, too, and other fields of science are developing parallel methodologies.
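As a rough illustration of the idea, here is a minimal sketch of one common blinding scheme, a hidden additive offset. The function names, the offset scheme, and the numbers are illustrative assumptions, not the procedure of any particular experiment.

```python
import numpy as np

rng = np.random.default_rng(seed=2024)  # seed kept hidden from the analysts

def blind(measurements, scale=1.0):
    """Shift the data by a secret offset so analysts can hunt for errors
    and bugs without knowing whether the result matches expectations."""
    offset = rng.uniform(-scale, scale)
    return measurements + offset, offset  # offset is stored away, never shown

def unblind(blinded_value, offset):
    """Remove the offset only after the analysis and error budget are frozen."""
    return blinded_value - offset

# Hypothetical measurements of some physical parameter
raw = np.array([0.71, 0.68, 0.74])
blinded, secret = blind(raw)
# ... analysts iterate on cuts, calibrations, and bug fixes here ...
final_result = unblind(blinded.mean(), secret)
```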

You started teaching Sense & Sensibility & Science more than a decade ago. As a physicist, why did you get interested in teaching about better communication and decisionmaking?

Perlmutter: So 10 or 15 years ago, I would go to the lunch table with a bunch of scientists from the lab and they’d be talking about the politics of the day. But the conversations at the lunch table sounded so different from what you see in the newspapers. People were just using a whole different vocabulary of ideas. And I kept thinking, “Where do we learn all those ideas?”

It was pretty clear that they were not taught in any physics, biology, or chemistry course that I ever took—they were taught mostly by apprenticeship as people went through PhDs and postdocs. The scientific culture was teaching these ideas to students as part of that experience.

Maybe that is what science is—that constant ability to keep watching ourselves and improving our approaches to understanding the world.

So when Berkeley announced a new kind of course called “Big Ideas Courses” to work across disciplines, I thought, this is exactly the time to teach a course like this. Because the parts that I was already starting to think about, which I understood from training as a physicist, were not the whole story. A lot of the elements are coming from social psychology and what we’ve learned about group and individual dynamics in decisionmaking. Some of these things are actually philosophical questions: How do you want groups to be able to weigh priorities and values amidst the rational techniques that we’re teaching?

In other words, if you’re going to try to teach people how to think rationally, then you also have to ask how you’re going to weave that in with people’s values and fears and goals and emotions. Because the fears and goals and emotions are the things that drive decisionmaking at the end of the day.

Are there certain models or mechanisms that help people find a balance between scientific information, values, goals, and fears?

Perlmutter: One model that I thought was particularly exciting to watch is deliberative polling—the technique, used by some citizens’ assemblies, where you bring together a truly representative sample of the population. It has to be randomly sampled, so that basically anybody could be in that microcosm. And they don’t just vote. The group starts to deliberate, with experts available, ideally from all sides of an issue, who answer questions and help them think through the problems in an informed, thoughtful way. And then after many hours of this, they start to home in on some views. Because they are a truly representative sample, they represent the values of the broader population. So the resulting views should reflect the values of the people when well informed.

In the end, you see some really nice policies and results that have come out of that kind of process. In some countries, this is becoming a part of how the government works.

Has being a member of PCAST changed your thinking about how regulation or policymaking is done?

Perlmutter: Every time I’ve worked with government, either the legislative branch or in this case the executive branch, I’m reminded of how difficult it is to make progress because so many parts have to come together. But at the same time, you can make a difference. Anything you recommend is unlikely to be instantaneously effective—it may be that a number of years go by until people really absorb it and try and figure out how to use it.

For example, one of the earlier PCASTs recommended that hearing aids be made more of a commodity by taking it out of this specialized system of control and making it something that you can buy much more conveniently. And that ended up recently getting enacted. I’m assuming that pretty much all of us will at some point be using hearing aids.

What do you think scientists don’t understand about policymaking?

Perlmutter: The more that scientists have a chance to spend time with legislators and people in the other branches of government, the more they will be aware of the different ways in which people need to receive information. It isn’t just a matter of saying, “Here’s the answer,” but giving them insight into how the answer was reached and why they might come to that same conclusion.

I think we’re in a bad period for political figures themselves to act as the thought leaders. I don’t fault them, because if you’re a congressman, for example, you’re in a very tricky position to take on a new idea and then convince everybody to adopt it—especially if it goes against the orthodoxy of whatever party you’re in.

As scientists, we have an extra responsibility now to try to work harder to communicate about what we are doing. Scientists should try to spend time with the public before sending ideas or advice to legislators and executive branch members and agencies. That’s not something we’ve typically done because we’re very busy, like everybody. But I think there’s enough of a pleasure in it that scientists could feel that it was a good use of their time. That’s my secret hope.

You worked on some recent National Academies guidance on how to responsibly incorporate artificial intelligence into science. How do you see AI being used in cosmology?

Perlmutter: We’ve already been using many of the earlier versions of AI in cosmology, with new techniques using mathematics and statistical analysis. But the current version that got so much attention this past year—generative AI—raises a whole bunch of other ideas.

As scientists, we have an extra responsibility now to try to work harder to communicate about what we are doing.

I think it’ll speed our ability to talk across the subdisciplines. And that by itself may be very interesting for the sciences. We’re already using it in computer programming. I find myself programming in computer languages that I probably would not have bothered with if it weren’t for the fact that I can ask AI for help.

What are your concerns about how AI might be used in science?

Perlmutter: My concerns fall into the category of what happens when we automate anything. AI clearly provides many more opportunities and expectations for automation. However, all the safety engineering that you would do if you were designing a braking system for a car—we haven’t always done that to the same degree for automation.

We need to step back and ask, “Have we done the right due diligence? How could this automation go wrong? What are our indicators that it’s going wrong? And what are the fail-safes to make sure that we catch it if it does go wrong? Have we come up with the right fallbacks?”

You talk a lot about problem-solving, both as a scientist and as a citizen of the world. But let’s be realistic—we are facing some overwhelming problems as a society right now. What do you anticipate for the future?

Perlmutter: If we can at least partially heal our fractured society, then I would not be that worried about the huge problems of the world. We’ve demonstrated in just our own lifetime that we can take on gigantic problems that we never thought we could take on.

Much of the world was going to bed hungry when I was a child. But over the course of the last 50 years, percent by percent, we’ve brought the share of people who are chronically hungry down to about 10%. And we never thought that was possible (though progress isn’t always linear, and this number recently rose slightly).

We know now that we are capable of solving problems on this scale. But I think it only happens when people are really working well together. Right now, we’ve walked ourselves into a bit of a dark corner where people aren’t collaborating with each other in a positive way. But if we just turn that corner, then I think we’re in an amazing position. We actually could be making a world to live in that everybody would feel proud of.

And people should be aware of this, that this isn’t simply a catastrophic time in history. This is also a time of great possibility and great capability that we’ve never had in front of us before. The Bulletin of the Atomic Scientists has this doomsday clock with the “minutes to midnight.” I keep saying that we need, on that same page, the “minutes to noon” clock—because I think we are remarkably close to being able to make a world that everybody would feel wonderful about living in.

Marie Curie Visits the National Academy of Sciences Building

A photograph captures a historic moment on the back steps of the National Academy of Sciences building: Marie Curie, codiscoverer of radium and polonium, stands alongside President Herbert Hoover in the fall of 1929. The president had presented her with a gift of $50,000, earmarked for purchasing a gram of radium for her oncology institute in Warsaw, Poland. The gift was the result of a fundraising campaign led by American journalist Marie Meloney, after her article in The Delineator, a popular women’s magazine, reported that Curie could not continue her groundbreaking research without more of the expensive element.

Curie, a Polish-born physicist and chemist, is renowned for her work on radioactivity. Not only was she the first woman to win a Nobel Prize, but she was also the first person to win Nobel Prizes in two scientific fields—physics in 1903 and chemistry in 1911. Her research led to the development of nuclear energy and radiotherapy for cancer treatment. Five years after her visit to the National Academy of Sciences, Curie died of aplastic anemia, likely the direct result of her prolonged radiation exposure. Her life was marked by tragic irony, but her unwavering dedication to science continues to inspire generations.

Principles for Fostering Health Data Integrity

Almost every generation is confronted with the effects of its past and must adapt. In his 1962 “We choose to go to the Moon” speech, President Kennedy juxtaposed the challenges of his postwar era—intelligence vs. ignorance, good vs. evil, leadership vs. fear-fueled passivity—and harnessed the national spirit to achieve a lunar landing.

Today, our challenge categories are similar. We are confronted with the effects and portents of concurrent changes in medicine, science, and technology, which in turn change how we educate scientists, manage the implementation of new technology, and respond to the effects, both planned and unforeseen, of the application of our discoveries.

Computational and data science technologies, some rooted in JFK’s ’60s, have entered all facets of life at breakneck speed. Our understanding of the societal effects of emerging technologies is lagging. When data, data transfer, and artificial intelligence meet medicine, game-changing implementation effects—positive or negative—are imminent.

In “How Health Data Integrity Can Earn Trust and Advance Health” (Issues, Winter 2024), Jochen Lennerz, Nick Schneider, and Karl Lauterbach tackle this complex landscape and identify pivotal decisions needed to create a system that equitably benefits all stakeholders. They highlight a requisite culture shift: an international ethos of probity for everyone involved with health data at any level. They propose, in effect, a modern-day Hippocratic Oath for health data creation, utilization, and sharing—a framework that would simultaneously allow advancement in science and population health while adhering to moral and ethical standards that respect individuals, their privacy, and their medical needs.

Without this health data integrity framework, the promise of medical discovery through big data will be truncated.

When data, data transfer, and artificial intelligence meet medicine, game-changing implementation effects—positive or negative—are imminent.

Within this framework, we open new horizons for medical advancement, and we augment the safety of data and of tools such as artificial intelligence. AI is something of a misnomer: it is neither artificial nor intelligent. AI determinations derive from real data scrutinized algorithmically, and, at least currently, they appear intelligent only because the data evaluation is iterative and cumulative—temporally updated evaluations of compounding datasets—which has heretofore served as a quasi-definition of intelligence. These data serve all of us: patients, health care providers, researchers, epidemiologists, industry, developers, and regulators. With greater harmonization and data integrity, data utilization becomes globalized. Wider use of datasets can lead to more discoveries and reduce testing redundancies. Global data sharing can limit the biases of small numbers and identify populations of low prevalence (e.g., rare diseases), allowing the creation of larger, global cohorts.

Pathologists, like the article’s coauthor Jochen Lennerz, are physician specialists trained to understand data; we are responsible for the generation of roughly 70% of all medical data. Pathologists, along with ethicists, data scientists, data security specialists, and various other professionals, must be at the table when a health data integrity framework is being created.

Within this framework, we will benefit from a system of trust that recognizes and respects the rights of patients; understands and supports medical research; and ensures the safe, ethical transfer and sharing of interoperable, harmonized medical data.

We must ensure the steps we take with health data are not just for a few “men,” to borrow again from the lunar-landing lexicon. Rather, we must create a health data ecosystem of integrity—a giant step for humankind.

Vice President for Medical Affairs, Sysmex America

Governor, College of American Pathologists (CAP)

Chair, CAP Council on Informatics and Pathology Innovation

Jochen Lennerz, Nick Schneider, and Karl Lauterbach report how efforts to share health data across national borders snag on legal and regulatory barriers and suggest that data integrity will help advance health.

In today’s age of digital transformation, with our genomes fully sequenced and electronic health record systems widely deployed, collaborative digital health data use presents a variety of challenges. There is, of course, the need to ensure data integrity, which will demand addressing such issues as the relative lack of well-defined data standards, poor implementation of and adherence to those standards, and the asymmetry of digital knowledge and innovation adoption in our society. But a more complex challenge arises from the propensity of humans to push major inventions beyond their benefits—and into the abyss. Therefore, we must engage together for human integrity in collaborative health data use.

Yet another challenge—one that the authors cite and I agree with—arises from deep-rooted conflicts of interest among all stakeholders (patients, health care professionals, the health management industry, payors, and governments) in health care. There also are generational differences between tech-savvy younger health care professionals, who are generally more open to structured data collection and documentation, and more senior ones, who struggle with technology and contribute health data that is more difficult to process.

There is, though, overall agreement among health care professionals that their foremost task is to serve as their patients’ advocate and go above and beyond to help them overcome or manage their medical problems using every available resource, which today would clearly include taking full advantage of digital health innovations, health data, and associated technologies such as artificial intelligence.

A more complex challenge arises from the propensity of humans to push major inventions beyond their benefits—and into the abyss. Therefore, we must engage together for human integrity in collaborative health data use.

However, since medicine has become such a complex profession, health professionals often practice in large care facilities embedded in organizations operated by corporations that seek profits, and where payors strictly regulate access to and extent of utilization of care on behalf of governments that struggle with expenditures. Unsurprisingly, the goals of nonpatients, administrators, and others outside of health care might not be what health professionals would view as ethical and responsible in terms of health data collection and use.

Among the other obstacles to the protection of health care data, cybercrime is a major threat, with hackers attacking our increasingly digital world either for personal gain or on behalf of third parties. And then there is the important matter of individual freedom, which at least in most Western democracies includes the right to informational self-determination and privacy. Ensuring these rights must be balanced with the societal goal of fostering increasingly data-driven medical and scientific progress and health care delivery.

Once all stakeholders in medicine, health care, and biomedical research realize that our traditional approach to diagnosis, prognosis, and treatment can no longer process and transform the enormous volume of information into therapeutic success, innovative discovery, and health economic performance, we can join forces to unite for precision health. For details, I’ve laid out a vision for collaborative health data use and artificial intelligence development in the Nature Portfolio journal Digital Medicine.

Put briefly, precision health is the right treatment, for the right person, at the right time, in the right place. It is enabled through a learning health system in which medicine and multidisciplinary science, economic viability, diverse cultures, and empowered patients’ preferences are digitally integrated and conceptually aligned for continuous improvement and maintenance of health, well-being, and equity.

Professor of Medicine and Adjunct Professor of Computing Science

University of Alberta

Director, Collaborative Research and Training Experience “From Data to Decision”

Natural Sciences and Engineering Research Council of Canada

Preparing Researchers for an Era of Freer Information

If you Google my name along with “Monsanto,” you will find a series of allegations from 2013 that my scholarly work at the University of Saskatchewan, focused on technological change in the global food system, had been unduly influenced by corporations. The allegations made use of seven freedom of information (FOI) requests. Although leadership at my university determined that my publications were consistent with university policy, the ensuing media attention, I feel, has led some colleagues, students, and partners to distance themselves to avoid being implicated by association.

In the years since, I’ve realized that my experience is not unique. I have communicated with other academics who have experienced similar FOI requests related to genetically modified organisms in the United States, Canada, England, the Netherlands, and Brazil. And my field is not the only one affected: a 2015 Union of Concerned Scientists report documented requests in multiple states and disciplines—from history to climate science to epidemiology—as well as across ideologies. In the University of California system alone, researchers have received open records requests related to research on the health effects of toxic chemicals, the safety of abortions performed by clinicians rather than doctors, and green energy production infrastructure. These requests are made possible by laws that permit anyone, for any reason, to gain access to public agencies’ records.

These open records campaigns, which are conducted by individuals and groups across the political spectrum, arise in part from the confluence of two unrelated phenomena: the changing nature of academic research toward more translational, interdisciplinary, and/or team-based investigations and the push for more transparency in taxpayer-funded institutions. Neither phenomenon is inherently negative; in fact, there are strong advantages for science and society in both trends. But problems arise when scholars are caught between them—affecting the individuals involved and potentially influencing the ongoing conduct of research.

Academic institutions are often intimately involved in these situations as both state institutions and employers. They provide institutional incentives for individual researchers to be involved in industry and community research, and they also mediate public information requests and any public relations issues. Yet cases like mine often fall into a gray area where internal administrative processes for handling such requests are unclear, and researchers are left to navigate requests as well as controversies alone.

In an environment where the public is likely to continue to expect increasing transparency in the ways that research is conducted, I believe institutions must take steps to help scholars navigate the process of doing research and responding to open records requests. These steps include developing new standards for partnerships and then educating, guiding, and directly supporting researchers to comply efficiently and effectively with such requests. Perhaps most importantly, academic institutions must plan ahead for external challenges to research to be able to both fulfill legitimate requests for disclosure and support faculty researchers if requests are frivolous or vexatious.

The changing nature of academic research and the push for transparency

Over the past century, universities have evolved from cloistered knowledge centers into multifaceted research enterprises. Sociologist Henry Etzkowitz asserts that the “entrepreneurial university” has an almost unbounded scope of activity. Today, both funders and researchers seek to advance knowledge that can be applied in the economy, society, or other arenas. Research has become more mission-oriented and interdisciplinary. Funders often require that projects leverage additional resources from potential users and encourage proactive knowledge translation to generate real-world effects, alongside the traditional metric of academic journal publications. Between 2000 and 2021, the business sector increased its funding of basic research in US institutions from $10.4 billion to $36 billion (about 36% of the total), while federal funding remained relatively constant at $40 billion, according to the National Center for Science and Engineering Statistics. In Canada, universities performed about C$16 billion in research in 2022, with C$1.2 billion (7.5%) funded by businesses and C$1.5 billion (9.4%) funded by the not-for-profit sector. While external funding for Canadian basic research has not risen as much as in the United States, a 2017 report, Investing in Canada’s Future: Strengthening the Foundations of Canadian Research, concluded that scholars, scientists, and trainees wishing to pursue fully independent research saw available real resources per researcher decline by about 35% from 2007 to 2016. Over the same period, priority-driven funding grew by 35%.

Institutions must take steps to help scholars navigate the process of doing research and responding to open records requests.

Outside influence is growing also because of changes in the way research itself is done. Whereas in the past a funder might contract for directed research from a single researcher, today researchers often participate in larger networked teams. This trend is driven by explicit missions set by foundations (e.g., the Bill and Melinda Gates Foundation or Rockefeller Foundation), governments (e.g., the Human Genome Project), and long-standing and new granting agencies that define priority themes, demand leveraging, use merit rather than peer review, and require dissemination of results beyond peer-reviewed publication. The result is larger and more diverse teams of researchers, funders, and end users. These networked teams often overlap with other networks and teams both locally and globally, so that it is much more difficult to distinguish where lines of influence may lie than it was in the past.

This expansion is occurring in a larger context in which universities, peers, and funders increasingly encourage investigators to link up with governments, commercial enterprises, communities, philanthropic foundations, and advocacy groups to design research that delivers results for the good of society. Proponents of this approach argue that involving end users in research priority-setting, study design, and dissemination should improve the focus and impact of the efforts. When scholars are working with disadvantaged or at-risk communities, this approach has the potential to empower those actors and produce need-based results in new ways. However, when the external partner or funder is a corporation or government agency, important questions about undue influence or conflict of interest may be raised.

One fear is that incentives for researchers to produce findings, combined with the inducement of money, may enable the industry partner to bend research to its will, potentially leading to the distortion of research findings. In their 2010 book, Merchants of Doubt, Naomi Oreskes and Erik M. Conway provide a damning analysis of how the tobacco industry was able to do just that for an extended period from the 1950s to the 1970s. This influence becomes most problematic when industry, governments, and NGOs contribute to public interest research funded by peer-reviewed grants, as each may have an interest in results that favor, say, a particular product, program, or policy. Thus, conflicts can potentially arise between the researcher’s duty to create unbiased knowledge and the interests of the funders.

There is nothing new about university researchers being sponsored. Government bodies including the military have long commissioned research on medical topics, land management, ocean science, and space exploration; industry similarly has contributed significantly to research on agricultural productivity, resource extraction, and manufacturing technologies. To counter concerns about the potential for sponsorships to bias knowledge production, universities have established conflict of interest disclosure requirements. Many universities offer and sometimes mandate training in responsible research while university-based research ethics boards work to preemptively identify and address concerns arising from such arrangements.

As sponsored research proliferates, freedom of information laws have emerged as a powerful mechanism to scrutinize relationships between academics who receive public funding from external partners.

More than 100 countries have some form of freedom of information laws on the books; the oldest was established in Sweden in 1766. Federal laws in the United States and Canada went into effect, respectively, in 1966 (Freedom of Information Act, updated most recently in 2020) and 1985 (Access to Information Act, last amended in 2024). Such laws have also proliferated beyond the national level, with all 50 US states and all Canadian provinces establishing their own open records laws for state/provincial and local authorities. While public universities are covered under those national and subnational laws, provisions vary, sometimes widely, and the administration of those different provisions can lead to differences in outcomes. Private universities are notionally exempt except when they receive public funds, which is the case for nearly all of them. These variations can open the door to multiple interpretations. In my area of study, some universities in other jurisdictions are willing and able to block most FOI requests, while in other regions universities and the relevant oversight offices default to full and open access, sometimes beyond what is required in the law.

Many of these laws predate the widespread advent of electronic communications such as emails, texts, messaging apps, and collaboration tools like shared drives and the cloud. In addition to creating easily searched records of interactions, these electronic communications have also enabled academic collaborations among researchers at institutions that may span multiple jurisdictions that are subject to different laws. All of this creates further layers of complexity when researchers are subject to FOI requests.

Freedom of information laws have emerged as a powerful mechanism to scrutinize relationships between academics who receive public funding from external partners.

Complying with freedom of information laws requires academic institutions to balance transparency and accountability against potential damage to the culture of open scientific exchange. The University of California, Los Angeles (UCLA) Statement on the Principles of Scholarly Research and Public Records Requests spells out this tension, calling it “a matter of great concern that faculty at public universities throughout the country are increasingly the objects” of public records requests. The statement goes on to note that “these requests have increasingly been used for political purposes or to intimidate faculty working on controversial issues. These onerous, politically motivated, or frivolous requests may inhibit the very communications that nourish excellence in research and teaching, threatening the long-established principles of scholarly research.”

Scholars and institutions engaged with research that receives corporate or other non-public funding also need thorough guidance on which materials may be subject to open records requests. Third-party and commercial confidential materials are largely exempt from FOI statutes, but it helps if exemptions are clearly identified in correspondence as such. The main area of contention is unpublished research and scholarly communications (what some call “academic research work product,” often treated as trade secrets in other settings) that form the “scientific deliberative process.”

The distinction between which parts of the research process should be protected and which should be open to the full scrutiny of both peers and the broader community is subject to broad interpretation and usually assessed on a case-by-case basis. National and state/provincial statutes offer limited direction, as they are relatively general in order to be applicable to a wide range of actors. In this situation, each academic institution is left to develop its own compliance policies and procedures. Rather than leaving it to individual researchers or the courts, academic institutions must take a leadership role in defining appropriate boundaries and promulgating that information widely.

Preparing scholars and universities to balance transparency with academic freedom

Addressing these gray areas will require universities to develop processes to clarify norms and protect academic freedom. Recognizing that academic institutions do not control open records laws—legislatures make them while courts determine their application—academic leaders should work to help individual researchers navigate FOI systems and requirements. Research-engaged institutions should begin with three practical steps: developing new standards for engaged research contracts and partnerships; providing best practices and training to scholars; and planning ahead for controversies.

Institutions and funders must develop standards for research contracts and partnerships that explicitly acknowledge the power dynamics of sponsored research—specifically, the rights and obligations of each party to influence what is done—and to identify how the researcher and sponsors will manage any conflicts of expectations and interests. Notably, some individual scholars have begun proactively disclosing all of their funding, consulting, and other engagements in public talks and on their websites. Although these statements may be comprehensive, sometimes they offer little or no insight into how any specific research activities might be influenced. Universal declarations of funding are too nebulous to clarify the complex nature of research relationships.

Many journals, such as Nature, and professional associations, including the American Association for the Advancement of Science, provide guidelines for scholars on how to disclose their interests in more meaningful ways. But this is only part of the challenge. Rather than leaving it to individual researchers to decide what to declare, when, where, and how, research institutions should develop guidelines to specify transparent and auditable disclosures that are specific to particular research activities and outputs. These should include proactive disclosure of funders, research goals, management structures for the research, and knowledge transfer plans; processes for archiving management decisions; advice on effective research communications; and clear rules on which individuals should be responsible for managing information in each project, program, or unit.

Developing and disseminating such guidelines will also have the effect of causing those engaged in the research process to think more carefully about the roles assigned to community, nonprofit, or industry partners. While researchers know where they want full discretion to make choices about their work, and granting agencies assert they want to protect that autonomy, partners may believe they should be allowed to have a voice in or even direct where the research goes. Clarification of these responsibilities will help to establish norms that should reduce uncertainty and potential conflict.

Among the areas that need clarification is how research teams determine the timing, scope, methods, and reporting of research results, particularly when findings could have some material impact on funders’ interests. This is not an abstract concern: there is widespread evidence that some sponsored pharmaceutical researchers delay or never publish results of clinical studies that raise uncertainty about the efficacy of candidate drugs. Critics also point to the way research questions have been framed to deliver favorable findings in industry-sponsored research on the role of soft drinks in obesity and the health benefits of specific fruits, chocolate, and even alcohol. But industry is not the only research sponsor that may wish for favorable findings. As patient advocacy organizations have become involved in the funding and conduct of research, sometimes with funding from drug companies, questions about their motives and conflicts of interest have arisen. Although clearer statements of academic norms and principles are a step in the right direction, explicit contracts may be warranted in the future.

Rather than leaving it to individual researchers to decide what to declare, when, where, and how, research institutions should develop guidelines to specify transparent and auditable disclosures that are specific to particular research activities and outputs.

The US Department of Health and Human Services (HHS) Office of Research Integrity has a set of questions designed to help create transparency in relationships and expectations among collaborating scientists. The office suggests an agreement that lays out the scientific issues, goals, and anticipated outcomes or products of the collaboration; the expected contributions of each participant; the allocation of rights to exploit any discoveries; and the standards for data handling. Such agreements should be explicit about the role each participant plays in defining the problem, the methods used in the research, and the ways the results will be interpreted. Granting agencies or host institutions should provide a template for disclosure and develop a system to manage a repository for these releases. Although the HHS recommendations are for agreements among scientists on a research team, the questions can also be useful in defining relationships between researchers and sponsors.

Just as importantly, the timing of releases matters. In many scientific fields there is a trend toward preregistering one’s research design to guard against defining hypotheses only after the results are known. The Center for Open Science favors preregistration, noting that it separates hypothesis-generating (exploratory) research from hypothesis-testing (confirmatory) research.

These and other measures would help limit funders’ input into and control over research direction, design, and publication of findings, while also building in accountability for researchers.

Second, academic institutions should do more to set norms and educate researchers as they consider pursuing funding from industry, governments, and nonprofits. Universities can and should more clearly define standard operating principles and procedures for proper research engagement in these circumstances. Most investigators have had conversations with funders about what they can and cannot expect from their work, and sometimes discover that funders without much experience seek to help identify the basic problem, define the models and methods to be used, and either proscribe or predefine the conclusions. Managing these conversations about boundaries is not easy, especially for less senior researchers; but, in my experience, few institutions provide much guidance. In 2023, Harvard’s vice provost for research established two ad hoc committees to develop guidelines for corporate relations: one focused on research policies and practices, the other on relationships between researchers and corporate sponsors. In another positive step, some universities have established offices that help researchers establish and manage partnerships with industry, such as the University of Michigan’s Office of Corporate Research Alliances.

Academic institutions should do more to set norms and educate researchers as they consider pursuing funding from industry, governments, and nonprofits.

Academic institutions should also educate researchers in the proper use of email and other types of electronic communication, with explicit attention to open records laws. Traditionally, many scholars have viewed email as a medium for informal interactions and banter, which can serve a legitimate role in advancing scholarship. As the UCLA statement puts it: “Scholars frequently test ideas in extreme form, explore possibilities through hypotheticals, or play ‘devil’s advocate,’ making claims they may not themselves believe in edgy, casual language not intended for public circulation or publication. These communications are frequent and diverse in nature because scholarship is a competitive and fast-paced process, requiring intensive communication among a diverse array of participants.” However, when such exchanges are made public either through FOI requests or by hacking, they can be taken out of context and used to impugn scholars’ credibility. In effect, researchers need to have situational awareness and remember that workplace-based content exchanged through any email system, or on apps like Slack, is not a personal expression but a use of public resources.

One eminently practical bit of advice from the University of North Carolina at Chapel Hill’s senior director for public records is that scholars share documents with colleagues using a shared drive like OneDrive instead of emailing versions around, so that any public release will include only the most recent version of the document. In my own case, some of the initial FOI searches stretched to thousands of pages because many versions of the same reports had been exchanged by email. All but the last version had to be removed manually, which took significant time and effort from the university privacy officer to ensure we removed only duplicates.

To help scholars, institutions should offer training and support on best practices in efficient and effective communications among researchers, their industrial or community partners, and the public. Just as resources to train scholars and citizens in how to make FOIA requests are publicly available on websites, there should be more widely available resources to proactively educate researchers about obligations and best practices for managing their files and communications and for complying with information requests. The National Association for Biomedical Research, for example, has a guide for animal researchers who may be targeted by animal rights groups. Likewise, the Union of Concerned Scientists created an advisory booklet to help scientists distinguish legitimate questioning from harassment and advise them on how to respond effectively to both. There is a clear need for individual academic institutions to take a more active role in educating their researchers about effective communications, to ensure that any problems that arise don’t erode public trust in the university itself.

There should be more widely available resources to proactively educate researchers about obligations and best practices for managing their files and communications and for complying with information requests.

Third, institutions should anticipate and plan for the types of controversies that frequently emerge from funded research projects with the potential for significant impact and heightened public scrutiny. When scholars are asked—sometimes constructively, sometimes maliciously—to account for their actions, they often are responding to public questioning from the media and funders at the same time they are being reviewed internally by faculty councils and administrative units within their institutions. To avoid ad hoc, disordered, or inconsistent responses, researchers, sponsors, and the university should have a plan in place designating who will lead the response.

Today, universities too often handle controversial cases as one-offs about a specific researcher or field, rather than seeing them as part of a systemic issue with the potential to undermine confidence in the usefulness of research findings for decisionmaking. Thoughtful structures and established procedures to steer researchers’ interactions with funders and to respond to public requests for transparency would go a long way toward assuring researchers, funders, those affected by the research, and the general public that these types of controversies are handled fairly and with appropriate deference to both the public interest and the intellectual independence of researchers.

The onus is on the academy and its partners to find a way to deliver engaged scholarship that respects and defends the unique strengths of each actor. Many of the compelling challenges facing our world require collaboration between scholars working in academic settings and people living and working in the nonacademic settings impacted by those scholars’ work. It is incumbent on all involved to manage research processes and dissemination of findings in ways that protect the integrity of the research itself. In the words of the University of Chicago Statement on Freedom of Expression, “without a vibrant commitment to free and open inquiry, a university ceases to be a university. The … long-standing commitment to this principle lies at the very core of our … greatness. That is our inheritance, and it is our promise to the future.”

Brent Blevins Makes Mars Policy in Congress

In this installment of Science Policy IRL, Lisa Margonelli goes behind the scenes of congressional policymaking with Brent Blevins. Blevins is a senior congressional staffer and staff director of the Space and Aeronautics Subcommittee, part of the US House of Representatives’ Committee on Science, Space, and Technology.

Blevins discusses his unusual path into science policy (he didn’t study science, and he wasn’t an AAAS fellow!) and what staffers in the House and Senate do in the science policy world. He also talks about the incredible experience of getting to set policy for things like sending humans to Mars, while at the same time having a staff job that can end with any two-year election cycle.

Are you involved in science and technology policy? From science for policy to policy for science, from the merely curious to full-on policy wonks, we would love to hear from all of you! Please visit our survey page to share your thoughts and provide a better understanding of who science policy professionals are, what they do, and why—along with a sense of how science policy is changing and what its future looks like.


Transcript

Lisa Margonelli: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academy of Sciences and Arizona State University.

I’m Lisa Margonelli, editor-in-chief of Issues. On this installment of Science Policy IRL, we’re going behind the scenes of Congress with Brent Blevins. Brent is a senior congressional staffer and staff director of the Space and Aeronautics Subcommittee, which is part of the US House of Representatives’ Committee on Science, Space, and Technology. Brent talks to us about his unusual path into science policy, and what staffers in the House and Senate do in the science policy world. He also talks about the incredible experience of getting to set policy for, say, sending humans to Mars, while also having a staff job that can end with any two-year election cycle. Brent, welcome.

Brent Blevins: Hi, Lisa. Thank you for having me today.

Margonelli: We’re going to start with our usual first question which is: how do you define science policy?

Blevins: Well, that’s a million-dollar question, but if we’re talking in terms of the federal government, it’s a question of tens of billions of dollars, right? I don’t know that there is a great concise definition. I kind of view it as the intersection of governance and science. How does policymaking impact the scientific enterprise and vice versa? How does science inform policymaking? And it manifests itself in a lot of different ways: through funding for federal agencies, for research grants, workforce development, the construction of federal facilities. There’s a lot of different ways to define it. Not to fall back on the clichéd Potter Stewart expression, but: “I know it when I see it.” It’s this very unique process that is often messy but is also very important both for our country and for the scientific enterprise in the United States.

Margonelli: It’s sort of a place where taxpayers have a voice, because taxpayers put up about $200 billion a year for science, and a lot of that runs through the House Science Committee and the Senate committees that deal with this. And so it is a place where the public is involved in a sort of an indirect way.

Blevins: Exactly. We meet often with members of the scientific community, and I think it’s very important that it’s kind of a two-way street. What insights do we have on behalf of the taxpayer to the work that’s going on at these federal agencies? But it also gives scientists a chance to explain why their work is important, and how the taxpayer is getting a good return on investment.

Margonelli: So, you work in this really kind of spectacular subcommittee, which is the Space and Aeronautics Subcommittee. It’s a subcommittee because it’s part of the House Science Committee. Tell us, what do you do there? For the past week, what have you been doing?

We actually don’t provide dollars to the agencies. We are a policy committee. We set policy direction, we provide guidance, we provide oversight to their activities.

Blevins: So right now what’s on our plate is a NASA authorization bill. So let me explain what that is and how Congress is structured. We actually don’t provide dollars to the agencies. We are a policy committee. We set policy direction, we provide guidance, we provide oversight to their activities to ensure that the agencies within our committee’s jurisdiction are acting within the law. And so one of the ways we do that is we write authorizing legislation, enabling legislation. And in this instance, NASA is a unique agency. They were created in 1958 as a result of Sputnik. And the Congress at the time gave NASA pretty broad authority. They were permanently authorized—to use a jargony term—at “such sums as necessary.” In other words, they were not given an upper limit to how much money they could be given.

But one of the ways that Congress asserts its oversight role is by writing authorization legislation. That’s where we provide policy direction, where we tell NASA, “Okay, you’re going to do a scientific mission to this planet in the solar system,” “You’re going to investigate this earthbound phenomenon,” “You’re going to send astronauts to the Moon and then Mars.” So that’s the bill that we’re working on right now. We’re going to be unveiling that this summer. And it’s the first time we’ve had a comprehensive NASA authorization bill since March of 2017. A lot has been happening at the agency since then. And so we’re working on this legislation quite a bit right now. It’s important to update these things from time to time just based on the work of the agencies.

Margonelli: So that authorization decides whether or not NASA should go to Mars, or whether it’s going to go back to the Moon and do things on the Moon, that sort of sets the direction and plans that then the agency carries out?

Blevins: Exactly. That’s exactly what we intend to do. And so once we set the policy, another committee in Congress, the Appropriations Committee, then funds the bill and provides the money for NASA to carry out the missions that we’ve directed them to do.

Margonelli: So, is it kind of wild that you get to work on the question of whether or not NASA goes to Mars? I mean, do you talk about it when you’re having a barbecue?

One of the great things about the House Science Committee is it’s pretty bipartisan.

Blevins: I tell people that I get to have meetings with astronauts. I get to go to rocket launches, I get to contemplate these larger questions. And it’s fascinating. One of the great things about the House Science Committee is it’s pretty bipartisan. We don’t have to wrestle with a lot of the issues, the contentious issues, you see on the news every night. We get to contemplate sort of these bigger picture ideas. And I think that’s really neat. I tell people that I get to work on NASA issues. Everyone I talk to is like, “Oh, okay, that’s pretty neat. That’s pretty cool.” And I never have a dull day in the office. I am always learning something. NASA’s always pushing the boundaries, and it forces me to keep up with what they’re doing. And that’s really invigorating and it’s really joyful.

Margonelli: So tell me just a little bit about what a typical day looks like for you.

Blevins: My day starts—I’m guessing like so many of your listeners’—with a lot of news services I subscribe to. I’m trying to make sure that I know what’s happening in the world, in terms of, say, space policy, but also what’s happening on Capitol Hill, what the national stories are, what’s happening internationally, because there’s a confluence where they all come together at times. And understanding that context is really important.

And so, in Congress, we have two sort of distinct roles. One’s legislating, which I was just talking about: writing a NASA authorization bill that will hopefully be signed into law this year. But there’s also an oversight component as well, where we’re monitoring what the agency is doing on certain things, ensuring that they’re complying with prior law, ensuring that they’re acting consistent with congressional intent. Under the Constitution, Congress writes the laws; it’s the executive branch’s job to carry them out. And so our job is to ensure that there is a smooth path there.

I meet a lot with stakeholders. I think sometimes people have a negative stereotype of Congress and imagine that it’s all lobbyists wearing three-piece suits, that sort of thing. And that’s really not the case at all. We meet with a lot of STEM advocates, university faculty. We meet with just a wide range of people. And they come to talk to us. Sometimes it’s just a briefing about the research they’re doing. Sometimes it’s to advocate for something in a NASA authorization bill.

So my days are pretty full, but I love it. I have the kind of job where I don’t mind taking work home with me, particularly when it’s topics like this.

Margonelli: So, specifically you’re a Republican staffer on the committee. Can you explain how that works when power changes?

Blevins: Yeah, sure. So committees have a majority staff and they have a minority staff. The House and Senate operate a little bit differently. In the Senate, the breakdown of the majority and minority staffs is proportionate to the makeup of the Senate writ large, the conference ratios. So right now, in the Senate, it’s 51-49 Democrats to Republicans. So whoever the majority party is, in this case the Democrats, says, “We get 51% of the budget.”

In the House, it’s a little bit different. Whoever’s the majority party gets two thirds of the budget, and the minority party gets one third. And that means that the majority has a lot more responsibility. Not only do we have more policy responsibility, but logistical responsibility, just setting up hearings and doing paperwork, and that sort of thing.

So, in the House, unlike the Senate, our entire body’s up every two years. There’s a little bit more volatility in our makeup and what could happen from election to election. Prior to the November ’22 election, when the Republicans took the majority, I worked the prior four years for the committee in the minority capacity. So I was part of that one third.

It’s not really a traditional federal job in the sense of, if you have good performance, you have a certain amount of security.

So, just an example: last year we doubled our staff, and we took on a lot more responsibility. So when you have this role, you’re always sort of cognizant of the election and what could happen. Currently in the House, I believe the ratio is 217 Republicans to 213 Democrats. So, I mean, no one can claim to know what will happen in November; either outcome could happen. It’s entirely possible that the parties could flip again. And then, frankly, a lot of our staff will be searching for a job.

Margonelli: On the one hand, you’re super, super educated, and highly dedicated, and working in this really particular system. And on the other hand, every two years you could be out. You don’t have tenure.

Blevins: That’s exactly right. It’s not really a traditional federal job in the sense of, if you have good performance, you have a certain amount of security. And that’s sort of the inherent nature of the role. And you don’t take it on lightly, I think. But I’ve had the experience… I worked for the House Agriculture Committee previously, and I lost my job as a result. And thankfully, I landed on my feet. But I think particularly when we hire people who don’t have as much political experience, it’s important to say, “Okay, I can guarantee you’ll have a job for 18 months, 20 months. I can’t promise you anything beyond that.” So it does create an interesting dynamic.

Margonelli: That’s really interesting. So let’s talk about how you got into this job. How do you end up with a career path that gets you to both be planning to go to Mars and also unable to know where you’re going to be in two years?

I came to DC, and I knew two people. That’s no hyperbole.

Blevins: Well, I’ll share my story, because I have a sort of a circuitous path. I’ve always been interested in science to some extent. As a kid I had this incredible knowledge of space. I could tell you how far Neptune’s orbit was from Earth, and I could tell you various minutiae. So I was always interested. I went to college, I started in computer science, and my GPA after my first year was 2.05. That zero in there is very important. It wasn’t working. I got into history, I picked up political science as a second major, and I got a master’s in public administration. So I was kind of on that path. When I was in grad school, I got very interested in the political process, like the electoral process. And, “Oh, I’m going to go do campaigns.” And it’s one thing to go help out a campaign for a day and think, “Oh, that’s so great. That’s so invigorating.” It’s another thing when that’s your entire life. And I quickly realized, “Okay, I think I’m more interested in policy than getting people elected.”

So I actually worked for a guy in the Virginia General Assembly, and I was like, “Okay, I think this is more to my liking, but I want to try to do this in Washington, DC if I’m going to do this.” I came to DC, and I knew two people. That’s no hyperbole. And one of them helped me get an internship. It eventually led to a job at the House Agriculture Committee. And that was great. But one day I was walking through one of the congressional buildings, the Rayburn building on Capitol Hill right across from the Capitol, and NASA had this exhibit. It was called NASA Spinoffs. And it’s one of the outreach activities that they do, which helps share the work they’re doing and how it affects you in your daily life.

And I just started talking to people. And I met somebody who worked for the House Science Committee, and I was like, “Wait, you can do that? I had no idea.” And I always sort of had in the back of my mind, I was like, “I would love to go work there. That’s my end goal, dream goal.”

I lost my job at the Agriculture Committee. I went to go work for another committee. And one day my boss said, “Hey, how would you like to go work in the Senate?” And I asked, “Well, are you trying to fire me?” He wasn’t. I think he was very cognizant that having Senate experience is good too, because if you’re not in DC and you don’t do this work a lot, you might not appreciate that there are differences between how the House and Senate operate. If you’re in the Senate, you’re elected every six years; in the House, every two years. I mean, it creates different dynamics.

So I went to go work in the Senate. And my first boss, the senator, had been appointed to his seat in Alabama, and he ended up losing his seat within a few months. And I hadn’t really anticipated that. So I was out of a job. And I got very fortunate: I was scooped up by the then Senate majority whip, John Cornyn of Texas. He actually hired me to do agriculture work. And during the interview they said, “We’re thinking about giving you the space portfolio,” which is a big deal for Texas for a number of reasons. And I just thought, “Oh, my gosh, that’s it. This is how I can reorient.”

There’s not that one correct, well-defined path.

And so I did that, and I was very content. I really liked it. And I knew my former boss from the House Agriculture Committee was probably going to be taking over the Science Committee as its top Republican. And so I just reached out and said, “Hey, I’d be really interested if you were looking for someone to work on space and other areas.” And it worked out. And so I’m not a scientist, I’m a liberal arts person, but it still allowed me to do my job. And so again, if you look at our committee staff, we have a wide range. We have engineers, we have people with very technical backgrounds. We have people who worked in industry who have come to work on the Hill. There’s not that one correct, well-defined path.

Margonelli: That’s interesting, because a lot of times on this podcast we talk to scientists who’ve made the crossover. But it’s really interesting to hear that you made the crossover into science policy from a background in political science and history, because that brings a whole set of information, and values, and ways of processing information and thinking about things that is also really important in making science policy. It’s not merely science experience. Can I ask why you wanted to go back to the House, even though it’s a little less secure than the Senate?

Blevins: It’s a great question. So, I was making this decision in early 2019. And the House had flipped: Republicans had gone from being the majority to being in the minority. So not only was I going from the Senate to the House, I was going from the Senate majority to the House minority. If you had a pecking order, I was going from the top to the bottom.

But one of the reasons I wanted to come back is, one, I knew my boss and I knew he was going to be outstanding in the role, and that’s absolutely proved to be the case. But it was also a chance to be at a committee that can really focus in on issues in a way that you can’t if you work for an individual member of Congress or senator. There, you just have these broad portfolios. When I worked for Senator Cornyn, I worked on appropriations, I worked on agriculture, space, I did post offices, I did some veterans issues. I would be trying to figure out 10 minutes before a meeting what the underlying issue was. So I really valued that opportunity to come to a committee where you can really just focus in on some smaller, more specific areas, and be able to do a deeper dive. And I think that was more conducive to my skill set than having to have knowledge a mile wide and an inch deep, so to speak.

Margonelli: That’s really interesting. I hadn’t really thought about what that meant for individual staffers and what you needed to be able to answer to.

So here we are, we’re winding up with the last question. What are the big questions that keep you doing this work that excite you when you go to the office in the morning?

To be able to contemplate big questions like that, I think is a real privilege.

Blevins: I would say there are two tracks to that. A lot of the questions that I think about and really deal with are—I don’t want to sound like I’m engaging in hyperbole—very existential questions in terms of space exploration. What is humanity’s future in the solar system? Are we going to establish a colony on the Moon or Mars, that sort of thing? Are we laying the groundwork to eventually be a multi-planet species? To be able to contemplate big questions like that, I think, is a real privilege.

One of the pieces of legislation I’m most proud of working on—it was passed and signed into law in October 2020—had to do with space weather. I’m guessing your listeners probably have a higher level of knowledge about space weather and what that entails. A lot of the population doesn’t know; they don’t necessarily understand the solar cycle and that there are real potential implications. If there’s a big coronal mass ejection from the Sun, it can have negative repercussions for our satellite fleet. In this bill, we really helped to define federal roles and ensure better coordination between federal agencies, not only on forecasting but on what the response would be. And so these are questions that are, I think, very important for our future.

I think the second track I would mention is: How are we ensuring the success of the American scientific enterprise? Because it’s a competitiveness issue. Whether it’s ensuring that we’re engaged in the level of basic research to build on for the future, or having an adequate workforce in these areas. These are very important questions, and we have international competitors. I know all the listeners are aware of that. Every hearing we’ve done in our committee this Congress has had a China focus in some direct or indirect manner. And I’m not saying China’s the only thing we should be concerned about, of course, but it’s one of the most imminent threats right now. I’m not saying the American ecosystem is perfect in every way, but I feel strongly that our way of doing things, our establishment of norms and standards, is far superior to anyone else’s.

So those are the two things that motivate me, that keep me going, and coming to work, and reading through things every morning. That’s really what drives me.

Margonelli: Thank you so much. This was a great conversation. Thank you, Brent.

Blevins: No, Lisa, thank you. I very much appreciate the chance to speak with you today. I do want to make the offer that hopefully you can put my contact information out there. I want to hear from people. I have an open inbox, and I’m happy to answer questions. I’m happy to talk with anyone who might be interested.

Margonelli: Thank you. I want to ask you one last question. How many emails do you get a day?

Blevins: I am the victim of… I think I subscribe to too many news clip services. I actually really appreciate it when I get the weekly Issues email. You did not ask me to plug this! So it’s great when I get those weekly, but when I get breaking news alerts, it really adds up between Politico and Roll Call and The Hill and Bloomberg, all these different news agencies. And sometimes it does sort of help to just put everything in a folder. I get hundreds a day. I’m afraid to actually try to count them, because I would probably just become very depressed if I did so.

Margonelli: Thank you so much. That’s amazing. That is truly service to the nation.

If you want to add to Brent’s email inbox, you can reach him at (EDITOR’S NOTE: email redacted in the transcript to reduce spam, but please listen to the episode to find his email if you would like to contact Blevins!). And if you want to try emailing us, you can contact us at podcast@issues.org, or by tagging us on social media using the hashtag #SciencePolicyIRL.

If you’re listening to this, you’re probably pretty passionate about science policy. Please visit issues.org/survey to participate in our survey of the science policy community. We really want to know who you are.

Please subscribe to The Ongoing Transformation wherever you get your podcasts. Thanks to our podcast producer, Kimberly Quach, and our audio engineer, Shannon Lynch. I’m Lisa Margonelli, editor-in-chief of Issues in Science and Technology. Thank you for listening.

Needed: A Vision and Strategy for Biotech Education

It is consistently true that as new career fields and business centers emerge, a portion of the population is left on the sidelines. This holds especially true for the biotechnology, medical technology, genomics, and synthetic biology investments we see today. Urban centers, which often have a high concentration of university graduates, are primed for success in the emerging bioeconomy. But even there, career and educational opportunities are often out of reach for young women and people of color. In rural communities and in regions that have traditionally supported fishing, forestry, farming, and mining, residents are less likely to track into careers in science, technology, engineering, or mathematics.

In “A Great Bioeconomy for the Great Lakes” (Issues, Winter 2024), Devin Camenares, Sakti Subramanian, and Eric Petersen report on some targeted and hyperlocal interventions that stimulated a bioinnovation community in the Midwest and Great Lakes areas. They found that connecting students in regional high schools and local colleges with experts in industry and community labs increased the students’ appetites for further involvement. What a boon for the educators and young innovators who successfully discovered this opportunity.

In our work through the BioBuilder Educational Foundation, we can attest to the need for deliberate actions to overcome specific regional obstacles. Since 2019, BioBuilder has been engaged with high schools in East Tennessee. After several years of laying a foundation in this rural region, BioBuilder is now integrated every year into biology classes in secondary schools spanning several counties. It is also integrated into some of the region’s BioSTEM pathways, which Tennessee uses to bring early-college access and relevant work experience into career and technical education classrooms statewide. BioBuilder has built partnerships with local and federal funders to expand this work, and the success has spurred a much larger set of activities in the region, including post-secondary tracks at East Tennessee State University and local business opportunities such as the development of the Valleybrook Research Campus.

It must be recognized, however, that such hyperlocal approaches to building bioeconomies are not an ultimate solution. Regional approaches must be complemented with systemic educational change if the nation is to achieve the “holistic, decentralized, and integrated bioeconomy” that Camenares, Subramanian, and Petersen aim for.

Regional approaches must be complemented with systemic educational change if the nation is to achieve the “holistic, decentralized, and integrated bioeconomy” that Camenares, Subramanian, and Petersen aim for.

The K–12 public school system in the United States is an underutilized lever of change in this regard. With over 3 million students graduating each year, the nation is failing its children and collective future by not offering an on-ramp to sophisticated job sectors that does not require higher education. Public schools fulfilled the nation’s workforce needs in the past, diversifying the talent pool with an equitable geographic and racial distribution. Public schools fully reflect the nation’s diversity, and high school is the last formal education received by between one-third and one-half of all residents. Public schools operate in every state and so provide an established infrastructure for engaging every community.

With respect to the emerging bioeconomy, a vision and strategy for public education is needed. And it could be simple: provide easy-to-implement content that modernizes the teaching of life science so that millions of young people can graduate from high school with enough content knowledge and skills to join the workforce, spurring development of the bioeconomy everywhere.

Founder and Executive Director

BioBuilder Educational Foundation

National Program Coordinator

BioBuilder Educational Foundation

The “one-size-fits-all” curriculum common in many regions of the United States may fall short of capitalizing on local differences when building a successful bioeconomy, argue Devin Camenares, Sakti Subramanian, and Eric Petersen. The authors consider how much programmatic structure helps or hinders the seeding of locally specialized educational initiatives. In this model, the authors propose that the uniqueness of a region is the key to unlocking local bioeconomic growth, turning current challenges into future opportunities.

This approach has proven fruitful in the Great Lakes region and beyond. For example, Beth Conerty at the University of Illinois Integrated Bioprocessing Research Laboratory takes advantage of its Midwest location to offer bioprocessing scale-up opportunities. Similar to the approach the authors propose, the facility couples science with educational opportunities for its students. Also, Scott Hamilton-Brehm of Southern Illinois University Carbondale founded a program called Research, Engagement, and Preparation for Students, which promotes accessibility, outreach, and communication in science, technology, engineering, and mathematics. The program’s strong student engagement grew into a company called Thermaquatica that converts biomass to value-added products including biostimulants and biofuels.

The uniqueness of a region is the key to unlocking local bioeconomic growth, turning current challenges into future opportunities.

Elsewhere, Ernesto Camilo Zuleta Suárez led several outreach and educational programs to prepare leaders for the future bioeconomy through the Consortium for Advanced Bioeconomy Leadership Education, based at Ohio State University. In Tennessee, the Oak Ridge Site Specific Advisory Board serves as a more policy-focused example, wherein student board members are strategically invited to take part in maintaining the local environment of the Oak Ridge Reservation, which still faces challenges from legacy wastes. Additionally, the Bredesen Center at the University of Tennessee established a strong program to teach students to incorporate outreach and public engagement into their scientific careers.

Once established, these locally cultivated STEM programs can gain traction through science communication, which is an integral component in the field of synthetic biology (SynBio) and a determinative step of the scientific method. To highlight some examples, we have the International Genetically Engineered Machine (iGEM) and BioBuilder podcasts by Zeeshan Siddiqui and his team, the Mastering Science Communication course led by Larissa Markus, and the iGEM Digest authored by Hassnain Qasim Bokhari and Marissa Sumathipala. More recently, Tae Seok Moon has launched the SynBYSS: SynBio Young Speaker Series. And the Science for Georgia nonprofit hosts free science communication workshops and offers opportunities to share science with the community. Science communication not only educates the current generation but also transfers knowledge to future generations, thereby ensuring the sustainability of science.

Perhaps most important, these efforts are built on a student-centered approach designed to offer increasingly accessible means for students to participate in STEM education and related activities. The Global Open Genetic Engineering Competition and BioBuilder are already broadening that access. Spurring interest and engagement in STEM, even at the middle or high school levels, can accelerate the development of career interests, especially in a field as interdisciplinary as synthetic biology. Such experiences may even spark interests beyond typical STEM careers and help catalyze a scientifically literate society. This educational proposition invites a people-focused approach as opposed to a project-focused one; the former is the key ingredient that will make the difference.

Mentor

iGEM

Embracing the Social in Social Science

To begin thinking about why all the sciences should embrace the social in social science, I would like to start with cupcakes.

In my research, context is a recurring theme, so let me give you some context for cupcakes as metaphor. A few months ago, when I was asked to respond to an article in this magazine, I wrote: “In the production of science, social scientists can often feel like sprinkles on a cupcake: not essential. Social science is not the egg, the flour, or the sugar. Sprinkles are neither in the batter, nor do they see the oven. Sprinkles are a late addition. No matter the stylistic or aesthetic impact, they never alter the substance of the ‘cake’ in the cupcake.”

In writing these sentences, I was, and still am, hopeful that all kinds of future scientific research will make social science a key component of the scientific “batter” and bake social scientific knowledge, skill, and expertise into twenty-first-century scientific “cupcakes.”

But there are tensions and power differentials in the ways interdisciplinary science can be done. Most importantly, the formation of questions itself is a site of power. The questions we as a society ask science to address both reflect and create the values and power dynamics of social systems, whether the scientific disciplines recognize this influence or not. And some of those knowledge systems do not embrace the importance of insights from the social sciences because many institutions of science work hard to insulate the practice of science from the contingencies of society.

Moving forward, how do we, as researchers, develop questions that not only welcome intellectual variety within the sciences but also embrace the diversity represented in societies? As science continues to more powerfully blend, overlap, and intermix with society, embracing what social science can bring to the entire scientific enterprise is necessary. To accomplish these important goals, social concerns must be a key ingredient of the whole cupcake—not an afterthought or decoration, but among the first thoughts.

The trust issue

Fundamentally integrating social scientific knowledge and perspectives into everything scientists do is essential to building societal trust in scientists as well as in science itself. As someone who studies technological change, I believe this moment looks different from the past. For instance, the National Science Foundation–supported 2022 General Social Survey found an appreciable drop in the public’s “overall confidence in the scientific community” compared to 2021. The Pew Research Center also discovered a decline in public confidence in both scientists and medical scientists from November 2020 to December 2021. These declines are not solely related to the COVID-19 pandemic.

In part, the decline in trust may be due to the increased murkiness of the boundaries between science and the public. Many do not see scientists as arbiters of truth because scientists no longer have exclusive access to the various types of evidence deployed to make scientific arguments. There are elements of insight in this shift. For example, citizen scientists have done work on environmental racism and biomedical research that would previously have been the exclusive domain of scientists—without them, those concerns might not have been recognized at all.

The trust issue may also have roots in the gap between the promises of science and the mundane realities of what science often delivers. Among those who conduct research and who have received, been supported by, or helped distribute federal and private dollars, there is an understanding that scientific research can be risky and may not deliver expected or transformative results. But overall, most individuals and institutions involved in the enterprise believe that it is making a difference and is worth the investment.

However, if this research is viewed from outside the enterprise, especially considering the big promises that science communicators and the for-profit scientific industry have promoted, it’s possible to understand why some people might be disappointed in some of the outcomes. This may speak to larger questions about whether people feel that science is connected or relevant to their lives. The scientific enterprise overall needs to grapple with why people might distrust or be skeptical of science despite living in an amazing world made possible by human creativity and ingenuity, which is partially rooted in science and technology.

The formation of questions itself is a site of power. The questions we as a society ask science to address both reflect and create the values and power dynamics of social systems, whether the scientific disciplines recognize this influence or not.

In the research I have done about Black people’s relationship to science and technology, distrust runs deep. Many Black people feel exploited by scientists, and the historical record supports this sentiment. Science motivated and produced the thinking that brought us phrenology, eugenics, Henrietta Lacks’s unacknowledged cell line, racially biased algorithms, and facial recognition systems that do not see Black and Brown faces.

On top of that, too much science—and often pseudoscience—has been deployed to understand what was once called the “problem” of Blackness. As early as 1898, sociologist W. E. B. Du Bois, in his article in The Annals of the American Academy of Political and Social Science titled “The Study of the Negro Problems,” attempted to get science to not see Blackness and the Black “condition” as a problem, but as a set of social challenges precipitated by the long history of racism. Many others have tried to destabilize the well-worn narrative, but efforts to reinscribe the “Blackness as a problem” characterization—seen in more recent examples like the Moynihan Report and the Reagan-era War on Drugs—continue to recycle. It is also important to note that most Black folks are not particularly interested in the question. If this is true, why has it been necessary to keep asking it? Moreover, what kind of science could all this effort produce if it began from a position of equity? A position disinterested in proving the inferiority of Blackness and instead invested in ameliorating a set of institutionalized social conditions could have benefits for everyone regardless of race, gender, sexuality, or ethnicity.

Even though the scientific professions have, to some extent, found their way out of these discriminatory caverns reformed and repentant, the material impact is still felt by Black people. And it’s important to recognize that this is not solely a Black experience. In this regard, I fully understand where people who feel marginalized in science because of their identity are coming from when they say that science does not speak to them, for them, or with them.

Residues of inequity

So how can science—and this includes social science—do better? In part, the scientific enterprise needs social science’s help to be more reflective about science’s messy past before moving forward. Research must be done to understand the longitudinal impact of the residues of inequity.

What do I mean by the residues of inequity? Most are familiar with the big moments when science has done people wrong: the Tuskegee Study of Untreated Syphilis in the Negro Male or the use of Henrietta Lacks’s cells are prime examples. But much less time has been spent thinking and researching the moments when people feel that science is not interested in their concerns, their questions, or their lives. These are much smaller moments: a few minutes in a doctor’s office that fail to diagnose a loved one’s cancer, or the clinical use of a fingertip oximeter known to mismeasure when used on people with darker skin. At the same time, recent data shows a precipitous rise in maternal mortality for non-Hispanic Black women during pregnancy and childbirth.

In my study of Black people’s relationships with technology, it is clear that these seemingly benign oversights and omissions can add up. These sandy residues can make the gears of scientific trust move slowly and undermine efforts to regain that trust. For some Black people, and others who feel marginalized, science is suspect. Let’s focus on these residues.

In the 1960s and 1970s, artists produced many dreamy images celebrating the idea of living on a thriving space colony or settlement. These images looked pretty amazing. But when I look closely at many of them, I primarily see white, well-off, seemingly cisgender people, with only an occasional hint of racial integration. This portrayal highlights what those images subtly implied: that space was a refuge from the contentious issues of Earth, from armed conflict and environmental degradation to lousy neighbors. Going to space was a way to escape a troubled planet and start anew. But another reality was that not everyone could go. Some would certainly be left behind.

As a child of the 1970s, I saw only white astronauts, although I did have a Black G.I. Joe doll. I also remember wondering if any people who looked like me were part of the space program. For me, this major American technoscientific effort planted an early seed of skepticism and distrust of science. Such feelings are residues of inequities—each disconnected and perhaps easily enough remedied—but over a lifetime they have the longitudinal effect of reducing one’s trust in the technoscientific complex.

And they do not stop. Recently, my wife asked me in a text: “Have you seen this?” It was a New York Times article about the exploitation of Black children in the development of RSV vaccines. According to the article, which quoted from a report published in the digital science magazine Undark, “In the 1960s, some of the first and youngest subjects to receive experimental shots, in a clinical trial of early attempts to develop RSV vaccines, were Black and poor children, some in foster care. And though questions remain about what parents knew, ‘archival documents housed at the NIH suggest that parents did not give informed consent—or in some cases, any consent at all—for their children to receive the largely untested shot.’” Two of these children died, and part of one child’s lungs was removed and shared with the scientific community—for the good of science, of course.

In the article, New York Times columnist Charles Blow explains the effect of these residues perfectly when he writes that the “lack of surprise” among family members in learning about the vaccine’s likely role in the children’s deaths “is the scar tissue that Black Americans have built up—the knowledge that the worst is always possible. The mind and spirit continually make space for it, forever hoping, but preparing contingencies for hope’s inevitable betrayal.”

These residues are evident in a recent Pew Research Center survey of Black adults that found they see science and engineering as among the least welcoming to Black people of nine professions listed. Survey respondents with postgraduate degrees were even less likely to view scientists and engineers as welcoming to Black people. This is a damning result. One would expect those with advanced degrees to view science and engineering as welcoming to Black professionals, but they did not. These results may suggest that the highly educated Black respondents had developed these perceptions from first-hand experience.

These conclusions are disheartening because there are so many recent and historical examples of Black people successfully pursuing scientific and technical careers and participating in work that has had tremendous impact in many areas. Hidden Figures—the book by Margot Lee Shetterly, which was made into a film in 2016—revealed how Black women mathematicians at NASA played crucial roles in the early years of the space program. It is an amazing and compelling story that portrays Black contributions to science and technology. It is a story that I love. There are many others, such as Walter Lincoln Hawkins, a pioneer of polymer chemistry, and Gladys West, an early developer of satellite geodesy models, whose transformative work has shaped humans’ understanding of the world.

Sharing the stories of Black scientists’ contributions is important to shaping the narrative around who does science. But nonscientists also interact with science and technology every day. In my work, I developed the concept of Black vernacular technological creativity to recover and create space for understanding positive and optimistic Black engagements with science and technology. For example, hip-hop came from music enthusiasts who, based on their social experiences, cultural beliefs, and acoustic sensibilities, decided to redefine turntables and LP records from devices for listening to prerecorded sound into instruments used to create a new musical genre. Understanding hip-hop from social science perspectives creates opportunities to embrace this musical art form and understand how creative use of science and technology emanates from its embeddedness within society and culture.

But even these efforts do not fully rinse the residue of scientific inequity off Black bodies. What does it mean when the sciences are not concerned with your everyday existence? 

“Guided missiles and misguided men”

In 1967, while speaking at Stanford University about supporting the country’s poor, Martin Luther King Jr. leveled a biting critique of the space effort: “If we can spend $35 billion a year to fight an ill-considered war in Vietnam, and $20 billion to put a man on the moon, our nation can spend billions of dollars to put God’s children on their own two feet, right here on earth.”

King had a host of meaningful and prescient things to say about the relationships between science and society. In his 1967 book Where Do We Go from Here: Chaos or Community?, he wrote:

We must work passionately … to bridge the gulf between scientific progress and our moral progress. One of the great problems of mankind is that we suffer from a poverty of spirit which stands in glaring contrast to our scientific and technological abundance. The richer we have become materially, the poorer we have become morally and spiritually.… When scientific power outruns moral power, we end up with guided missiles and misguided men. 

Much of King’s writing and speaking called upon the scientific community to do a moral gut check by pondering if the goal of science is to create more things that destroy the planet or to build knowledge that supports the sustenance of life.

How can science embrace the social and ask equitable questions—not only for the good of science but also for humanity? I believe that, first, scientists and policymakers must end attempts to separate science from society by insulating scientific research from the rest of human endeavors. By embracing the social in social science, all kinds of scientists can conduct research with, for, and about people to coproduce scientific knowledge that responds to their pressing needs.

The challenge is for those invested in the future of the scientific enterprise to think deeply about how social research can aid in traversing the chasm between fundamental and applied scientific research.

This is not a particularly radical ambition. It is about doubling down on science in the service of humanity—and life and society—instead of science in service to itself. In terms of the cupcake metaphor, my goal is to get the scientific enterprise to consider how using social science as an egg rather than a sprinkle can help redirect research toward social relevance and social conditions to rebuild trust in science.

Radically inclusive questions

One way to create pathways to make the social a constituent element of all scientific research is to expand who gets to ask and frame the questions. We as a society need to think, in a big way, about what would be possible if the questions science addresses are coproduced with social scientists, affected communities, and other stakeholders. Instead, coproduction of science often means bringing others into the conversation once the research question has already been asked—sort of like putting sprinkles on a cupcake.

But what would it look like to rethink the questions and the process of question formation in a radically inclusive way? There is real power in forming the questions that scientific research asks. This is an important step to create and produce an equitable body of scientific knowledge in which those affected by the research are not on the outside or forgotten, nor solely on the inside as subjects or specimens, but equal actors in the work to be done.

How would science change if affected people were part of research question formation? How would scientific trust be improved if partnering with communities to build research questions became standard? Mostly, I am interested in asking all people—because everyone engages with science in some way—how to improve the pursuit of scientific inquiry and the production of scientific knowledge.

Making community involvement a regularized part of scientific research is a useful starting place. The challenge is for those invested in the future of the scientific enterprise to think deeply about how social research can aid in traversing the chasm between fundamental and applied scientific research. Centering the social and human condition can produce new questions to expand the scale and scope of research.

Citizen science, mentioned briefly above, is another place where new participants are producing socially relevant scientific research within intellectual spaces and geographic places that academic and institutional science can overlook. Extending the work of citizen science and deliberately connecting it to the open science movement can allow more people to access much more data to construct, collate, and distribute scientific evidence. To support this work, policies such as those announced in the 2022 White House Office of Science and Technology Policy memorandum “Ensuring Free, Immediate, and Equitable Access to Federally Funded Research” can significantly diminish barriers to science and increase trust. Doing so can enable those with limited access to scientific research findings—not only in the United States but potentially throughout the world—to gain opportunities to examine, study, and contribute to research once hidden behind paywalls, embargoes, and a host of other restrictions.

Enabling marginalized communities to participate in scientific knowledge-making, and giving them access to research studies and results, can only increase their trust in science. Allowing a broader and more diverse group of people to contribute to science relevant to their social conditions will help improve trust between those who traditionally conduct the research and those who feel they are simply subject to scientific recommendations.

Building the foundations for change

These scientific engagements cannot be spoken into existence; people in all parts of the conversation must commit to building networks of invested actors working to use science to make the planet a more just and equitable place for all of its inhabitants. Building networks capable of engaging many stakeholders around the translation and application of knowledge to the world’s problems can create fertile places to intervene, to ask questions, and to shape science and technology’s relationship to society. What can this look like at a practical level? In this regard, I would like to highlight two networks of which I am a member.

The first is the Lewis Latimer Fellowship Program, sponsored by the Edison Awards and designed to support Black innovators to build a brighter future. The fellowship is named in honor of Lewis Latimer, a Black inventor who collaborated with both Alexander Graham Bell and Thomas Edison. The fellowship brings together a transdisciplinary team and leverages the collective intelligence of the fellows. Scientific, technical, and social synergies are the foundation of a network geared toward financing and creating businesses focused on making sustainable change. 

For example, science fiction writer Soton Rosanwo is a Latimer fellow who is using complex mathematical models to redesign insurance to cover outsized risks like climate disruption; Asegun Henry of the Massachusetts Institute of Technology has developed new ways to store energy at high temperatures; Lisa Dyson founded Air Protein, which extracts elements from the air to produce edible protein; and Ian Randall founded Maglev Aero, which developed a new aircraft propulsion system.

A lot of scientific efforts focus on supporting basic research, but this one is about building networks of people from academia, entertainment, and industry who wouldn’t otherwise be together in one space and connecting them to other networks that can provide capital and expertise. The network offers substantial support, but it also creates openings to ask questions while a technology is still being shaped. Moreover, part of the strength and value of this network is that it embraces the social as central to its collective intellectual, scientific, and technical outputs.

A second network I belong to is the Digital Inquiry, Speculation, Collaboration, & Optimism (DISCO) Network, which brings together researchers, artists, technologists, policymakers, and practitioners to envision an alternative and inclusive digital future. Over the past two summers, members of the DISCO Network authored two books, Digital Optimism and Technoskepticism. The network is using both optimism and skepticism as lenses on the sociotechnical future, with the aim of producing important questions and pathways toward alternative futures.

This network supports a multigenerational group of scholars focusing on the way society interweaves—for good and ill—race, gender, sexuality, disability, and other forms of difference into the digital platforms that mediate contemporary life. By better understanding society’s evolving social configurations, the network hopes to supply rich evidence rooted in social science that will inform how humans can live together more equitably and justly.

Networks can have powerful positive effects, but it is also necessary to think about ways to institutionalize change. Traditional scientific funding is more oriented toward supporting individual investigators than building networks, especially across disciplines, spaces, cultures, and time. At a high level, new structures and funding efforts are needed to build collaborations that imagine how to make life better for everyone on the planet.

At the meta level, the actual practice of science and its incentive structures, which currently reward patents, drugs, and publications, must change to value outcomes like improving human longevity and mitigating climate change. Instead of trying to disaggregate science and human life, the scientific enterprise needs to understand that questions concerned with how a society functions should be fundamental to the research it pursues.

Producing equitable science necessitates leaning into the social. It is no longer good enough to use small doses of social science to inoculate research in the natural sciences and allow it to build up antibodies and develop immunity to social realities.

At the local level, community-engaged research is a concept that many scientists subscribe to but do not necessarily practice. Scientists must have clearer incentives and formalized training to really involve themselves in the coproduction of knowledge with stakeholders. And that must become a norm that is sustained through implementation science: the scientific study of methods and strategies that facilitate the uptake of evidence-based research by practitioners, scientists, and policymakers. Organizations such as the Transforming Evidence Funders Network are already building foundations for this shift. The scientific enterprise must not only invest in studying how to better coproduce science, but should also develop models and tools to implement this work in a systematic fashion.

Federal agencies such as the National Science Foundation (NSF) are building the foundations to move in a positive direction. In its fiscal year 2024 budget request, NSF described its Create Opportunities Everywhere approach, which “focuses on expanding diversity, equity, inclusion, and access in STEM by including underrepresented and underserved individual, institutional, and geographic characteristics.” By addressing research equity, building capacity, fostering collaborations and partnerships, and supporting the next generation of researchers, the initiative seeks to address the problem referred to as the missing millions: “the difference between the demographics of the research community and the demographics of the nation.” These diversification efforts, embracing the contextual and substantive value of social science, will enable all of science to build a better world in which humanity can thrive.

Leaning into the social

Science and scientists live in a quickly changing world, which requires a rhetorical shift from talking about science “and” society and science “for” society to science “with” society. Producing equitable science necessitates leaning into the social. It is no longer good enough to use small doses of social science to inoculate research in the natural sciences and allow it to build up antibodies and develop immunity to social realities. This is certainly not to argue that the scientific enterprise should move away from basic and fundamental scientific inquiry; rather, the UNESCO Recommendation on Open Science can be a model for embracing the social. That document proposes that scientific knowledge should not only be widely accessible and easily shared, but that the production of knowledge should be inclusive, equitable, and sustainable. I am moved by the recommendation’s specific commitments to quality and integrity, collective benefit, equity and fairness, and diversity and inclusiveness. This approach demands more than supporting and doing better science or prodding scientific institutions to make commitments to society; it impels us to reshape, reconfigure, and reorient science to face society with the explicit intent of serving society well. By actively situating society at the center of scientific endeavors to produce research that is reflective of and responsive to the human condition, these goals can produce science that is in the collaborative service of society.

Embracing the social in social science and connecting scientific research to everyday life can also support positive steps to regain the public’s trust in science and build it for the future. If scientists are in the business of truly making the world a better place for life in its myriad forms, it seems like a good idea to embrace and champion science that builds sustainable pathways for life to not only survive, but thrive.

Bringing equity to this process is not easy. The history of science itself—its traditions, beliefs, and institutions—makes fundamental change quite challenging. A substantial dose of epistemic humility, from all of us, is needed to embrace alternative and new ways of knowing. But I believe the collective scientific community is up to the task, and I look forward to working with you all to chart a new pathway forward.

Watcher

At first, there was nothing to do but watch.
For days, before the trucks arrived, before the work
of cleanup, my brother sat on the stoop and watched.

He watched the ambulances speed by, the police cars;
watched for the looters who’d come each day
to siphon gas from the car, take away the generator,

the air conditioner, whatever there was to be had.
He watched his phone for a signal, watched the sky
for signs of a storm, for rain so he could wash.

At the church, handing out diapers and water,
he watched the people line up, watched their faces
as they watched his. And when at last there was work,

he got a job, on the beach, as a watcher.
Behind safety goggles, he watched the sand for bones,
searched for debris that clogged the great machines.

Riding the prow of the cleaners, or walking ahead,
he watched for carcasses—chickens mostly, maybe
some cats or dogs. No one said remains. No one

had to. It was a kind of faith, that watching:
my brother trained his eyes to bear
the sharp erasure of sand and glass, prayed

there’d be nothing more to see.



Innovative, Opportunistic, Faster

It is safe to say that research into the production, distribution, and use of energy in the United States has emphasized the technological over the social. Let’s be clear: this focus has had its successes. We see physical improvements today in our homes and offices and in the growth of renewable sources in large part due to research and development investments begun in the 1970s. In some cases, these efforts were paired with inquiries into the economic, demographic, and behavioral contexts surrounding the technology in question. But this kind of comprehensive, multidisciplinary approach to our energy system has been rare—at least until recently.

As Evan Michelson and Isabella Gee demonstrate by example in “Lessons From a Decade of Philanthropy for Interdisciplinary Energy Research” (Issues, Winter 2024), the questions that social scientists, policymakers, the media, and consumers might have about the energy system extend far beyond resistors and wires. These questions are unwieldy. They are also challenging for researchers accustomed to working in their silos. For example, many energy scholars are unfamiliar with our complex housing, property, utility, and household practices and their regulatory history. Likewise, social scientists have been sidelined not only because of their disciplinary silos and limited engagement with engineers and scientists, but also because of historical underinvestment in their methods.

Unfamiliarity has practical implications, such as not knowing which data are available, how to collect them, and whether the indicators these data represent are valid and aligned with the underlying concept in question. Put simply, humans—or more specifically, our understanding of humans and their energy use—are a missing link in energy research.

The questions that social scientists, policymakers, the media, and consumers might have about the energy system extend far beyond resistors and wires.

Enter philanthropy. Michelson and Gee rightly point out the critical role of philanthropic funders, grounded in their mission to improve social conditions. But they also note how philanthropy offers a unique vehicle compared with the public sector’s statutory restrictiveness and the private sector’s profit motivation. Philanthropy can be innovative (funding risky propositions with potentially large societal benefit), opportunistic (targeting questions and researchers that established methods and institutions have excluded), and, quite frankly, faster and nimbler, along with being more altruistic.

But philanthropy, and in turn its reach, is limited. In the broad and still-murky field of energy and its socioeconomic soup, there are few philanthropic energy R&D funders, and their often very limited budgets compete with foundations’ other pressing social program allocations. Federal funding’s crowding out of foundation contributions might convince some funders to simply stay out of the business altogether.

For the few funders that stay in the race, there can be real rewards. The subject matter and researcher pools supported by the two largest federal energy research funders—the National Science Foundation and the US Department of Energy—have expanded. In some cases, this has been made explicit through interdisciplinary research calls as well as stated research questions that require collaboration across silos. Anecdotally, every energy conference I have attended in the past five years has featured deliberate discussion of integrating the social sciences as a fundamental component of energy research. While each philanthropic entity rightly evaluates its impact—and, in the Alfred P. Sloan Foundation’s case, tracks quantitative indicators of those effects—we can already see that these efforts have had a massive qualitative effect.

Director of Remodeling Futures

Harvard Joint Center for Housing Studies

Channels for Arctic Diplomacy

Illustration by Shonagh Rae

During the summer of 2016, in Russia’s arctic Yamal Peninsula, a heat wave resurfaced anthrax bacteria long buried in the permafrost. The bacteria then killed thousands of reindeer and affected nearly a hundred local residents—the first large outbreak of anthrax in the region since 1941. The world reacted with concerned curiosity: What other viruses and bacteria could the thawing permafrost bring back to life? Microbiologists and cold region experts have since set out to deepen our understanding of how changes to climate and weather patterns in the Arctic may increase human and animal exposure to novel—or really, really old—pathogens freed from dormancy.

Over the last few decades, the international community has dedicated significant resources to building up networks to detect and combat tropical disease. But the 2016 anthrax outbreak, along with research on other viral strains recovered from permafrost since, shows just how important scientific cooperation and public health surveillance are in polar regions. 

Monitoring public health in the Arctic is challenging. The region is home to about 4 million people scattered across roughly 6 million square miles. Unlike the global commons of Antarctica, the Arctic comprises sovereign territories and exclusive economic zones of eight nations. Nonetheless, during the COVID-19 pandemic, Arctic public health analyses were based on data disaggregated at national, regional, and local levels; the cooperation, transparency, and communication required for this joint activity were themselves important feats of pan-Arctic collaboration. Another accomplishment is the International Circumpolar Surveillance program, which monitors the spread of pneumococcal, meningococcal, and Haemophilus influenzae bacteria through a network, launched in 1999, of hospitals and public health offices in seven of the eight Arctic states—all but Russia.

Until the 1990s, the Russian Federation’s infectious disease control systems evolved independently from Western public health systems, making cross-border surveillance and cooperation difficult. But when a sharp rise in diseases including HIV and tuberculosis in the Baltic Sea and Barents Sea regions demanded joint action on control and prevention, a process for collaboration began. Eventually, growing dependence on such international collaboration shifted Russia’s foreign policy strategy for health toward multilateral and bilateral diplomacy.

In mutually dependent regions like the Arctic, scientific research on climate and health can sustain channels for diplomacy.

Today, new geopolitical tensions are once again complicating scientific and health cooperation with Russia. When Russia invaded Ukraine in February 2022, the Arctic states of Canada, Denmark, Finland, Iceland, Norway, Sweden, and the United States suspended the Arctic Council’s cooperation with Russia. In the months since, Russia has threatened to withdraw from the Arctic Council, the leading intergovernmental forum for the region. In September 2023, it formally withdrew from the 30-year-old Barents Euro-Arctic Council.

At the same time, warming temperatures in the Arctic are opening sea routes and allowing new access to natural resources, prompting increased traffic and cooperation between Arctic and non-Arctic states. Climate change is destabilizing the health, food security, and livelihoods of people in the area. We argue that, in mutually dependent regions like the Arctic, scientific research on climate and health can sustain channels for diplomacy. Indeed, new—even experimental—strategies for cooperation are necessary at this critical time. 

Cooperation and competition in the Arctic

The Arctic Council was established in 1996 as a high-level intergovernmental forum for the eight countries with Arctic territory and the six Indigenous peoples’ organizations granted “permanent participant” status. The council promotes cooperation, coordination, and communication among Arctic states, Indigenous communities, and the rest of the Arctic population on issues including sustainable development, health threats, and emergency response. The chair of the council rotates among the Arctic state members, with initiatives carried out by working groups, task forces, and expert groups.

The Arctic Council has successfully navigated diplomatic disagreements among member states, including the Iraq War in 2003, the Russo-Georgian War in 2008, and Russia’s annexation of Crimea in 2014. On March 3, 2022, in response to Russia’s invasion of Ukraine, the other seven member states announced a pause in their participation. The pause disrupted the council’s 130 projects related to environmental monitoring, data collection, equity, and search and rescue. It also put the brakes on critical work and communication related to Indigenous peoples, the environment, scientific research, and health safety.

How can collaboration with Russia happen when established diplomatic channels are diminished?

Following discussions, in June 2022 those seven member states of the council agreed to resume partial activity on existing projects not involving the Russian Federation. In August 2023, all the Arctic Council countries, including Russia, agreed to guidelines for resuming working group and expert group activities. Further discussions with the six permanent participant Indigenous peoples’ organizations in October 2023 encouraged resumption of working group–level activities with Russia. Then, in February 2024, Russia withheld its annual payment to the Arctic Council. This step raises urgent questions about the council’s future without Russian participation. How can collaboration with Russia happen when established diplomatic channels are diminished?

Understanding Russia’s aspirations in the Arctic

A key to navigating future collaboration on emerging diseases and other health risks in the Arctic is a deeper understanding of the future Russia sees for itself as a leading power in the region. Russia controls about 45% of the geographic territory of the Arctic, including the Northern Sea Route—which is in Russia’s exclusive economic zone—a particularly useful seaway for moving cargo between the North Pacific region and Northern Europe. Russia’s position in the Arctic and its role in current geopolitical crises contribute to broader debates on the effectiveness of “soft law” (such as recommendations and codes of conduct) and diplomacy in various areas of international scientific cooperation, particularly for health and climate issues.

Already, the effects of the COVID-19 pandemic and the war in Ukraine have shifted Russian foreign policy approaches to health and science deeper into the security realm. The invasion prompted many Western countries to impose a range of scientific sanctions on Russia, and though individual researchers are not banned from cross-border collaboration, travel between Russia and the West is more complicated. The number of research collaborations between Russian scientists and those from the United States and European countries has fallen, as has the attendance of Russian researchers at international academic conferences. 

A year into the war with Ukraine, Russian president Vladimir Putin presented a new Foreign Policy Concept (FPC), an outline of Russia’s national interests, strategic goals, challenges, and main directions of foreign policy, which had last been updated in 2016. The new FPC emphasizes Arctic policy in commitments to “counteracting the unfriendly states’ policy aimed at militarization of the region” and “establishing a mutually beneficial cooperation with the non-Arctic states pursuing a constructive policy toward Russia.” These statements signal Russia’s strategic interest in maintaining a formal presence on the Arctic Council to preserve its political power in the Arctic while testing new bilateral partnerships in the region.

Russia’s current inclination toward bilateral rather than multilateral cooperation, coupled with China’s ambitions for the region, could set back the Arctic Council’s work to build up necessary health security and cooperation networks.

Global health priorities in the FPC suggest Russia’s continuing interest in international discussions, but with variations in focus across regions. The document links global health to environmental protection and climate change, and goals include “increasing efficiency of international cooperation in the area of health care and preventing its politicization,” as well as “consolidating international efforts in order to prevent the extension of dangerous infectious diseases” and quickly responding to crises, epidemics, and pandemics. Russia remains a member of the World Health Organization (WHO) and is active in convenings related to WHO’s International Health Regulations and the pandemic accord led by its Intergovernmental Negotiating Body.

It is significant, however, that neither the FPC’s health agenda nor its Arctic agenda refers to the existing international platforms—WHO and the Arctic Council—for maintaining multilateral cooperation.

This omission is notable alongside a joint Russian-Chinese political statement in March 2023—timed closely with the presentation of the Russian FPC—declaring the two countries’ intentions to act “in favor of preserving the Arctic as a territory of peace, stability, and constructive cooperation.” Though China has no territorial claim in the Arctic, its plans for a Polar Silk Road via Arctic waters to Europe, included in its Belt and Road Initiative, indicate a vision for leadership in the region. Official dialogue on the Arctic between Moscow and Beijing began in 2013. The cooperation has produced two large Arctic energy infrastructure projects and a memorandum of understanding between the two countries on maritime law enforcement in the Arctic zone.

Russia’s current inclination toward bilateral rather than multilateral cooperation, coupled with China’s ambitions for the region, could set back the Arctic Council’s work to build up necessary health security and cooperation networks among all Arctic states and peoples. We think the situation calls for consideration of new mechanisms for dialogue around emerging climate and health threats in the region.

The entangled future of the Arctic, global health, and science diplomacy

As geopolitical and military instability abuts nonmilitary humanitarian pursuits, Arctic states may be forced to allocate resources to protecting individual interests over supporting scientific cooperation. However, in other periods of tension, science diplomacy has been key to preserving cooperation between allies and rivals, as with the Pugwash Conferences on Science and World Affairs, the Edinburgh Conversations, Cold War–era polio vaccine cooperation, and the international effort to sequence SARS-CoV-2.

With the Arctic region in a precarious position, now may be the time to move beyond traditional forms of diplomacy and center the pursuit of alternative means of scientific cooperation around health and climate security as a unifying purpose for continued engagement.

An important component of any effective international science collaboration is the personal relationships cultivated between scientists from around the world, often nurtured over decades. To preserve those relationships under an Arctic Council with diminished capacity, we recommend Western policymakers and other stakeholders engage in bilateral initiatives extending beyond the Arctic Council framework.

Strategically expanding approaches to scientific cooperation in the polar regions, particularly through individual contacts, could solidify the Arctic’s significance as a focal point for twenty-first-century science diplomacy.

Nongovernmental, informal interactions (known as Track II diplomacy) among scientists from Arctic (and even non-Arctic) states could be a powerful strategy for keeping Russia engaged and communicating with the global scientific community. The first mechanism to try is for researchers in neighboring states or territories (for example, Alaska, the Russian district of Chukotka, and Yukon, a Canadian territory) to arrange for partnerships in climate research. Strategically expanding approaches to scientific cooperation in the polar regions, particularly through individual contacts, could solidify the Arctic’s significance as a focal point for twenty-first-century science diplomacy.

Second, the science diplomacy community, housed in universities and connected through national scientific academies, should continue to play a leading role in Arctic science diplomacy by incentivizing researchers to build new scientific partnerships across borders. This would require the European Union and NATO members that discontinued projects with Russian institutions after Russia invaded Ukraine to take a step forward in reestablishing collaborations with Russian partners. Resumed research partnerships should prioritize studies on the climate risks associated with permafrost thaw and the mitigation of potential reactivation of ancient microbiota and dormant pathogens. There should also be a much more significant focus on cooperation between Arctic states and Indigenous peoples’ organizations, with a research agenda that intertwines scientific and local knowledge.

And finally, the international community still working within the Arctic Council platform should prioritize establishing a network of monitoring stations in the high-latitude Arctic to swiftly identify pathogens in hot spots of microbial diversity, such as mass bird-nesting sites. Such monitoring activities can reinforce or create points of contact within Arctic states and Indigenous peoples’ organizations to buttress the goals of the Arctic Council’s 10-year-old One Arctic, One Health project, which aims to improve coordination and strategies for handling emerging threats. Improved monitoring and coordination would have multiple benefits: more opportunities to develop scientific diplomacy, stronger Arctic health and environmental security, more knowledge about the global impacts of climate change, and the potential to stimulate new initiatives to understand other microbial hot spots around the globe.

In one of the last examples of Arctic scientific cooperation with Russia before its invasion of Ukraine, a study conducted by a team of German, French, and Russian scholars identified 13 new viruses revived from ancient permafrost—the result of almost a decade of joint research. Keeping scientific connections like these alive among Arctic researchers should be a diplomatic imperative, both to deepen the global understanding of shared health and climate risks and to preserve peace, stability, and constructive cooperation in the region and beyond.