Julia Pollack, In Fragments No Longer: Cari Vanderpool and Julia Pollack, 2023, inkjet print, 24 x 36 inches.
Julia Pollack, In Fragments No Longer: Cari Vanderpool and Julia Pollack 2, 2023, inkjet print, 24 x 36 inches.
Julia Pollack, a curator and creator at the Carl R. Woese Institute for Genomic Biology (IGB) at the University of Illinois Urbana-Champaign, makes art based on her conversations and collaborations with scientists. When Pollack engages in dialogues with researchers at IGB, she immerses herself in their work, and then uses that information along with related imagery to build concepts for her artistic interpretations.
Her series “In Fragments No Longer” is inspired by the microbial world that envelops all living things. When we brush past strangers, share a hug with a friend, or kiss our loved ones, we share millions of microbes. The series comprises digital prints depicting lysogeny broth (LB) plates that hold the personal microbes of Pollack and four collaborators: science writer and microbiologist Ananya Sen, IGB outreach manager Claudia Lutz, IGB director of core microscopy facilities Glenn Fried, and University of Illinois Urbana-Champaign professor of microbiology Cari Vanderpool. In each pair of prints, Pollack and a collaborator imprinted their microbial communities on LB plates, which contain a nutritious jelly that helps bacteria grow—making visible the microbial world that binds us all together with a multitude of invisible connections.
Julia Pollack, In Fragments No Longer: Ananya Sen and Julia Pollack, 2023, inkjet print, 24 x 36 inches.
Julia Pollack, In Fragments No Longer: Ananya Sen and Julia Pollack 2, 2023, inkjet print, 24 x 36 inches.
Pollack’s work highlights the power and aesthetics of science imagery while revealing the hidden labor of research and knowledge production. “In Fragments No Longer” is part of the IGB’s Art of Science program, currently in its fourteenth year. It celebrates the common ground between science and art and reflects the IGB’s mission to bring science to the community.
“In Fragments No Longer,” a series in the exhibition Julia Pollack: Collaborative Ecologies, is on exhibit through June 7, 2024, at the National Academy of Sciences, 2101 Constitution Ave, NW, Washington, DC.
Julia Pollack, In Fragments No Longer: Claudia Lutz and Julia Pollack, 2023, inkjet print, 24 x 36 inches.
Julia Pollack, In Fragments No Longer: Claudia Lutz and Julia Pollack 2, 2023, inkjet print, 24 x 36 inches.
Julia Pollack, In Fragments No Longer: Glenn Fried and Julia Pollack, 2023, inkjet print, 24 x 36 inches.
Julia Pollack, In Fragments No Longer: Glenn Fried and Julia Pollack 2, 2023, inkjet print, 24 x 36 inches.
Harvesting Insights From Crop Data
In “When Farmland Becomes the Front Line, Satellite Data and Analysis Can Fight Hunger” (Issues, Winter 2024), Inbal Becker-Reshef and Mary Mitkish outline how a standing facility using the latest satellite and machine learning technology could help to monitor the impacts of unexpected events on food supply around the world. They do an excellent job describing the current dearth of public real-time information and, through the example of Ukraine, demonstrating the potential power of such a monitoring system. I want to highlight three points the authors did not emphasize.
First, a standing facility of the type they describe would be incredibly low-cost relative to the benefit. A robust facility could likely be established for $10–20 million per year. This assumes that it would be based on a combination of public satellite data and commercial data accessed through larger government contracts that are now common. Given the potential national security benefits of having accurate information on production shortfalls around the world, the cost of the facility is extremely small, well below 0.1% of the national security spending of most developed countries.
Second, the benefits of the facility will likely grow quickly, because the number of unexpected events each year is very likely to increase. One well-understood reason is that climate changes are making severe events such as droughts, heat waves, and flooding more common. Less appreciated is the continued drag that climate trends are having on global agricultural productivity, which puts upward pressure on prices of food staples. The impacts of geopolitical events such as the Ukraine invasion then occur on top of an already stressed food system, magnifying their effect on global food markets and social stability. The ability to quickly assess and respond to shocks around the world should be viewed as an essential part of climate adaptation, even if every individual shock is not traceable to climate change. Again, even the facility’s upper-end price tag is small relative to the overall adaptation needs, which are estimated at over $200 billion for developing countries alone.
Third, a common refrain is that the private sector (e.g., food companies, commodity traders) and national security outfits are already monitoring the global food supply in real time. My experience is that they are not doing it with the sophistication and scope that a public facility would have. But even if they could, having estimates in the public domain is critical to achieving the public benefit. This is why the US Department of Agriculture regularly releases both its domestic and foreign production assessments.
The era of Earth observations arguably began roughly 50 years ago with the launch of the original Landsat satellite in 1972. That same year, the United States was caught by surprise by a large shortfall in Russian wheat production, a surprise that recurred five years later. By the end of the decade, the quest to monitor food supply was a key motivation for further investment in Earth observations. We are now awash in satellite observations of Earth’s surface, yet we have still not realized the vision of real-time, public insight on food supply around the world. The facility that Becker-Reshef and Mitkish propose would help to finally realize that vision, and it has never been more needed than now.
David Lobell
Professor, Department of Earth System Science
Director, Center on Food Security and the Environment
Stanford University
Member, National Academy of Sciences
Given the current global food situation, the importance of the work that Inbal Becker-Reshef and Mary Mitkish describe cannot be emphasized enough. In 2024, some 309 million people are estimated to be acutely food insecure in the 72 countries with World Food Program operations and where data are available. Though lower than the 2023 estimate of 333 million, this marks a massive increase from pre-pandemic levels. The number of acutely hungry people in the world has more than doubled in the last five years.
Conflict is one of the key drivers of food insecurity. State-based armed conflicts have increased sharply over the past decade, from 33 conflicts in 2012 to 55 conflicts in 2022. Seven out of 10 people who are acutely food insecure currently live in fragile or conflict-affected settings. Food production in these settings is usually disrupted, making it difficult to estimate how much food these areas are likely to produce. While Becker-Reshef and Mitkish focus on “crop production data aggregated from local to global levels,” having local-level data is critical for any group trying to provide humanitarian aid. It is this close link between conflict and food insecurity that makes satellite-based techniques for estimating the extent of croplands and their production so vital.
This underpins the important potential of the facility the authors propose for monitoring the impacts of unexpected events on food supply around the world. Data collected by the facility could lead to a faster and more comprehensive assessment of crop production shortfalls in complex emergencies. Importantly, the facility should take a consensual, collaborative approach involving a variety of stakeholder institutions, such as the World Food Program, that not only have direct operational interest in the facility’s results, but also frequently possess critical ancillary datasets that can help analysts better understand the situation.
While satellite data is an indispensable component of modern agricultural assessments, estimation of cropland area (particularly by type) still faces considerable challenges, especially regarding smallholder farming systems that underpin the livelihoods of the most vulnerable rural populations. The preponderance of small fields with poorly defined boundaries, wide use of mixed cropping with local varieties, and shifting agricultural patterns make analyzing food production in these areas notoriously difficult. Research into approaches that can overcome these limitations will take on ever greater importance in helping the proposed facility’s output have the widest possible application.
To maximize the impact of the proposed facility and turn the evidence from rapid satellite-based assessments into actionable recommendations for humanitarians, close integration of its results with other streams of evidence and analysis is vital. Crop production alone does not determine whether people go hungry. Other important factors that can influence local food availability include a country’s stocks of basic foodstuffs or the availability of foreign exchange reserves to allow importation of food from international markets. And even when food is available, lack of access to food, for either economic or physical reasons, or inability to properly utilize it can push people into food insecurity. By combining evidence on a country’s capacity to handle production shortfalls with data on various other factors that influence food security, rapid assessment of crop production can realize its full potential.
Friederike Greb
Head, Market and Economic Analysis Unit
Rogerio Bonifacio
Head, Climate and Earth Observation Unit
World Food Program
Rome, Italy
Inbal Becker-Reshef and Mary Mitkish use Ukraine to reveal an often-overlooked impact of warfare on the environment. But it is important to remember that soil, particularly the topsoil of productive farmlands, can be lost or diminished in other equally devastating ways.
Globally, there are about 18,000 distinct types of soil. Soils have their own taxonomy, with the different soil types sorted into 12 orders, and no two types are the same. Ukraine’s agricultural belt serves as a “breadbasket” for wheat and other crops. This region sustains its productivity in large part because of its particular soil base, called chernozem, which is rich in humus, contains high percentages of phosphorus and ammonia, and has a high moisture storage capacity—all factors that promote crop productivity.
Even as the world has so many types of soil, the pressures on soil are remarkably consistent across the globe. Among the major sources of pressure, urbanization is devouring farmland, as such areas are typically flat and easy to build on, making them widely marketable. Soil is also lost to erosion, which can be gradual and almost unnoticed, or sudden, as after a natural disaster. And soil is lost or degraded through salinization and desertification.
So rather than waiting for a war to inflict damage to soils and flash warning signs about soil health, are there not things that can be done now? As Becker-Reshef and Mitkish mention, “severe climate-related events and armed conflicts are expected to increase.” And while managing such food disruptions is key to ensuring food security, forward-looking policies and enforcement to protect the planet’s foundation for agriculture would seem to be an important part of food security planning.
In the United States, farmland is being lost at an alarming rate; one reported study found that 11 million acres were lost or paved over between 2001 and 2016. Based on those calculations, it is estimated that another 18.4 million acres could be lost between 2016 and 2040. As for topsoil, researchers agree that it can take from 200 to 1,000 years to form an additional inch of depth, which means that topsoil is disappearing faster than it can be replenished.
While the authors clearly show the loss of cultivated acreage from warfare, to fully capture the story would require equivalent projections for agricultural land lost to urbanization and to erosion or runoff. This would then paint a fuller picture as to how one vital resource, that of topsoil, is faring during this time of farmland reduction, coupled with greater expectations for what each acre can produce.
Joel I. Cohen
Visiting Scholar, Nicholas School of the Environment
Duke University
A Fond Farewell to the Anthropocene
In February 2024, an international scientific committee voted against creating a new geologic time period called the Anthropocene. The move, coming after two decades of debate, dashed the hopes of many in the environmental community who wanted a scientific endorsement of the notion that human-driven changes had shifted the trajectory of the planet. Although it was disheartening to many, I believe this rejection should not be considered a setback for an ambitious environmental agenda. It is, rather, an opportunity to reflect and learn.
The Anthropocene, or “the age of human beings,” combines two Greek roots: anthropos, meaning human, and the suffix -cene, from kainos, meaning new or recent. Nobel Prize-winning chemist Paul Crutzen popularized the term in a 2000 essay in which he and biologist Eugene Stoermer argued that Earth had left the Holocene and entered a new epoch characterized by human impact on the planet. The term soon became ubiquitous in the environmental policy community and beyond.
I think the environmental policy community has expected both too much and too little of the Anthropocene label. It is meant to be precise enough for scientific imprimatur and yet squishy enough to encompass many aspects of human-driven environmental damage, from the destruction of biodiversity to greenhouse gas emissions.
This application of the term has not only tied the future of environmental policy to highly technical debates and processes, but it has also roiled the scientific community. With the February decision to reject defining this period as a new epoch of geologic time, the policy community has an opportunity to wrestle with a bigger question on how it engages with science when setting policy priorities and strategies.
A geologists’ affair
The decision on whether humans now live in the Anthropocene officially fell to the International Union of Geological Sciences (IUGS). The IUGS traces its roots to the late 1800s and is among many similar global scientific institutions that coordinate research across countries and languages. One core responsibility of the IUGS is to codify Earth’s geologic timelines. This process looks less like a scientific inquiry and more like a United Nations commission: it involves setting up committees and subcommittees and a process of votes, ratifications, and formal appeals. After years of cold-shouldering proposals like Crutzen’s to define a new epoch, the relevant body within the IUGS, the International Commission on Stratigraphy (ICS), established the Anthropocene Working Group (AWG) in 2009 to make recommendations on whether to declare an end to the current Holocene and the start of the Anthropocene. In essence, the AWG was asked to determine whether human actions were changing the planet at a similar scale as, say, the end of the ice age that launched the Holocene epoch.
Geologists are themselves divided. Some argue that the field is being pushed into political provocation. A 2012 commentary stated, “Anthropocene provides eye-catching jargon, but terminology alone does not produce a useful stratigraphic concept.” Others, such as environmental scientist Erle Ellis, a member of the AWG from its beginning, argued the opposite, saying that it is important to recognize the Anthropocene epoch because such a move would communicate the overwhelming scientific consensus that humans have caused a large-scale transformation of Earth’s climate, atmospheric composition, and ecosystems. The AWG built its case for the new epoch on technical criteria of stratigraphic concepts and physical signatures.
To arbitrate the creation of this new epoch, the AWG had to determine when it began and find a geological signature marking that beginning. After a series of procedural votes, in 2023, the AWG voted in favor of a start date associated with the radioactive fallout from the first nuclear weapons tests. Traces of radionuclides are globally synchronous and clearly human-derived, which made them a good geological signature for decoupling the Anthropocene from the Holocene. Canada’s uniquely preserved Crawford Lake, whose sediments record plutonium isotopes from that fallout, was chosen as the physical reference site.
Although many accepted this narrow and technical definition of the Anthropocene, others argued that such a definition does more harm than good by neglecting other human-driven changes such as large-scale deforestation and greenhouse gas emissions. The debates were so heated that Ellis and two other scientists eventually quit the working group. In his resignation letter, Ellis protested, “The AWG’s choice to systematically ignore overwhelming evidence of Earth’s long-term anthropogenic transformation is not just bad science, it’s bad for public understanding and action on global change.” In the end, the IUGS approved the ICS committee’s vote to reject the AWG’s proposal, determining that the criteria were too narrow and that establishing another epoch was not useful for the advancement of international scientific research. Three geologists who supported the official rejection of a new geological epoch argued that the term has more value as an informal concept, unburdened by narrow geological definitions.
There is no doubt that the term “Anthropocene” will live on informally, but its rejection as a distinct epoch should also not be disregarded as irrelevant to environmental discourse. To me, the very fact this debate became so important to environmental policymakers provides some of the most important lessons about the interplay of climate science and climate politics over the last two decades.
Science-politics impasse
Many in the environmental policy community hoped that formal recognition of the Anthropocene would spur bold actions. After the legislative successes of the 1960s and ’70s (for example, passage of the Clean Air and Clean Water Acts), the policy community has struggled to make a successful public case for ratcheting up environmental regulations even as climate change and other environmental problems have grown more urgent. Terms such as “the environment” and “climate change” or even grander terms like “Gaia” have been bandied about, but have so far failed to galvanize public support. In contrast, the Anthropocene, with its technical authority and grand symbolism, offered a fresh launchpad to mobilize public support at a time when misinformation and climate denialism threatened action.
While I see the importance of engaging science and experts in policymaking, I also think that environmental policy must look for legitimacy beyond institutions of science and scientific expertise. The goal is to move policy forward, and for that, advocates should move away from an overreliance on science to justify a tougher stance against environmental degradation and greenhouse gas emissions. The starting point for climate action should not be debating whether human-driven changes to the planet are equivalent to an ice age—it should be helping people who are already suffering the consequences of environmental change to reverse the policies that are harming them.
Sociologist Peter Weingart has argued that the expanded use of science to defend policy actions can paradoxically backfire by destabilizing confidence in both scientific and political institutions. He contends that rather than strengthen the case for action, this intensified pressure on experts pushes them to go beyond the realm of consensus-based conclusions—and into frontiers where claims are contested and uncertain. This science-politics impasse closely echoes what is happening with the Anthropocene.
Such a reliance on science alone to drive the policy agenda is also problematic because it fails to acknowledge the ways science is socially constructed. It overlooks heterogeneity within the scientific community, assumes science is value-free, and encourages excessive deference to conventional research agendas. Policymakers end up privileging top-down knowledge generation and thus underappreciate experiential and place-based ways people understand how earth systems are changing.
You can see this effect in the popular attention to periodic Intergovernmental Panel on Climate Change (IPCC) reports and the annual Conference of the Parties (COP) meetings at the expense of, say, localized social movements working to reduce dependence on fossil fuels by providing transportation options or groups working to reduce pollution in neighborhoods near oil refineries. The IPCC does important work, and it was particularly valuable in the decades when signals of climate change were less obvious. But waiting on scientific consensus as an irrefutable authority perpetuates the idea that science is outside society, even though scientific consensus depends on modern international scientific bodies and their highly sophisticated bureaucracies. Undeniably social, these bodies of experts are as much a part of the scientific process as randomized experiments, statistical modeling, and peer review.
Pinning policy actions on official scientific declarations may limit ambitions, narrow views of consensus, and steer policymakers toward grand gestures, pulling focus from more impactful incremental and local change.
What the Anthropocene can’t say
The Anthropocene has also come to represent a particularly Western view of environmental degradation. Ascribing blame to humanity at large, via the anthropos, is a framing that fails to hold industrialized countries, large fossil-fuel companies, and those who profit from environmental damage particularly responsible for human-caused changes. This argument is summed up in a book by sociologist John Bellamy Foster, which asserts it is not humanity overall that has erred, but capitalism—“a system that inherently and irredeemably fouls its own nest.” A recent Oxfam report found that the poorest half of the global population accounted for 7% of global greenhouse gas emissions from 1990 to 2015—less than half the approximately 15% of emissions attributed to the richest 1%.
Perhaps the Anthropocene was rapidly accepted among Western academics and public actors precisely because it allowed the discourse to shift away from the problems caused by Western ideas of progress, science, and modernity toward a more global concept of humanity at large. Environmental historian Jason Moore gets at the source of the problem in renaming the era the “Capitalocene,” noting that policies remain faithful to top-down capitalist thinking.
In framing humanity as the problem, the term “Anthropocene” mirrors the Biblical concept that all time periods before humans were “Eden before the fall” while also downplaying historical injustices. Scholars of Indigenous knowledge Heather Davis and Zoe Todd, for instance, argue that colonialism, genocide, and dispossession, along with the Industrial Revolution, caused the kinds of environmental degradation that are summed up in the label. Others, including science, technology, and society scholar Eileen Crist, suggest that more effective climate politics lie in decentering ideas of human progress and inevitable expansion.
All these lines of thought suggest a need to be more precise than just saying “anthropos” when assigning a cause to the planet’s current predicament. When policies fail to recognize the social and economic causes of the Anthropocene, they may also end up perpetuating the injustices of the past. For example, the Convention on Biological Diversity’s 30×30 initiative aspires to turn 30% of Earth’s surface into protected areas by 2030. Although a well-intentioned aim, this top-down goal-setting has failed to accommodate or recognize the sustainable use of these lands by Indigenous communities. The convention has drawn criticism from rights-based groups like Survival International and intensified calls to explicitly engage local and Indigenous communities in environmental policies.
Setting policy free
In the aftermath of the IUGS decision, the policy community now has an opportunity to break from this overreliance on official scientific consensus. Supporters of environmental actions find themselves hamstrung by, to borrow from Weingart, the politicization of science on one end, and the scientization of politics on the other. With the two-decade effort to tie climate policy to a stratigraphic decision concluded, there is an opportunity to think more imaginatively about engaging publics in environmental policies.
In this complicated and changing world, the hard job of forging political consensus is different from the hard work of forging scientific consensus, and one cannot be privileged over the other. Rather than waiting for some god-trick of scientific authority, advocates and policymakers must find ways to proceed despite uncertainties and contingencies. It is past time to learn how to govern the diversity of human interactions with nature amid many unknown environmental risks.
Environmental challenges are multifold; solutions must be as well, as must the strategies and arguments needed to gain support for them. For some people, the future economic costs and risks of inaction may be compelling. Others may be swayed by the need to help the many people already suffering because of environmental changes—say, extreme weather that floods city streets, or species loss that disrupts Indigenous food sources. Still others may be persuaded by ethical or moral arguments. Fully engaging with all of these messy human concerns may help policymakers find the paths to effective policies that have so far been elusive.
Appeal to scientific expertise is but one of the tools of persuasion. There is already a strong public case for bold policy action; recognizing this can set environmental policy free.
Mud, Muddling, and Science Policy
Looking across the clam flat and the remains of a fishing weir toward the tidal marsh at Squirrel Point, Arrowsic, Maine. Photo by Lisa Margonelli.
Most days, I think about science policy writ large. Where does the $200 billion US taxpayers put toward science go, and how can science help craft better policy and better lives? But at home, I approach the subject at the level of the clam. The Maine island where I live has 477 human residents, and my role in town governance is to chair the shellfish conservation committee. As a clam decisionmaker, I do science policy at its smallest and muddiest.
It’s not easy figuring out how best to manage mollusks. My committee is charged with ensuring “the protection and optimum utilization of shellfish resources.” People in the communities around us depend on digging clams to make a living, continuing family traditions that go back many generations.
But since 2013, our surveys have dug up fewer and fewer clams in the town’s three flats. We don’t know of any single cause for the decline, which has been observed all along the Maine coast. Perhaps it’s due to the rapidly warming waters of the Gulf of Maine or the sea’s changing chemistry. Green crabs, voracious predators that arrived from Nova Scotia, Canada, appear to be a major culprit. But others lurk in the tides. The flat, putty-colored milky ribbon worm slips its proboscis into a clam’s siphon, injects a toxin that dissolves the mollusk’s tissues, and then slurps it out of its shell like a clam milkshake. And without any idea of what will show up next, it’s hard to predict how the behavior of the ecosystem as a whole may change. At our last meeting, a marine biologist from the nonprofit Manomet explained that newly arrived blue crabs may be eating the green ones. She asked us to document them for a citizen science project.
Amid so many unknowns and so little knowledge, we are left to muddle through the process of figuring out what to do. Should we license more clammers? Plant a clam farm? Close the flats for conservation? Kill crabs? Which ones, and how many? Meanwhile, the town has also created a climate action plan to figure out how to handle road flooding, possible sea water contamination of local aquifers, and the effect of rising waters on local salt marshes. It’s a lot for 477 people.
Until recently, climate policy was primarily vexing for national and international policymakers who struggled to agree on limits for greenhouse gas emissions. Today it is increasingly a local matter, as every hamlet’s clam committee begins to craft a response. Some argue that finding workable solutions among people who share land, highways, and values may be easier and more effective than global and national efforts. But for the scientific enterprise, the devolution of big policy to small places poses new challenges around establishing spaces for democratic decisionmaking, building knowledge to inform those decisions, and effectively linking the two.
Climate is not the only subject where policy has shifted to focus on the small. After all, the combined spending of state and local governments approaches that of the federal government, giving them a prominent role in decisionmaking, particularly on infrastructure, education, and environmental issues.
Political scientist Jacob Grumbach observes that “over the past generation, the state level has really become the main policymaker and the central battleground in American public policymaking, in contrast to the national level.” This dynamic, he argues, arose out of federal gridlock, but is now altering the way local and national political systems work—while favoring some interests at the expense of others. As on the clam flats, when new players arrive, the behavior of the whole system changes.
Since the end of World War II, the scientific enterprise has looked to the federal government for funding, orienting itself around national priorities. As decisionmaking moves toward states and localities, science leaders will need to understand how the landscape of opportunity is shifting and build the capacity to answer questions posed by specific geographic communities.
Education is one area where science is already witnessing this shift in opportunity. After Sputnik launched the space race, Congress passed the National Defense Education Act in 1958, which spurred the National Science Foundation to develop national curriculums, textbooks, and even films. By 1983, that program had ended, and there was new fear that US science education was falling further behind. A Nation at Risk, a report from the National Commission on Excellence in Education, diagnosed the problem in the dire terms of the times: “We have, in effect, been committing an act of unthinking, unilateral educational disarmament.”
But a 2021 report from the National Academies of Sciences, Engineering, and Medicine, Call to Action for Science Education, argues that highly local collaborations are the way to improve science education. In this issue, Susan Singer, Heidi Schweingruber, and Kerry Brenner, who worked on the report, relate that “across the nation, we have seen a path to achieve both an informed citizenry and capable workforce by recruiting local industry, community, and philanthropy into supporting science education and allowing learners’ experiences to be tailored to their local context.” These local efforts can identify their own priorities, secure resources, and draw on community connections. In southeastern Tennessee, for example, a STEM alliance between school systems, universities, employers (including Volkswagen), and philanthropy worked together to build teachers’ skills, supply resources, and reinforce regional connections. The alliance is credited with helping raise student scores on the Ready Graduate indicator to 76%—in contrast to an average of 40% for other Tennessee schools.
One strength of these regional STEM alliances is that they may sidestep some of the pitfalls of national partisanship. “Importantly, they provide a venue for people to find common ground,” Singer and coauthors write, “so that progress does not get lost to political polarization.”
This insight applies beyond education: superlocal green energy projects could create new political alliances if they’re carefully tailored to local needs and culture. Ariel Kagan and Mike Reese describe an “elegant” pilot project that harnesses wind power to produce ammonia at the University of Minnesota West Central Research and Outreach Center. The project aims to help farmers save money on expensive imported fertilizer while lowering the carbon footprint of their crops. It also builds on a long local history of farmer cooperatives that organized to fight the power of railroads and grain monopolies—and are now partners in the pilot. Its achievements offer a glimpse of how uniquely local energy solutions could create new economic and political alliances around climate-friendly technologies.
This bespoke wind-to-hydrogen-to-ammonia plant required significant local knowledge production. Scientists and engineers at the University of Minnesota worked to optimize processes to electrolyze water, separate nitrogen from the air, and then combine hydrogen and nitrogen to make ammonia using the region’s stranded wind energy. Another unit at the university developed a tractor and a grain dryer fueled by ammonia. That meant small-scale, locally produced ammonia could be used to fertilize, grow, and dry corn. The project estimates that by combining these technologies, the grain’s carbon emissions can be reduced by nearly 80%.
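To give a rough sense of the chemistry behind that chain, here is a minimal back-of-envelope sketch in Python of the ammonia synthesis mass balance. The 3:1 hydrogen-to-nitrogen stoichiometry and molar masses are standard chemistry; the electrolysis energy figure is only an illustrative assumption for scale, not a number reported by the Minnesota project.

```python
# Back-of-envelope sketch of the wind-to-hydrogen-to-ammonia chain described above.
# The stoichiometry (3 H2 + N2 -> 2 NH3) and molar masses are standard chemistry;
# the electrolysis energy figure below is an illustrative assumption, not a value
# from the University of Minnesota pilot.

M_H2, M_N2, M_NH3 = 2.016, 28.014, 17.031  # molar masses, g/mol

def feedstock_per_tonne_ammonia():
    """Tonnes of hydrogen and nitrogen needed per tonne of ammonia."""
    h2 = (3 * M_H2) / (2 * M_NH3)   # ~0.18 t of H2 per t of NH3
    n2 = M_N2 / (2 * M_NH3)         # ~0.82 t of N2 per t of NH3
    return h2, n2

# Assumed electrolysis demand (hedged): ~50 kWh per kg of H2 is a commonly cited
# ballpark for water electrolysis, used here only to convey scale.
KWH_PER_KG_H2 = 50

h2_t, n2_t = feedstock_per_tonne_ammonia()
wind_kwh = h2_t * 1000 * KWH_PER_KG_H2  # kWh of wind electricity per tonne of NH3

print(f"Per tonne of ammonia: {h2_t:.2f} t H2, {n2_t:.2f} t N2")
print(f"Illustrative electrolysis demand: ~{wind_kwh:,.0f} kWh of wind power")
```

Rough numbers like these mainly show where the energy goes: most of the electricity in such a plant is consumed making hydrogen, which is why pairing electrolysis with otherwise stranded wind capacity is attractive.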
Going local opens up interesting new possibilities for policy, but it also challenges the science enterprise to produce evidence for decisionmakers. Some initiatives that are already underway could help this transition. Issues has published articles on state- and county-level programs to bring evidence to policymakers in North Carolina, California, Missouri, Maine, and upstate New York. Movements for engaged research and citizen science could be expanded to produce knowledge fit to community needs. And, as Rayvon Fouché argues in this issue, involving more social scientists and non-scientists in forming the questions that science tries to answer could be a powerful tool for transformation.
Another possible tool for creating relevant knowledge is developing theory that can be applied to diverse circumstances. Samantha Montano reflects on the pace of disasters in the Gulf of Mexico and calls for building the capacity of local emergency management agencies in the region. But to boost the effectiveness of the response system as a whole, she recommends investing in more research on disaster theory to inform best practices so that local efforts can learn from and build upon the experiences of other management agencies and communities.
Dipping into Issues’ 40-year archive, the shift toward the local is readily apparent. In the magazine’s early days, proposals for arms control, agriculture, ozone, health, and climate policy were regularly aimed at national or international policymakers and institutions. By contrast, in this issue, an article on monitoring biosecurity in the melting Arctic argues that researcher-to-researcher science diplomacy can be a powerful tool at a time when global bodies are constrained by geopolitical tensions. Nataliya Shok and Katherine Ginsbach write that “keeping scientific connections like these alive among Arctic researchers should be a diplomatic imperative, both to deepen the global understanding of shared health and climate risks as well as to preserve peace, stability, and constructive cooperation in the region and beyond.”
When I’m wearing my clam hat, the switch to local focus feels inescapable. But in my Issues hat I sometimes mourn the eclipse of Big Policy, and the way society has traded the possibility of doing big things for a raft of small ones—the “art of the possible.”
In his 1959 essay “The Science of ‘Muddling Through,’” social scientist Charles Lindblom provided an antidote to a similar nostalgia for a past age of big ideas. Lindblom wrote to clarify that incremental policies reflected real-world decisionmaking practices, even though the policy community at the time derogatorily called this “muddling through” and attached more credibility to so-called rational policies.
Revisiting Lindblom’s essay offers a window into a moment when ideals of centralized planning, which had been integral to the New Deal, were being superseded by incremental approaches that reflected shifting social values and goals. Lindblom mentioned in passing that congressional interest in creating Medicare, now regarded as a success of Big Policy, was powered by divergent ideals: Democrats wanted to strengthen federal welfare programs, while Republicans wanted to counter unions’ demands for pensions. Lindblom argued that “muddling through,” in its ability to handle complexity, opposing values, and hazards, constituted a legitimate and in many ways superior system—“not a failure of method for which administrators ought to apologize.”
Lindblom’s article turns 65 years old this spring—old enough to apply for Medicare—and its citation rates have accelerated, from 5,382 on Google Scholar in 2011 to nearly 19,000 as this went to press. If it seems difficult to imagine that anyone would have had to strongly defend what is now an established method of policymaking, consider that we may be in a similar place with local policymaking today. And, following on Lindblom’s insight, having identified new dynamics between local, national, and international policy, it will take time and research to understand what new opportunities and hazards this murky, shifting ecosystem will hold.
As for the clam committee, last year we realized that obsessing over lost clams was not making those that remain any happier. At the urging of two teens who attended meetings of the committee, we scavenged piles of marine debris out of the salt marshes surrounding the flats and turned them into a sculpture in front of the town hall. By encouraging residents and summer tourists to talk about the enormous quantity of plastic in our estuary, the teens hope to influence policy at higher levels.
Amanda Arnold Sees the Innovation Ecosystem from a Unique Perch
In this installment of Science Policy IRL, we explore another sector of science policy: private industry. Amanda Arnold is the vice president of governmental affairs and policy at Valneva, a private vaccine development company, where she works on policy for creating, manufacturing, and distributing vaccines that address unmet medical needs, such as Lyme disease and Zika.
Arnold has worked in the science policy realm for over twenty years, first as a policy staffer for a US senator, then as a legislative liaison for the National Institutes of Health, and later as a senior policy advisor at the Massachusetts Institute of Technology. She talks to editor Megan Nicholson about the role industry plays in the science policy enterprise and what she has learned about the US innovation ecosystem from working across sectors.
Is there something about science policy you’d like us to explore? Let us know by emailing us at podcast@issues.org, or by tagging us on social media using the hashtag #SciencePolicyIRL.
Resources
Read Amanda Arnold’s Issues article, “Rules for Operating at Warp Speed,” to learn about how the government can work to rapidly respond to future crises.
Megan Nicholson: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academy of Sciences and by Arizona State University.
I’m Megan Nicholson, Senior Editor at Issues. For this installment of our Science Policy IRL mini-series, I’m joined by Amanda Arnold, Vice President of Governmental Affairs and Policy at the vaccine development company Valneva. In previous episodes of this series, we’ve learned about the science policy world by looking at federal agencies and non-profits. Today, we’re exploring a new frontier: private industry. We’ll talk to Amanda about the role industry plays in the science policy enterprise, her experience as a policy staffer on Capitol Hill, and what she has learned about the US innovation ecosystem from working across sectors.
Hi Amanda! It’s so great to have you on the podcast.
Amanda Arnold: Megan, I’m so happy to do this. Thank you for inviting me.
Nicholson: It’s great to have you. We have been opening all of the interviews for this series with the same question, which is kind of a tricky one. How do you define science policy?
Arnold: I have sat in so many conversations where people in this community were attempting to define it for almost two decades now. You hear a lot of science for policy and policy for science which I have never felt made any sense to me. So, from my point of view, science policy is when you have the role of innovation in society in mind, and you’re thinking about how the policies we make or the widgets we make or the things we do every day are related to science as you walk through your day. So for me, I walk through my day in policy in Washington and I think a lot about science as a component of this innovation context within which I work. And I’m not sure that’s a clearer answer, but it’s the one that makes sense to me. And I do think that there is a question about it not being a very well-defined space—this science policy space—but if you get all the people in Washington who do science policy in a room, it’s not a big room. So in the end, it might not seem very clear to each of us exactly what the definition is, but there’s definitely a unique set of people who do this thing referred to as science policy in Washington.
Nicholson: So let’s dive in a little bit more and have you talk more about the kind of science policy that you are doing day to day. What does a Vice President for Government Affairs and Policy do?
Arnold: When that Vice President for Government Affairs and Policy works at a vaccine maker, then we think about how our company fits into the larger innovation context about vaccines. In my particular case, I work for a really special company that works specifically to develop vaccines for neglected and tropical disease. So these are vaccines where there’s no clear immediate commercial market and there’s very few commercial companies working in that space. We’re publicly traded. And I was so drawn to that, because I’m usually in the government. I was also in the higher ed sector for a long time advocating for research and development funding. And now I’m in this space where the research and development funding is married with clinical development. We actually make a product and then figure out what is the business model for making the next product. And so in my world, it’s a little bit different than what people may be familiar with when they think about, for instance, big pharma.
In my world, yes, there is some component of a commercial market for those who travel abroad or for protecting our war fighters abroad from incredibly challenging infectious diseases for which there are medical countermeasures like vaccines. But there’s also this broader space of some amount of stockpiling, which absolutely needs to happen. And I spend a lot of time advocating for smart stockpiling that makes sense. And then there’s this other component of ensuring that people in countries who are actually being impacted by these tropical and other infectious diseases have access. And so we do a lot of licensing agreements to ensure that lower and middle income, LMIC, countries are able to access the products we’re making. I did my master’s thesis a long time ago on rare and elective disease vaccine development and it took me a long time to then circle into, in Washington, the actual policy I wanted to do, which is in relation to vaccine development with specific focus on infectious disease and outbreaks.
Nicholson: So it does seem like you’re bringing sort of a perspective from the research enterprise, this view— this overarching view—in your work day to day.
Arnold: I am, and I think that comes from a couple places because I have had a few folks more on the science side than on the policy side say, “What are you? You’re a weird hybrid.” But I think that comes from two places. One, my first science policy degree was in ’03, so I’ve been focused on this thing called science policy for 20 years. And I’ve been looking at what that means in Washington for the last 16. And during that time, I worked on the Hill for a Senator, and then I went over to work at the National Institutes of Health. So I had a sense for how policy gets made and then how policy gets implemented, and then something about the research dollars. And from then I went off to work with specifically MIT—which was the university that got me into this R&D ecosystem world—but there’s such a rich, wonderful population of people in Washington focused on that research enterprise. And, of course, I got my PhD from ASU, which is a shining beacon for support of the research enterprise. And so taking all of that and then going to work in this fantastic little private sector entity, I think I brought all of that with me and I’m not sure that would’ve happened had I not (A) gotten my PhD, but also worked in that higher ed community where so much of what you do is about that R&D ecosystem.
Nicholson: Let’s talk a little bit more about that, about universities having a presence in Washington and which universities are sort of the big players. You’ve worked for some of them, I think.
Arnold: Yeah, I did a little “tour de universities” in Washington for about 10 years of my career. So it really depends on the goals. Universities are very local and that’s the greatest thing about them, right? I worked for public and private universities. I will say that there is such a legacy of engagement, and it’s your life for four years or for however many years you spend at the university. And so you automatically, when you walk into a room, you have a connection as someone who works on behalf of a university with anyone who went there, graduated from there, has a child there, et cetera. And I think that really… Washington, I don’t think it’s all about networking. I don’t think it’s all about relationships. Usually Washington is about knowing what you want, having a good darn reason for it, and following up with the papers so that people can follow up and make good policy.
But, people also say it’s about who you know. And I do think it’s helpful to be able to immediately connect with people so that you can deliver really important advocacy requests. For instance, I was part of a group that worked from 2010 or so to double NIH, to double that budget. And working with that university—it wasn’t just universities, it was universities in partnership with these other partners—I could see that universities, they have a different foot in the door in Congress. They have a different foot in the door in the federal agencies. From Congress’s point of view, it’s very much the hometown of where the Congress person is from. And there’s a lot of connection, there’s a lot of support to ensure that that university is successful. And then in Washington, those universities, especially the Research One universities, do so much important work on that basic research, and then that early research gets magnified and turned into products. So universities are part of this engine, part of this innovation system, that we work with in Washington. And so they are revered in many ways. Not all of ’em, but many are revered. You’re wearing that white hat when you walk in the door.
Now I’ll speak from the national level. So many universities do a wonderful job at the state level. Now that’s critical. If you don’t have the support of your governor, if you don’t have the support of your local leadership over many, many decades, you’re not going to have a successful campus, a successful researcher ecosystem. So that’s critical. But then there are universities that want to invest in Washington, and I think they do that for different reasons. I did work for a couple of universities. I’ve worked for MIT, Texas A&M, and Arizona State. And I’d say Arizona State’s really in this conversation in Washington to really adjust the conversation about the future of higher ed. Texas A&M is here because Texas A&M is very connected to our national defense presence and very much connected to preparedness and response. And that is just something that permeates much of that state and its institutions. And then MIT is really in Washington in order to provide advice and ideas for the future of R&D direction. Really to set a pathway to say, “This is where we’ve been. If you need an idea about what’s lingering at the intersections of work that won’t be done otherwise, here it is.” They offer that advisory role to say, “If you’re interested in the next step, in funding the next step, here’s where that can be found.” Not just for, of course, MIT, but for research one universities engaged in the R&D ecosystem in a real way.
Nicholson: Thank you for giving us a picture of the objectives of those universities. And it’s interesting how different they can be. It also shows sort of an awareness of the political inclinations within the research ecosystem. I’m wondering if you started to develop that when you were on the Hill, or if you thought about that before or after?
Arnold: When I came here, I had already built an interest in politics over time. I had already worked in state level politics in Arizona. I had already been on a Senate campaign. I had already been on multiple House campaigns in New Hampshire. And I had done an internship with Senator Daschle when he was the majority leader, which totally was great.
So really back then, which I hate to say, you had to make a choice about which side you were on, which seemed really daunting for me because prior to going to Arizona, I was in Montana. And in Montana you voted not the party, you voted for the person. Person not the party. And so that was a little bit anathema to me to have to choose the side so early on. And so I came to Washington then pretty seasoned with how I felt. And then I got into the details really when I started working on the Hill and started realizing there’s this broad spectrum of people in Washington. I mean, I’ll tell you, Congress is a great cross section of the people across America, right? I mean, you got all kinds. And I was on the Senate side. But you get a sense that, oh, I’m a certain party, but actually I’m this spectrum end of that party. And so where you figure out where you stand party politics-wise is really in the details of policy and the decisions your boss makes, right? You start thinking, “Hmm, I’m not sure that I totally support that.” And so it doesn’t matter, you have to go forth and do what the member of Congress wants to do because they’re the elected person. But it is an aspect on the Hill.
When you get to the agency, it’s not about a party unless you’re an appointee. When you get to the agency, it’s about service to the U.S. And then when you leave the agency and you go into the private sector or you go into higher ed, it’s really about: I don’t really mind what your point of view is on X vote. What I need you on is this agreement. For instance, I’ll give you an example of a critical priority for higher ed which is the R&D tax credit, research and development tax credit. It’s always been an important lingering priority. It’s a hard one to explain, but it’s not so much how that person voted on some challenging vote. It’s helping them understand that R&D tax credit, where they need to be in support of it, which bills are moving. So you really get into the mechanics and how to move something through a system, and you stop personally being engaged so much in each other’s politics. You’re just there to make good policy on an important topic that you believe, that your institution believes, or that your company believes, will move that good policy ahead and make things better for Americans, which is the goal.
Nicholson: At that first Hill job, were you working in health policy right away, or was that just science more broadly?
Arnold: Oh, you’re going to laugh. So I had just done this fancy science policy degree and I came on. I had come off of nine months of field work, which is literally knocking doors. I mean, it is a tough gig, but I loved it. It was great. I was in Montana and I got hired on after I was doing a stint in the state legislature. And you just get handed a portfolio. Your portfolio—and I think the AAAS scholars will definitely hear and perhaps laugh at too—because you’re just handed a bunch of stuff when you’re a congressional staffer based on the fact that you’re an intelligent human and you can figure it out. I’ll give you one example. I ended up handling a critical component for Montana, which was transportation policy, and that included appropriations at the time. You changed lives by doing this. And I remember the senator came up to me… well, I’ll tell you. So it’s a quick story. Can I tell you a quick story?
Nicholson: Please! Yes.
Arnold: Okay. Okay. The senator, his first year, was put in the Russell Senate office building, which is beautiful and wonderful, but just because of his hierarchy, et cetera, we weren’t very close to his main office. His main office, his chief of staff, was separate from the legislative team, the mail, et cetera. We were upstairs and around the corner. Usually you’re right next to each other. And so I didn’t realize how lucky I was because when he would, or when the chief of staff would, write and say, “We’ve got X in the office, I need to know Y.” Then I could Google quickly and figure out if I hadn’t known it from my own personal expertise. I’m a little bit light on transportation. Better now! But was then, a little bit light on transportation policy. And so I would oftentimes be Googling while running downstairs trying to figure out, “Where is this? Okay, that’s this organization.” To give you some tips to kind of walk in the right direction, to give him some sense for who he’s about to meet with, et cetera.
And we moved, when he got higher up in the echelon of Senators, we moved up to the Hart building. We were all in one office, in one space. I got a nice little cubicle and I was so happy. And then the Senator just rolls up to my cubicle and says, “Yeah, Amanda, I need to know: tell me everything you know about Red Diesel.” I said, “Sir, I know nothing about Red Diesel, but give me five minutes and I will connect what I know about other things. What exactly are you looking for?” Right? So that was the downside.
Nicholson: You lost your commute.
Arnold: I lost my Googling commute. That’s what happened.
Nicholson: What a kind of transformative learning experience. Supporting someone in a really pivotal role, but really challenging yourself at the same time.
So we’ve talked about your Hill experience. We talked a little bit about the agencies and their role. We talked about your work in academia and its representation in Washington. I know that you’ve worked a little bit with the National Academies on convergence. I’m wondering, can you talk a little bit about that work and your involvement in convergence-based research?
Arnold: Yes, that was the honor of my lifetime to date. It was such an honor to participate in that conversation, which I think is still going and facilitating new pathways. So this was back when I was working for MIT. My first day, I was in this group of very talented professors at MIT, and they basically said: the future of biology is integrated very closely with engineering, but NIH doesn't know that; it's focused on biology folks. We need to integrate some additional capacities into this research pathway, because NIH has a lot of money and makes decisions about what gets funded. And if you're not funding research because you don't understand it, because you're not an engineer or it's outside of your expertise in general, then you're not going to fund it, right? Because it's peer-reviewed research. And so I was tasked with supporting that crew of amazing people, including Phil Sharp and Bob Langer and others, people who have founded amazing companies and done amazing things for the U.S. and for the people of the U.S.
I was tasked with putting that into words. And I've done that a couple times for the National Academies. What happened was, basically, we put a white paper out and then the National Academies had a couple opportunities to dig into it. I participated in one convergence report the National Academies did, and then was supportive of another convergence report they did. And there were two other major reports at the time. And then there was some advocacy associated with that convergence effort, which is this integration of the life, physical, and engineering sciences through the peer review process, getting those capacities engaged.
Bioeconomy is sort of a buzzword in Washington, at least it is now. It makes me laugh, because Mary Maxon wrote the first bioeconomy report for the Office of Science and Technology Policy at the White House in, I don't know, 2011 or something. And at that time, we were making the argument to the White House, and specifically to Mary at OSTP, that convergence was a pathway to facilitate the bioeconomy, that we needed it to get alternative products we won't get otherwise because we're not doing this integrated research at the outset. There were some really interesting pathways that developed from the convergence work, including a lot of the team science reports. There were a couple of team science reports that emerged, and then personalized medicine and precision medicine. Those two aspects were also included in and came with convergence. And so I still see convergence manifesting, but I think in a lot of ways it's happening because technology has been integrated in a way that it just wasn't 15 years ago, when the good folks at MIT started having this complaint about their very good peer-reviewed research not being funded because it simply was not sufficiently pure bio.
It's really cool. If I drop some ink in that pathway through my life, it's really cool to watch all the connections that have emerged from it. In Washington, you have these times where you are doing something that's important, that's really going to change lives, that's really catching steam. Convergence was one of them. And the work I do in biodefense is sort of another. And you'll be in these rooms with subsets of the science policy community totally focused on those. I can walk into a room and know where my convergence people are, and I can walk into a room and know where my biodefense people are. And I spend a lot more time on emerging infectious disease and biodefense readiness and preparedness today than I do with convergence. But these things become integrated with the kind of expertise you build, right? These little projects you're given at work make you into the professional you're going to be. You can't get that anywhere other than just being the staffer who does the work. So it's a real honor and a real aspect of being in Washington that I think is absolutely priceless.
Nicholson: The mix of people, the convergence of people.
Arnold: The convergence of people. Yes.
Nicholson: Our last sort of big question is: what motivates you to do this work? What are your motivations, what gets you up in the morning and keeps you up at night?
Arnold: So we all lived through the pandemic. And I was just thinking, "Yeah, sure, I guess we'll solve this problem, but how? Are we going to get a vaccine, and what does that mean, and how did this work? And can we do it again?" I think my own interest in rare and neglected tropical diseases started when I was quite young, because I thought, "Well, people shouldn't die of something that a product can be developed for. That shouldn't be a thing." But now it's much more central to who I am, which is that I don't want to go through another pandemic. I want to prevent pandemics. And so at this point, it's really about making sure that we do everything we can to prevent them. There are so many different little tweaks to cold viruses and other infectious diseases out there, and there's so much work to be done.
So now I get to not only be part of a company that's creating vaccines as an answer, but I also get to be part of the discussion: how do we make sure these vaccines are ready? How do we make sure these vaccines can be manufactured? What if we need to surge in a crisis? And I get to have much bigger discussions that include all the different outbreaks that are happening as we go. We just went through the RSV effort; we're coming out of it now. We had the monkeypox nervousness. These kinds of things flow through our world pretty consistently now. And so I get to actually affect that, change that, and support answers to that. And I mean: does it get better? Right? No, is the answer. It doesn't get any better than that.
Nicholson: I know that you’ve thought about this a lot. I would love to just hear some of your reflections on the way that, from a research enterprise standpoint, the pandemic has changed the way that we think about vaccine development.
Arnold: Oh my gosh. It's such a good question, and I could probably write another big pile of research on it, because we don't fully know yet. On how it's affected vaccine development: the biggest impact, I think, is that we realized that we cannot have a national response. We have to have a global response. So that means we're not being nationally prepared, we're being globally prepared. And if I had to choose one change that the pandemic forced, it was that move away from nationalism and toward globalism. On a very basic level, we're not having those discussions so much about protecting America. We're talking about protecting our citizenry, sure. But we're much more having conversations about what delivery mechanisms are in place to get these vaccines to the places where we know some of these potential contagions are endemic. How do we provide vaccines, for instance, to people in those regions so that (a) there isn't an outbreak there, and (b) an outbreak doesn't manifest here? I mean, you can't talk about the pandemic on the Hill. It's too political. But you can absolutely say the next outbreak is a plane ride away, and people immediately understand the real value of thinking globally versus locally on that particular topic.
Nicholson: And in your current role now in the private sector, do you get to work in that more globalized space in vaccine development?
Arnold: I do. Oh my goodness. So I do very local things. We’re working on developing and supporting something that was authorized in the 2023 budget, which is a state stockpile opportunity to augment the strategic national stockpile. So making sure that’s done well. So that’s very much at the local level. Engaging with state level health department folks about what would you stockpile if you had X dollars and a grant from the United States. So there’s that discussion. But then I was also communicating with a lot of my friends—who are my friends now because we all have been working together for so long—saying, okay, we watched some of these organizations try to provide vaccines globally. And there were a lot of challenges with that, right? There were complaints about how it was done. We had sort of real time learning about what that looks like. And we had never attempted it before. Not we, but the world, had never attempted that necessarily before.
And so now I'm actually working—I worked with my company and several other organizations to talk about what we can do to build the ability to turn over stockpiled components, countermeasures that we're not going to use in the US, right before they expire. How can we give those or sell those or make them usable internationally instead of just letting them go bad? There's quite a lot that we are stockpiling as a country that we could give to areas where there's an endemic issue. And so it is absolutely a jurisdictional issue, but I get to work on that right now. And it's because all of us are personally and professionally engaged in making sure that we're protected, globally and locally. So I'm really excited about that. We're actually working on report language for FY25 and just finished a white paper on it. So exciting times. These are the things I get excited about. White papers. I could frame the white papers I'm super excited about in this world. There are like three.
Nicholson: Yeah, that is all really wonderful. You have such an interesting cross-sectoral background, and I'd love to know a little bit about your motivations for moving between those sectors, and whether that was a trajectory you saw for yourself at the beginning of your career or more opportunity-based along the way.
Arnold: Absolutely. That's such a good question. And I was just thinking about it the other day, thinking, "Oh, I've actually been doing this for a while now. I wonder if this is what I'm going to do for a while now." I think I started very much thinking I was going to be all politics all the time and be in elected politics, maybe work in state politics or national politics, but I knew I wanted to be political. And when I got political, I realized that, ho-ho, it takes a lot of work to take even a minor step. And that's probably how it should be, frankly. It shouldn't be easy to change the law or create new law.
While I appreciated the deliberative aspect of it, it didn't fit my need to make good change in this world. I went over to the federal agencies from the Hill and thought, "Oh, well, maybe this is it, right?" They do the implementation. In this case, I went to NIH, and so it was a matter of helping. No one person does it all, but you're helping to shape what gets discovered based on what blue-sky science, for instance, gets funded. And that was great and was interesting, but I felt like I was just passing a lot of paper around, right? Saying, "Is this the right word? Is that the right word?" Because there is a lot of due diligence that happens at the federal agencies, where you have to actually make sure things are right. And that was fine, but it wasn't driving my need, because that was implementing someone else's recommendations.
So then I went over to MIT, and that's when I could really start this creative process of working with experts in the field to create a pathway, an idea for what's next. And I did that for a long time. I loved working for universities because of that creativity, because of that space where you're working with people who know the world, or the corner of the world they're focused on, and can actually say, "Yeah, this policy would be better if." And so that was really valuable to me, and I probably would've stayed in higher ed if the pandemic hadn't happened. But the pandemic did happen. And because I was in higher ed, doing my PhD at the time, I was studying what happened in the pandemic and how private companies responded, and really getting a vast understanding, at least from my own point of view, of how different companies responded.
I began to get really interested in industry. I'd never been interested in industry. I remember looking at the business school kids when I was in college and thinking, this is not me. It was all politics all the way. Then I got into industry through talking to some of my friends from the biodefense world and doing some consulting, and then started bumping into a couple of companies that I thought were really neat, and ended up where I am, specifically because I'm marrying all of the work I did on the development process with a new understanding of commercialization and manufacturing, which is great.
I'm at a point where I really feel like I can do some really good work and be really supportive of the people making decisions from a perch in DC. And that's the goal. You don't realize it when you're young and you're coming here, but really, in the end, you're looking for a perch from which you're comfortable, from which you can support this process, and, for me, also support the innovation ecosystem that's so critical to me. Just to bring us back around to the beginning, I think that's why I define my career in science policy as based in this support for the R&D and innovation ecosystem, because the two are intricately related in my life.
Nicholson: Thank you so much, Amanda. It’s been really great to talk to you. I think that you’ve really highlighted some of the dynamism of the research enterprise and the opportunities that there are for working and learning from it. So I really appreciate it.
Arnold: Yes! Come do science policy!
Nicholson: If you would like to learn more about Amanda Arnold's work, check out the resources in our show notes. Is there something about science policy you'd like to know? Let us know by emailing us at podcast@issues.org or by tagging us on social media using the hashtag #SciencePolicyIRL.
Please subscribe to The Ongoing Transformation wherever you get your podcasts. Thanks to our podcast producer, Kimberly Quach, and our audio engineer Shannon Lynch. I’m Megan Nicholson, Senior Editor at Issues in Science and Technology. Thank you for listening.
Forks in the Road to Sustainable Chemistry
In “A Road Map for Sustainable Chemistry” (Issues, Winter 2024), Joel Tickner and Ben Dunham convincingly argue that coordinated government action involving all federal funding agencies is needed for realizing the goal of a sustainable chemical industry that eliminates adverse impacts on the environment and human health. But any road map should be examined to make sure it heads us in the right direction.
At the outset, it is important to clear misinterpretations about the definition of sustainable chemistry stated in the Sustainable Chemistry Report the authors examine. They opine that the definition is “too permissive in failing to exclude activities that create risks to human health and environment.” On the contrary, the definition is quite clear in including only processes and products that “do not adversely impact human health and the environment” across the overall life cycle. Further, the report’s conclusions align with the United Nations Sustainable Development Goals, against which progress and impacts of sustainable chemistry and technologies are often assessed.
The nation’s planned transition in the energy sector toward net-zero emissions of carbon dioxide, spurred by the passage of several congressional acts during the Biden administration, is likely to cause major shifts in many industry sectors. While the exact nature of these shifts and their ramifications are difficult to predict, it is nevertheless vital to consider them in road-mapping efforts aimed at an effective transition to a sustainable chemical industry. Although some of these shifts could be detrimental to one industry sector, they could give rise to entirely new and sustainable industry sectors.
As an example, as consumers increasingly switch to electric cars, the government-subsidized bioethanol industry will face challenges as demand for ethanol as a fuel additive for combustion-engine vehicles erodes. But bioethanol may be repurposed as a renewable chemical feedstock to make a variety of platform chemicals with significantly more value compared to its value as a fuel. Agricultural leftovers such as corn stover and corn cobs can also be harnessed as alternate feedstocks to make renewable chemicals and materials, further boosting ethanol biorefinery economics. Such biorefineries can spur thriving agro-based economies.
Another major development in decarbonizing the energy sector involves the government’s recent investments in hydrogen hubs. The hydrogen produced from carbon-free energy sources is expected to decarbonize fertilizer production, now a significant source of carbon emissions. The hydrogen can also find other outlets, including its reaction with carbon dioxide captured and sequestered in removal operations to produce green methanol as either a fuel or a platform chemical. Carbon-free oxygen, a byproduct of electrolytic hydrogen production in these hubs, can be a valuable reagent for processing biogenic feedstocks to make renewable chemicals.
Another untapped and copious source of chemical feedstock is end-of-use plastics. For example, technologies are being developed to convert used polyolefin plastics into a hydrocarbon crude that can be processed as a chemical feedstock in conventional refineries. In other words, the capital assets in existing petroleum refineries may be repurposed to process recycled carbon sources into chemical feedstocks, thereby converting them into circular refineries. There could well be other paradigm-shifting possibilities for a sustainable chemical industry that could emerge from a carefully coordinated road-mapping strategy that involves essential stakeholders across the chemical value chain.
Bala Subramaniam
Dan F. Servey Distinguished Professor, Department of Chemical and Petroleum Engineering
Director, Center for Environmentally Beneficial Catalysis
University of Kansas
Joel Tickner and Ben Dunham describe the current opportunity “to better coordinate federal and private sector investments in sustainable chemistry research and development, commercialization, and scaling” through the forthcoming federal strategic plan to advance sustainable chemistry. They highlight the unfortunate separation in many federal efforts between “decarbonization” of the chemical industry (reducing and eliminating the sector’s massive contribution to climate change) and “detoxification” (ending the harm to people and the environment caused by the industry’s reliance on toxic chemistries).
As Tickner and Dunham note, transformative change is urgently needed, and will not result from voluntary industry measures or greenwashing efforts. So-called chemical recycling (simply a fancy name for incineration of plastic waste, with all the toxic emissions and climate harm that implies) and other false solutions (such as carbon capture and sequestration) that don't change the underlying toxic chemistry and production models of the US chemical industry will fail to deliver real change and a sustainable industry that isn't poisoning people and the planet.
The 125-plus diverse organizations that have endorsed the Louisville Charter would agree with Tickner and Dunham. As the Charter states: “Fundamental reform is possible. We can protect children, workers, communities, and the environment. We can shift market and government actions to phase out fossil fuels and the most dangerous chemicals. We can spur the economy by developing safer alternatives. By investing in safer chemicals, we will protect peoples’ health and create healthy, sustainable jobs.”
Among other essential policy directions to advance sustainable chemistry and transform the chemical industry so that it is no longer a source of harm, the Charter calls for:
preventing disproportionate and cumulative impacts that harm environmental justice communities;
addressing the significant impacts of chemical production and use on climate change;
acting quickly on early warnings of harm;
taking urgent action to stop the harms occurring now, and to protect and restore impacted communities;
ensuring that the public and workers have full rights to know, participate, and decide;
ending subsidies for toxic, polluting industries, and replacing them with incentives for safe and sustainable production; and
building an equitable and health-based economy.
Federal leadership on sustainable chemistry that advances the vision and policy recommendations of the Louisville Charter would be a welcome addition to ongoing efforts for chemical industry transformation.
Steve Taylor
Program Director
Coming Clean
Joel Tickner and Ben Dunham offer welcome and long-overdue support for sustainable chemistry, but the article only scratches the surface of societal concerns we should have about toxicants that result from exposure to fossil fuel emissions, to plastics and other products derived from petrochemicals, and to toxic molds or algal blooms. Their proposals continue to rely on the current classical dose-response approach to regulating chemical exposures. But contemporary governmental standards and industrial policies built on this model are inadequate for protecting us from a variety of compounds that can disrupt the endocrine system or act epigenetically to modify specific genes or gene-associated proteins. And critically, present practices ignore a mechanism of toxicity called toxicant-induced loss of tolerance (TILT), which Claudia Miller and I first described a quarter-century ago.
TILT involves the alteration, likely epigenetically, of the immune system’s “first responders”—mast cells. Mast cells evolved 500 million years ago to protect the internal milieu from the external chemical environment. In contrast, our exposures to fossil fuels are new since the Industrial Revolution, a mere 300 years ago. Once altered and sensitized by substances foreign to our bodies, tiny quantities (parts per billion or less) of formerly tolerated chemicals, foods, and drugs trigger degranulation of mast cells, resulting in multisystem symptoms. TILT and mast cell sensitization offer an expanded understanding of toxicity occurring at far lower levels than those arrived at by customary dose-response estimates (usually in the parts per million range). Evidence is emerging that TILT modifications of mast cells explain seemingly unrelated health conditions such as autism, attention deficit hyperactivity disorder (ADHD), chronic fatigue syndrome, and long COVID, as well as chronic symptoms resulting from exposure to toxic molds, burn pits, breast implants, volatile organic compounds (VOCs) in indoor air, and pesticides.
Most concerning is evidence from a recent peer-reviewed study suggesting transgenerational transmission of epigenetic alterations in parents’ mast cells, which may lead to previously unexplained conditions such as autism and ADHD in their children and future generations. The two-stage TILT mechanism is illustrated in the figure below, drawn from the study cited. We cannot hope to make chemistry sustainable until we recognize the results of this and other recent studies, including by our group, that go beyond classical dose-response models of harm and acknowledge the complexity of multistep causation.
Nicholas A. Ashford
Professor of Technology and Policy
Director, Technology and Law Program
Massachusetts Institute of Technology
Tools That Would Make STEM Degrees More Affordable Remain Unexamined
Last fall’s update from the National Center for Science and Engineering Statistics once again warned that Black, Hispanic, American Indian, and Alaska Native students continue to be among the “missing millions” in science and engineering fields. Despite gradual gains among Black and Hispanic students, all four groups remain disproportionately underrepresented among those earning STEM degrees in the United States. The National Science Board and others have proposed various strategies to address the problem, but the expense of obtaining a STEM degree remains underexplored as a limiting factor.
Controlling for inflation, the cost of attending a public four-year college in the United States has more than doubled since the early 1990s. Concurrently, federal student loan debt has ballooned by over 200%, from $500 billion in 2007 to $1.6 trillion today, though the number of borrowers has only grown by 53% in that time. This surge in borrowing, coupled with nearly one-third of students defaulting on their loans, has not escaped notice. Many in media and policy spheres continue to sound the alarm and explore ways to help borrowers manage their student debt. Most of the federal focus has been on lowering debt borrowers have already accumulated; notably, last year the Biden administration launched its Saving on a Valuable Education Plan, which aims to cancel debt or reduce borrowers’ monthly payments through a number of executive actions.
But reducing student debt in the long term—especially for marginalized populations—requires making college more affordable in the first place. Unfortunately, as evidenced by the growing student debt crisis, current policy tools used to manage the cost of tuition and fees across the ecosystem of US higher education are falling short. Building a more affordable and equitable path to higher education will require policymakers, researchers, and leaders in higher education to broaden the national conversation around existing options, and particularly their impact on underrepresented degree seekers.
The state of college affordability
Undergraduate student loan debt has become unmanageable for a wide swath of borrowers in the United States. Bachelor’s degree recipients borrow on average $41,300, with a median of $30,000. The median borrower still owes 92% of their loan four years after earning a bachelor’s degree, and nearly one-third of people who took out a student loan between 1998 and 2018 fell into default. As part of its emergency response to the pandemic, the US Department of Education suspended action on federal student loans that were in default as of March 13, 2020, until at least September 2024.
Student loan debt is uneven across racial groups. Figure 1 highlights a disparity scholars have found across various datasets and over time: that Black students rely on student loans at a substantially higher rate compared to their peers. Disparities continue when students enter repayment, as Black students not only borrow the largest amount for a bachelor’s degree, but also owe the highest amount four years after graduating. These realities are connected to the interlocking structural inequalities of anti-Black racism that make it more difficult for Black families to accumulate wealth and for Black students to access healthy environments and well-funded schooling.
Recent data has also shown variation in loan repayment patterns by major, challenging the popular assumption that all STEM graduates have similar prospects after college. Though STEM majors still owe a median of 80% of their student loans four years after earning their degree, this varies—from 59% for engineering to 94% for biological and physical sciences and agricultural sciences (Figure 2). These figures do not include the amount of additional debt students may incur in pursuit of further graduate education. Due to interest accrual, delayed repayment of undergraduate student loans can also result in greater debt burdens.
One relatively unremarked aspect of this analysis is that many public colleges and universities in the United States adjust tuition prices based on a student’s progress through higher education or by major, a practice called differential tuition. Public four-year institutions have a history of differentiating tuition based on in-state or out-of-state residence. They may also charge higher tuition or fees for certain majors based on the costs associated with educating students in these programs as well as the projected income of graduates. Figure 3 shows that by the 2015–2016 academic year, more than half of research-intensive public institutions in the United States used differential tuition, generally charging more for majors in business and STEM.
The fact that differential tuition may make a STEM major more expensive than a non-STEM major at some universities deserves more attention when considering how to make STEM degrees more affordable. For example, advanced, in-state students at the University of Maryland pursuing engineering and computer science degrees pay $1,500 more per semester than their peers enrolled in other disciplines (nearly 27% higher).
Of particular concern are the ways in which differential tuition may counteract efforts to attract and retain people historically excluded from STEM fields. Studies have shown that differential tuition policies have reduced the number of degrees awarded in majors with higher tuition, especially for women and students of color. Unfortunately, financial aid increases have been insufficient at offsetting these disparities, especially for low-income students.
A suite of unsustainable solutions
In addition to loans, most students rely on a combination of other federal, state, or local financial aid resources to cover the price of earning a bachelor’s degree. The majority of public and private institutions subsidize education expenses to some extent, and those with significant resources also offer grants large enough to guarantee that students from families below a certain income level will not incur loan debt to attend college. But in general, public US colleges and universities are more limited in their ability to help because tuition subsidies are determined, in large part, by state funding.
During and after the Great Recession of 2008, financial support for higher education dropped significantly in most states. But it wasn’t the first time. States have regularly cut education funding during tough economic times, and although state funding has finally started increasing, and the average subsidy that institutions provide has started to grow, decades of underinvestment have left a mark on public higher education. Deferred maintenance in several key areas like building repairs and further tuition loss from the pandemic have kept the pressure on most institutions to make ends meet. What’s more, state funding has yet to reach pre-2008 levels, which may signal a political shift in how states prioritize funds for higher education. Within the last year, even states with multi-billion-dollar surpluses, like Wisconsin and West Virginia, have cut funding for public institutions.
At the onset of the pandemic, some public and private institutions committed to freezing tuition at 2020 levels. Though tuition caps and freezes may score political points for appearing to improve affordability, several lines of evidence suggest instances of their adoption at the state or institutional level have done little to make college degrees more affordable—especially over the long term. First, caps are often set so high that their effect on student tuition charges is negligible. Second, research has shown that when tuition is capped, student fees rise—and when fees are capped, tuition rises, essentially negating any savings to individual students. Third, studies suggest that institutions may even reduce financial aid in the wake of tuition caps to keep net tuition revenue steady. And finally, evidence suggests that tuition rapidly increases when short-term caps end. As commonly structured in the United States, tuition caps do not have a consistent effect on student enrollment and do little to improve affordability.
A vision for affordability
The United States currently relies on a rough patchwork of policies and mechanisms to project the image of college affordability while actually depending on students to navigate huge variances in higher education costs. Inevitably, they’re often left to shoulder a debt burden that might follow them around for decades. Lessons from other countries on how to assemble the policy patchwork more deliberately—to actually lower student costs and subsidize tuition in targeted disciplines—may help.
Australia’s Commonwealth Grant Scheme combines permanent tuition caps with differential tuition and government subsidies. The government sets caps on the amount of tuition students can be expected to contribute for different majors and provides supplementary government contributions based both on the cost to educate a student in a given major and the government’s prioritization of the major’s importance. For example, in 2014, students pursuing a mathematics major at any public institution would have a maximum student contribution of A$8,613 with a government contribution of A$9,782. In the same year, engineering majors had the same maximum student contribution, but the government contribution was A$21,707—more than twice as much.
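To make the scheme's arithmetic concrete, here is a minimal sketch in Python, purely for illustration, that combines a capped student contribution with a field-dependent government contribution using only the 2014 figures quoted above. The data structure and function names are illustrative assumptions, not part of any official Commonwealth Grant Scheme tooling.

```python
# Illustrative sketch only: under this model, per-student course funding is a
# capped student contribution plus a government contribution that varies by
# field of study. Figures are the 2014 amounts quoted in the text (A$).
CGS_2014_AUD = {
    # field: (maximum student contribution, government contribution)
    "mathematics": (8_613, 9_782),
    "engineering": (8_613, 21_707),
}

def total_course_funding(field_of_study: str) -> int:
    """Total A$ per student the institution receives under this model."""
    student_cap, government_share = CGS_2014_AUD[field_of_study]
    return student_cap + government_share

for field_of_study, (student_cap, government_share) in CGS_2014_AUD.items():
    print(f"{field_of_study}: student pays at most A${student_cap:,}, "
          f"government adds A${government_share:,}, "
          f"total A${total_course_funding(field_of_study):,}")
```

Run as written, the sketch shows the institution receiving A$18,395 per mathematics student and A$30,320 per engineering student, even though both students face the same contribution cap: the government subsidy, not the student charge, absorbs the cost difference between majors.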
Figure 4 shows the significant difference in tuition rates at public four-year institutions in the United States versus Australia. In recent years, the highest annual tuition at public four-year institutions in the United States has been nearly three times the average maximum student contribution in Australia, and still almost twice the highest student contribution in any discipline there.
Another lesson can be drawn from efforts to improve college affordability in the United Kingdom. The UK government capped tuition for all domestic undergraduate students and, at the same time, reduced funding for higher education. Universities and colleges have coped with the situation by cutting courses and programs, slashing faculty and staff compensation, and increasing enrollment of international students paying higher fees. To some extent, the tuition cap, implemented without complementary public support for higher education, has created a trade-off between educational quality and affordability—a reminder that tuition caps alone will not automatically create a more stable higher education sector.
Instituting country-wide tuition caps in the United States would require Congress to craft new policy under the auspices of the Higher Education Act. Several questions would need to be addressed in order to design such a policy: Whom should the caps benefit? Who will set them? How often should they be reset? Should the cap differ by major or by number of classes taken? Should the cap be the same for all states? How might caps affect financial aid and student borrowing, and how should that influence their design? These and other issues need to be rigorously examined to see how and whether tuition caps could feasibly help the United States create a more affordable pathway to college.
Experts on college affordability, tuition setting, and other related topics in higher education should convene to examine the value of tuition caps as a policy, particularly within the context of bringing the missing millions into STEM disciplines. Since most public university subsidies come from state coffers, federal efforts alone are unlikely to solve college affordability. And yet there are no clear policy tools available to ensure that states contribute their due for higher education. The decentralized nature of US higher education conceals useful information from researchers, decisionmakers, and policymakers—like the national average tuition increase for STEM degrees under differential tuition. Higher education leaders, especially in STEM fields, should be invested in creating spaces for ongoing conversations about real changes in college affordability as another avenue for removing barriers to STEM education and careers.
What the Energy Transition Means for Jobs
In “When the Energy Transition Comes to Town” (Issues, Fall 2023), Jillian E. Miles, Christophe Combemale, and Valerie J. Karplus highlight critical challenges to transitioning US fossil fuel workers to green jobs. Improved data on workers’ skills, engagement with fossil fuel communities, and increasingly sophisticated models for labor outcomes are each critical steps to inform prudent policy. However, while policymakers and researchers focus on workers’ skills, the larger issue is that fossil fuel communities will not experience green job growth without significant policy intervention.
A recent article I coauthored in Nature Communications looked at data from the US Bureau of Labor Statistics and the US Census Bureau to track the skills of fossil fuel workers and how they have moved between industries and states historically. The study found that fossil fuel workers’ skills are actually well-matched to green industry jobs, and that skill matching has been an important factor in their past career mobility. However, the bigger problem is that fossil fuel workers are historically unwilling to relocate to the regions where green jobs will emerge over the next decade. Policy interventions, such as the Inflation Reduction Act, could help by incentivizing job growth in fossil fuel communities, but success requires that policy be informed by the people who live in those communities.
Even with this large-scale federal data, it’s still unclear what the precise demands of emerging green jobs will be. For example, will emerging green jobs be stable long-term career opportunities? Or will they be temporary jobs that emerge to support an initial wave of green infrastructure but then fade once the infrastructure is established? We need better models of skills and green industry labor demands to distinguish between these two possibilities.
It’s also hard to describe the diversity of workers in “fossil fuel” occupations. The blanket term encompasses coal mining, natural gas extraction, and offshore drilling, each of which varies in the skills and spatial mobility required of workers. Coal miners typically live near the mine, while offshore drilling workers are on-site for weeks at a time before returning to homes anywhere in the country.
Federal data may represent the entire US economy, but new alternative data offer more nuanced insights into real-time employer demands and workers’ career trajectories. Recent studies of technology and the future of work utilize job postings and workers’ resumes from online job platforms, such as Indeed and LinkedIn. Job postings enable employers to list preferred skills as they shift to reflect economic dynamics—even conveying shifting skill demands for the same job title over time. While federal labor data represent a population at a given time, resumes enable the tracking of individuals over their careers detailing things such as career mobility between industries, spatial mobility between labor markets, and seniority/tenure at each job. Although these data sources may fail to represent the whole population of fossil fuel workers, they have the potential to complement traditional federal data so that we can pinpoint the exact workers and communities that need policy interventions.
Morgan R. Frank
Assistant Professor
Department of Informatics and Networked Systems
University of Pittsburgh
Don’t Let Governments Buy AI Systems That Ignore Human Rights
In 2022, 28-year-old Randal Reid was driving to his mother’s house for Thanksgiving dinner in DeKalb County, Georgia, when he was pulled over by local police and arrested for a crime committed three states away in Jefferson Parish, Louisiana. But Reid had never been to Louisiana, let alone Jefferson Parish. The sheriff’s office there had taken surveillance footage showing a Black man stealing designer purses and fed it to commercial facial recognition software. The software misidentified Reid, who is also a Black man, and led to his arrest. It was not until December 1, after Reid spent a week in jail and thousands of dollars on legal and other fees, that the Louisiana department acknowledged the error and Reid was let go.
Wrongful incarceration is one of several types of human rights violations that unregulated and irresponsible use of AI systems may lead to, but local, state, and federal government procurement regulations in the United States do not require vendors bidding for government contracts to conduct assessments of the quality of data used to build their products, or of their products’ potential bias, risk, and impact. Facial recognition software—including programs sold by IBM, Amazon, and Microsoft—has demonstrated accuracy of more than 90% overall, but a landmark 2018 study found that error rates can be more than 30 percentage points higher for darker-skinned women than for lighter-skinned men. The population of Jefferson Parish is 36% non-white, but, to our knowledge, the sheriff’s office procured the system from Clearview AI without performing any specific assessments of its performance or potential for social harm. (The office did not reply to Issues’ emails asking how it assessed the software.)
In the United States, most AI systems are procured by federal, state, or local entities the same way as traditional software; criteria focus on the cost of a project and a vendor’s past performance. Vendors are not required to, say, demonstrate that their solution can perform as needed in the real world. Systems developers need not prove the provenance or quality of their training data, share their models’ logic or performance metrics, or lay out design decisions, such as the trade-offs they made and the risks they foresaw and accepted. (Where guidelines do exist, they are outdated and limited to assessing risk for privacy and cybersecurity; one current Department of Justice guide dates to 2012.) Vendors can even claim trade secrecy to deny requests for such information, foreclosing critical assessment and independent validation.
Meanwhile, AI tools are increasingly used for applications that could upend people’s lives: criminal investigations, housing placements, social welfare screening, school assessments, and felony sentencing. In 2019, the story broke that for six years, Dutch tax authorities had been using a self-learning algorithm to help create risk profiles for identifying childcare benefits fraud. Largely based on the algorithm’s risk indicators, authorities wrongly fined and demanded tax repayments from tens of thousands of families—who tended to be from ethnic minorities or have lower incomes—and pulled more than a thousand children from their families into foster care. Families were impoverished and destroyed; some victims committed suicide. In 2021, the prime minister of the Netherlands and his entire cabinet resigned over the scandal.
There are other examples of poorly wielded AI technologies that cause life-altering harm. In 2016, the United Kingdom relied on automated voice analyses to verify the identities of those taking English language tests, resulting in around 7,000 foreign students being falsely accused of cheating. The students’ visas were revoked, and they were asked to leave the country. In 2022, US Customs and Border Protection deployed biased facial recognition software that did not accurately detect Black faces, effectively blocking applications from many Black asylum seekers. More and more research papers and news accounts reveal problems resulting from poor training data, unrecognized misassumptions, and misuse of tools intended for good.
The advance of artificial intelligence has been accompanied by a steady stream of high-level recommendations on how to regulate the technology, including the creation of new agencies. Blue-ribbon groups have worked out AI principles or frameworks that focus on the development or governance of AI systems. The White House published the Blueprint for an AI Bill of Rights in 2022; President Biden issued an executive order in 2023 laying out overarching standards to protect Americans’ privacy and civil rights that lists more than a page of AI uses presumed to impact rights. But there is a chasm between making a smart list and instituting real protections—one that can only be bridged through concrete steps. These include legislative action from Congress to regulate private actors as well as formal guidance from the Office of Management and Budget (OMB) on federal use of AI (all of which have yet to materialize). In the meantime, the US government is embedding AI systems into its infrastructure without robust and consistent safeguards.
Federal procurement policy could quickly put this in check by demanding that all federally purchased AI systems respect human rights. This would not only prevent dangerous purchases within the federal system, but could also become a model for state and local governments as well as other entities. Although robust regulations are unquestionably important, mandatory mechanisms embedded within the federal procurement process could go a long way to enhance accountability and avoid societal harm, both before and after other regulations are in place.
We think procurement guidance can be a particularly effective regulation tool because government agencies rarely develop AI systems de novo—they procure commercial ones or hire services from AI vendors. Considering civilian federal agencies alone, there are now more than 1,200 already deployed or planned use cases for AI, according to the US Government Accountability Office. Non-defense agencies requested $1.8 billion in the 2023 federal budget to implement AI technologies. (The Department of Defense itself received $1.8 billion, and has disclosed some 700 ongoing AI projects.)
For years, we have been advocating for more accountability for public actors and firmer centering of human rights in how governments procure and use AI.
Instituting federal procurement guidance at this stage would have cascading effects across the AI marketplace. State and local funding for AI tools often comes from federal agencies, such as the Department of Homeland Security or the Department of Justice (via National Institute of Justice grants), which require compliance with federal procurement guidelines. And when states and municipalities do make their own purchases, they refer to federal guidelines to establish their own practices. Moreover, most AI vendors marketing to state and local clients also have contracts with federal agencies and so create tools with national regulations in mind. Plus, the US federal government is the largest buyer in the world and is seen as a role model by other countries. Even in the absence of broader AI regulations, specific procurement provisions could set expectations for what AI vendors should provide in terms of data quality, model performance, risk assessments, and documentation.
To serve the people
Federal procurement could also do much to shift which groups AI systems are built to serve. Today’s incentives encourage the design of tools for government employees, not for those people most affected by algorithmic recommendations. After relevant personnel within a government agency decide what problems to solve and provide technical requirements, dedicated procurement officers oversee the purchase and ensure compliance with federal regulation as well as any applicable international regulations. But there is a disconnect: those selecting what to purchase may never interact with it again. Intended “end-users” might be social workers, immigration officers, soldiers, or recruiters. Procurement guidelines could help ensure that the needs of multiple people are considered in the process.
In particular, procurement offers a chance to represent the interests of those directly affected by decisions made with AI tools—Randal Reid, for instance, and child welfare applicants, job candidates, and refugees awaiting asylum decisions, to name a few. We think it is remarkable that today’s discussions of AI have no uniform language for the category of people with the most at stake. International development nongovernmental organizations call them “beneficiaries.” The Australian government calls them “customers.” The European Union AI Act calls them “individuals” or “impacted persons.” The Council of Europe’s draft convention on AI mentions “persons interacting with AI,” “affected persons,” and “persons concerned.” OMB’s draft guidelines on federal use of AI call them “impacted individuals” or “customers,” but those terms can concurrently mean “individuals, businesses, or organizations that interact with an agency.”
Even in the absence of broader AI regulations, specific procurement provisions could set expectations for what AI vendors should provide in terms of data quality, model performance, risk assessments, and documentation.
The very fact that there is no uniform term for the group of people most affected by AI tools shows how little their rights are considered in any stage of AI production and deployment—and how poorly prepared industry and governments are to take their needs into account. Companies have plenty of experience designing systems centered on their imagined end user, but we think governments should only purchase systems that demonstrably center the human rights of persons likely to be impacted by them. Procurement standards would ensure that governments uphold their chief duty: to serve the people.
Governments may be able to choose among bids from multiple AI vendors when making a purchase, but they have a monopoly on public services like policing, health care, and public welfare. In the case of Randal Reid, it was not the purchaser or end user of the facial recognition software who was forced to spend a week in jail; it was someone with no say in the system at all. (Indeed, Reid needed legal assistance to even find out that an AI system had identified him, presumably from photos he’d posted on social media.) Often, the more vulnerable the individual, the more they must rely on public services—and the more they are subject to enforcement. Thus, people with the least power are those most exposed to decisions made with AI systems. AI tools are widening the digital divide around public services because the most vulnerable are also less likely to have internet access or other resources (education, connections, implicit knowledge) needed to, say, opt out of default data collection. Whatever people subject to decisions made with AI are called, they deserve fair treatment and recourse to justice.
A procurement framework based on human rights
Relying on definitions in the United Nations’ Universal Declaration of Human Rights, it’s clear that AI systems impact such fundamental rights as the right to privacy, equal protection against any discrimination, access to social security, and access to effective remedies. AI systems can also impede freedoms, such as freedom of expression, association, or freedom from arbitrary arrest.
We think building protection of human rights into procurement decisions (as well as AI design) requires several steps. First is justification that an AI system is indeed a solution. This entails collecting evidence demonstrating why it performs better than other methods and that it can actually work as intended within its operational context. The second step is to formally assess assumptions and design decisions within the AI system with an eye to gauging positive and negative impacts on communities. The Office of the Director of National Intelligence, for example, has published a six-page guide for evaluating an AI system’s appropriateness and potential flaws. The third step depends on that kind of assessment. If it foresees harms, measures should be taken to mitigate them. If mitigation is not possible, the AI system should not be procured. The fourth step is to ensure that the system is transparent enough to allow contestation and legal challenges.
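Read as a gate, these four steps amount to a simple decision rule. The sketch below is a hypothetical illustration in Python, not an existing assessment tool or framework; it assumes each step's finding has already been made and simply encodes the ordering: justification first, then impact assessment, then mitigation or refusal, then contestability.

```python
def procurement_gate(justified: bool,
                     foreseeable_harms: list[str],
                     harms_mitigable: bool,
                     contestable: bool) -> str:
    """Coarse recommendation following the four steps described above."""
    # Step 1: evidence the AI system outperforms other methods and can work
    # as intended in its operational context.
    if not justified:
        return "do not procure: no evidence the system works as intended in context"
    # Steps 2 and 3: assess impacts on communities; mitigate foreseen harms,
    # and walk away if mitigation is not possible.
    if foreseeable_harms and not harms_mitigable:
        return "do not procure: foreseeable harms cannot be mitigated"
    # Step 4: the system must be transparent enough to allow contestation
    # and legal challenge.
    if not contestable:
        return "do not procure: insufficient transparency for contestation"
    return "proceed, with mitigation measures and ongoing assessment"

# Hypothetical example: a justified system with mitigable harms and adequate transparency.
print(procurement_gate(True, ["disparate error rates"], True, True))
```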
Here are broad requirements for a procurement process that centers human rights.
Prohibit AI systems based on scientifically invalid premises. Too many AI systems are designed around spurious concepts and correlations. Many prevalent approaches, including biometric categorization, biometric emotion analysis, and predictive policing, lack scientific validity. Though vendors claim their systems are objective and capable of predicting or inferring such things as a person’s emotions, ethnicity, or likelihood of committing a crime, consensus is growing that they instead identify spurious correlations between disparate data points. The EU AI Act prohibited the use of these systems in many high-stakes contexts. The UN High Commissioner for Human Rights, the European Data Protection Board, and the European Data Protection Supervisor similarly called for a ban on the use of AI systems in public spaces that identify individuals (such as facial recognition technology), saying, “While the justifications for such programmes are currently theoretical and lack supportive evidence, the harms have been real and, often, irreparable.” More than 2,000 researchers signed a letter condemning crime prediction technology based on biometric and criminal justice statistics. Even in the United States, the Government Accountability Office recommended restricting funding to the Transportation Security Administration’s behavioral risk assessment system, which lacked scientific validity.
We would like to see US procurement guidance require vendors to back their claims with peer-reviewed research. Procurement teams cannot be expected to have the expertise to evaluate vendors’ claims on their own.
Enable meaningful transparency. Similar to the US Food and Drug Administration’s requirement for nutrition labels that clearly describe macronutrients, micronutrients, and allergens, AI systems and their outcomes need to be more transparent about key characteristics. Proposals for how to do so have already been put forward by academics and implemented by the business communication company Twilio. These kinds of disclosures would not require vendors to hand over their source code or even ontology, but there is other essential information—the parameters of their training data, the rationale behind the chosen model and performance metrics, the methodology of their testing, and the outcomes of such testing—that should be available for public inquiry. That kind of transparency fulfills requirements that agencies notify the public and collect input before advancing certain programs, allowing others to assess the vendor’s design decisions, the fitness of the AI system for the purpose or context, and the policy choices embedded into the AI system. Importantly, to bring accountability, transparency criteria must be fulfilled before a contract is awarded. If transparency and external stakeholder engagement occur only after procurement, it might be hard to terminate the system and the contract.
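One way to picture such a disclosure is as a machine-readable record filed with every bid. The sketch below is only illustrative; the schema and field names are assumptions rather than any mandated format, and the filled-in example describes a hypothetical vendor and product, not an actual one.

```python
# Illustrative sketch of a disclosure record mirroring the information the text
# argues should be public: training data parameters, model rationale,
# performance metrics, testing methodology, and testing outcomes.
from dataclasses import dataclass, field

@dataclass
class AISystemDisclosure:
    system_name: str
    vendor: str
    intended_use: str
    training_data_summary: str             # provenance, coverage, known gaps
    model_rationale: str                   # why this model and these performance metrics
    performance_metrics: dict[str, float]  # e.g., error rates, ideally disaggregated by group
    testing_methodology: str
    testing_outcomes: str
    known_limitations: list[str] = field(default_factory=list)

# A hypothetical example, for illustration only.
example = AISystemDisclosure(
    system_name="Example face-matching service",
    vendor="Example Vendor, Inc.",
    intended_use="Investigative lead generation; not a sole basis for arrest",
    training_data_summary="Web-scraped face images; demographic coverage undocumented",
    model_rationale="Embedding model chosen for retrieval speed over audited accuracy",
    performance_metrics={"false_match_rate_overall": 0.01},
    testing_methodology="Vendor-internal benchmark; no independent audit",
    testing_outcomes="Accuracy not disaggregated by skin tone or gender",
    known_limitations=["Error rates may vary across demographic groups"],
)
print(example.system_name, example.performance_metrics)
```

A registry of records like this, published before a contract is awarded, would give the public, auditors, and researchers a consistent basis for the kind of scrutiny described here.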
We would like to see US procurement guidance require vendors to back their claims with peer-reviewed research.
Robust transparency practices and expectations would powerfully aid advocacy on behalf of human rights. Academic and journalistic engagement have led to effective monitoring across industries. Efforts like the AI Incident Database and AI, Algorithmic, and Automation Incidents and Controversies Repository do much toward opening these systems to inquiry, but cases are opened only when problems are reported, and essential information often goes missing. Procurement can push to broaden transparency with AI systems that are explainable, traceable, contestable, and subject to third-party testing and verification. And all of this information can be brought into AI registries, enabling more advocacy with concomitant improvements in practice.
Conduct human rights assessments. Any major procurement process should have an obligatory assessment of human rights implications. It should cover three major aspects: vendors’ own analysis, agencies’ domain expertise, and meaningful engagement from external stakeholders, such as the broader public and civil society organizations. That last task may sound chaotic, but the United States has precedent and best practices to get public comment for policy changes that could apply to high-stakes AI procurements. A comprehensive and easy-to-use reference for assessing human rights in this context is the Dutch Ministry of the Interior’s Fundamental Rights and Algorithm Impact Assessment, which splits the assessment into four themes: intended effects (objective), data (input), implementation and algorithm (output), and fundamental rights (impact). Practically, documentation would be collected similarly to existing protocols, such as the required sections on monitoring and evaluation in State Department and US Agency for International Development requests for proposals, or the Cybersecurity Maturity Model Certification requirement for all Department of Defense vendors. When vendors’ proposals are evaluated, human rights assessment should rank as highly as other major criteria, such as desired outputs and key metrics of effectiveness.
The United States has precedent and best practices to get public comment for policy changes that could apply to high-stakes AI procurements.
Use ongoing assessment to ensure public infrastructure systems are adaptable. Once a technology is deployed, it blends into the background, and its assumptions are not questioned again. The focus of the agency shifts to system maintenance, rather than considering what kind of policies and values the technology promotes or what effects it is having on communities. As law, technology, and society scholar Andrew Selbst and colleagues put it, “code calcifies,” locking in any of several possible conceptual traps whereby technical systems fail within social worlds. The emphasis on maintenance preserves the status quo and deprioritizes responses to changing circumstance or inequities. But AI procurement processes and contractual clauses should allow for flexibility in AI approaches because systems and models are still evolving. The research and product landscape shifts continuously. Just as procurement provisions should build in processes for transparency, so too should they secure ongoing assessment of the system and its assumptions, perhaps tied to contract renewals or retained services.
Time to set requirements
The Office of Management and Budget is expected to finalize guidelines on federal use of AI later this year. It was two years late getting started on its mandate to draft them, but as part of the public consultation process, it collected a wealth of recommendations on how federal agencies should develop, design, procure, and use AI systems.
We urge OMB to finalize its guidance without further delay and leave no ambiguity on the rigorous controls that should be implemented: bright-line prohibitions against unscientific AI systems, clear mandates for evidence-based justification, and impact assessments on human rights that are contestable, explainable, and traceable. Furthermore, we think that OMB should be stricter than current draft guidance about requiring assessments and controls to take place before an AI system is put in use. Assessment should happen during the procurement process, when work is still being specified and the purchaser has more leverage—not as an afterthought or after the contract is awarded.
Critics will counter that strict procurement requirements will increase the complexity of an already burdensome process and could drive private companies away from public contracts. But it's hard to imagine successful businesses leaving such a big market unserved. Another objection is that US human rights requirements could increase the costs of AI systems or slow their adoption, and so give China a competitive edge selling AI technology to less conscientious governments. These arguments are analogous to suggesting that passenger planes should be designed without seats or seatbelts or pressurized air because it is simpler and more efficient. The function of democratic governments is protecting their citizens. As the US Department of Justice, Consumer Financial Protection Bureau, Federal Trade Commission, and Equal Employment Opportunity Commission jointly stated, vendors should be held accountable when they fail to address discriminatory outcomes. And vendors can already move toward fulfilling these kinds of requirements and begin to, as a UN group developing guidelines has stated, "operationalize respect for human rights as part of how they do business."
In the absence of specific guidance, AI will continue to be treated as traditional software with minimum safeguards for human rights, which drastically underestimates its vast societal impact. Federal AI procurement will be piecemeal, incurring further risks to human rights as well as national security. Society should not depend on the personal initiative of procurement professionals, but instead be able to count on clear guidance and AI-specific procurement training.
A Focus on Diffusion Capacity
In “No, We Don’t Need Another ARPA” (Issues, Fall 2023), John Paschkewitz and Dan Patt argue that the current US innovation ecosystem does not lack for use-inspired research organizations and should instead focus its attention on diffusing innovations with potential to amplify the nation’s competitive advantage. Diffusion encompasses multiple concepts, including broad consumption of innovation; diverse career trajectories for innovators; multidisciplinary collaboration among researchers; improved technology transition; and modular technology “stacks” that enable components to be invented, developed, and used interoperably to diversify supply chains and reduce barriers to entry for new solutions.
Arguably, Advanced Research Projects Agencies (ARPAs) are uniquely equipped to enable many aspects of diffusion. They currently play an important role in promoting multidisciplinary collaborations and in creating new paths for technology transition by investing at the seam between curiosity-driven research and product development. They could be the unique organizations that establish the needed strategic frameworks for modular technology stacks, both by helping define the frameworks and by investing in building and maintaining them.
Perhaps a gap, however, is that ARPAs were initially confined to investing in technologies that aid the mission of the community they support. The target “use” for the use-inspired research was military or intelligence missions, and any broader dual-use impact was secondary. But today the United States faces unique challenges in techno-economic security and must act quickly to fortify its global leadership in critical emerging technologies (CETs), including semiconductors, quantum, advanced telecommunications, artificial intelligence, and biotechnology. We need ARPA-like entities to advance CETs independent of a particular federal mission.
Arguably, ARPAs are uniquely equipped to enable many aspects of diffusion.
The CHIPS and Science Act addresses this issue in a fragmented way. The new ARPAs being established in health and transportation have some of these attributes, but lack direct alignment with CETs. In semiconductors, the National Semiconductor Technology Center could tackle this role. In quantum, the National Quantum Initiative has the needed cross-agency infrastructure and during its second five-year authorization seeks to expand to more applied research. The Public Wireless Supply Chain Innovation Fund is advancing 5G communications by investing in Open Radio Access Network technology that allows interoperation between cellular network equipment provided by different vendors. However, both artificial intelligence and biotechnology remain homeless. Much attention is focused on the National Institute of Standards and Technology to lead these areas, but it lacks the essential funding and extramural research infrastructure of an ARPA.
The CHIPS and Science Act also created the Directorate for Technology, Innovation, and Partnerships (TIP) at the National Science Foundation, with the explicit mission of investing in CETs through its Regional Innovation Engines program, among others. Additionally, the act established the Tech Hubs program within the Economic Development Administration. Both the Engines and Tech Hubs programs lean heavily into the notion of place-based innovation, where regions of the nation will select their best technology area and build the ecosystem of universities, start-ups, incubators, accelerators, venture investors, and state economic development agencies. While this structure may address aspects of diffusion, it lacks the efficiency of a more directed, use-inspired ARPA.
Arguably the missing piece of the puzzle is an ARPA for critical emerging technologies that can undertake the strategic planning necessary to more deliberately advance US techno-economic needs. Other nations have applied research agencies that strategically execute the functions that the United States distributes across the Economic Development Administration, the TIP directorate, various ARPAs, and state-level economic development and technology agencies. This could be a new agency within the Department of Commerce; a new function executed by TIP within its existing mission; or a shift within the existing ARPAs to ensure that their mission includes investing in CETs, not only because they are dual-use technologies that advance their parent department’s mission but also to advance US techno-economic competitiveness.
Charles Clancy
Chief Technology Officer, MITRE
Senior Vice President and General Manager of MITRE Labs
Cofounder of five venture-backed start-ups in cybersecurity, telecommunications, space, and artificial intelligence
John Paschkewitz and Dan Patt make a strong argument that the biggest bottleneck in the US innovation ecosystem is in technology “diffusion capacity” rather than new ideas out of labs; that there are several promising approaches to solving this problem; and that the nation should implement these solutions. The implicit argument is that another ARPA isn’t needed because the model was created in the world of the 1950s and ’60s where diffusion was all but guaranteed by America’s strong manufacturing ecosystem, and as a result is not well-suited to address modern diffusion bottlenecks.
In my view, however, the need to face problems that the ARPA model wasn’t originally designed for doesn’t necessarily mean that we don’t need another ARPA, for three reasons:
1. While it’s not as common as it could be, there are examples of ARPAs doing great diffusion work. The authors highlight the Semiconductor Manufacturing Technology consortium as a positive example of what we should be doing—but SEMATECH was in fact spearheaded by DARPA, the progenitor of the ARPA model.
2. New ARPAs can modify the model to help diffusion in their specific domains. ARPA-E in the energy sector has "tech-to-market advisors" who work alongside program directors to strategize how technology will get out into the world. DARPA has created a transition team.
The need to face problems that the ARPA model wasn’t originally designed for doesn’t necessarily mean that we don’t need another ARPA.
3. At the core, the powerful thing about ARPAs is that they give program managers the freedom and power to take whatever actions they need to accomplish the mission of creating radically new technologies and getting them out into the world. There is no inherent reason that program managers can’t focus more on manufacturability, partnerships with large organizations, tight coordination to build systems, and other actions that can enable diffusion in today’s evolving world.
Still, it may be true that we don’t need another government ARPA. Over time, the way that DARPA and its cousins do things has been increasingly codified: they are under constant scrutiny from legislators, they can write only specific kinds of contracts, they must follow set procedures regarding solicitations and applications, and they may show a bias toward established organizations such as universities or prime contractors as performers. These bureaucratic restrictions will make it hard for government ARPAs to make the creative “institutional moves” necessary to address current and future ecosystem problems.
Government ARPAs run into a fundamental tension: taxpayers in a democracy want the government to spend money responsibly. However, creating new technology and getting it out into the world often requires acting in ways that, at the time, seem a bit irrational. There is no reason an ARPA necessarily needs to be run by the government. Private ARPAs such as Actuate and Speculative Technologies may offer a way for the ARPA model to address technology diffusion problems of the twenty-first century.
Ben Reinhardt
CEO, Speculative Technologies
John Paschkewitz and Dan Patt make some fantastic points about America’s innovation ecosystem. I might suggest, however, a different framing for the article. It could instead have been called “Tough Tech is… Tough; Let’s Make it Easier.” As the authors note, America’s lab-to-market continuum in fields such as biotech, medical devices, and software isn’t perfect. But it is far from broken. In fact, it is the envy of the rest of the world.
Still, it is undeniably true that bringing innovations in materials science, climate, and information technology hardware from the lab to the market is deeply challenging. These innovations are often extremely capital intensive; they take many years to bring to market; venture-backable entrepreneurs with relevant experience are scarce; many innovations are components of a larger system, not stand-alone products; massive investments are required for manufacturing and scale-up; and margins are often thin for commercialized products. For these and various other reasons, many great innovations fail to reach the market.
Bringing innovations in materials science, climate, and information technology hardware from the lab to the market is deeply challenging.
The solutions that Paschkewitz and Patt suggest are excellent—in particular, ensuring that fundamental research is happening in modular components and developing alternative financing arrangements such as “capital stacks” for late-stage development. However, I don’t believe they are the only options, nor are they sufficient on their own to close the gap.
More support and reengineered processes are needed across the entire technology commercialization continuum: from funding for research labs, to support for tech transfer, to securing intellectual property rights, to accessing industry data sets and prototyping equipment for validating the commercial viability of products, to entrepreneurial education and incentives for scientists, to streamlined start-up deal term negotiations, to expanding market-pull mechanisms, and more. This will require concerted efforts across federal agencies focused on commercializing the nation’s amazing scientific innovations. Modularity and capital are part of the solution, but not all of it.
The good news is that we are at the start of a breathtaking experiment centered on investing beyond (but not in lieu of) the curiosity-driven research that has been the country's mainstay for more than 70 years. The federal government has launched a variety of bold efforts to re-envision how its agencies promote innovation and commercialization that will generate good jobs, tax revenues, and exports across the country (not just in the existing start-up hubs). Notable efforts include the National Science Foundation's new Directorate for Technology, Innovation, and Partnerships and its Regional Innovation Engines program, the National Institutes of Health's ARPA-H, the Commerce Department's National Semiconductor Technology Center and its Tech Hubs program, and the Department of the Treasury's State Small Business Credit Initiative. Foundations are doing their part as well, including Schmidt Futures (where I am an Innovation Fellow working on some of these topics), the Lemelson Foundation, the Walmart Family Foundation, and many more.
As a final note, let me propose that the authors may have an outdated view of the role that US research universities play in this puzzle. Over the past decade, there has been a near-total reinvention of support for innovation and entrepreneurship. At Columbia alone, we offer proof-of-concept funding for promising projects; dozens of entrepreneurship classes; coaching and mentorship from serial entrepreneurs, industry executives, and venture capitalists; matching programs for venture-backable entrepreneurs; support for entrepreneurs wanting to apply to federal assistance programs; connections to venture capitalists for emerging start-ups; access for start-ups to core facilities; and so much more. Such efforts here and elsewhere hopefully will lead to even more successes in years to come.
Orin Herskowitz
Senior Vice President for Applied Innovation and Industry Partnerships, Columbia University
Executive Director, Columbia Technology Ventures
Embracing Intelligible Failure
In “How I Learned to Stop Worrying and Love Intelligible Failure” (Issues, Fall 2023), Adam Russell asks the important and provocative questions: With the growth of “ARPA-everything,” what makes the model succeed, and when and why doesn’t it? What is the secret of success for a new ARPA? Is it the mission? Is it the money? Is it the people? Is it the sponsorship? Or is it just dumb luck and then a virtuous cycle of building on early success?
I have had the privilege of a six-year term at the Defense Advanced Research Projects Agency (DARPA), the forerunner of these new efforts, along with a couple of years helping to launch the Department of Homeland Security's HSARPA and then 15 years at the Bill & Melinda Gates Foundation running and partnering with international development-focused innovation programs. In the ARPA world, I have joined ongoing success, contributed to failure, and then helped launch new successful ARPA-like organizations in the international development domain.
During my time at the Gates Foundation, we frequently asked and explored with partners the question, What does it take for an organization to be truly good at identifying and nurturing new innovation? To answer, it is necessary to separate the process of finding, funding, and managing new innovations through proof-of-concept from the equally challenging task of taking a partially proven innovative new concept or product through development and implementation to achieve impact at scale. I tend to believe that Russell's "aliens" (described in his Prediction 6 about "Alienabling") are required for the early innovation management tasks, but I also believe that they are seldom well suited to the tasks of development and scaling. Experts are good at avoiding mistakes, but taking a risk that is likely to fail within your own field of expertise is a different challenge: there you "should have known better," and failure might be seen as a more direct reflection of your skills.
What does it take for an organization to be truly good at identifying and nurturing new innovation?
Adding my own predictions to the author’s, here are some other things that it takes for an organization to be good at innovation. Some are obvious, such as having sufficient human capital and financial resources, along with operational flexibility. Others are more nuanced, including:
An appetite for risk and a tolerance for failure.
Patience. Having a willingness to bet on long timelines (and possibly the ability to celebrate success that was not intended and that you do not directly benefit from).
Being involved with a network that provides deeper understanding of problems that need to be and are worth solving, and having an understanding of the landscape of potential solutions.
Recognition as a trusted brand that attracts new talent, is valued as a partner in creating unusual new collaborations, and is known for careful handling of confidential information.
Engaged and effective problem-solving in managing projects, and especially nimble oversight in managing the managers at an ARPA (whether that be congressional and administrative oversight in government or donor and board oversight in philanthropy).
Parent organization engagement downstream in “making markets,” or adding a “prize element” for success (and to accelerate impact).
To a large degree, these organizational attributes align well with many of Russell's predictions. But I will make one more prediction that is perhaps less welcome. A bit like Tolstoy's observation about happy and unhappy families in Anna Karenina, there are so many ways for a new ARPA to fail, but "happy ARPAs" likely share—and need—all of the attributes listed above.
Steven Buchsbaum
Principal, Bermuda Associates
Adam Russell is correct: studying the operations of groups built on the Advanced Research Projects Agency model, applying the lessons learned, and enshrining intelligible failure paradigms could absolutely improve outcomes and ensure that ARPAs stay on track. But not all of the author’s predictions require study to know that they need to be addressed directly. For example, efforts from entrenched external interests to steer ARPA agencies can corrode culture and, ultimately, impact. We encountered this when my colleague Geoff Ling and I proposed the creation of the health-focused ARPA-H. Disease advocacy groups and many universities refused to support creation of the agency unless language was inserted to steer it toward their interests. Indeed, the Biden administration has been actively pushing ARPA-H to invest heavily in cancer projects rather than keeping its hands off. Congress is likely to fall into the same trap.
But there is a larger point as well: if you take a fifty-thousand-foot view of the research enterprise, you can easily see that the principle Russell is espousing—that we should study how ARPAs operate—should also be more aggressively applied to all agencies funding research and development.
Efforts from entrenched external interests to steer ARPA agencies can corrode culture and, ultimately, impact.
There is another element of risk that was out of scope for Russell’s article, and that rarely gets discussed: commercialization. DARPA, developed to serve the Department of Defense, and IARPA, developed to serve the government’s Intelligence agencies, have built-in federal customers—yet they still encounter commercialization challenges. Newer ARPAs such as ARPA-H and the energy-focused ARPA-E are in a more difficult position because they do not necessarily have a means to ensure that the technologies they are supporting can make it to market. Again, this is also true for all R&D agencies and is the elephant in the room for most technology developers and funders.
While there have been more recent efforts to boost translation and commercialization of technologies developed with federal funding—through, for example, the National Science Foundation’s Directorate for Technology, Innovation, and Partnerships—there is a real need to measure and de-risk commercialization across the R&D enterprise in a more concerted and outcomes-focused manner. Frankly, one of the wisest investments the government could make with its R&D dollars would be dedicating some of them toward commercialization of small and mid-cap companies that are developing products that would benefit society but are still too risky to attract private capital investors.
The government is well-positioned to shoulder risk through the entire innovation cycle, from R&D through commercialization. Otherwise, nascent technological advances are liable to die before making it across the infamous "valley of death." Federal support would ensure that the innovation enterprise is not subject to the vagaries of the economy or the whims of private capital. The challenge is that R&D agencies are not staffed with people who understand business risk, and thus initiatives such as the Small Business Innovation Research program are often managed by people with no private-sector experience and are so cumbersome and limiting that many companies simply do not bother applying for funding. There are myriad reasons why this is the case, but it is definitely worth establishing an entity designed to understand and take calculated commercialization risk … intelligibly.
Michael Stebbins
President
Science Advisors
As Adam Russell insightfully suggests, the success of the Advanced Research Projects Agency model hinges not only on technical prowess but also on a less tangible element: the ability to fail. No technological challenge worth taking will be guaranteed to work. As Russell points out, having too high a success rate should indicate that the particular agency is not orienting itself toward ambitious “ARPA-hard problems.”
But failing is inherently fraught when spending taxpayer dollars. Politicians have been quick to publicly kneecap science funding agencies for high-profile failures. It is notable that two of the most successful agencies in this mold have come from the national security community: the original Defense Advanced Research Projects Agency (DARPA) and the Intelligence Advanced Research Projects Activity (IARPA). The Pentagon is famously tight-lipped about its failures, which provides some shelter from the political winds for an ambitious, risk-taking research and development enterprise. Fewer critics will readily pounce on a "shrimp on a treadmill" story when four-star generals say it is an important area of research for national security.
Having too high a success rate should indicate that the particular agency is not orienting itself toward ambitious “ARPA-hard problems.”
There are reasons to be concerned about the political sustainability of frequent failure in ARPAs, especially as they move from a vehicle for defense-adjacent research into “normal” R&D areas such as health care, energy, agriculture, and infrastructure. Traditional federal funders already live in fear of selecting the next “Solyndra.” And although Tesla was a success story from the same federal loan portfolio, the US political system has a way of making the failures loom larger than the successes. I’ve personally heard federal funders cite the political maelstrom following the failed Solyndra solar panel company as a reason to be more conservative in their grantmaking and program selection. And it is difficult to put the breakthroughs we neglected to fund on posterboard—missed opportunities don’t motivate political crusades.
As a society and a political system, we need to develop a better set of antibodies to the opportunism that leaps on each failure and thereby smothers success. We need the political will to fail. Finding stories of success will help, yes, but at a deeper level we need to valorize stories of intelligible failure. One idea might be to launch a prestigious award for program managers who took a high-upside bet that nonetheless failed, and give them a public platform to discuss why the opportunity was worth taking a shot on and what they learned from the process.
None of this is to say that federal science and technology funders should be immune from critique. But that criticism should be grounded in precisely the kind of empiricism and desire for iterative improvement that Russell's article embodies. In the effort to avoid critique, we can sometimes risk turning the ARPA model into a cargo cult phenomenon, copied and pasted wholesale without thoughtful consideration of the appropriateness of each piece. It was a refreshing change of pace, then, to see that Russell, when starting up the health-oriented ARPA-H, added several new questions, centered on technological diffusion and misuse, to the famous Heilmeier Catechism questions that a proposed ARPA project must satisfy to be funded. Giving the ARPA model the room to change, grow, and fail is perhaps the most important lesson of all.
Caleb Watney
Cofounder and co-CEO
Institute for Progress
A key obsession for many scientists and policymakers is how to fund more “high-risk” research—the kind for which the Defense Advanced Research Projects Agency (DARPA) is justifiably famous. There are no fewer than four lines of high-risk research awards at the National Institutes of Health, for example, and many agencies have launched their own version of an ARPA for [fill-in-the-blank].
Despite all of this interest in high-risk research, it is puzzling that “there is no consensus on what constitutes risk in science nor how it should be measured,” to quote Pierre Azoulay, an MIT professor who studies innovation and entrepreneurship. Similarly, the economics scholars Chiara Franzoni and Paula Stephan have reported in a paper for the National Bureau of Economic Research that the discussion about high-risk research “often occurs in the absence of well-defined and developed concepts of what risk and uncertainty mean in science.” As a result, meta-scientists who study this issue often use proxies that are not necessarily measures of risk at all (e.g., rates of “disruption” in citation patterns).
I suggest looking to terminology that investors use to disaggregate various forms of risk:
Execution risk is the risk that a given team won’t be able to complete a project due to incompetence, lack of skill, infighting, or any number of reasons for dysfunctionality. ARPA or not, no science funding agency should try to fund research with high execution risk.
Despite all of this interest in high-risk research, it is puzzling that “there is no consensus on what constitutes risk in science nor how it should be measured,” to quote Pierre Azoulay.
Market risk is the risk that even if a project works, the rest of the market (or in this case, other scientists) won’t think that it is worthwhile or useful. Notably, market risk isn’t a static and unchanging attribute of a given line of research. The curious genome sequences found in a tiny marine organism, reported in a 1993 paper and later named CRISPR, had a lot of market risk at the time (hardly anyone cared about the result when first published), but the market risk of this type of research wildly changed as CRISPR’s potential as a precise gene-editing tool became known. In other words, the reward to CRISPR research went up and the market risk went down (the opposite of what one would expect if risk and reward are positively correlated).
Technical risk is the risk that a project is not technically possible at the time. For example, in 1940, a proposal to decipher the structure of DNA would have had a high degree of technical risk. What makes the ARPA model distinct, I would argue, is selecting research programs that could be highly rewarding (and therefore have little market risk) and are at the frontier of a difficult problem (and therefore have substantial technical risk, but not so much as to be impossible).
Adam Russell’s thoughtful and inventive article points us in the right direction by arguing that, above all, we need to make research failures more intelligible. (I expect to see this and some of his other terms on future SAT questions!) After all, one of the key problems with any attempt to fund high-risk research is that when a research project “fails” (as many do), we often don’t know or even have the vocabulary to discuss whether it was because of poor execution, technical challenges, or any other source of risk. Nor, as Russell points out, do we ask peer reviewers and program managers to estimate the probability of failure, although we could easily do so (including disaggregated by various types of risk). As Russell says, ARPAs (any funding agency, for that matter) could improve only if they put more effort into actually enabling the right kind of risk-taking while learning from intelligible failures. More metascience could point the way forward here.
Stuart Buck
Executive Director, Good Science Project
Former Vice President of Research, Arnold Ventures
Adam Russell discusses the challenge of setting up the nascent Advanced Research Projects Agency for Health (ARPA-H), meant to transform health innovation. Being charged with building an organization that builds the future would make anyone gulp. Undeterred, Russell drank from a firehose of opinion on what makes an ARPA tick, and distilled from it the concept of intelligible failure.
As Russell points out, ARPA programs fail—a lot. In fact, failure is expected, and demonstrates that the agency is being sufficiently ambitious in its goals. ARPA-H leadership has explicitly stated that it intends to pursue projects “that cannot otherwise be pursued within the health funding ecosystem due to the nature of the technical risk”—in other words, projects with revolutionary or unconventional approaches that other agencies may avoid as too likely to fail. Failure is not usually a winning strategy. But paired with this willingness to fail, Russell says, is the mindset that “a technical failure is different from a mistake.”
With the right feedback loops in place, technical failures can ultimately turn into insight regarding which approaches truly work. We absolutely agree that intelligible technical failure is crucial to any ARPA's success, and find Russell's description of it brilliantly apt. However, we believe Russell could have added one more note about failure. ARPAs face other types of failure, beyond technical failure, as they pursue cutting-edge technology. Failures stemming from unanticipated accidents, misuse, or misperception also demand attention.
With the right feedback loops in place, technical failures can ultimately turn into insight regarding which approaches truly work.
The history of DARPA technologies demonstrates the “dual use” nature of transformative innovation, which can unlock new useful applications as well as unintentional harmful consequences. DARPA introduced Agent Orange as a defoliation compound during the Vietnam War, despite warnings of its health harms. These are types of failures we believe any modern ARPA would wish to avoid. Harmful accidents and misuses are best proactively anticipated and avoided, rather than attempting to learn from them only after the disaster has occurred.
In fact, we believe the most ambitious technologies often prove to be the safest: we should aim to create the health equivalent of the safe and comfortable passenger jet, not simply a spartan aircraft prone to failure. To do this, ARPAs should pursue both intelligible technical failure and catastrophobia: an anticipation of, and commitment to avoiding, accidental and misuse failures of their technologies.
With regard to ARPA-H in particular, the agency has signaled its awareness of misuse and misperception risks of its technologies, and has solicited outside input into structures, strategies, and approaches to mitigating these risks. We hope consideration of accidental risks will also be included. With health technologies in particular, useful applications can be a mere step away from harmful outcomes. Technicians developing x-ray technology initially used their bare hands to calibrate the machines, resulting in cancers requiring amputation. Now, a modern hospital is incomplete without radiographic imaging tools. ARPA-H should lead the world in both transformative innovation and pioneering safety.
Jassi Pannu
Resident Physician, Stanford University School of Medicine
Jacob Swett
Executive Director, Blueprint Biosecurity
“AI Is a Tool, and Its Values Are Human Values.”
Fei-Fei Li has been called the godmother of AI for her pioneering work in computer vision and image recognition. Li created ImageNet, a foundational large-scale dataset that has contributed to key developments in deep learning and artificial intelligence. She previously served as chief scientist of AI at Google Cloud and as a member of the National Artificial Intelligence Research Resource Task Force for the White House Office of Science and Technology Policy and the National Science Foundation.
Li is currently the Sequoia Professor of Computer Science at Stanford University, where she cofounded and codirects the Institute for Human-Centered AI. She also cofounded the national nonprofit AI4ALL, which aims to increase inclusion and diversity in AI education. Li is a member of the National Academy of Engineering and the National Academy of Medicine, and her recent book is The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI.
In an interview with Issues editor Sara Frueh, Li shares her thoughts on how to keep AI centered on human well-being, the ethical responsibilities of AI scientists and developers, and whether there are limits to the human qualities AI can attain.
Illustration by Shonagh Rae.
What drew you into AI? How did it happen, and what appealed to you about it?
Li: It was a pure intellectual curiosity that developed around 25 years ago. And the audacity of a curious question, which is: What is intelligence, and can we make intelligent machines? That was just so much fun to ponder.
My original entry point into science was physics. I was an undergrad in physics at Princeton. And physics is a way of thinking about big and fundamental questions. One fun aspect of being a physics student is that you learn about the physical world, the atomic world.
What is intelligence, and can we make intelligent machines? That was just so much fun to ponder.
The question of intelligence is a contrast to that. It’s so much more nebulous. Maybe one day we will prove that it’s all just physically realized intelligence, but before that happens, it’s just a whole different way of asking those fundamental questions. That was just fascinating. And of all the aspects of intelligence, visual intelligence is a cornerstone of intelligence for animals and humans. The pixel world is so rich and mathematically infinite. To make sense of it, to be able to understand it, to be able to live within it, and to do things in it is just so fascinating to me.
Where are we at in the development of AI? Do you see us as being at a crossroads or inflection point, and if so, what kind?
Li: We’re absolutely at a very interesting time. Are we at an inflection point? The short answer is yes, but the longer answer is that technologies and our society will go through many inflection points. I don’t want to overhype this by saying this is the singular one.
So it is an inflection point for several reasons. One is the power of new AI models. AI as a field is relatively young—it’s 60, maybe 70 years old by now. It’s young enough that it’s only come of age to the public recently. And suddenly we’ve got these powerful models like large language models—and that itself is an inflection point.
The second reason it’s an inflection point is the public has awakened to AI. We’ve gone through a few earlier, smaller inflection points, like when AlphaGo beat a human Go player in 2016, but AlphaGo didn’t change public life. You can sit here and watch a computer play a Go master, but it doesn’t make your life different. ChatGPT changed that—whether you’re asking a question or trying to compose an email or translate a language. And now we have other generative AI creating art and all that. That just fundamentally changed people, and that public awakening is an inflection point.
And the third is socioeconomic. You combine the technology with the public awakening, and suddenly many of the doings of society are going to be impacted by this powerful technology. And that has profound impacts on business, socioeconomic structure, and labor, and there will be intended and unintended consequences—including for democracy.
Thinking about where we go from here—you cofounded and lead the Institute for Human-Centered AI (HAI) at Stanford. What does it mean to develop AI in a human-centered way?
Li: It means recognizing AI is a tool. And tools don’t have independent values—their values are human values. That means we need to be responsible developers as well as governors of this technology—which requires a framework. The human-centered framework is anchored in a shared commitment that AI should improve the human condition—and it consists of concentric rings of responsibility and impact, from individuals to community to society as a whole.
You combine the technology with the public awakening, and suddenly many of the doings of society are going to be impacted by this powerful technology.
For example, human centeredness for the individual recognizes that this technology can empower or harm human dignity, can enhance or take away human jobs and opportunity, and can enhance or replace human creativity.
And then you look at community. This technology can help communities. But this technology can also exacerbate the bias or the challenges among different communities. It can become a tool to harm communities. So that’s another level.
And then society—this technology can unleash incredible, civilizational-scale positive changes like curing diseases, discovering drugs, finding new materials, creating climate solutions. Even last year’s fusion milestone was very much empowered by AI and machine learning. In the meantime, it can really create risks to society and to democracy, like disinformation and painful labor market change.
A lot of people, especially in Silicon Valley, talk about increased productivity. As a technologist, I absolutely believe in increased productivity, but that doesn’t automatically translate into shared prosperity. And that’s a societal level issue. So no matter if you look at the individual, community, or society, a human-centered approach to AI is important.
Are there policies or incentives that could be implemented to ensure that AI is developed in ways that enhance human benefits and minimize risks?
Li: I think education is critical. I worry that the United States hasn’t embraced effective education for our population—whether it’s K–12 or continuing education. A lot of people are fearful of this technology. There is a lack of public education on what this is. And I cringe when I read about AI in the news because it either lacks technical accuracy or it is going after eyeballs. The less proper education there is, the more despair and anxiety it creates for our society. And that’s just not helpful.
As a technologist, I absolutely believe in increased productivity, but that doesn’t automatically translate into shared prosperity.
For example, take children and learning. We’re hearing about some schoolteachers absolutely banning AI. But we also see some children starting to use AI in a responsible way and learning to take advantage of this tool. And the difference between those who understand how to use AI and those who do not is going to have extremely profound downstream effects.
And of course, skillset education is also important. It’s been how many decades since we entered the computing age? Yet I don’t think US K–12 computing education is adequate. And that will also affect the future.
Thoughtful policies are important, but by policy I don’t mean regulation exclusively. Policy can effectively incentivize and actually help to create a healthier ecosystem. I have been advocating for the National AI Research Resource, which would provide the public sector and the academic world with desperately needed computing and data resources to do more AI research and discovery. And that’s part of policy as well.
And of course there are policies that need to look into the harms and unintended consequences of AI, especially in areas like health care, education, manufacturing, and finance.
You mentioned that you’ve been advocating for the National AI Research Resource (NAIRR). An NSF-led pilot of NAIRR has just begun, and legislation has been introduced in Congress—the Create AI Act—that would establish it at full scale. How would that shape the development of AI in a way that benefits people?
Li: The goal is to resource our public sector. NAIRR is a vision for a national infrastructure for AI research that democratizes the tools needed to advance discovery and innovation. The goal is to create a public resource that enables academic and nonprofit AI researchers to access the tools they need—including data, computing power, and training.
The difference between those who understand how to use AI and those who do not is going to have extremely profound downstream effects.
And so let’s look at what public sector means, not just in terms of AI, but fundamentally to our country and to our civilization. The public sector produces public goods in several forms. The first form is knowledge expansion and discovery in the long arc of civilizational progress, whether it’s printing books or writing Beethoven’s Sixth Symphony or curing diseases.
The second public good is talent. The public sector is shouldering the education of students and continued skilling of the public. And resourcing the public sector well means investing in the future of these talents.
And last but not least, the public sector is what the public should be able to trust when there is a need to assess, evaluate, or explain something. For example, I don’t know exactly how ibuprofen works; most people don’t. Yet we trust ibuprofen to be used in certain conditions. It’s because there have been both public- and private-sector studies and assessments and evaluations and standardizations of how to use these drugs. And that is a very important process, so that by and large our public trusts using medications like ibuprofen.
We need the public sector to play that evaluative role in AI. For example, HAI has been comparing large language models in an objective way, but we’re so resource-limited. We wish we could do an even better job, but we need to resource the public sector to do that.
You’re working on AI for health care. People think about AI as being used for drug discovery, but you’re thinking about it in terms of the human experience. How do you think AI can improve the human experience in our fractured, frustrating health care system? And how did your own experience shape your vision for that?
Li: I’ve been involved in AI health care for a dozen years—really motivated by my personal journey of taking care of an ailing parent for the past three decades. And now two ailing parents. I’ve been at the front and center of caring—not just providing moral support, but playing the role of home nurse, translator, case manager, advocate, and all that. So I’ve seen that so much about health care is not just drug names and treatment plans and X-ray machines. Health care is people caring for people. Health care is ensuring patients are safe, are getting adequate, timely care, and are having a dignified care process.
And I learned we are not resourced for that. There are just not enough humans doing this work, and nurses are so in demand. And care for the elderly is even worse.
That makes me think that AI can assist with care—seeing, hearing, triaging, and alerting. Depending on the situation, for example, it could be a pair of eyes watching a patient fall and alerting a person. It could be software running in the background and constantly watching for changes of lab results. It could be a conversation engine or software that answers patient questions. There are many forms of AIs that can help in the care delivery aspect of health care.
What are the ethical responsibilities of engineers and scientists like you who are directly involved in developing AI?
Li: I think there is absolutely individual responsibility in terms of how we are developing the technology. There are professional norms. There are laws. There’s also the reflection of our own ethical value system. I will not be involved in using AI to develop a drug that is illegal and harmful for people, for example. Most people won’t. So there’s a lot, from individual values to professional norms to laws, where we have responsibility.
But I also feel we have a little bit of extra responsibility at this stage of AI because it’s new. We have a responsibility in communication and education. This is why HAI does so much work with the policy world, with the business world, with the ecosystem, because if we can use our resources to communicate and educate about this technology in a responsible way, it’s so much better than people reading misinformation that creates anxiety or irresponsible expectations of utopia. I guess it’s individual and optional, but it is a legit responsibility we can take.
When you think about AI’s future, what worries you the most, and what gives you hope?
Li: It’s not AI’s future, it’s humanity’s future. We don’t talk about electricity’s future, we don’t talk about steam’s future. At the end of the day, it is our future, our species’ future, and our civilization’s future—in the context of AI.
If we can use our resources to communicate and educate about this technology in a responsible way, it’s so much better than people reading misinformation that creates anxiety or irresponsible expectations of utopia.
So the dangers and the hopes of our future rely on people. I’m always more hopeful because I have hope in people. But when I get down or low, it’s also because of people, not because of this technology. It’s people’s lack of responsibility, people’s distortion of what this technology is, and also, frankly, the unfair role power and money play that is instigated or enhanced by this technology.
But then the positive side is the same. The students, the future generation, the people who are trying to do good, the doctors using AI to cure diseases, the biologists using AI to protect species, the agriculture companies using AI to innovate on farming. That’s the hope I have for AI.
Are there aspects of human intelligence that you think will always be beyond the capabilities of AI?
Li: I naturally think about compassion and love. I think this is what defines us as human—possibly one of the most unique things about humans. Computers embody our values. But humans have the ability to love and feel compassion. Right now, it’s not clear there is a mathematical path toward that.
This Eclipse Could Make You Cry–And Make New Scientists
Douglas Duncan is an astronomer who worked on the Hubble Space Telescope. He is also an eclipse fanatic. Since 1970, he has been to 11 total solar eclipses. When April 8, 2024, comes around, he’ll experience his twelfth with his 600 best friends as he leads a three-day eclipse viewing extravaganza in Texas. “It looks like the end of the world,” he says, and a total eclipse can be a source of intense fascination. He uses the emotional experience of the eclipse as a gateway to learning more about science.
On this episode, Lisa Margonelli talks to Duncan about how he has used this sense of experiential wonder, particularly in planetariums, as a way to invite the public into the joy of science. In previous generations, planetariums were seen as “old fashioned” and isolated from the work of modern astronomers. But Duncan pioneered a career track that combined public teaching at a planetarium with a faculty position at the University of Colorado. Now many planetariums have become places where academic astronomers can share their knowledge with the public.
See the itinerary for Duncan’s “Totality Over Texas” trip, which will be attended by 600 people. The trip offers three days of eclipse-related activities.
Transcript
Lisa Margonelli: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academy of Sciences and by Arizona State University.
On April 8, 2024, North America will experience something that we will not see again for another 20 years: a total solar eclipse. Across the continent, millions will look up at the sky and watch the moon block out the sun, causing darkness to descend for up to four and a half minutes. Celestial events such as these are awe-inspiring and bring the joy of science to the people. But these public engagement opportunities and institutions are often seen as separate from the work of actual scientists. How can these communities better work together to engage the public?
I’m Lisa Margonelli, editor-in-chief at Issues. On this episode, I’m joined by Douglas Duncan, an astronomer at the University of Colorado and former director of the Fiske Planetarium. Doug pioneered a new career track that combined public teaching at a planetarium with a faculty position. He has helped to modernize planetariums across the country. And, he is a bonafide eclipse fanatic.
Doug, welcome!
Doug Duncan: Thank you very much.
Margonelli: You are preparing to go chase the eclipse in early April. Tell me where you’re going and who you’re bringing with you.
Duncan: Well, I’m looking forward to my 12th total eclipse of the last 50 years. And this one is relatively nearby. We’re going about an hour north of Austin, Texas. Me and 600 friends.
Margonelli: Close friends.
Duncan: Close friends. Actually, interestingly enough, roughly one quarter of them are repeat customers. They have chased eclipses somewhere else in the world with me, and it was such a powerful experience, they really, really wanted to see it again. And so they’re back.
A total eclipse of the sun is the most powerful natural spectacle I have ever seen.
Margonelli: The first time I met you, which was a couple weeks ago, you made such a powerful pitch about the joy and amazingness of being in the midst of a total eclipse that I actually am going to take the day off and drive across Maine to the town of Jackman, where I’m going to hope for a big experience under some cloud cover. But what I want you to do is tell us why eclipses are so meaningful and why we should drop everything and go see that eclipse.
Duncan: I have to say that a total eclipse of the sun is the most powerful natural spectacle I have ever seen. That includes standing on the rim of the Grand Canyon which is majestic, being next to the waterfall in Yosemite, seeing the Northern Lights which are marvelous. But if you do see a total eclipse, it looks like the end of the world, and some people start to cry and to scream. I’ve had all kinds of very powerful personal comments to me, but before I repeat any of those, let’s set the scene. Our sun is so incredibly powerful that even a few percent of it keeps everything light. So it’s a common misperception. I meet lots and lots of people who say, “Oh, yeah, I’ve seen an eclipse. It was 90%. Yeah, I saw it.” 90% is not the same. A total eclipse when the sun finally is completely covered by the moon looks a lot like the end of the world. There’s a big black hole in the sky, right where the sun should be, and pink flames all around the edge.
Those are the prominences of the sun. And silver streamers stretching quite far across the sky and little shapes caused by the sun’s magnetic field. And then it gets cold, quite cold. And animals do strange things. The animals definitely respond, and so do the people. So some of the comments that I’ve had, I think my favorite is my daughter’s. She said to me when she saw a total eclipse, “Dad, that was scary fun.” And I’ve had people say, “I feel connected to the universe like never before.” I’ve seen people say, “I, I felt like I was in the presence of God.” I have people say just, “Oh, I was just freaked out.” So it’s a very powerful natural experience. Intellectually, it’s interesting, and a partial eclipse is interesting, but a total eclipse is very emotional.
I suppose one other way I could try and describe that is, even though I have seen 11 total eclipses, when we get to totality, the hair on the back of my neck stands up because something is weird, something isn’t right. And that’s only happened one other time in my life. I was out hiking and I came around a bend and there maybe 25, 30 yards away was a mountain lion. And I can tell you that my response was not, you know, a purely intellectual response. Oh my goodness. There was a mountain lion. Oh, no, not at all. Instantly, the hair on the back of my head stood up and my mind quickly said, “Uh-oh uh-oh, something’s wrong.” And that happens to me, even though I’m a professional astronomer, during an eclipse, there just is this powerful, probably subconscious feeling. Something’s wrong in the world. This is not right. Maybe those astronomers are correct, and in four minutes it’ll all return to normal. But if they’re wrong, maybe this is what the end of the world looks like. So I hope that’s the best I can do to try and convey some of the power of this experience.
Margonelli: Well, so this is a very interesting experience that you’re describing because you are a professional astronomer. You’ve published 50 papers. They’ve been cited thousands of times. You’ve run several planetariums. You are through and through a scientist, and you’re very interested in the communication of science to citizens and getting more people into the experience of science. And you’re taking people to a very emotional experience to see the eclipse as a sort of gateway to science. Science doesn’t normally talk about the end of the world. What are you doing?
Duncan: Implied, perhaps, in what you say is that there may be some conflict between the intellectual pursuit of science and this very emotional experience of an eclipse. I vote with Richard Feynman. He always said—and I agree with him—that it's a false dichotomy to say, for instance, that if you look at a flower and you're just an artist with no scientific background, it's a beautiful flower. Maybe you'll paint it. But if you're a scientist and you know how it works, if you know what's inside that flower, it does not take away from the beauty. Feynman would argue you just appreciate it more if you know more of the details behind the picture.
Having a background in science museums, I know that if you engage people and if they really start to feel pretty passionate about something, that’s the gateway for them to go and learn more.
And I vote with Feynman, you know; it's the same way with an eclipse. I will prepare people who come with me, and we'll talk about what causes eclipses and why they're so rare. And there's all of this, you know, kind of cognitive knowledge, and that's great. But then when the actual thing happens, it's a very, very powerful experience. So I actually think those go together, and they probably should go together. Having a background in science museums, I know that if you engage people and if they really start to feel pretty passionate about something, that's the gateway for them to go and learn more. So, you know, if you appeal to people's curiosity, if you appeal to their interest, if you appeal to just their sense of, wow, that's something beautiful, that's something interesting, I wanna learn more. I see that as a gateway.
Margonelli: Another one of your gateways, I think, was planetariums. Tell me about planetariums.
Duncan: Well, when the planetarium was invented almost exactly a hundred years ago by the Zeiss Company in Germany, it was considered miraculous. It was the first VR experience.
Margonelli: Virtual, so it’s a virtual reality experience to be able to sit underneath the heavens.
Duncan: Absolutely. People would go inside and they saw the stars almost as if it was outside. You have to put yourself in the mind frame of a hundred years ago. The newspaper stories from Germany talked about all these people coming out from the first planetarium theater and saying how incredible it was that you could see the stars in the daytime and that it felt like night. Over the last 20 years, there's been a second revolution in planetariums. That has been going from a big mechanical device that looks like a giant ant, basically a bunch of lenses and lamps to just project the stars, to changing the entire planetarium dome into a digital video screen. So our planetarium, for instance, the Fiske Planetarium at the University of Colorado, is 20 meters in diameter, and on that screen is an 8,000-by-8,000-pixel video. Once you have a video screen, as probably almost anybody listening to this podcast knows, if you're creative, you can do marvelous things. You can create animations, you could take people inside a molecule and look around, you could imagine what it would be like to fall into a black hole, or you could show what's real with a special camera that captures 360-degree video. And you can go to interesting real-life events like a rocket launch or an eclipse.
Margonelli: Could we back up a little bit? I got the impression that maybe you started actually working in a planetarium long before this era, like in your college days at Griffith Planetarium. So you were originally attracted to the immersive experience of the huge ant, like the big metallic projector. And what I remember from when I was a kid was that those projectors, they smelled, they had that machine smell. They were wonderful and scary, and they were a very physical thing. So tell me a little bit about your interest in planetariums growing up.
What other field of science has over a quarter of a million hobbyists who have their own telescope and practice?
Duncan: Well, I think the powerful sight and even the mechanical smell of the big old Zeiss projectors got a lot of kids' curiosity going. But honestly, the thing that really got my curiosity was the subject matter. I, of course, am biased as an astronomer. I decided to be an astronomer in second grade, apparently, because my mother saved all these crayon drawings of the planets. But I have always been intrigued by what's out there. I am one of those people who looks at the sky and wonders: Does it go on forever? How could something go on forever? But if it didn't, how could it stop? What would be on the other side? That actually used to occupy me when I was, say, middle school age. And so, if I'm being honest, although the technology of planetariums was really fascinating, it was the subject matter that sucked me in. I continue to find that today.
You know, if you think of all the different sciences, I think there are only about two that can give astronomy a run for its money in terms of bringing people in with interest. I think that the bird watchers are up there, and I think that people who enjoy the part of geology where you can see and collect minerals and gems, those are pretty high. But what other field of science has over a quarter of a million hobbyists who have their own telescope and practice? And I think it's because of the compelling nature of the questions, and it's also because of the fact that you can see it. I didn't have to have, I don't think, a great imagination to look up at the sky and see all those stars and wonder what's out there.
You’re quite right that the planetarium is interesting technology, but it’s also really interesting subject matter. And by the way, just to finish out that thought, once the planetariums became digital, and once that dome nowadays is filled with video, I think it’s worth pointing out there’s all this whoopty do about VR goggles, you know, and it’s been hyped and over-hyped, at least the past 10 years of my life. But I would point out the whole idea of the VR goggles is when you look left, you see what’s on the left and when you look right, you see what’s on the right. Well, a modern planetarium does that without goggles. It’s the same video, but everybody gets to enjoy it together.
Margonelli: So planetariums offer this immersive experience, and for you, that has been kind of a lifelong project, a place for an intervention around learning and for bringing people into science. Can you talk a little bit about that? I mean, I don't think that everybody in astronomy is as hooked on planetariums as a spot for intervention as you are.
Here’s something that probably most listeners would not know. Until fairly recently, most planetariums had surprisingly little contact with professional astronomers.
Duncan: I think there’s two reasons for that. The first one is that for so many years, planetariums showed the stars and nothing else. And you know that Zeiss machine, when it was invented a hundred years ago was miraculous, but now it’s a hundred years old. And until a couple of decades ago, most planetariums were still the original kind of projector, but astronomy itself had moved on. And so if you asked me when I was in college in the 1970s, the early 1980s, you say, “What’s interesting about astronomy?” Well, hey, we just discovered black holes, quasars, things falling into black holes, landing a rover on Mars. Well, those were the really exciting current things, but the planetariums were so old fashioned, they showed up in Woody Allen movies as a joke.
Here’s something that probably most listeners would not know. Until fairly recently, most planetariums had surprisingly little contact with professional astronomers. They were museum people, and they knew a lot about audiences, but they kept the planetarium astronomers so busy in their building that they didn’t have time to go to astronomy meetings to meet with the people who had just discovered a black hole. It was this very unfortunate dichotomy, and I think that’s why planetariums became so old fashioned. You know, if you’re a practitioner and you’re trying to publish and learn new things, boy, you gotta be on top of what’s happening. And so when I went to the Adler Planetarium, and when I left the staff of the Hubble Space Telescope in about 1991, many of my colleagues came to me from the Hubble Space Telescope, and they said, “Oh my gosh, Doug, why in the world are you going to planetariums? They’re so old fashioned.” And I looked at them and I said, “That’s why I’m going to this venerable, beautiful planetarium in Chicago.” Which was indeed very old fashioned.
And I thought, “Boy, there's such opportunity here.” Because if you ask the typical person on the street, “If you wanna learn some astronomy, where do you go?” they don't say, “I'm gonna go to Caltech.” They say, “I'm gonna go to Griffith Observatory, go to the planetarium.” Millions of people do every year. And so connecting what goes on in the planetarium with the current world of academia was the main reason I went into the planetarium world in the 1990s. And to a certain extent, that has been copied around the country, and I think planetariums are on the whole much more current. They're not showing up in Woody Allen movies anymore. So I think this is good.
Margonelli: So this is interesting because you're sort of an entrepreneur for figuring out how to get people into science. And so you saw this sort of underused space, the planetarium, and thought of a way to kind of update it and, at the same time, bring more people in. It also helped that it coincided with sort of a revolution in LEDs and video and the ability to completely change these immersive experiences in the planetariums. Tell me: What happened then through that technology? I think that you were able to help planetariums across the country sort of modernize or change the programs that they were showing.
Duncan: When you talk about planetariums, half a dozen of them get all the headlines. New York, Chicago, Griffith in Los Angeles, they’re all very famous. Each one of those planetariums makes their own programs, but there’s a thousand planetariums in the US. Most of them are mid-size or small, and none of them have the staff to make their own programs.
So I saw a real opportunity here. Neil Tyson and company make great planetarium programs outta New York, but they rent for $30,000 a year or something like that. And so a small planetarium usually can't afford that. And I was fortunate enough to approach NASA. And NASA has twice now funded the Fiske Planetarium at the University of Colorado to make videos, new programs in 360-degree format, and give them for free to all these mid-size and smaller planetariums around the country. And I couldn't be happier. We couldn't be happier at Fiske. Over 750 planetariums have downloaded various programs that we produce. That counts inside the US and outside the US—it's about 50/50. So that has been really good. Atypically, the Fiske Planetarium is embedded in one of the leading university astronomy and aerospace programs. So all I have to do is walk a couple of buildings over and there are people sending stuff to Mars. So it makes it much easier to be very current than if you're in a planetarium that's isolated: it's downtown, everybody loves it, but it's not part of any academic institution.
Margonelli: So what you’re saying here is that you went from looking at planetariums being very disconnected from science to actually having a planetary embedded within the scientific community at the University of Colorado.
Duncan: Yes. Let’s pursue that one more step. Because to my immense pleasure for perhaps the first time in my career, the leading science research institutions in America like the National Science Foundation and NASA, they have discovered that communicating with the broad public is important to their future. So, you know, I’m coming at it from the side of, we’re the planetarium, we’d like to connect more with the researchers, but all of a sudden, if you wanna get a research grant, you have to describe what’s called a broader impact. You have to communicate what you’re doing to the public. Guess what, researchers, here we are. The way I really raised most of the money at Fiske in the beginning was when NASA said, “You know, you’ve gotta have a broader impact.” We all raised our hands and said, “Hey, broader impact is what we’re about.” Let us write the section of your grant and tell NASA, tell NSF all the ways we can connect astronomy to the public through the planetarium. So that actually worked out very well.
If scientists don’t communicate to the public what they’re doing and the value of what they’re doing, why should the public support it?
I happened to have had lunch one time with Dan Goldin. This was about, I don't know, roughly 30 years ago, when he was the head of NASA. And Dan Goldin was the NASA administrator who set up the idea that every NASA mission should spend 1% of its budget communicating with the public. And you know what, back in the original days, all of the NASA researchers hated it. I was on the staff of the Hubble Telescope at roughly that time. And so many people said, “Oh my God, you're gonna spend half a million dollars reaching out to the public? Give me the half million dollars. I wanna use it on my research program.” And that's not what they did.
I had the chance to ask Dan Goldin, and I said, “Why did you do this?” And he said to me, “It wasn't altruism, it's the future. If scientists don't communicate to the public what they're doing and the value of what they're doing, why should the public support it?” And that is just so true. And we as scientists often forget that. And we stub our toe. And I don't think scientists have anyone to blame other than themselves, but a lot of scientists really denigrate their colleagues who take the time to communicate with the public. I'll remind people who Carl Sagan was: the single best science communicator I ever met or heard in my life, bar none. Absolutely. An incredible communicator. And he didn't get tenure. And that was because he was spending too much time with the public. I think that's so shortsighted, but it does continue. Not everywhere, but it often continues to this day, to our detriment.
Margonelli: I wanna segue to something else that you've done. Okay. So one of the things that you did back when you were teaching at the University of Chicago was that you taught classes about science to students who were not on a science track, and you surveyed them and found out some of their underlying attitudes about science. And what were those? Why were people not studying science?
Duncan: Oh, this was mind-blowing. This was mind-blowing to me. Okay. So I taught, because I really enjoy it, a big intro class of people who are not science majors. Most astronomy departments will have two kinds of introductory classes: one for majors, kind of heavy on the equations and math, and one for non-majors. And I always enjoyed both of them, but especially the non-majors class. And I was teaching one day, and I looked out at my students, and they were all very engaged, but I was saying to myself, “Gosh, none of them are science majors and they've become quite enthusiastic about astronomy. I wonder if any of them would be interested in majoring in my field.” And so I stopped, and I said to the class, “I know that none of you are science majors, but you seem very engaged in the astronomy. I'd like you to take out a piece of paper and write down: Would I ever want to become a science major? Why or why not?” And I collected about 90 of these, and I started reading through them, and this same theme started to come up again and again. And about the sixth or seventh time I read it, I started hitting my forehead, because what the people said was, “No, I'm a creative person, so I'm doing this, I'm doing English literature, I'm doing something else.” And I thought that was a real disconnect, because the reason I went into science is that I'm a creative person and I want to discover and create something new. But then I thought about it more carefully, and I realized that a typical college student who's not a science major is only gonna meet a scientist in the classroom.
I began to realize that the way science is traditionally taught keeps from the students the most engaging things.
And how do we conduct ourselves in the classroom? We dress nicely, we act formally, and we only tell them things that are true because it's in the book, and we want 'em to learn the book. And every one of those things is not honestly characteristic of scientists. People I work with dress in hoodies, you know, and they're most interested in what's unknown, not what's known. And so I began to realize that the way science is traditionally taught keeps from the students the most engaging things. And that was the start, one of the starts, of my really changing the way that I teach.
Margonelli: So first of all, do you feel like that has its roots somewhat in the way that science is taught in the United States, and its sort of post-Sputnik kind of formation?
Duncan: The good or the bad, you know, the old or the newer approach?
Margonelli: The old approach has its roots in that older post-Sputnik sort of approach?
Duncan: No, I don’t think so. And I’m pretty confident that it has its roots in the culture and formality of university teaching. You know, I was at the University of Chicago, that’s the most formal place that I’ve hung out. We’ve got professors in the men in Tweed Coats. My goodness, there have been at least one place that I taught where ties were required. So I do think that the formality should not be blamed on the scientists. It should be blamed on the older style of university teaching. And that’s pretty much gone. But you know, that that vestige of how you should comport yourself as, as a professor, that I think that still has can have a negative effect.
Margonelli: It seems that in exploring how science sometimes fails to communicate with the public, some of this has involved kind of looking inward and thinking about the stories that science tells itself: for example, that it's about being competitive, that you need to compete to have the best ideas, or, as people have talked about, that you need to stake out an idea and then defend it, and that some of these ideas are kind of unscientific.
Duncan: I think you’ve touched onto something which is very important. It has been extremely controversial, and I think it’s controversial still—maybe not as extremely—and that is: Is there more than one way to do good science? And the model under which I was taught, now that I reflect on decades of seeing it in action, is a very male model. It’s very competitive. And the idea is you do some research, you give a talk on it, and all the other people in the room are supposed to critique it. And if there are any weaknesses, that critique is supposed to bring them out. And then you’re supposed to make it better because you’ve been told that’s stupid. You shouldn’t have done that. Improve it. I think there’s some truth in that, you know, that can work. But some people who originally were contrarians said, “Is that the best way to do science? Is the competitive model the only one? What if you had a cooperative model?” And instead of, you know, all trying to shoot each other down, work together.
And as time has gone on, I personally have seen more and more value in that. The particular NASA program, which has funded my outreach through planetariums, has roughly two dozen grant holders across the US. And we have all been told—and we meet pretty routinely—that we're expected to cooperate, to do things together, to help each other out, to leverage each other's programs. Well, in that environment, I have seen with my very own eyes that it's really helpful: when the programs are cooperating, they're reaching more people, and the programs are getting better.
I see increasing value in being cooperative. And so the kind of old model where science is very cutthroat, and probably scares a lot of people away from it as a professional field because of that, that is not the only model.
Now that’s only one part of NASA. There’s another part of NASA, you know, that competes missions, and they only have a limited amount of funding. So are they gonna go to Mars? Are they gonna go to the moon of Jupiter? Who’s gonna get the money? And those proposals are very, very competitive, and everybody works really hard to make the best proposal. So I do see some value in being competitive, but I see increasing value in being cooperative. And so the kind of the old model where science is very cutthroat and probably scares a lot of people away from it as a professional field because of that, that is not the only model. In fact, back when I was at Space Telescope in 1990, we put together the first conference in the history of astronomy to look at the sociology of our field. It was called Women in Astronomy. Nowadays it’s called Women in Astronomy I, because there have been some more meetings. But that was the first time I ever heard raised the methodology of doing science.
Who does it engage? Who does it scare away? It's not very typical, in my experience, for physical scientists at least to be self-reflective like that. You know, I never took a course in sociology or psychology, but it seems pretty important every once in a while to look in the mirror, look around at your field, and ask: Is this field engaging everyone that it could? And if it's not, what is it that you're doing, or not doing, that's being a barrier to people? And fortunately, those discussions have happened a lot more. I don't agree with all of them. You know, not all ideas for engaging people are good, but it's important to figure out what is the right way to engage more people.
Margonelli: Well, thank you. This is interesting because there's so much discussion now about science education in the United States and the need to bring in what is sometimes called the missing millions: people who could come into science but are turned off in one way or another or not welcomed. And there's also the fact that we live in an increasingly technological and science-informed world. And you can't really be a citizen without knowing quite a bit of science. I mean, it's very hard to just make a decision about when to call your doctor without some sense of science. So the model frequently focuses on what the public doesn't know; the scientists' model of this frequently focuses on things the public must know. And one of the things that your trip through planetariums and astronomy and schools and informal learning situations and now eclipses brings out is that it takes a fair amount of introspection for scientists to think about how to be more open.
Duncan: Well, certainly openness is one aspect of making people feel welcome and engaged, but I'm gonna say something which is different from what many people are saying nowadays. You know, how do you engage people who haven't previously been engaged in science? And I would never downplay the fact that role models are important. You know, they are. If you can see an astronaut and imagine that, gosh, one day I could be an astronaut, that's important. And if there were no women who are astronauts, as was true up until the time of my PhD, or there are no minorities who are astronauts, that's a big negative, because you can't see somebody and as readily identify with them. But I'm not convinced that that's the number one thing.
Universities would be better if they worked like my science museum in the sense that every time you come to class, you pay $20. Okay? If instead of collecting all the money upfront, we did it like a museum does, oh, classes would be so much better. They'd be more engaging.
Here’s what I think is the number one thing. The best science museum on planet Earth is the Exploratorium in San Francisco. It is so good that the French National Science Museum, La Villette, 10% of it is copied from the Exploratorium in America. The French never copy Americans except the Exploratorium is so good. And in this time of Oppenheimer, I’d be remiss if I didn’t say it was Robert Oppenheimer’s brother Frank who invented this wonderful science museum. If you go into the Exploratorium, you see an extremely diverse group of young people in there. Why is that? It’s not because somebody has invited them or told them the story that they belong. It’s because it’s fun. There’s electricity that you can play with and there’s all kinds of things that you can play with and get hands on and discover for yourself. It was designed to be the opposite of an old traditional museum with every beautiful thing under glass. Everything is out and everything has a sign on it. Touch me, play with me. You know? And I think that’s underestimated.
I think that if you give people just the chance to do things which are hands-on and fun and accessible, especially with younger people, everybody loves that. Once you get to be older, you start to get indoctrinated that, well, these are for boys and these are for girls, these are for white people, these are not for you. But you know, at the age that museums really try and engage people, every fourth grader loves the Exploratorium. And I think that's because we learn as humans through curiosity, right? We learn through experimentation. Every young person is born a scientist. I remember my daughter, she was quite a scientist. I can't remember exactly how old she was, about a year old, sitting in her highchair, when she discovered gravity. Ooh, the spoon falls. Ooh, the food falls. Ooh, look at that, it splattered all over the ground. So much fun. We learn in the same way that scientists learn, right? Of course, science becomes more formal. But I think that what we need to do is to not inhibit people through the wrong kind of schooling. And that's why it can be fun to work at a museum, because, you know, I jokingly say schools and universities would be better if they worked like my science museum in the sense that every time you come to class, you pay $20. Okay? If instead of collecting all the money upfront, we did it like a museum does, oh, classes would be so much better. They'd be more engaging. The students would not only learn, but they would have a relatively good time, because that would be one of the goals. So the good science museums succeed by teaching, engaging, and entertaining at the same time. And they have the motivation to do that, because otherwise people won't come back.
Margonelli: And on that note, this has been a really fun conversation. It’s been very nice to talk to you and we’ve covered a lot of ground. Thank you very much, Doug.
Duncan: It’s been a great pleasure. And thanks for spreading these good words.
Margonelli: If you would like to learn more about Doug Duncan’s work, check out the resources in our show notes.
Please subscribe to The Ongoing Transformation wherever you get your podcasts. Thanks to our podcast producer, Kimberly Quach, and our audio engineer Shannon Lynch. I’m Lisa Margonelli, Editor in Chief at Issues in Science and Technology. Thank you for listening.
FLOE: A Climate of Risk
STEPHEN TALASNIK, Glacial Mapping, 2023; Digitally printed vinyl wall print, 10’ x 14’ (h x w)
Imagination can be a fundamental tool for driving change. Through creative narratives, we can individually and collectively imagine a better future and, potentially, take actions to move toward it. For instance, science fiction writers have, at times, seemed to predict new technologies or situations in society—raising the question of whether narratives can create empathy around an issue and help us imagine and work toward a desirable outcome.
Philadelphia-born artist Stephen Talasnik takes this question of narratives seriously. He is a sculptor and installation artist whose exhibition, FLOE: A Climate of Risk, is on display at the Museum for Art in Wood in Philadelphia, Pennsylvania, from November 3, 2023, through February 18, 2024. Talasnik’s work is informed by time, place, and the complex relationship between ideas that form a kind of “functional fiction.” Through FLOE, Talasnik tells the story of a fictitious shipwreck that was carried to Philadelphia by the glacier in which it was buried. As global temperatures warmed, the glacier melted and surrendered the ship’s remains, which were discovered by mischievous local children. The archaeological remains and reconstructions are presented in this exhibition, alongside a sculptural representation of the ice floe that carried the ship to its final resting place. Talasnik uses architectural designs to create intricate wood structures from treated basswood. By building a large wooden model to represent the glacier, the artist evokes a shadowy memory of the iceberg and reminds visitors of the sublime power of nature and its constant, often destructive, search for equilibrium.
STEPHEN TALASNIK, A Climate of Risk – Debris Field (detail)
“FLOE emerged from the imagination of Stephen Talasnik, an artist known worldwide for his hand-built structures installed in natural settings,” writes Jennifer-Navva Milliken, executive director and chief curator at the Museum for Art in Wood. “The exhibition is based on a story created by the artist but touches on the realities of climate change, a problem that exposes the vulnerability of the world’s most defenseless populations, including the impoverished, houseless, and stateless. Science helps us understand the impact through data, but the impact to humanity is harder to quantify. Stephen’s work, through his complex storytelling and organic, fragmented sculptures, helps us understand this loss on the human scale.”
For more information about the exhibition and a mobile visitors’ guide, visit the Museum for Art in Wood website.
STEPHEN TALASNIK, Glacier, 2023; Pine stick infrastructure with bamboo flat reed, 12 ft tall with a footprint of 500 sq ft (approx.)
STEPHEN TALASNIK, Leaning Globe, 1998 – 2023; Painted basswood with metallic pigment, 28 x 40 x 22 inches (h x w x d)
STEPHEN TALASNIK, Tunneling, 2007 – 2008; Wood in resin, 4 x 8 x 12 inches
STEPHEN TALASNIK, House of Bones, 2015-2023; Wood and mica; 32 x 24 x 6 inches
Economists Being Economists
In early nineteenth-century England, bands of men known as Luddites went about smashing the automatic knitting machines that had taken their jobs. The novelist Thomas Pynchon described the situation in a 1984 essay titled “Is It OK to Be a Luddite?”: “The knitting machines which provoked the first Luddite disturbances had been putting people out of work for well over two centuries. Everybody saw this happening—it became part of daily life. They also saw the machines coming more and more to be the property of men who did not work, only owned and hired. It took no German philosopher, then or later, to point out what this did, had been doing, to wages and jobs.”
Today, rising economic inequality is a hot topic of political debate in the United States, and economists, rather than German philosophers, are paying attention to the effects of automation on jobs and wages. In this vein, Power and Progress, by Massachusetts Institute of Technology professor Daron Acemoglu and his MIT colleague and former International Monetary Fund chief economist Simon Johnson, spends hundreds of pages trying to say what Pynchon says in several hundred words. It’s a sweeping, big-theory-of-history book—“Our 1,000-Year Struggle Over Technology and Prosperity,” as the subtitle says.
According to Power and Progress, it all started, like the Bible, with a fall from grace. In the beginning, your average hunter-gatherer lived a happy life foraging a few hours a day, doing the work of one person. But the apple of innovation changed all that by magnifying what one person could do, first through organized agriculture, then through water- and wind- and horse-powered machines, and so on. Now, powerful men, pharaohs, emperors, and the like could create untold wealth for themselves by extracting the results of all this additional work-per-person from the masses that they controlled. Productivity was the original sin.
If you want a theory that accounts for 1,000 (or, actually, 10,000) years of inequality, you’ll need an independent variable, an underlying causal driver. For a minute I thought that, in assigning this role to productivity, the authors were onto something weird enough to be interesting.
If you want a theory that accounts for 1,000 (or, actually, 10,000) years of inequality, you’ll need an independent variable, an underlying causal driver. For a minute I thought that, in assigning this role to productivity, the authors were onto something weird enough to be interesting. Certainly, the cult of productivity—whose members are mostly economists and their confederates in tech and finance who want to explain why destroying jobs in the name of greater productivity is actually to everyone’s benefit—is well worth taking down.
But the book’s argument hinges on replacing one variable with another. As Acemoglu and Johnson explain, in competitive labor markets, wages are determined not by increased productivity (output per worker), but by marginal productivity of labor—that is, the increased productivity created by additional workers. According to the authors, this distinction is one that many economists and economics textbooks fail to make.
“All of this brings home perhaps the most important thing about technology: choice” (their italics). As they explain it, “Technology has increased inequality largely because of the choices that companies and other powerful actors have made.” For a better world, the men who do not work, only own and hire—as Pynchon termed them—should not choose automation that increases productivity and profitability while eliminating jobs. Instead, they must choose to invest in “worker friendly technologies”—automation that increases the marginal productivity of labor and protects, creates, and expands well-paying jobs.
So why don’t they? Because we are persuaded by the “blind techno-optimism” which has us believing that the productivity growth that maximizes corporate profits also automatically delivers more and better jobs. (Whoever “we” are—I use the first-person plural here because that’s what the authors insist on, from the book’s first sentence. Presumably they aren’t referring to themselves.)
It has long been thus. Acemoglu and Johnson are very impressed with the “power to persuade” that some men have had. Starting in the eighteenth century, “what comes clearly into view is how those who stood to gain got their way by linking arguments for their preferred technology choice with what they claimed to be the common good.” Take Ferdinand de Lesseps, the nineteenth-century French diplomat and developer of the Suez Canal, who also made an early and unsuccessful attempt to build a Panama Canal. How did he mobilize the resources necessary for these canals? “Lesseps had the power to persuade.” Who did he need to persuade? The politicians and investors who had something to gain from his “version of technological optimism.” To serve their interests, tens of thousands were consigned to brutal, deadly regimes of coerced labor in building the canals.
But it was just a choice: “None of this was inevitable.”
To say that “choice” is “the most important thing about technology,” without confronting the meaning of “choice” itself, is to say nothing at all.
On late seventeenth-century agricultural innovation and the immiseration of the rural populations in England: “None of this was inevitable.” On the rapid expansion of job-creating industries after World War II: “It would be incorrect to think that postwar technology was preordained.” On the destruction of jobs and livelihoods through automation and artificial intelligence today: “None of this had to be the case.”
To say that a particular social and economic arrangement of technology at a particular time was not inevitable is a truism if there ever was one. But it does not mean that any particular alternative arrangement was plausible, or that, starting from the present, a particular future arrangement can be “chosen.” To say that “choice” is “the most important thing about technology,” without confronting the meaning of “choice” itself, is to say nothing at all. In 400 pages.
If “choice” really were “the most important thing about technology,” you’d think the authors would draw on at least some of the voluminous research and writing about social, political, and institutional choice; about how firms innovate and adopt technology; about how cultures, governments, and social movements help to shape it, and are shaped by it. But the reader will search in vain for the influence of the likes of James Madison, John Dewey, Herbert Simon, James March, Mary Douglas, Kenneth Arrow, Christopher Freeman, Richard Nelson, Lewis Mumford, Sheila Jasanoff, Thomas Hughes, and so forth. Above all, you’d think the authors would appreciate that, at the scales that influence wage structure, there is no “choice” in any conventional sense of the word. Rather, technological regimes emerge within complex, contingent arrays of institutions, actors, interests, incentives, and beliefs, further nested within powerfully constraining historical and cultural contexts.
While the book promises a grand theory about democratic choice, technological change, and economic inequality, by its end this promise has dissipated in a cloud of caveats and counterexamples.
And indeed, while the book promises a grand theory about democratic choice, technological change, and economic inequality, by its end this promise has dissipated in a cloud of caveats and counterexamples. Of the early Industrial Revolution in England (but it might as well be today), the authors are content with more truisms: “Because workers were not organized and lacked political power, employers could get away with paying low wages.” Where’s the choice in that? As the later chapters move through the twentieth century to the current era of robot- and AI-enhanced wage inequality, the authors acknowledge, time and again, that institutions, politics, and culture are always constraining future technological pathways. Thus, “worker-friendly technologies” turn out to be a consequence of tight labor markets and politically empowered workers. They are adopted when firms actually have no choice but to provide good salaries and strong job protections.
Ignoring relevant scholarship undermines other key elements of Power and Progress. Critiques of technology run through the book, but Acemoglu and Johnson fail to draw on a century of thinking and writing about the downsides of technological change. Perhaps more surprising are the discussions of how periodic spasms of innovation across many interdependent industries (such as steel and railroads) led to rapidly increasing demand for labor and rising wages. These discussions seem entirely uninformed by the rich historical scholarship of innovation economists working in the tradition of Joseph Schumpeter—who is, incredibly, unmentioned and uncited here.
As I write, United Auto Workers union members have just concluded a successful strike against American automakers, a reminder that hard political battles, not technological choice, often lie behind better wages. But the percentage of unionized American workers continues to drop, and the decline of manufacturing and the rise of the service sector and gig economy have narrowed opportunities for many workers to pursue job security and good wages. Meanwhile, decades of growing wage inequality seem to feed into populist politics in the United States and Europe. Acemoglu and Johnson start their book by explaining that “we” have all been suckered into believing that technology-created productivity growth will bring a future of more, higher-paying jobs for all. Again, who is the “we”? Disaffected workers, often said to constitute the core of Donald Trump’s popularity, would certainly appear not to be taken in.
Why not state the obvious? So long as the power asymmetry between corporate ownership and workers persists, the challenge of wage inequality for US workers will remain largely unsolved.
Cornered by their own narrative, Acemoglu and Johnson end up offering a grab-bag of policy recommendations that have nothing to do with “technological choice,” such as breaking up big technology companies, reforming taxes to favor corporate investment in workers rather than automation, investing in worker training, and the like. Worthy and venerable stuff. But for a book that promised to provide a new theory of technology and inequality, it’s a whimper of an ending.
Why not state the obvious? So long as the power asymmetry between corporate ownership and workers persists, the challenge of wage inequality for US workers will remain largely unsolved. My friend and colleague Ned Woodhouse, to whom I was complaining about Power and Progress, reminded me that the political economist Charles Lindblom nailed the problem at the end of his 1977 book Politics and Markets:
It has been a curious feature of democratic thought that it has not faced up to the private corporation as a peculiar organization in an ostensible democracy. Enormously large, rich in resources, the big corporations [can] insist that government meet their demands, even if those demands run counter to those of citizens…. They are on all these counts disproportionately powerful…. The large private corporation fits oddly into democratic theory. Indeed, it does not fit.
In failing to take on this fundamental contradiction, all the talk of democracy and technological choice adds up to a future that looks much like the past. The big theory boils down to: “machines coming more and more to be the property of men who did not work, only owned and hired.” Workers of the world, read something else.
AI-Assisted Biodesign
AMY KARLE, BioAI Mycelium Grown Into the Form of Insulators, 2023
Amy Karle is a contemporary artist who uses artificial intelligence as both a medium and a subject in her work. Karle has been deeply engaged with AI, artificial neural networking, machine learning, and generative design since 2015. She poses critical questions about AI, illuminates future visions, and encourages us to actively shape the future we desire.
AMY KARLE, AI Bioforms for Carbon Capture, 2023
AMY KARLE, Cell Forms (AI-assisted design), 2023
AMY KARLE, AI Coral Bioforms, 2023
One of Karle’s projects focuses on how AI can help design and grow biomaterials and biosubstrates, including guiding the growth of mycelium-based materials. Her approach uses AI to identify, design, and develop diverse bioengineered and bioinspired structures and forms and to refine and improve the structure of biomaterials for greater functionality and sustainability. Another project is inspired by the seductive form of corals. Karle’s speculative biomimetic corals leverage AI-assisted biodesign in conjunction with what she terms “computational ecology” to capture, transport, store, and use carbon dioxide. Her goal with this series is to help mitigate carbon dioxide emissions from industrial sources such as power plants and refineries and to clean up highly polluted areas.
AMY KARLE, BioAI-Formed Mycelium, 2023
AMY KARLE, AI Coral Bioforms, 2023
AMY KARLE, Cell Forms (AI-assisted design), 2023
AMY KARLE, BioAI-Formed Mycelium, 2023
Rethinking Engineering Education
We applaud Idalis Villanueva Alarcón’s essay, “How to Build Engineers for Life” (Issues, Fall 2023). As the leaders of an organization that has for 36 years sought to inspire, support, and sustain the next generation of professionals in science, technology, engineering, mathematics, and medicine (STEMM), we support her desire to improve the content and delivery of engineering education. One of us (Fortenberry) has previously commented in Issues (September 13, 2021) on the challenges in this regard.
We agree with her observation that education should place an emphasis on learning how to learn in order to support lifelong learning and an individual’s ability for continual adaptation and reinvention. We believe that in an increasingly technological society there is a need for professionals trained in STEMM to work in a variety of fields. Therefore, there is a need for a significant increase in the number of STEMM professionals graduating from certificate programs, community colleges, baccalaureate programs, and graduate programs. Thus, we agree that basic engineering courses should be pumps and not filters in the production of these future STEMM professionals.
We strongly support the author’s call for “stackable” certificates leading to degrees. The same holds for further increasing the trend toward pushing experiential learning activities (including laboratories, design-build contests, internships, and co-ops) earlier in engineering curricula.
We need to ensure that underserved students have opportunities for rigorous STEMM instruction in pre-college education.
Over the past 40 years, a number of organizations and individuals have worked to greatly improve engineering education. Various industrial leaders, the nongovernmental accrediting group ABET, the National Academies of Sciences, Engineering, and Medicine, the National Science Foundation, and the National Aeronautics and Space Administration, among others, have helped engineering move to a focus on competencies, recognize the urgency of interdisciplinary approaches, and emphasize the utility of situating problem-solving in systems thinking. But much work remains to be done.
Most particularly, significant work remains in engaging underserved populations. And these efforts must begin in the earliest years. The author begins her essay with her own story of being inspired to pursue engineering by her father. We need to reach students whose caregivers and relatives have not had that opportunity. We need to provide exposure and reinforcement through early and sustained hands-on opportunities. We need to ensure that underserved students have opportunities for rigorous STEMM instruction in pre-college education. We need to remove financial barriers to attendance of high-quality collegiate STEMM programs. And for the precious 5–7% of high school graduates who enter collegiate STEMM majors, we must hold on to more than the approximately 50% that, as the national average, are currently retained in engineering through baccalaureate graduation. We need to ensure that, once individuals have entered a STEMM profession, there are supports in place for retention and professional advancement. The nation’s current legal environment has caused great concern about our ability to target high-potential individuals from underserved communities for programmatic, financial, professional, and social support activities. We must develop creative solutions that allow us to continue and expand our efforts.
Great Minds in STEM is focused on contributing to the attainment of the needed changes and looks forward to collaborating with others in this effort.
Juan Rivera
Chair of Board of Directors, Great Minds in STEM
Retired Director of Mission 1 Advanced Technologies and Applications Space Systems, Aerospace Systems Sector, Northrop Grumman Corporation
Norman Fortenberry
Chief Executive Officer, Great Minds in STEM
In her essay, Idalis Villanueva Alarcón outlines ways to improve engineering students’ educational experience and outcomes. As leaders of the American Society for Engineering Education, we endorse her suggestions. ASEE is already actively working to strengthen engineering education with many of the strategies the author describes.
A system in which “weeder courses” are used to remove “defective products” from the educational pipeline is both outdated and counterproductive in today’s world. We can do better, and we must.
As Villanueva explains, a system in which “weeder courses” are used to remove “defective products” from the educational pipeline is both outdated and counterproductive in today’s world. We can do better, and we must. To help improve this system, ASEE is conducting the Weaving In, Not Weeding Out project, sponsored by the National Science Foundation, under the leadership of ASEE’s immediate past president, Jenna Carpenter, and in collaboration with the National Academy of Engineering (NAE). This project is focused on identifying and sharing best practices known to support student success, in order to replace outdated approaches.
Villanueva emphasizes that “barriers are integrated into engineering culture and coursework and grounded in assumptions about how engineering education is supposed to work, who is supposed to take part, and how engineers should behave.” This sentiment is well aligned with ASEE’s Mindset Project, developed in collaboration with NAE and sponsored by the National Science Foundation. Two leaders of this initiative, Sheryl Sorby and Gary Bertoline, reviewed its goals in the Fall 2021 Issues article “Stuck in 1955, Engineering Education Needs a Revolution.”
The project has five primary objectives:
Teach problem solving rather than specific tools
End the “pipeline mindset”
Recognize the humanity of engineering faculty
Emphasize instruction
Make graduate education more fair, accessible, and pragmatic
In addition, the ASEE Faculty Teaching Excellence Task Force, under the leadership of University of Akron’s Donald Visco, has developed a framework to guide professional development in engineering and engineering technology instruction. Conceptualized by educators for educators and also funded by NSF, the framework will enable ASEE recognition for levels of teaching excellence.
We believe these projects are helping transform engineering education for the future, making the field more inclusive, flexible, supportive, and multidisciplinary. Such changes will help bring about Villanueva’s vision, and they will benefit not only engineering students and the profession but also the nation and world.
Jacqueline El-Sayed
Executive Director,
American Society for Engineering Education
Doug Tougaw
2023–2024 President,
American Society for Engineering Education
The compelling insights in Idalis Villanueva Alarcón’s essay deeply resonate with my own convictions about the essence of engineering education and workforce development. She masterfully articulates a vision where engineering transcends its traditional academic confines to embrace an enduring voyage of learning and personal growth. This vision aligns with my philosophy that engineering is a lifelong journey, one that is continually enriched by a diversity of experiences and cultural insights.
I propose a call to action for all involved in the engineering education ecosystem to embrace and champion the cultural and experiential wealth that defines our society.
The narrative the author shares emphasizes the importance of informal learning, which often takes place outside the classroom and is equally crucial in shaping the engineering mindset. It is a call to action for educational systems to integrate a broader spectrum of knowledge sources, thus embracing the wealth of experiences that individuals bring to the table. This inclusive approach to education is essential for cultivating a dynamic workforce that is innovative, versatile, and responsive to the complex challenges of our time. I propose a call to action for all involved in the engineering education ecosystem to embrace and champion the cultural and experiential wealth that defines our society.
Fostering lifelong learning in engineering must be a collective endeavor that spans the entire arc of an engineer’s career, necessitating a unified effort from every learning partner who influences their journey—from educators instilling the foundations of science and mathematics to mentors guiding seasoned professionals. This collaborative call to action is to actively dismantle the barriers to inclusivity, ensuring that our educational and work cultures not only value but celebrate the diverse “funds of knowledge” each individual brings. By creating platforms where every voice is heard and every experience is valued, we can nurture an engineering profession marked by continual exploration, mutual respect, and a commitment to societal betterment—a profession that is as culturally adept and empathetic as it is technically proficient.
Also central to this partnership is the role of the student as an active participant in their learning journey. Students must be encouraged to take ownership of their continuous development, understanding that the field of engineering is one of perpetual evolution. This empowerment is fostered by learning partners at all life stages instilling in students and professionals the belief that their growth extends beyond formal education and work to include the myriad learning opportunities that life offers.
Inclusive leadership practices and models are the scaffolding that supports this philosophy. Leaders across the spectrum of an engineer’s life—from educators in primary schools to mentors in professional settings—are tasked with creating environments that foster inclusivity and encourage the exchange of ideas. Such leadership is not confined to policymaking; it is embodied in the day-to-day interactions that inspire students and professionals to push the boundaries of their understanding and capabilities.
Finally, we must advocate for frameworks and models that drive systemic change through collaborative leadership. The engineering journey is a tapestry woven from the threads of diverse experiences, continuous learning, and inclusive leadership. Let us, as educators and leaders, learning partners at all levels and stages, commit to empowering engineers to embark on this journey with the confidence and support they need to succeed.
What steps are we willing to take today to ensure that inclusivity and lifelong learning become the enduring legacy we leave for future engineers? Let us pledge to create a future where every engineer is a constant learner, fully equipped to contribute to a world that is richly diverse, constantly evolving, and increasingly interconnected.
Denise R. Simmons
Associate Dean for Workforce Development
Herbert Wertheim College of Engineering
University of Florida
Idalis Villanueva Alarcón calls deserved attention to new initiatives to enhance engineering education, while also reminding us of a failure of the profession to keep up with the changes it keeps causing. Engineering is the dynamic core of the technological changes and innovations that are mass producing a paradoxical societal fallout: glamorous prosperity and psychopolitical disorder. It’s driving us into an engineered world that is, in aggregate, wealthy and powerful beyond the ability to measure or imagine, yet in which a gap between those who call it home and those who struggle to do so ever widens.
It’s also unclear how much curriculum reform might contribute to the deeper political challenges deriving from the gap between the rich and powerful and those who have been uprooted from destroyed communities.
Villanueva’s call for the construction of a broader engineering curriculum and lifelong learning is certainly desirable; it is also something we’ve heard many times, with only marginal results. It’s also unclear how much curriculum reform might contribute to the deeper political challenges deriving from the gap between the rich and powerful and those who have been uprooted from destroyed communities. For many people, creative destruction is much more destruction than creation.
Should we nevertheless ask why such a salutary ideal has gotten so little traction? It’s complex and all the causes are not clear, but it’s hard not to suspect that just as there is a hidden curriculum in the universities that undermines the ideal, there is another in the capitalist economy to which engineering is so largely in thrall. And what are the hidden curricular consequences of not requiring a bachelor’s degree before enrollment in an engineering school, as schools of law and medicine do? If engineering were made a truly professional degree, some of Villanueva’s proposals might not even be necessary.
Carl Mitcham
Professor Emeritus of Humanities, Arts, and Social Sciences
Colorado School of Mines
Idalis Villanueva Alarcón aptly describes the dichotomy within the US engineering education system between the driving need for innovation and an antiquated and disconnected educational process for “producing” engineers. Engineers walk into their fields knowing that what they will learn will be obsolete in a matter of years, yet the curricula remain the same. This dissonance, the author notes, stifles passion and, perhaps most critically, the very thing that industry and academia are purportedly seeking—innovation and creative problem-solving. This “hidden curriculum” is one of the insidious tools that dehumanize engineering, casting it as no option for those who want to innovate, to help others, and to be connected to a sustainable environment. Enrollments continue to decline nationally—are any of us surprised? Engineering is out of step with the values of US students and the needs of industry.
Parallel to this discussion are data from the latest Business Enterprise Research and Development Survey showing that US businesses spent over $602 billion on research and development in 2021. This was a key driver for many engineering colleges and universities to expand “new” partnerships that were more responsive to developmental and applied research. While many were small and medium-size businesses, the majority of this spending came from large corporations with more than 1,000 employees. Underlying Villanueva’s discussion are classic questions in engineering education: Are we developing innovative thinkers who can problem solve in engineering? Conversely, are we producing widgets who are paying their tuition, getting their paper, interviewing, getting hired, and logging into a terminal? Assembly lines are not typically for innovative development; they are the hallmarks of product development. No one believes that working with students is a form of assembly line production, yet why does it feel like it is? As access to information increases outside academia, new skills, sources of expertise, and experience arise for students, faculty, and industry to tap. If the fossilization of curricula and behaviors within the academy persists, then other avenues of accessing engineering education will evolve. These may be divergent pathways driven by factors surrounding industry and workforce development.
Villanueva suggests considering a more holistic and integrated approach that seeks to actively engage students’ families and social circles. No one is a stand-alone operation. Engineering needs to account for all of the variables impacting students. I wholeheartedly agree, and would add that by leveraging social capital and clarifying the schema for pathways for students (especially first-generation students), working engineers, educators, and other near peers can help connect the budding engineers to a network of potential support when the courses become challenging or the resources are not obvious. Not only would we begin to build capacity within underrepresented populations, but we also would enable the next-generation workforce to realize their dreams and help provide a community with some basic tools to mentor and support the ones they cherish and want to see succeed.
Monica Castañeda-Kessel
Research Program Manager
Oregon State University
Bring on the Policy Entrepreneurs
In the first six months of the COVID-19 pandemic, the scientific literature worldwide was flooded with research articles, letters, reviews, notes, and editorials related to the virus. One study estimates that a staggering 23,634 unique documents were published between January 1 and June 30, 2020, alone.
Making sense of that emerging science was an urgent challenge. As governments all over the world scrambled to get up-to-date guidelines to hospitals and information to an anxious public, Australia stood apart in its readiness to engage scientists and decisionmakers collaboratively. The country used what was called a “living evidence” approach to synthesizing new information, making it available—and helpful—in real time.
Each week during the pandemic, the Australian National COVID‑19 Clinical Evidence Taskforce came together to evaluate changes in the scientific literature base. They then spoke with a single voice to the Australian clinical community so clinicians had rapid, evidence-based, and nationally agreed-upon guidelines to provide the clarity they needed to care for people with COVID-19.
This new model for consensus-aligned, evidence-based decisionmaking helped Australia navigate the pandemic and build trust in the scientific enterprise, but it did not emerge overnight. It took years of iteration and effort to get the living evidence model ready to meet the moment; the crisis of the pandemic opened a policy window that living evidence was poised to surge through. Australia’s example led the World Health Organization and the United Kingdom’s National Institute for Health and Care Excellence to move toward making living evidence models a pillar of decisionmaking for all their health care guidelines. On its own, this is an incredible story, but it also reveals a tremendous amount about how policies get changed.
Policy entrepreneur as changemaker
Many years before the pandemic, living evidence had become the life’s work of Julian Elliott, a clinical doctor and professor at Monash University in Melbourne, Australia, and the founder of the Future Evidence Foundation, a nonprofit that focuses on evidence synthesis. Elliott became interested in evidence when he was an HIV doctor working in Cambodia in the mid-2000s. There, he saw a critical need for accessible, up-to-date health evidence to inform both daily clinical decisions and public health programs. Thousands of people were dying of HIV/AIDS, but it was difficult to find high-quality evidence for sound decisionmaking, patient care, and program development.
This experience inspired him to reimagine all aspects of evidence synthesis and use. The usual cycle of publishing research, conducting systematic reviews, and eventually creating guidelines can take years. Instead, Elliott piloted an approach that would make it possible to create, link, and update datasets so that new research findings could flow through to health systems in days. His model simultaneously brings researchers, clinicians, and decisionmakers together with new ways for people to contribute and collaborate to make sense of research.
In 2014, Elliott and colleagues published a first vision paper in PLOS Medicine on the living evidence model. In subsequent years, he helped found the Living Evidence Collaboration in Australia, which used the approach to create living evidence guidelines for stroke and diabetes care. These guidelines were dynamic, online-only summaries of evidence that were updated rapidly and frequently. By the time the COVID-19 pandemic hit in 2020, living evidence already had a proof of concept to build upon and was able to scale rapidly when it was urgently needed.
Elliott is an example of a policy entrepreneur, a uniquely catalytic player in the policy arena. Like Elliott, many policy entrepreneurs fly under the radar for decades as they develop policy ideas geared to specific problems, surfacing with solutions at the right moment. There is a tendency to view such people as “naturals,” and their work is rarely included in science or policy curriculums. But policy entrepreneurship should instead be seen as a set of skills and strategies that are relatively easy to learn. Teaching these to a wider range of scientists could bring both new policy ideas and more diverse perspectives into the process of democratic decisionmaking.
Policy entrepreneurs in the wild
Although individual policy entrepreneurs are visible within the policy space, they’re not well known outside of it. The term was first popularized in the 1980s by political scientist John Kingdon, a close observer of politics in Washington, DC, who noticed how “windows of opportunity” opened for policy changes after an election, an annual budget process, or a national crisis. Policy entrepreneurs, who had often been championing particular solutions for years, had the ability to spot these windows and accelerate the adoption of new practices. Kingdon suggested that policy entrepreneurs “could be in or out of government, in elected or appointed positions, in interest groups or research organizations. But their defining characteristic, much as in the case of a business entrepreneur, is their willingness to invest their resources—time, energy, reputation, and sometimes money—in the hope of a future return.”
Subsequently, scholars Mark Schneider and Paul Teske combined detailed case studies with surveys to better understand what policy entrepreneurs do and when their actions contribute to policy changes. Other scholars, notably Michael Mintrom, added to the literature by describing policy entrepreneurs as ambitious in pursuit of a cause, credible as experts in their field, and socially aware. The sociable tenacity he identified makes sense; driving a major policy innovation takes commitment and energy—a watchful waiting and building that may take decades to come to fruition. Those who are prepared to do this must be motivated by a bigger vision for a better future.
This body of scholarship is important, but it’s largely descriptive and theoretical—and also confined to the political science literature. Although many policy entrepreneurs come to their role by virtue of a strong personal desire to make a difference, they often have to pick up their skills on the job, through informal networks, or by serendipitously meeting someone who shows them the ropes. A practical roadmap or curriculum could empower more people from diverse backgrounds and expertise to influence the policy conversation.
Thomas Kalil, who served in senior roles for more than 16 years in the White House, described the need in his article “Policy Entrepreneurship at the White House”: “I believe that individuals who have had the opportunity to serve as policy entrepreneurs acquire tacit knowledge about how to get things done. This knowledge is difficult to share because it is more like learning to ride a bicycle than memorizing the quadratic formula.” Since leaving the White House, Kalil, and many who worked with him, have helped others learn to ride that bicycle.
Growing a movement
My own organization, the Federation of American Scientists (FAS), identifies policy entrepreneurship as a foundational principle of our work. Starting in 1945, when it was called the Federation of Atomic Scientists, the organization included scientists directly involved with the Manhattan Project who felt called to combat the nuclear arms race by promoting public engagement, reducing nuclear risks, and establishing an international system for nuclear control and cooperation. Today, FAS’s mission has expanded, but FAS staff and leadership remain policy entrepreneurs in spirit and in practice, taking the same approaches today to advance progress in climate, wildfire, artificial intelligence, and more. For the past several years we’ve been talking with others in our orbit about how to make the tacit knowledge of the policy community more accessible to scientists and everyone else.
Along with others from FAS, I believe the world needs more policy entrepreneurs. In the face of urgent global challenges such as climate change and pandemics, policy entrepreneurship is one way to hasten progress. Moreover, in a country bitterly divided along partisan lines, policy entrepreneurs can propose pragmatic solutions that bridge myriad gaps. And, as in the case of Julian Elliott’s work during the pandemic, policy entrepreneurship can also bring stakeholders together to rally around a practical approach toward a common goal. Empowering early-career researchers with skills to engage the policy arena could prepare them for a lifetime of high-impact engagement—while bringing their diverse perspectives to the task of democratic governance and accelerating transformative policy outcomes. To this end, FAS has been experimenting with how to help more scientists find their inner policy entrepreneur by creating methodologies for training and communities of practice.
Before the 2020 presidential election, FAS created a platform called the Day One Project to support experts with promising policy ideas. Through online, multiweek bootcamps, we helped experts conceive and write policy memos. Then we gave them the tools to identify levers of policy change and encouraged them to meet with decisionmakers and other stakeholders with the authority to help implement ideas.
The Day One Project has now published more than 300 memos. Some have become policy. In one recent example, engineer and climate technologist Lauren Shum wrote a memo in 2021 spelling out a plan to address the problem of lead emissions from small airplanes that use leaded fuel, which can endanger public health. Armed with her memo, Shum met with decisionmakers in the Environmental Protection Agency (EPA) and other stakeholders. In October 2023, the EPA finalized an endangerment finding, which will get the ball rolling on legislative and executive action.
The Day One community’s results have been encouraging, but our team believes that it would be helpful to create a more cohesive network where policy entrepreneurs can share lessons learned, mentor others, and help build and grow the field. Earlier this fall, we participated in a meeting with more than 100 people who have been architects of change on issues from organ donation to immigration reform. The group came together to envision how a Policy Entrepreneurship Network could create a scaffolding to support current and aspiring policy entrepreneurs. The group’s goal is to create a community of practice for policy entrepreneurs that can assist other members of the community with advice and connections, share what they have learned with broader audiences, and serve as an incubator for projects related to policy entrepreneurship.
Building greater awareness is another important step toward growing a movement. Recently, the Institute for Progress began publishing a newsletter about getting things done in the policy sphere, Statecraft, which features interviews with policy entrepreneurs who, like Julian Elliott, have achieved change over a career of engagement. In its first issue, it featured an interview with Mark Dybul, one of the architects of the President’s Emergency Plan for AIDS Relief, better known as PEPFAR, under President George W. Bush. Dybul talked about creating a successful program and some of the unexpected administrative decisions that made cross-government coordination highly effective. An issue in September featured Marina Nitze, former chief technology officer of the US Department of Veterans Affairs (VA), who helped millions of veterans access VA health care through simple technical reforms. Nitze described how correcting a misinterpretation of the Paperwork Reduction Act made it possible to talk more expansively to veterans about their experiences and put user research at the center of reform efforts.
Telling these stories of successful policy entrepreneurs reveals the often-hidden mechanisms of policy change while demonstrating the power of tenacious individuals in a way that is both empowering and optimistic. These stories are one example of the sort of multipronged effort that will be necessary to make policy entrepreneurship widely accessible and not simply a product of individual heroism or serendipity.
Policy entrepreneurship is a journey, not a destination
Reading the stories of Dybul and Nitze also offers a reminder that being a policy entrepreneur takes a paradoxical combination of urgency and patience, stubbornness and flexibility. You can never truly know when your work will pay off, or what additional opportunities will open up along the way. Though you can pay attention to the demand signals, you simply can’t always know when world events will create a new policy window, or when leaders who can adopt new policies will be paying attention.
Even for policy entrepreneurs, planting the seeds of policy change takes time. As society contends with challenges such as climate change and global security, policy entrepreneurs who are ready to deploy creative, tenacious, and pragmatic approaches to making change need decades of cultivation and scaffolding.
Today’s random arrangement of education for scientists in policy is too slow and too haphazard to yield the progress needed on pressing problems and on bringing diverse perspectives into the policy process. As the movement to build policy entrepreneurs progresses, it will need to build a curriculum that is tactical, actionable, and accessible. Every graduate student in the hard sciences, social sciences, health, and engineering should be able to learn some of the basic tools and tactics of policy entrepreneurship as a way of contributing their knowledge to a democratic society.
In the years since his model was applied during the height of the COVID-19 pandemic, Julian Elliott has participated in hundreds of meetings all over the world, helping to share knowledge on how living evidence can improve decisionmaking. Although the pandemic helped open the policy window for living evidence, it is now being applied in Asia, Africa, North America, and Europe on subjects including education and climate change, and its long-term impacts are only starting to be felt. Similarly, policy entrepreneurship has been recognized for many decades, but efforts to organize active, structured support are just beginning.
Stories and Basic Science Collide
Short story collections with multiple authors are a curious act. Each story is like a distinct performer given a moment to shine on a shared cabaret stage. But the true test of the show is the whole. Such collections are often not connected beyond a common prompt, leaving the reader to navigate a mishmash of performances and ideas.
This approach works well, however, in Collision: Stories From the Science of CERN—first because of its grounding in particle physics, a field whose tangible impact is likely many years away, and second because of the stories’ collaborative origins.
The anthology began with scientists submitting writing prompts to authors, who then visited CERN—the European Organization for Nuclear Research—where the writers turned scientific ideas into fantastical short stories. Alternatively put: a group of authors were accelerated in a 27-kilometer-long ring and smashed into a selection of ideas, resulting in a collection that pokes and prods at the meanings and implications of CERN’s research. Each story includes an afterword by the scientist who proposed the idea, bringing in the researchers’ experiences working at CERN and how the story compares to the current state of research. The overall result is delightful—an imaginative narrative performance, encored with a reflection grounded in reality.
Founded in 1954 and based in Geneva, Switzerland, CERN is the world’s largest particle physics laboratory, with more than 12,000 scientists researching fundamental questions, including how dark matter fits into the current model for particle physics. CERN’s 2023 budget was $1.3 billion, composed primarily of contributions from its 23 member states.
Collision redirects CERN’s mammoth enterprise from its usual purpose of exploring the deepest mysteries of the universe to ask a daring question: So what? The anthology shines for readers intrigued by the question of what the field of particle physics means for humanity.
To me, this is as awesome a question as one can find. The exploration of how a discipline of science is shaped by—and also shapes—some aspect of existence is like a multicourse meal. It starts with the motives and moves on to explore how the scientists perceive their work and themselves. Cleanse the palate and delve into what external factors might be influencing the field; where do the CERN chefs source their ingredients from and how does it affect their recipes? Finally, gorge on the question of what the emerging physics knowledge will do for and to humanity. Fiction provides a unique approach to understanding these questions and connecting social context and personal values.
Collision’s stories can be clustered into several themes. First, there are the inner worlds of the scientists. In general, the range of characterizations is refreshing and humanizing. For example, “Side Channels from Andromeda” by Peter Kalu explores how posttraumatic stress disorder complicates a researcher’s work and ability to function. In “Marble Run” by Luan Goldie, the protagonist, a scientist and mother, is unable to step away from her work, nodding to the impact that research can have on family responsibilities.
“The Ogre, the Monk, and the Maiden” by Margaret Drabble is one of the highlights of the collection. It presents one of the few nontraditional romantic relationships—a triad—that I’ve come across in the genre. (Others include Robert A. Heinlein’s Stranger in a Strange Land and Time Enough for Love, N. K. Jemisin’s Broken Earth trilogy, and Martha Wells’s Murderbot Diaries series.) The story includes a delightful mix of backgrounds and interests for the characters, with one member of the trio originally being a linguist and another “tormented by god and metaphysics and predestination.” It reminded me how diverse the driving forces behind scientists’ passion for their work actually are—and must be, if they are to achieve the deepest exploration of the unknown. But what truly struck me was how the interactions among the characters represented the space in which their passions and research merged and evolved. There has been momentum within the scientific enterprise to dispel the trope of the lone “hero scientist,” in the recognition that teams are more commonly the research dynamic behind advances at the bench. But I believe there is scant fiction that delves into how the backgrounds and relationships of team members themselves might converge in shaping the research.
A second theme is the impact of CERN’s science on humanity’s future. The horizon of technology—the cornerstone of science fiction! Here, though, the book is lackluster. Although fun and well written, the technology-centric stories did not push boundaries or envision new ways particle physics might impact society. In “Skipping” by Ian Watson, two pilots travel to distant worlds through so-called graviton highways. In “Gauguin’s Questions” by Stephen Baxter, an artificial intelligence oversees a particle accelerator; this has perhaps become a more pressing topic considering the recent progress of large language models. These stories didn’t immerse me in a “future state” built on a new capacity emerging distinctly from particle physics. Overall, the tech-focused stories left me wanting more deeply considered possible futures—possibly a result of how difficult it is to see the potential effects of particle physics research on humanity, at least in comparison with the well-articulated, near-future ideas that fiction writers have explored in the realms of biotechnology and artificial intelligence.
A third theme of the collection explores what particle physics research means for society. CERN has particular relevance to this topic because of the lab’s significant costs and its role as the flagship institution for international collaboration on basic research. Several stories are thought-provoking: in “Absences,” author Desiree Reynolds imagines a mid-twentieth-century conversation between the American writer and activist James Baldwin and CERN’s first director, Felix Bloch. The question of who has the liberty to dwell upon the fate of humankind was as relevant then as it is to today’s conversations on who contributes to—and shapes—the progress of science. In “Afterglow” by Bidisha, a stereotypical mad scientist runs amok, but the story is refreshingly placed in the context of broader geopolitics and scientific ambition. A small spoiler: for once, the scientist is the pawn. And to my relief, the collection includes only one dystopian story. In “Cold Open” by Lillian Weezer, several teenagers living amid widespread antipathy toward scientific knowledge excavate the remains of an earlier time when science was not taboo. The story pushes a particular vein of contemporary thinking to such an absurd conclusion that I had difficulty relating; such a reality feels too far away compared to other dystopian futures that loom so close to the horizon.
I suspect the effect of this diverse exploration of the field will vary by audience. Stories like these, grounded in particle physics, might evade meaningful connection, even for readers who are interested in other scientific disciplines. For comparison, a work that takes a brainy physics theme—in this case quantum mechanics—and viscerally explores its impact on humanity’s sense of self is Ted Chiang’s short story “Anxiety Is the Dizziness of Freedom,” published in his 2020 story collection, Exhalation.
However, I hope this anthology makes its way to CERN leadership and reform advocates alike. The responsibilities of leadership extend beyond technical details and long-term strategy. CERN’s leaders are luminaries whose word choices influence institutional culture at all levels—and little is as effective as fiction at demonstrating this influence. Even the community at an institution such as CERN—dedicated to objective observation and rational scientific exploration—operates within a social system of values and principles to achieve its technical goals. Stories such as those featured in Collision serve as a mirror of scientists’ discourse, refracted through the perception of talented writers. I would have loved to see, for example, a story that explored the discussions behind CERN’s 25 by ’25 initiative, which commits to increasing the percentage of women personnel from 21% to 25% by 2025. Such a story, whether skewering or supportive, would provide CERN’s community with another perspective to compare with the bullet points on their slide decks.
Such reflections are a critical self-check for any scientific or technological institution. Collision goes a step beyond, asking the scientists themselves to take part in this effort of a collective, artistic exploration of what their enterprise means. In the process, they have created a tool to reflect on how and why society invests in research. May we get to see many more such thought experiments.
Walter Valdivia Researches for the White House
The Science Policy IRL series pulls back the curtain on who does what in science policy and how they shaped their career path. In previous episodes we’ve looked at the cosmology of science policy through the eyes of people who work at federal agencies and the National Academies, but this time we are exploring think tanks.
Walter Valdivia describes how a chance encounter while he was getting a PhD in public policy at Arizona State University led him into science policy. Since then he’s worked at think tanks including Brookings and the Mercatus Center and is now at the Science and Technology Policy Institute, which does research for the White House Office of Science and Technology Policy. In this episode, we’ll talk to Walter about what think tanks do in the policy world and how policy sometimes creates inherent paradoxes.
Is there something about science policy you’d like us to explore? Let us know by emailing us at podcast@issues.org, or by tagging us on social media using the hashtag #SciencePolicyIRL.
Check out The Honest Broker by Roger Pielke, Jr. to learn more about the role of impartial expertise.
Interested in learning more about Federally Funded Research and Development Centers (FFRDCs)? Read this primer.
Transcript
Lisa Margonelli: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine and by Arizona State University.
I’m Lisa Margonelli, Editor in Chief at Issues. In this episode of Science Policy IRL, I’m joined by Walter Valdivia, who is a research staff member at the Science and Technology Policy Institute, widely known as STPI. In previous episodes of this series, we’ve looked at the science policy landscape through the eyes of people who work in federal agencies and at the National Academies of Sciences, Engineering, and Medicine. Today we’re looking at a different spot on the science policy map: think tanks that supply research about problems, policies, and outcomes to decisionmakers. STPI is a unique think tank for many reasons–just one of which is that it does research for the White House Office of Science and Technology Policy. In this episode we’ll talk about what think tanks do and also Walter’s thoughts on some of the paradoxes of policy.
Walter, welcome! Thank you for joining us.
Walter Valdivia: Glad to be here, Lisa. Thanks for inviting me.
Margonelli: So one question we often start with is a very basic one. How do you define science policy?
Valdivia: I’ll give you a basic answer. Harvey Brooks in 1964 gave us a clear distinction, a useful distinction, that you have two realms in science policy. Science for policy is when science informs policy and regulations, the kind that goes into the EPA or the FDA or even at a larger scale. And then there is policy for science, or the public administration of science and the scientific enterprise of the nation. That is the standard textbook answer. Let me give you a little bit more of a twist to play with this definition and be part of the collection of conversations that you’re having. I like to think, in public policy in general but in science policy in particular, of the irony or the paradox inherent to policy. And I think this exists in both realms, the science for policy realm and the policy for science realm.
The most direct way to imagine it is that every policy has unanticipated consequences, but the real cosmic joke on you is when the very purposes of your policy are undermined by the policy itself. In both cases, unanticipated consequences are very important to incorporate into our policy thinking. And we could weep or we could laugh at the inevitable paradox of sometimes undermining our own goals by the policies that we pursue. But let me get out of the esoteric definitional game and give you an example of what I’m talking about.
We are in a renaissance of industrial policy. The current administration, with a set of policies rebounding from the tragedy and the self-imposed economic downturn of the COVID period, has not only injected new energy, using federal monies, into the scientific enterprise, but also a new vision, a more engaged science, and more directed investment for innovation in some strategic sectors.
Margonelli: To paraphrase this, essentially we’re now trying to engineer jobs through science and innovation. And that’s what we’re talking about when we talk about industrial policy. We’re trying to create industries through government intervention.
Valdivia: Correct, or give them a boost, or repatriate industries that had started in the US and, for economic reasons related to the management of the supply chain, had been relocated. Let’s talk specifically of microchips and the important role of Taiwan in the supply chain of microchips. So this means that the government will place a few bets, via subsidies and tax credits, on some industries. And part of the current renaissance of industrial policy is that the government will make some significant bets at the earliest stage, at the innovation stage, at the research stage as well, for these industries. Well, you know that there’s a whole array of support that some strategic industries are receiving.
Margonelli: Where does the policy undermine itself?
Valdivia: Every time you favor an industry, every time the government creates some kind of protection or subsidy, the government also creates—and this is a normal process of democracy—an interest group that will defend that subsidy forever. And part of the economic game of boosting an industry is to get a competitive edge, to give the nation’s economy a competitive advantage in that area. The irony of the situation, however necessary this might be—and I’m not questioning the necessity of some investments of this sort—is that you also create a lobby that protects the subsidy, and the industry becomes accustomed to this extra favor. So we’ll see what is going to happen going forward. But one thing we are surely going to see, alongside whatever success the policy achieves in its intentions, is the creation of a very strong political lobby representing the industries that defend the subsidy.
Margonelli: This paradox that you’re talking about then is that in trying to make certain industries more competitive, we may actually be sowing the seeds of their lack of competitiveness by creating lobbies and by creating dependencies.
Valdivia: Yes.
Margonelli: Wow. Okay. This has been a quick trip to the middle earth of science policy. Frequently we talk about the aims and goals of science policies, and now what you’re talking about really is about getting right to the heart of the delicate possibilities of arranging resources and policy and intention and goals, and then of course the real world and what actually happens.
So you work at a place called STPI. I want to talk to you a little bit about what your day job is. How do you take this sort of deep sense of the conflicts and paradoxes in science policy into your day job? Explain to us what STPI is.
Valdivia: Sure. STPI is the acronym for the Science and Technology Policy Institute. And from now on, you’re going to see me introducing lots of acronyms, as science policy does. I’ll try to spell them out, but do please stop me if I drop an acronym without explaining what it is. So STPI is an FFRDC: a federally funded research and development center. These are independently run research centers that nevertheless receive federal funding. They’re usually connected to an agency, or they have a primary agency that they support. This is highly specialized technical support for things that agencies cannot do themselves in terms of research; they don’t have either the capacity or the purview within their mission. So these research centers support specific federal agencies. In the case of STPI, the federal agency that we support is the Office of Science and Technology Policy in the White House; STPI was created to provide technical support to OSTP. At the same time, and for about 20 or 30 years now, it has been so necessary and useful for STPI to also support other agencies in the science bureaucracy, developing the capabilities to support OSTP robustly across the federal government. And so about 50% of our project portfolio is directly with OSTP, and the other 50% supports agencies such as the Department of Energy, NASA, NIST, and so on.
Margonelli: Which is the National Institute of Standards and Technology.
Valdivia: Very good.
Margonelli: And STPI, as an FFRDC, a federally funded research and development center, is also nested within something called the Institute for Defense Analyses. So you’ve got this kind of Russian doll structure, although I guess it would be an American doll structure, since it is offering advice to the US government. But you have these sorts of nested structures of organizations that offer support and research for different agencies and entities within the federal government. So tell us, what do you do for STPI?
Valdivia: In the context of providing technical support to OSTP, STPI provides a range of services related to research. These may include responding to specific questions that OSTP is seeking answers to, and some of these questions may have a very quick turnaround. The president picks up the phone and calls Arati Prabhakar at OSTP, and she designates a division head to answer a question that urgently needs an answer. And they pick up the phone, call STPI, and say, can you answer this for us? Can you give us a memo within 48 hours?
Margonelli: 48 hours, can you give me an example of what they might ask for? Have you had to answer one of these?
Valdivia: I will have to be honest with you that I haven’t yet received a 48-hour turnaround task. I just know this has happened, and the reason is that I joined STPI only six months ago. I’m working on longer turnaround projects, but it could be something urgent, something related to R&D statistics, something related to the known impacts of some policies. And so it’s just a matter of collecting information, or knowing the experts who know where the data is, where the answers are, and pulling them together very quickly. And that’s why you have a bunch of researchers at STPI with ample capabilities to respond, because of their long experience working for and in the federal R&D infrastructure.
Margonelli: That sounds kind of fascinating and also a little bit high pressure.
Valdivia: Indeed. But that was the extreme example of a very, very immediate quick turnaround. STPI, being a full research outfit, also provides long-term support for long-term studies. Here’s an example of the longer-term support that we do. The NSTC, the National Science and Technology Council, has a number of subcommittees through which it organizes topics of interest for the national R&D enterprise. We could directly support a subcommittee that is writing a report, say on research and development infrastructure. Now of course the subcommittee is good at organizing its own writing parties, and we would provide the support that you would expect from a research outfit, with the knowledge and experience and the technical expertise that support the writing exercise. Or we could write it ourselves; that’s one kind of project. Or we could support the writing; that’s another kind of project. But as you see, I’m trying, in this example, to give you some of the versatility of support that an FFRDC can provide the US government. Just finishing that thought, there are these longer-term projects and there’s the two-day turnaround, and of course there’s a lot in between.
Margonelli: What were you doing last Tuesday? How were you doing science policy?
Valdivia: I would have to look at my agenda.
Margonelli: Okay, what were you doing yesterday?
Valdivia: I know where you’re going with these questions. What does a normal day look like? What does this position entail? I’m currently a project leader on four projects, two of them with OSTP and two with other federal agencies. So in each project I have a team, I have gathered a team. Small clarification: I am the project leader on some projects, and I am a team member on others, led by others. The project leader tends to gather the team together, from the junior researchers to the more senior researchers, depending on the expertise of each member. So a lot of the project leader’s work is to organize the team and to direct its work, from research design to data collection through analysis to the writeup of a report.
And as a team member, I have specific research tasks, segments of a larger project, that I need to work on. So a typical day involves meetings with my teams, involves writing, doing some of the actual research, and it involves also meeting with the sponsors. We call sponsors the primary contact people in the US government with respect to a project. It may also include interviews; say, in a project we need to collect information from universities, so we will call a number of universities, and so on and so forth. This is the sort of activity that I would do in a normal day.
Margonelli: Is it fun and rewarding?
Valdivia: It is very much so. This is a very clever question, Lisa, because as you know, I have worked in the think tank world for the last 11 years, first at Brookings and then at George Mason. And one of the high aspirations of a think tank is that that work, that white paper that you put so much work into, gets to be read by someone in the government or someone on Capitol Hill. And the huge advantage of an FFRDC is that an FFRDC exists to provide answers directly to the questions asked by the agency, the sponsor agency. So it is enormously rewarding to know that someone will read these reports, someone who is actually thinking about these questions and who can potentially do something about the policies that are inherent to that.
Margonelli: So there’s sort of a direct connection to decisionmakers that makes this research very rewarding.
Valdivia: Yes. Now to this I should add, of course, that we provide independent, objective, and technical support to the agencies. And by this I mean that a political officer who may be inclined to advance a policy may find in our answers that it is not such a great idea; we are agnostic as to the political content of a particular research question. But still, if this is part of a stream of information that helps decisionmakers, I think it’s a good start.
Margonelli: That’s really interesting. So I want to back up now and find out how you came to be involved in science policy. When you started out, back when you were a little kid, did you say, I want to grow up and be a science policy guy at STPI?
Valdivia: Not even my children would say that. Being a science policy scholar is not, first of all, like being a firefighter or an astronaut, something of public visibility. I think for most of us it’s something you bump into as you’re walking through life and your training in grad school, and you just start caring about some of these questions and some of these puzzles.
Margonelli: So tell me about your path. How did you bump into it?
Valdivia: I was finishing my first year of doctoral studies, and this was at Arizona State. After this year I was very interested in the philosophy of science. I was reading a lot of that and starting to poke at the edges of philosophy of science, political theory of science, policy science and its role in government. It was not a very systematic search, but I landed on some key books. One of them was Between Politics and Science by a professor from Rutgers University by the name of David Guston. And I looked him up and found that he was no longer at Rutgers; he was at Arizona State, just around the corner from where I was studying. So I shot him an email with some questions about the book, some challenging questions, and he said, oh, come by for a 15-minute talk. And I stayed for two hours chatting with him about his book, and at the end of it he offered me a research assistantship. So I moved from my home department to continue my doctoral training, but now directly engaged, via this RA-ship, with the center that he was running at ASU.
Margonelli: So can I ask you, what was your major before, or what was the focus of your doctoral studies before you got interested in science policy?
Valdivia: It’s public administration and policy. My PhD is in public administration, and before that I did a master’s in economics. And there was a unique coincidence of interest in that conversation because in his book he, being a political scientist, had used an economic theory that I was familiar with, and that’s where most of my questions were going: the principal-agent modeling of science policy. In fact, my first paper ever written in science policy used principal-agent relations.
Margonelli: I think it is very interesting how your career and these more theoretical issues have led you into the very concrete exercise of uncovering what’s going on in the black box of science that the US government funds.
Valdivia: It’s very generous of you. I would like to think that I have taken glimpses inside that black box. It’s very complex, dense, which is, I should add, typical of a well-functioning democracy. There are a lot of black boxes inside, and they benefit from a little bit of light.
Margonelli: So let’s talk a little bit about your career path after you got your PhD. So you had your sort of conversion moment sitting in Dave Guston’s office talking about the principal-agent problem in science policy, and then you became a research assistant. And then what happened?
Valdivia: I signed up for something that I hadn’t imagined. That’s why I was laughing a little when you said, “when you were a kid, were you thinking of becoming a science policy scholar?” At that time, he and Dan Sarewitz at ASU had a very large grant to set up a center for the study of nanotechnology in society. And nanotechnology is something that I had never imagined studying or observing, maybe from the distance of a Popular Mechanics article, I don’t know, but of course my training was in policy. The School of Public Affairs at the time gave you the public administration and policy theory tracks. And so coming out of my doctoral studies, I went out to the job market and the Brookings Institution made me an offer. At the same time I had a couple of other academic offers. But Brookings being a policy think tank and me being a policy scholar, there seemed to be a clear match, and I’m glad I took that path. But it was also a conscious path to step outside of academia and the traditional tenure-track career of a PhD-trained professor.
Margonelli: Let’s stop for a minute and talk a little bit about Brookings. What is the role of think tanks in the sort of cosmology of policy in the United States? And specifically, what is the role of Brookings? What was attractive to you? Why did you leave the academic pathway to get involved in a think tank?
Valdivia: There is a nice concept that has gained greater currency in science policy called use-inspired research. The idea of use-inspired research is that you can do fundamental research that at the same time is oriented to some application; instead of a single spectrum, you fold it, and you have this match of really high theoretical or fundamental research with an application orientation. And so the job of think tanks is to advance research that informs policies, that evaluates policies, that produces new ideas. And it’s an interesting thing because they have become an ecology that supports the idea-generation functions of government. And that is what I think Brookings and other think tanks do.
Margonelli: So they’re sort of a bubbling fountain of ideas. They’re set up specifically to generate ideas that might lead to productive policies and they might follow different sorts of political ideologies. The different think tanks sort of surface different sorts of policy ideas and concepts. And they do a mixture of promoting those concepts and promoting the thinking and also spreading them to policy makers who may implement them.
Valdivia: Correct. You quickly picked up on the fact that there is an inevitable political affiliation. I don’t mean that in the strong sense of party affiliation; I don’t think think tanks are so overtly pledged to a party. But anytime you have a normative idea, anytime you’re prescriptive in policy studies, of course your idea will land better on one side of the aisle. And if you get together with like-minded people, most of your ideas, however independent and objective, if they’re prescriptive, will probably have a bent. And so generally you imagine think tanks ranging along a spectrum; because we have a two-party system in the US, the spectrum runs from one side to the other. But having said that, I think that almost all think tanks have scholars who, aware of their normativity, try to get to conclusions that follow their analysis, being impartial more than apolitical, impartial in the way you conduct the research. It’s an aspiration that I saw pursued in practice by my colleagues, and pursued myself. And at the same time, think tanks, whatever their political bent, may serve as a critique of their own preferred set of policies. And I think that’s perhaps the most useful part of the production of ideas: any renewal and any reform starts with a good critique.
Margonelli: Was there a culture shock in going from academia to a place like Brookings, which is really one of the very top think tanks in the sort of Washington pantheon?
Valdivia: Yes. I wouldn’t say culture shock, but what was, if not shocking, peculiar in my experience was starting with a first draft, passing it by colleagues, and receiving the sensible feedback to get rid of the theoretical section nobody is going to read. And the theoretical contribution is normally what professional public policy journals are seeking in the contributions you make to peer-reviewed journals: what is the theoretical contribution of this study? So it was more of a change of tone, a change of priorities in the production of knowledge. You have to get used to that when you move from academia to the think tanks, and I would say there are people who do it so well, moving from one side to the other, that it’s certainly not impossible. You could call it a type of bilingualism: writing papers for peer-reviewed journals and white papers in think tanks.
Margonelli: The think tanks put out white papers and then they ultimately move on to things like op-eds and more public facing opinion pieces to spread out their ideas.
Valdivia: And you nailed something there; you’re putting your finger on something that has been a development over the last 20 years, I would say. The increased currency of social media has created a kind of competition in the market for attention in which think tanks want to participate, and scholars in think tanks are sometimes increasingly more invested in having a social media presence than in doing the old-fashioned job of substantive research.
Margonelli: So from Brookings, you went to I think another think tank. Where did you go from Brookings?
Valdivia: Yes, an intermediate step. I would say some think tanks are entirely independent of the university system, but there are also research centers specialized in policy questions that by all accounts would be think tanks but are housed within universities. In this case, I worked for the Mercatus Center at George Mason University.
Margonelli: And then ultimately you went on from there to STPI.
Valdivia: Yes.
Margonelli: Yeah. So in your journey through these different zones of science policy, deliberation, and research, what are the big outstanding questions that motivate you to do this work? What gets you out of bed in the morning?
Valdivia: My alarm clock?
But I do understand the question. Here is something that is appealing to me about the work of an FFRDC. You receive questions that are pressing for the US government, and you get a chance to use your training and your experience to do an impartial, independent analysis. Nobody prescribes the conclusions of a report, but you know that somebody who cares about this topic is going to read it; in fact, you’re answering a question that they posed. And the appeal is that you remain entirely anonymous, like a consulting firm that delivers a report to a client, and it is a relationship between the consulting firm and the client. An FFRDC delivers the research to the sponsor in the US government, and the sponsor will decide what to do with that report. They could put it into the policy mix with the other considerations that they’re using. It could be put in a drawer and forgotten. There are many things that could happen, and yet you’re done with your job. So there is the appeal of anonymity, of being able to have a say and of someone in a political principal position hearing what you’re saying, but then you no longer need to be an advocate for the policy prescriptions, the policy recommendations, the way you would in a think tank. In a think tank, a lot of the game is participating in the public debate, entering the public square. And so you could say that the second life of the paper begins with the publication of the white paper. In our case, it’s more humble than that. More anonymous.
Margonelli: This is a very interesting picture. I’m just going to reflect a little bit on it. For one thing, it’s the absolute opposite of social media and the way that the public debate around policies transpires right now. There is a lot of personal attachment and identification with policy ideas, there’s a lot of discussion, and the concept of anonymity is not just anathema, it is a disadvantage in the conversation. And yet in what you’re doing, the anonymity is a strength, and it is an interesting kind of influence, because on the one hand it could be very influential or, like you say, it could just end up in a drawer. It’s this completely other channel of information and policy deliberation that is invisible to a lot of people.
Valdivia: Yes. But I think it satisfies the role of the honest broker.
Margonelli: Explain to me what the honest broker is.
Valdivia: To put it simply, because there’s a whole book on it: it’s an impartial expert who will honestly intervene with their knowledge and expertise in the debate, aware of the terms of the debate and the contestation inherent to it. The political principals that an FFRDC serves are people in government and, being part of the government, are subject to all the checks and balances built into our system. The policies that they’re debating will be debated publicly. So the FFRDC is merely providing an input on a specific, tiny technical point that will be part of the cognitive background on which that political agent will engage the policy debate. So think tanks perhaps used to have several roles, one of which was similar to this, the honest broker, transmitting expertise to the government regardless of influence, but of course they never needed to limit themselves; they could participate in the public arena. And connecting to something I said a moment ago about the increased currency of social media: maybe now we see an increase in more aggressive participation in the public debate, precisely because everything is tweeted.
Margonelli: We started this conversation talking a little bit about the black box of science and what it takes to look inside it. And I think that your discussion of your career and your current position has kind of illuminated some of what goes on in the other black box of how the decisions are made. And I thank you for talking with us.
Valdivia: That was my pleasure.
Margonelli: If you would like to learn more about Walter Valdivia’s work, check out the resources in our show notes. Is there something about science policy you’d like to know? Let us know by emailing us at podcast@issues.org or by tagging us on social media using the hashtag #SciencePolicyIRL.
Please subscribe to The Ongoing Transformation wherever you get your podcasts. Thanks to our podcast producer, Kimberly Quach, and our audio engineer Shannon Lynch. I’m Lisa Margonelli, Editor in Chief at Issues in Science and Technology. Thank you for listening.
Making Graduate Fellowships More Inclusive
In “Fifty Years of Strategies for Equal Access to Graduate Fellowships” (Issues, Fall 2023), Gisèle Muller-Parker and Jason Bourke suggest that examining the National Science Foundation’s efforts to increase the representation of racially minoritized groups in science, technology, engineering, and mathematics “may offer useful lessons” to administrators at colleges and universities seeking to “broaden access and participation” in the aftermath of the US Supreme Court’s 2023 decision limiting the use of race as a primary factor in student admissions.
Perhaps the most important takeaway from the authors’ analysis—one that also aligns with the court’s decision—is that there are no shortcuts to achieving inclusion. Despite its rejection of race as a category in the admissions process, the court’s decision does not bar universities from considering race on an individualized basis. Chief Justice John Roberts maintained that colleges can, for instance, constitutionally consider a student’s racial identity and race-based experience, be it “discrimination, inspiration or otherwise,” if aligned with a student’s unique abilities and skills, such as “courage, determination” or “leadership”—all of which “must be tied to that student’s unique ability to contribute to the university.” This individualized approach to race implies a more qualitatively focused application and review process.
The NSF experience, as Muller-Parker and Bourke show, also underscores the significance of qualitative application and review processes for achieving more inclusive outcomes. Despite the decline in fellowship awards to racially minoritized groups starting in 1999, when the foundation ended its initial race-targeted fellowships, the awards picked up and even surpassed previous levels of inclusion as the foundation shifted from numeric criteria to holistic qualitative evaluation and review, for instance, by eliminating summary scores and GRE results and placing more importance on reference letters.
Importantly, the individualized approach to race will place additional burdens on students of color to effectively make their case for how race has uniquely qualified them and made them eligible for admission, and on administrators to reconceptualize, reimagine, and reorganize the admissions process as a whole. Students, particularly from underserved high schools, will need even more institutional help and clearer instructions when writing their college essays, to know how to tie race and their racial experience to their academic eligibility.
In the context of college admissions, enhancing equal access in race-neutral ways will require significant changes in reconceptualizing applicants—as people rather than numbers or categories—and in connecting student access more closely to student participation. This will require significant resources and organizational change: admissions’ access goals would need to be closely integrated with participation goals of other offices such as student life, residence life, student careers, as well as with academic units; and universities would need to regularly conduct campus climate surveys, assessing not just the quantity of diverse students in the student body but also the quality of their experiences and the ways by which their inclusion enhances the quality of education provided by the university.
These holistic measures are easier said than done, especially for smaller teaching-centered or decentralized colleges and universities, and a measurable commitment to diversity will likely remain patchier than what higher education currently achieves, given the numerous countervailing forces (political, social, financial) that differentially impact public and private institutions and vary significantly from state to state. However, as Justice Sotomayor wrote in the closing of her dissenting opinion, “Although the court has stripped almost all uses of race in college admissions…universities can and should continue to use all available tools to meet society’s needs for diversity in education.” The NSF’s story provides some hope that this can be achieved if administrators are able and willing to reimagine (and not just obliterate) racial inclusion as a crucial goal for academic excellence.
Gwendoline Alphonso
Professor of Politics
Cochair, College of Arts and Sciences, Diversity, Equity and Inclusion Committee
Fairfield University
Gisèle Muller-Parker and Jason Bourke’s discussion of what we might learn from the forced closure of the National Science Foundation’s Minority Graduate Fellowship Program and subsequent work to redesign the foundation’s Graduate Research Fellowship Program (GRFP) succinctly illustrates the hard work required to construct programs that identify and equitably promote talent development. As the authors point out, GRFP, established in 1952, has awarded fellowships to more than 70,000 students, paving the way for at least 40 of those fellows to become Nobel laureates and more than 400 to become members of the National Academy of Sciences.
The program provides a $37,000 annual stipend for three years and a $12,000 cost of education allowance with no postgraduate service requirement. It is a phenomenal fellowship, yet the program’s history demonstrates how criteria, processes, and structures can make opportunities disproportionately unavailable to talented persons based on their gender, racial identities, socioeconomic status, and where they were born and lived.
This is the great challenge that education, workforce preparation, and talent development leaders must confront: how to parse concepts of talent and opportunity such that we are able to equitably leverage the whole capacity of the nation. This work must be undertaken now for America to meet its growing workforce demands in science, technology, engineering, mathematics, and medicine—the STEMM fields. This is the only way we will be able to rise to the grandest challenges threatening the world, such as climate change, food and housing instability, and intractable medical conditions.
By and large, most institutions of higher education are shamefully underperforming in meeting those challenges. Here, I point to the too-often overlooked and underfunded regional colleges and universities that were barely affected by the US Supreme Court’s recent decision to end the use of race-conscious admissions policies. Most regional institutions, by nature of their missions and the students they serve, have never used race as a factor in enrollment, and yet they still serve more students from minoritized backgrounds than their Research-1 peers, as demonstrated by research from the Brookings Institution. Higher education leaders must undertake the difficult work of examining the ways in which historical and contemporary bias has created exclusionary structures, processes, and policies that helped reproduce social inequality instead of increasing access and opportunity for all parts of the nation.
The American Association for the Advancement of Science’s SEA Change initiative cultivates that exact capacity building among an institution’s leaders, enabling them to make data-driven, law-attentive, and people-focused change to meet their institutional goals.
Finally, I must note one correction to the authors’ otherwise fantastic article: the Supreme Court’s pivotal decision in Students for Fair Admissions v. Harvard and Students for Fair Admissions v. University of North Carolina did not totally eliminate race and ethnicity as a factor in college admissions. Rather, the decision removed the opportunity for institutions to use race as a “bare consideration” and instead reinforced that a prospective student’s development of specific knowledge, skills, and character traits as they relate to race, along with the student’s other lived experiences, can and should be used in the admissions process.
Travis T. York
Director, Inclusive STEMM Ecosystems for Equity & Diversity
American Association for the Advancement of Science
The US Supreme Court’s 2023 rulings on race and admissions have required universities to closely review their policies and practices for admitting students. While the rulings focused on undergraduate admissions, graduate institutions face distinct challenges as they work to comply with the new legal standards. Notably, graduate education tends to be highly decentralized, encompassing a variety of program cultures and admissions processes. This variety may lead to uncertainty about legally sound practice and, in some cases, a tendency to overcorrect or to default to standards of academic merit that seem “safe” because they have been uncontested.
Gisèle Muller-Parker and Jason Bourke propose that examining the history of the National Science Foundation’s Graduate Research Fellowship Program (GRFP) can provide valuable information for university leaders and faculty in science, technology, engineering, and mathematics working to reevaluate graduate admissions. The authors demonstrate the potential impact of admission practices often associated with the type of holistic review that NSF currently uses for selecting its fellows: among them, reducing emphasis on quantitative measures, notably GRE scores and undergraduate GPA, and giving careful consideration to personal experiences and traits associated with success. In 2014, for example, the GRFP replaced a requirement for a “Previous Research” statement, which privileged students with access to traditional research opportunities, with an essay that “allows applicants flexibility in the types of evidence they provide about their backgrounds, scientific ability, and future potential.”
These changes made a real difference in the participation of underrepresented students in the GRFP and made it possible for students from a broader range of educational institutions to have a shot at this prestigious fellowship.
Critics of these changes may say that standards were lowered. But the education community at large must unequivocally challenge this view. There is no compelling evidence to support the idea that traditional criteria for admitting students are the best. Scientists must be prepared to study the customs of their field, examining assumptions (“Are experiences in well-known laboratories the only way to prepare undergraduates for research?”) and asking new questions (“To what extent does a diversity of perspectives and problem-solving strategies affect programs and research?”).
As we look to the future, collecting evidence on the effects of new practices, we will need to give special consideration to the following issues:
First, in introducing new forms of qualitative materials, we must not let bias in the back door. Letters and personal statements need careful consideration, both in their construction and in their evaluation.
Second, we must clearly articulate the ways that diversity and inclusion relate to program goals. The evaluation of personal and academic characteristics is more meaningful, and legally sound, when these criteria are transparent to all.
Finally, we must think beyond the admissions process. In what ways can institutions make diversity, equity, and inclusion integral to their cultures and to the social practices supporting good science?
As the history of the GRFP shows, equity-minded approaches to graduate education bring us closer to finding and supporting what the National Science Board calls the “Missing Millions” in STEM. We must question what we know about academic merit and rigorously test the impact of new practices—on individual students, on program environments, and on the health and integrity of science.
Julia D. Kent
Vice President, Best Practices and Strategic Initiatives
Council of Graduate Schools
What Do Bitter Greens Mean to the Public?
When I heard earlier this year that a North Carolina biotech company had used gene editing technology to create a new mustard green with less bitterness, I laughed. The company cofounder boldly claimed it was “a new category of salad.” But bitter greens are a cultural tradition that I hold dear—they are not just some green leaf that would be more desirable if it tasted like a Jolly Rancher. When I told my mother about the new mustard green, she paused, looked at me, and promptly responded, “It sounds like all they did was remove the culture.” I say this not to oppose the innovation, but to point out that the conversation around it clearly did not include my community. This extends to more complicated topics in the application of biotechnology, which feature the loud voices of companies, activists, and scientists, but not the wider, quieter opinions of, say, my relatives or the many Americans who value food in different cultural, economic, religious, spiritual, historical, and profoundly personal ways. If biotechnology is to be widely regulated and accepted, many more people need to be invited into the conversation about what we value, what our aspirations are, and how this technology should be applied.
The movement to build the US bioeconomy has gained significant momentum over the last decade, leading scientists and policymakers to forecast industrial revolutions in medicine, food, fuel, and materials. In its Bold Goals for Biotechnology and Biomanufacturing report, the Biden administration’s Office of Science and Technology Policy (OSTP) sketched out a vision of “what could be possible with the power of biology,” including the sequencing of the genomes of one million microbial species in the next five years, and the replacement of more than 90% of plastics with biobased feedstocks in the next 20 years. As others have written, realizing this future will take concerted, cross-sectoral efforts to build a multidisciplinary workforce, create a coordinated regulatory framework, and equitably distribute the benefits of the transformation to communities across the country.
Overlooked in these projections, however, is the reality that even if these other elements fall into place, advancing the bioeconomy requires public trust. When it comes to a consumer’s purchase of a biotechnology, the pivotal factor is often not price but trust. A 2009 report from the Organisation for Economic Co-operation and Development warned that consumer acceptance and demand for bioeconomy-related products would require active support from governments and engagement from the public at large. Although studies continue to show the importance of citizen engagement in building public trust in science and innovation, the current mechanisms for public engagement in the regulatory process fall short of delivering public acceptance of biotechnology.
At this stage, the administration has a unique opportunity to address this issue directly by creating new mechanisms for public engagement. If correctly structured, these processes could serve as a resource for decisionmakers and support the formation of a data repository for evaluating how public perceptions evolve over time.
As a soil biogeochemist and ecologist focused on sustainable agriculture for climate change mitigation, I am energized by the transformative potential of a new bioeconomy era. But I believe the surest path forward will prioritize building trust through new forms of public engagement and transparency.
Influences on public trust
Pathways for building public trust in biotechnology products and techniques must work with the ecosystem of federal regulators, product developers, researchers, and consumers. Federal regulation of the bioeconomy is carried out by three key agencies: the US Environmental Protection Agency (EPA), Food and Drug Administration (FDA), and Department of Agriculture (USDA). Through the regulatory process, the federal government acts as a broker of public trust in biotechnology by providing guidelines that govern the interactions between developers and consumers of biotechnology. Over the past three decades, product developers have depended on strategic alliances with industrial partners from pharmaceutical, agricultural, and food processing corporations to ensure the success of biotechnology within the marketplace. Given that the regulatory approval process for pharmaceuticals can easily exceed 10 years, government partnerships—through technical services or research and development contracts—offer biopharmaceutical companies a financial lifeline during the premarket phase. These strategies demonstrate a regulatory flexibility that could just as easily be directed toward mechanisms that build public trust.
The academic research community is often caught in the center of debates squaring the frontiers of research and innovation with questions of ethics, risk assessment, and public perception. In 2016, the National Academies of Sciences, Engineering, and Medicine (NASEM) released recommendations on aligning public values with gene drive research. Though gene drive research has tremendous potential to offer solutions to complex agricultural and public health questions, the possibility of the uncontrolled spread of genetic changes raises numerous ethical, regulatory, socio-ecological, and political concerns. The NASEM report revealed that public engagement that promotes “bi-directional exchange of information and perspectives” between researchers and the public can increase trust. A more recent workshop of the Genetic Engineering and Society Center of North Carolina State University on gene drives in agriculture also highlighted the importance of considering public perception and acceptance in risk-based decisionmaking, in the context of developing further research priorities in the field.
Much of today’s framework for biotechnology regulation involves expert deliberations, but the opinions of the public at large are essential to move products into the marketplace. Historically, spaces that allow for dialogue on values, sentiments, and opinions on biotechnology have been dominated by technical experts who keep the discussion confined to issues of their own concern, using relatively narrow terms to label areas of contention. For example, the “product versus process” debate is one of the most contested in the regulatory system for biotechnology. But such narrow approaches to dialogue rarely advance consensus. Since the biotechnology discourse has been inaccessible to general audiences, the opinions of experts continue to drown out calls of concern.
Public engagement programs that gather a wide range of opinions across issues central to the advancement of the bioeconomy could help researchers and policymakers put public concern into context, which will add value to the entire regulatory ecosystem. Regulation informed by science and responsive to the values of citizens will more effectively strengthen the sustainability of the US bioeconomy.
Seizing an opportunity
For the first time in the nearly 40-year history of biotechnology governance, the 2022 CHIPS and Science Act directs OSTP to establish a coordination office for the National Engineering Biology Research and Development Initiative charged with, among other activities, conducting public outreach and serving as coordinator of “ethical, legal, environmental, safety, security, and other appropriate societal input.” This policy window presents a novel opportunity to reify a regulatory system for the bioeconomy that also encompasses the voices of the public at large.
The Biden administration should start this work by establishing a bioeconomy initiative coordination office (BICO) within OSTP to foster interagency coordination and provide strategic direction. The office should then create a set of public engagement programs, guided by an advisory board in coordination with EPA, FDA, and USDA, to meet three main priorities.
The first priority should be to involve an inclusive network of external partners to design forums for collecting qualitative and quantitative public acceptance data. The advisory board should include consumers (parents, young adults, patients, etc.) and multidisciplinary specialists (for example, biologists, philosophers, hair stylists, sanitation workers, social workers, dietitians, etc.). Using participatory technology assessment (pTA) methods, the BICO should support public engagement activities, including focus groups, workshops, and forums to gather input from members of the public whose opinions are systemically overlooked. The BICO should use pre-submission data, past technologies, near-term biotechnologies and, where helpful, imaginative scenarios such as science fiction to produce case studies to engage with these nontraditional audiences. Public engagement should be hosted by external grantees that maintain a wide-ranging network of interdisciplinary specialists and interested citizens to facilitate activities.
The second priority should be for the BICO and its advisory board to translate the raw data collected through these activities into recommendations for regulatory agencies. All public acceptance data (qualitative, quantitative, and recommendations) should be gathered into a repository that can complement the already developing “biological data ecosystem” called for in President Biden’s executive order on advancing biotechnology and biomanufacturing innovation. The biological data ecosystem will include a mix of public, private, and confidential data types and sources. Incorporating data on public acceptance could provide regulators with insights on novel biotechnologies, and even help BICO match product developers with communities to seed cross-sector partnerships. Management and use of this data should employ governance standards that are people- and purpose-oriented, including Collective Benefit, Authority to Control, Responsibility, Ethics (CARE) Principles for Indigenous Data Governance, which complements Findable, Accessible, Interoperable, Reusable (FAIR) Principles, so that data management is clearly in the interest of the public. In the end, public acceptance data will provide new insights to new audiences and underpin a public-facing framework that can advance the bioeconomy.
The third priority should be to translate this social data into biotechnology regulation. BICO public engagement programs could be used to develop an understanding of noneconomic values in reaching bioeconomy policy goals. These could include, for example, deeply held beliefs about the relationship between humans and the environment, or personal or cultural perspectives related to specific biotechnologies. The program could collect data from nontraditional audiences through pTA methods and multiple criteria decision analysis. Workshops hosted through the program could be used for long-term and short-term horizon scanning. In the short term, information about public perception on products can be used to better understand obstacles to public acceptance, and to support the development of outreach programs, tools, and strategies to incorporate public feedback. In the long term, the BICO will be able to inform regulatory policy development with richer data on socioeconomic and cultural preferences.
Historically, biotechnology regulations have struggled to strike a balance between transparency and protection. Recent federal action to improve coordination around the development of the bioeconomy has provided policymakers with another chance. Cultivating public acceptance and demand for biotechnology products—as speculative and futuristic as some may sound—will take a concerted effort to recognize and engage the public at large. Investment in building public engagement resources and practices now will put the bioeconomy on a more sustainable path in the future.
Native Voices in STEM
Circular Tables, 2022, digital photograph, 11 x 14 inches.
“Many of the research meetings I have participated in take place at long rectangular tables where the power and primary conversation participants are at one end. I don’t experience this hierarchical power differential in talking circles. Talking circles are democratic and inclusive. There is still a circle at the rectangular table, just a circle that does not include everyone at the table. I find this to be representative of experiences I have had in my STEM discipline, in which it was difficult to find a place in a community or team or in which I did not feel valued or included.”
Native Voices in STEM: An Exhibition of Photographs and Interviews is a collection of photographs and texts created by Native scientists and funded by the National Science Foundation. It grew from a mixed-methods study conducted by researchers from TERC, the University of Georgia, and the American Indian Science and Engineering Society (AISES). According to the exhibition creators, the artworks speak to the photographers’ experiences of “Two-Eyed Seeing,” or the tensions and advantages from braiding together traditional Native and Western ways of knowing. The exhibition was shown at the 2022 AISES National Conference.
Getting the Most From New ARPAs
The Fall 2023 Issues included three articles—“No, We Don’t Need Another ARPA” by John Paschkewitz and Dan Patt, “Building a Culture of Risk-Taking” by Jennifer E. Gerbi, and “How I Learned to Stop Worrying and Love Intelligible Failure” by Adam Russell—discussing several interesting dimensions of new civilian organizations modeled on the Advanced Research Projects Agency at the Department of Defense. One dimension that could use further elucidation starts with the observation that ARPAs are meant to deliver innovative technology to be utilized by some end customer. The stated mission of the original DARPA is to bridge between “fundamental discoveries and their military use.” The mission of ARPA-H, the newest proposed formulation, is to “deliver … health solutions,” presumably to the US population.
When an ARPA is extraordinarily successful, it delivers an entirely new capability that can be adopted by its end customer. For example, DARPA delivered precursor technology (and prototype demonstrations) for stealth aircraft and GPS. Both were very successfully adopted.
Such adoption requires that the new capability coexist or operate within the existing processes, systems, and perhaps even culture of the customer. Understanding the very real constraints on adoption is best achieved when the ARPA organization has accurate insight into specific, high-priority needs, as well as the operations or lifestyle, of the customer. This requires more than expertise in the relevant technology.
DARPA uses several mechanisms to attain that insight: technology-savvy military officers take assignments in DARPA, then return to their military branch; military departments partner, via co-funding, on projects; and often the military evaluates a DARPA prototype to determine effectiveness. These relations with the end customer are facilitated because DARPA is housed in the same department as its military customer, the Department of Defense.
The health and energy ARPAs face a challenge: attaining comparable insight into their end customers. The Department of Health and Human Services does not deliver health solutions to the US population; the medical-industrial complex does. The Department of Energy does not deliver electric power or electrical appliances; the energy utilities and private industry do. ARPA-H and ARPA-E are organizationally removed from those end customers, both businesses (for profit or not) and the citizen consumer.
Technology advancement enables. But critical to innovating an adoptable solution is identification of the right problem, together with a clear understanding of the real-world constraints that will determine adoptability of the solution. Because civilian ARPAs are removed from many end customers, ARPAs would seem to need management processes and organizational structures that increase the probability of producing an adoptable solution from among the many alternative solutions that technology enables.
Anita Jones
Former Director of Defense Research and Engineering
Department of Defense
University Professor Emerita
University of Virginia
The Limits of Data
Illustration by Shonagh Rae
I once sat in a room with a bunch of machine learning folks who were developing creative artificial intelligence to make “good art.” I asked one researcher about the training data. How did they choose to operationalize “good art”? Their reply: they used Netflix data about engagement hours.
The problem is that engagement hours are not the same as good art. There are so many ways that art can be important for us. It can move us, it can teach us, it can shake us to the core. But those qualities aren’t necessarily measured by engagement hours. If we’re optimizing our creative tools for engagement hours, we might be optimizing more for addictiveness than anything else. I said all this. They responded: show me a large dataset with a better operationalization of “good art,” and we’ll use it. And this is the core problem, because it’s very unlikely that there will ever be any such dataset.
Right now, the language of policymaking is data. (I’m talking about “data” here as a concept, not as particular measurements.) Government agencies, corporations, and other policymakers all want to make decisions based on clear data about positive outcomes. They want to succeed on the metrics—to succeed in clear, objective, and publicly comprehensible terms. But metrics and data are incomplete by their basic nature. Every data collection method is constrained and every dataset is filtered.
Some very important things don’t make their way into the data. It’s easier to justify health care decisions in terms of measurable outcomes: increased average longevity or increased numbers of lives saved in emergency room visits, for example. But there are so many important factors that are far harder to measure: happiness, community, tradition, beauty, comfort, and all the oddities that go into “quality of life.”
Consider, for example, a policy proposal that doctors should urge patients to sharply lower their saturated fat intake. This should lead to better health outcomes, at least for those that are easier to measure: heart attack numbers and average longevity. But the focus on easy-to-measure outcomes often diminishes the salience of other downstream consequences: the loss of culinary traditions, disconnection from a culinary heritage, and a reduction in daily culinary joy. It’s easy to dismiss such things as “intangibles.” But actually, what’s more tangible than a good cheese, or a cheerful fondue party with friends?
It’s tempting to use the term intangible when what we really mean is that such things are hard to quantify in our modern institutional environment with the kinds of measuring tools that are used by modern bureaucratic systems. The gap between reality and what’s easy to measure shows up everywhere. Consider cost-benefit analysis, which is supposed to be an objective—and therefore unimpeachable—procedure for making decisions by tallying up expected financial costs and expected financial benefits. But the process is deeply constrained by the kinds of cost information that are easy to gather. It’s relatively straightforward to provide data to support claims about how a certain new overpass might help traffic move efficiently, get people to work faster, and attract more businesses to a downtown. It’s harder to produce data in support of claims about how the overpass might reduce the beauty of a city, or how the noise might affect citizens’ well-being, or how a wall that divides neighborhoods could erode community. From a policy perspective, anything hard to measure can start to fade from sight.
An optimist might hope to get around these problems with better data and metrics. What I want to show here is that these limitations on data are no accident. The basic methodology of data—as collected by real-world institutions obeying real-world forces of economy and scale—systematically leaves out certain kinds of information. Big datasets are not neutral and they are not all-encompassing. There are profound limitations on what large datasets can capture.
I’m not just talking about contingencies of social biases. Obviously, datasets are bad when the collection procedures are biased by oversampling by race, gender, or wealth. But even if analysts can correct for those sorts of biases, there are other, intrinsic biases built into the methodology of data. Data collection techniques must be repeatable across vast scales. They require standardized categories. Repeatability and standardization make data-based methods powerful, but that power has a price. It limits the kinds of information we can collect.
A small group of scholars have been working on understanding this, mostly in science and technology studies—an interdisciplinary field focused on how science works that conducts studies across philosophy, history, anthropology, sociology, and more. This work offers an understanding of the intrinsic limitations on the process of data collection and on the contents of big datasets. And these limitations aren’t accidents or bad policies. They are built into the core of what data is. Data is supposed to be consistent and stable across contexts. The methodology of data requires leaving out some of our more sensitive and dynamic ways of understanding the world in order to achieve that stability.
These limitations are particularly worrisome when we’re thinking about success—about targets, goals, and outcomes. When actions must be justified in the language of data, then the limitations inherent in data collection become limitations on human values. And I’m not worried just about perverse incentives and situations in which bad actors game the metrics. I’m worried that an overemphasis on data may mislead even the most well-intentioned of policymakers, who don’t realize that the demand to be “objective”—in this very specific and institutional sense—leads them to systematically ignore a crucial chunk of the world.
Decontextualization
Not all kinds of knowledge, and not all kinds of understanding, can count as information and as data. Historian of quantification Theodore Porter describes “information” as a kind of “communication with people who are unknown to one another, and who thus have no personal basis for shared understanding.” In other words, “information” has been prepared to be understood by distant strangers. The clearest example of this kind of information is quantitative data. Data has been designed to be collected at scale and aggregated. Data must be something that can be collected by and exchanged between different people in all kinds of contexts, with all kinds of backgrounds. Data is portable, which is exactly what makes it powerful. But that portability has a hidden price: to transform our understanding and observations into data, we must perform an act of decontextualization.
An easy example is grading. I’m a philosophy professor. I issue two evaluations for every student essay: one is a long, detailed qualitative evaluation (paragraphs of written comments) and the other is a letter grade (a quantitative evaluation). The quantitative evaluation can travel easily between institutions. Different people can input into the same system, so it can easily generate aggregates and averages—the student’s grade point average, for instance. But think about everything that’s stripped out of the evaluation to enable this portable, aggregable kernel.
Qualitative evaluations can be flexible and responsive and draw on shared history. I can tailor my written assessment to the student’s goals. If a paper is trying to be original, I can comment on its originality. If a paper is trying to precisely explain a bit of Aristotle, I can assess it for its argumentative rigor. If one student wants to be a journalist, I can focus on their writing quality. If a nursing student cares about the real-world applications of ethical theories, I can respond in kind. Most importantly, I can rely on our shared context. I can say things that might be unclear to an outside observer because the student and I have been in a classroom together, because we’ve talked for hours and hours about philosophy and critical thinking and writing, because I have a sense for what a particular student wants and needs. I can provide more subtle, complex, multidimensional responses. But, unlike a letter grade, such written evaluations travel poorly to distant administrators, deans, and hiring departments.
Quantification, as used in real-world institutions, works by removing contextually sensitive information. The process of quantification is designed to produce highly portable information, like a letter grade. Letter grades can be understood by everybody; they travel easily. A letter grade is a simple ranking on a one-dimensional spectrum. Once an institution has created this stable, context-invariant kernel, it can easily aggregate this kind of information—for students, for student cohorts, for whole universities. A pile of qualitative information, in the form of thousands of written comments, for example, does not aggregate. It is unwieldy, bordering on unusable, to the administrator, the law school admissions officer, or future employer—unless it has been transformed and decontextualized.
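To make the trade-off concrete, here is a minimal sketch in Python; it is not from the essay, and the grade scale and sample records are invented purely for illustration. The point is only that a letter grade averages while a pile of written comments merely accumulates.

    # Hypothetical illustration: quantified evaluations aggregate,
    # qualitative ones only accumulate.
    GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

    records = [
        {"grade": "A", "comment": "Original reading of Aristotle; tighten the second objection."},
        {"grade": "B", "comment": "Clear prose for a general reader; the ethics section needs sources."},
        {"grade": "A", "comment": "Strong application to nursing practice; define 'autonomy' earlier."},
    ]

    # The letter grades collapse into one portable, context-free number...
    gpa = sum(GRADE_POINTS[r["grade"]] for r in records) / len(records)
    print(f"GPA: {gpa:.2f}")  # travels easily to deans, employers, and rankings

    # ...while the comments can only be concatenated, not averaged. Any
    # "aggregate" of them is just a pile of text that a distant reader
    # still has to work through one item at a time, without the shared
    # classroom context that made each remark meaningful.
    print("\n".join(r["comment"] for r in records))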
So here is the first principle of data: collecting data involves a trade-off. We gain portability and aggregability at the price of context-sensitivity and nuance. What’s missing from data? Data is designed to be usable and comprehensible by very different people from very different contexts and backgrounds. So data collection procedures tend to filter out highly context-based understanding. Much here depends on who’s permitted to input the data and who the data is intended for. Data made by and for specialists in forensic medicine, let’s say, can rely on a shared technical background, if not specific details of working in a particular place or a particular community.
The clearest cases of decontextualization are with public transparency, where a data-based metric needs to be comprehensible to all. Sociologist Jennifer Lena provides an excellent example from the history of arts funding. Assessing which art projects are worthwhile and deserve funding depends on an enormous amount of domain-specific expertise. To tell what’s original, creative, and striking requires knowing a lot about the specific medium and genre in question, be it film, comics, or avant-garde performance art. And there’s not really such a thing as generic expertise in art criticism. Being a jazz expert gives you no insight into what’s exciting in the world of indie video games.
But transparency metrics tend to avoid relying on specialized domain expertise, precisely because that expertise isn’t accessible to the public at large. Lena writes that when Congress became worried about the possibility of nepotism and corruption in the National Endowment for the Arts’ funding decisions, it imposed an accountability regime that filtered out expert knowledge in exchange for a simple, publicly comprehensible metric: ticket sales. The problem should be obvious: blockbuster status is no measure of good art. But ticket sales are easy to measure, easy to aggregate, and easy to comprehend on the largest of scales.
The wider the user base for the data, the more decontextualized the data needs to be. Theodore Porter’s landmark book, Trust in Numbers, gives a lovely example drawn from a history of land measurement compiled by Witold Kula, the twentieth-century Polish economic historian. Older measures of land often were keyed to their productivity. For example, a “hide” of land was the amount required to sustain the average family. Such measures are incredibly rich in functional information. But they required a lot of on-the-ground, highly contextual expertise. The land assessor needs to understand the fertility of the soil, how many fish are in the rivers and deer are in the woods, and how much all that might change in a drought year. These measures are not usable and assessable by distant bureaucrats and managers. Societies tend to abandon such measures and switch from hides to acres when they shift from local distributed governance to large, centralized bureaucracies. The demands of data—and certainly data at scale—are in tension with the opacity of highly local expertise and sensitivity. This kind of local awareness is typically replaced with mechanically repeatable measures in the movement to larger-scaled bureaucracy.
Behind such shifts is the pressure to be objective in a very particular way. There are many different meanings for “objective.” Sometimes when we say something is “objective,” we mean that it’s accurate or unbiased. But other times, we’re asking for a very specific social transformation of our processes to fit our institutional life. We are asking for mechanical objectivity—that is, that a procedure be repeatable by anybody (or anybody with a given professional training), with about the same results. Institutional quantification is designed to support procedures that can be executed by fungible employees.
This mechanical objectivity has become central to contemporary institutional life. It’s easy to forget that mechanical objectivity isn’t everything. People often assume, for instance, that if you have mechanical objectivity, then you have accuracy—but these are different things. An accurate judgment gets at what really matters. But the methodology that leads to the most accurate judgments may not scale. Consider, for example, the legal standard for charging somebody with driving under the influence when the person has a blood alcohol level of 0.08%. This isn’t the most reliable guide to assessing what really matters, which is inebriation to the point of impairment. As it turns out, some people are impaired at lower blood alcohol levels, and some are impaired at higher ones. But it’s very hard to find a scalable and repeatable procedure to judge impairment. So we use the 0.08% blood-alcohol standard because anybody with a breathalyzer can apply it with approximately the same results.
Consider, too, the relationship between the complex idea of “adulthood” and the more mechanical idea of “legal age.” The right to vote, the ability to give consent, and all the other associated rights of adulthood should probably be keyed to intellectual and emotional maturity. But there’s no mechanically objective way to assess that. Some particular people might be good at assessing intellectual and emotional maturity, especially in those they know well. But those procedures don’t scale. So countries like the United States peg the right to vote to a very simple standard—18 years of age—in order to achieve mechanical objectivity.
The historian Lorraine Daston puts it this way: older forms of rules often permitted enormous amounts of discretion and judgment. But in the last few centuries, complex judgment has been replaced with clear and explicit rules—what she calls “algorithmic rules.” Algorithmization wasn’t initially intended to make information machine-calculable, but instead to cheapen labor, to replace highly trained specialists with low-skilled and replaceable workers who could simply execute an explicit set of rules. The problem, argues Daston, is that explicit and mechanical rule sets only do well when contexts don’t change very much.
The first lesson, again, is that data involves a trade-off. The power of data is that it is collectible by many people and formatted to travel and aggregate. The process of making data portable also screens off sensitive, local, or highly contextual modes of understanding. In transforming understanding into data, we typically eliminate or reduce evaluative methods that require significant experience or discretionary judgment in favor of methods that are highly repeatable and mechanical. And if policymakers insist on grounding their policy in large-scale public datasets, then they are systematically filtering out discretion, sensitivity, and contextual experience from their decisionmaking process.
The politics of classification
Data collection efforts require classification, which is a second kind of filter. Imagine a US census form where everybody simply wrote into a blank space their racial identity, in their own terms. There would be no way to aggregate this easily. Collectors need to sort information into preprepared buckets to enable aggregation. So there are distinct buckets—white, Black, American Indian, Asian, and, in the recent census, “Two or More”—which organize a complex spectrum into a discrete set of chunks. We either presort people’s responses into those buckets by forcing them to choose from a limited list, or we sort them into categories after the fact by coding their free responses.
Informatics scholar Geoffrey Bowker and science studies scholar Susan Leigh Star offer a profound analysis of these pressures in Sorting Things Out: Classification and its Consequences, their political history of classification systems. The buckets that data collectors set up constitute a kind of intentional, institutional forgetting. Sorting information into categories emphasizes information at the boundaries—say, the difference between white and Asian—and puts that information into storage. But those categories also act as a filter; they don’t store information inside the buckets. The US Census categories, for example, elide the difference between Korean, Chinese, Filipino, Khmer, and more—they’re all lumped into “Asian.” This lumping is of necessity, say Bowker and Star: the process of classification is designed to wrangle the overwhelming complexity of the world into something more manageable—something tractable to individuals and institutions with limited storage and attentional capacity. Classification systems decide, ahead of time, what to remember and what to forget.
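A small, hypothetical Python sketch may help make the filtering visible. The category map and responses below are invented for illustration, not drawn from any actual census form.

    # Hypothetical illustration of bucketing: free responses must be coded
    # into a fixed category scheme before they can be counted at scale.
    from collections import Counter

    CATEGORY_MAP = {
        "korean": "Asian", "chinese": "Asian", "filipino": "Asian", "khmer": "Asian",
        "irish": "White", "polish": "White",
        "haitian": "Black", "nigerian": "Black",
    }

    free_responses = ["Korean", "Filipino", "Khmer", "Polish", "Haitian", "Chinese"]
    coded = [CATEGORY_MAP.get(r.lower(), "Two or More / Other") for r in free_responses]

    print(Counter(coded))  # Counter({'Asian': 4, 'White': 1, 'Black': 1})

    # The tally aggregates cleanly across millions of forms, but only because
    # the scheme decided ahead of time that the difference between "Korean"
    # and "Khmer" is something the institution can afford to forget.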
But these categories aren’t neutral. All classification systems are the result of political and social processes, which involve decisions about what’s worth remembering and what we can afford to forget. Some of these constraints are simply practical. Early mortality data collection, write Bowker and Star, was limited by the maximum size of certain forms: you couldn’t have more causes of death than there were lines in the standard form. And it’s very hard to add new causes of death to the data collection system because such an effort would involve convincing hundreds of different national data collection offices to change all their separate death reporting forms.
Here is the second principle: every classification system represents some group’s interests. Those interests are often revealed by where a classification system has fine resolution and where it doesn’t. For example, the International Classification of Disease (ICD) is a worldwide, standardized system for classifying diseases that’s used in collecting mortality statistics, among other things. Without a centralized, standardized system, the data collected by various offices won’t aggregate. But the ICD has highly variable granularity. It has separate categories for accidents involving falling from playground equipment, falling from a chair, falling from a wheelchair, falling from a bed, and falling from a commode. But it only has two categories for falls in the natural world: fall from a cliff, and an “other fall” category that lumps together all the other falls—including, in its example, falls from embankments, haystacks, and trees. The ICD is obviously much more interested in recording the kinds of accidents that might befall people in an urban industrial environment than a rural environment, note Bowker and Star. The ICD’s classification system serves some people’s interests over others.
Classification systems decide ahead of time what to remember and what to forget. This is not bad in and of itself, argue Bowker and Star. Data aggregation requires such filtering. The problem occurs when users of data forget that categories are social inventions created for a purpose. When these classificatory schemes enter an information infrastructure, they become invisible; they become part of the background operating structure of our world. People start assuming that Asian and white and Black are natural categories, and those assumptions quietly reshape the world we live in.
Political interests shape classification systems, and classification systems shape every institutional data-gathering effort. The government collects data on where citizens live, how much they earn, what property they own. Grocery store chains collect information on what consumers purchase and when. Medical insurance companies collect information on the insured person’s heart rate, temperature, and official medical diagnosis every time the person has an official interaction with medical institutions. Each of these institutions uses an information infrastructure, which is set up to record some very specific kinds of information—but which also makes it difficult to record anything else.
Sometimes information infrastructures do offer a place for unstructured notes. When I’m entering my grades into the school’s database, I get a little blank box for other notes. The information is collected in some sense, but it doesn’t really move well; it doesn’t aggregate. The system aggregates along the classificatory lines that it has been prepared to aggregate. I may offer the system important contextual information, but the aggregating system will usually filter that stuff out; there’s not much sign of it by the time the high-level decisionmakers get their benchmarks and metrics. Unstructured information isn’t legible to the institution. We who enter information into data systems can often feel their limitations, so we try to add richness and texture—which the system nominally collects and then functionally ignores.
Data collection efforts aren’t neutral and they aren’t complete. They emphasize a particular style of knowledge formatted in a particular way, which makes it possible for the data to slide effortlessly between contexts, be gathered by all sorts of different people for use across vast scales. There is a cost to be paid for this scalability, this independence from context. The data collection methodology tends to filter out the personal, the intimate, the special understanding.
Metrics and values
The consequences of that cleansing are perhaps clearest in the cases of metrics and other data-driven targets. Consider transparency metrics. I’ve argued that transparency schemes have a clear price; transparency is a kind of surveillance. Public transparency requires that the reasoning and actions of institutional actors be evaluated by the public, using metrics comprehensible to the public. But this binds expert reasoning to what the public can understand, thus undermining their expertise. This is particularly problematic in cases where the evaluation of success depends on some specialized understanding. The demand for public transparency tends to wash deep expertise out of the system. Systems of transparency tend to avoid evaluative methods that demand expertise and sensitivity and instead prefer simple, publicly comprehensible standards—such as ticket sales or graduation rates or clicks.
This isn’t to say that transparency is bad. The demand for data-based transparency is an incredibly powerful and effective tool for fighting bias and corruption. But this demand also exposes us to a set of costs. Transparency metrics are based on publicly comprehensible data. Consider the case of Charity Navigator, which promises to guide your donation dollars by ranking the effectiveness of various nonprofits. For years, Charity Navigator’s rankings were heavily based on an “overhead ratio”—a measure of how much donated money made it through to an external goal compared to how much was spent internally, as overhead. This seems like a great measure of efficiency, and Charity Navigator became a dominant force in guiding donations to nonprofits. But as many experts from the nonprofit realm have complained, the overhead ratio measure is flawed and misleading. Suppose a nonprofit promises to help improve water purification in impoverished areas. Distributing water purification machinery counts as an external expenditure, so it improves the organization’s overhead ratio. But hiring an expert in waterborne bacteria or building a better internal database for tracking long-term use of that purification machinery counts as an internal cost—and so drops the organization’s ranking.
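A back-of-the-envelope sketch, with entirely hypothetical budget numbers, shows how the arithmetic of the overhead ratio penalizes exactly the kind of internal spending the nonprofit experts are defending.

    # Hypothetical budgets; the point is only that the metric punishes
    # internal spending regardless of what that spending buys.
    def overhead_ratio(internal, external):
        """Share of total spending classified as 'overhead'."""
        return internal / (internal + external)

    # Charity A ships purification machinery with a skeleton staff.
    a = overhead_ratio(internal=50_000, external=950_000)

    # Charity B ships the same machinery but also hires a waterborne-bacteria
    # expert and builds a database to track whether the machinery keeps working.
    b = overhead_ratio(internal=250_000, external=950_000)

    print(f"Charity A overhead: {a:.0%}")  # 5%  -> ranks higher
    print(f"Charity B overhead: {b:.0%}")  # 21% -> ranks lower, even though
    # B's "overhead" may be exactly what keeps the water safe over time.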
Understanding what’s important generally requires an enormous amount of expertise and time within that particular domain. The late anthropologist Sally Engle Merry explored a particularly devastating example in her 2016 book, The Seductions of Quantification. At the time, she reported, international attempts to reduce sex trafficking revolved around tracking success with a single clear metric, generated by the US State Department in its Trafficking in Persons (TIP) report. That measure, Merry related, was based on the conviction rate of sex traffickers. This may make sense to the uninitiated, but to experts in the subject, it’s a terrible metric. Sex trafficking is highly related to ambient poverty. When a country reduces ambient poverty, sex trafficking typically declines as well. But this would show up in the TIP report as a failure to control sex trafficking: if sex trafficking dropped due to economic reasons, there would be fewer sex traffickers to convict. The TIP report had come to dominate the international conversation, Merry wrote, because actual sex trafficking is extremely hard to measure while conviction rates are quite easy to collect.
This dangerous separation of metric from meaning accelerates when people internalize certain metrics as core values. I have called this process “value capture”: when our deepest values get captured by institutional metrics and then become diluted or twisted as a result. Academics aim at citation rates instead of real understanding; journalists aim for numbers of clicks instead of newsworthiness. In value capture, we outsource our values to large-scale institutions. Then all these impersonal, decontextualizing, de-expertizing filters get imported into our core values. And once we internalize those impersonalized values as our own, we won’t even notice what we’re overlooking.
And now, in the algorithmic era, there’s a new version of this problem: these filtered values will be built so deeply into the infrastructure of our technological environment that we will forget that they were filtered in the first place. As artificial intelligence ethicists Sina Fazelpour and David Danks put it, target-setting is one of the most important—but most neglected—entry points for algorithmic bias. Let’s say programmers are training a machine learning model to help improve some quality: reduce crime, for instance, or make good art. Many contemporary training procedures involve randomly generating variations on a model and then pitting them against each other to see which one better hits the target. Fazelpour and Danks discuss a real-world case in which machine learning algorithms were trained to predict student success. But the training procedure itself can introduce biases, depending on who sets the targets, and which targets they select. In this case, the training targets were set by administrators and not students, thereby reflecting administrator interests. “Student success” was typically operationalized in terms of things like graduation rate and drop-out rate, rather than, say, mental health or rich social experiences. The machine learning algorithms were trained to hit a target—but the target itself can be biased.
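A toy sketch, using scikit-learn, synthetic data, and invented feature names, illustrates the point: the same features and the same algorithm yield different models, and would justify different interventions, depending solely on which column the trainer designates as “success.”

    # Synthetic illustration of target-setting bias: "success" is whatever
    # column gets picked as the label before training even begins.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1_000
    features = rng.normal(size=(n, 3))  # stand-ins for test scores, credits earned, aid status

    # Two different operationalizations of "student success":
    graduated_on_time = (features[:, 0] + 0.2 * rng.normal(size=n) > 0).astype(int)
    reported_thriving = (features[:, 2] + 0.2 * rng.normal(size=n) > 0).astype(int)

    model_admin = LogisticRegression().fit(features, graduated_on_time)
    model_student = LogisticRegression().fit(features, reported_thriving)

    print("weights when success = on-time graduation:", model_admin.coef_.round(2))
    print("weights when success = self-reported thriving:", model_student.coef_.round(2))
    # The two models attend to different features and would flag different
    # students, before any question of predictive accuracy even arises.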
Lessons from Porter apply here as well. To train a machine learning algorithm, engineers need a vast training dataset in which successes and failures are a clear part of the dataset. It’s easy to train a machine learning algorithm to predict which students will likely get a high grade point average, graduate quickly, or get a job afterwards. It’s easy to have a mechanical, repeatable, scalable procedure to accurately evaluate graduation speed, and very hard to have a mechanical, repeatable, and scalable procedure to accurately evaluate increased thoughtfulness. Nor are there large datasets that can train a machine learning algorithm to predict which students will become happier, wiser, or more curious as a result of their education. What algorithms can target depends on what’s in the datasets—and those datasets are tuned to what can be easily, mechanically collected at scale.
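To make the target-setting point concrete, here is a toy sketch in Python. It is not Fazelpour and Danks's actual study: the student records, the candidate decision rules, and the two operationalizations of “success” are all invented. The only thing it demonstrates is that the target chosen by whoever controls the training procedure determines which candidate model gets selected.

# Toy illustration: the same data and the same candidate models, but two
# different training targets, lead the selection procedure to different winners.
# All data, rules, and target definitions are invented for illustration.
import random

random.seed(0)

# Hypothetical student records: (study_hours_per_week, clubs_joined).
students = [(random.uniform(0, 40), random.randint(0, 5)) for _ in range(500)]

# Two noisy, incomplete operationalizations of "student success":
labels_graduation = [h + random.gauss(0, 8) > 18 for h, c in students]    # administrator-style target
labels_flourishing = [c + random.gauss(0, 1) >= 2 for h, c in students]   # wellbeing-style target

# Candidate "models" are simple decision rules pitted against each other,
# standing in for randomly generated model variants.
candidates = {
    "reward_study_time": lambda h, c: h > 18,
    "reward_engagement": lambda h, c: c >= 2,
}

def accuracy(rule, labels):
    return sum(rule(h, c) == y for (h, c), y in zip(students, labels)) / len(students)

for target_name, labels in [("graduation", labels_graduation),
                            ("flourishing", labels_flourishing)]:
    winner = max(candidates, key=lambda name: accuracy(candidates[name], labels))
    print(f"target = {target_name}: selected model = {winner}")

Swap the target and a different rule wins, even though the students and the candidate models never change; nothing in the procedure itself flags that one target was chosen by administrators and another might have been chosen by students.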
The more opaque the training procedure for algorithms, the more hidden these specific, biased, and political decisions will be in setting the targets. The more distant the users are from the training process, the easier it will be for them to assume that the algorithm’s outputs are straightforwardly tracking the real thing—student success—and the easier it will be to forget that the demands of institutional data collection have already filtered out whole swathes of human life and human value.
What can we do?
My point isn’t that we should stop using data-based methods entirely. The key features of data-based methodologies—decontextualization, standardization, and impersonality—are precisely what permit the aggregation of vast datasets, and they are crucial to reaping the many rewards those methodologies offer.
But policymakers and other data users need to keep in mind the limitations baked into the very essence of this powerful tool. Data-based methods are intrinsically biased toward low-context forms of information. And every data collection method requires a system of standardization that represents somebody’s interests.
This suggests at least two responses to the limitations of data. First, when confronted with any large dataset, the user should ask: Who collected it? Who created the system of categories into which the data is sorted? What information does that system emphasize, and what does it leave out? Whose interests are served by that filtration system?
These are ordinary questions. In ordinary social situations, we know enough to ask basic questions: What are the motivations of a speaker? What are his interests and what are his biases? These same basic suspicions should also be applied to data. It’s tempting, however, to see datasets as somehow magically neutral and free of informational gaps. Maybe this is because when a person is talking to us, it’s obvious that there’s a personality involved—an independent agent with her own interests and motivations and schemes. But data is often presented as if it arose from some kind of immaculate conception of pure knowledge.
As Merry puts it, metrics and indicators require all kinds of political compromises and judgment calls to compress so much rich information into a single ranking. But the superficially simple nature of the final product—the metric—tends to conceal all kinds of subjectivity and politics. This is why, as Porter observes, public officials and bureaucrats often prefer justification in terms of such metrics. The numbers appear fair and impartial. Writes Porter: “Quantification is a way of making decisions without seeming to decide.” So the first response to data is to recall that it has a source, that it does not mysteriously come into existence untainted by human interests: to remind ourselves that data is created by institutions, which make decisions about what categories to use and what information to collect—and which to ignore.
Second, policymakers and data users should remember that not everything is equally tractable to the methodologies of data. It is tempting to act as if data-based methods simply offer direct, objective, and unhindered access to the world—as if, by following the methods of data, we could banish all bias, subjectivity, and unclarity. The power of data is vast scalability; the price is context. We need to wean ourselves off the pure-data diet and balance the power of data-based methodologies with the context-sensitivity and flexibility of qualitative methods and of local experts with deep but nonportable understanding. Data is powerful but incomplete; don’t let it entirely drown out other modes of understanding.
It’s not like qualitative methods are perfect; every qualitative method opens the door to other kinds of bias. Narrative methods open the door to personal biases. Trusting local, sensitive experts can open the door to corruption. The point is that data-based methodologies also have their own intrinsic biases. There is no single dependable, perfect way to understand or analyze the world. We need to balance our many methodologies, to knowingly and deliberately pit their weaknesses against each other.
Connecting STEM with Social Justice
The United States faces a significant and stubbornly unyielding racialized persistence gap in science, technology, engineering, and mathematics. Nilanjana Dasgupta sums up one needed solution in the title of her article: “To Make Science and Engineering More Diverse, Make Research Socially Relevant” (Issues, Fall 2023).
Among the students who enter college intending to study STEM, persons excluded because of ethnicity or race (PEERs), a group that includes students identifying as Black, Indigenous, and Latine, are twice as likely to leave these disciplines as non-PEERs. While we know what does not cause the racialized gap—not lack of interest or preparation—we largely don’t know how to close it effectively. We do know that engaging undergraduates in mentored, authentic scientific research raises their self-efficacy and sense of belonging. However, effective research experiences are difficult to scale because they require significant investments in mentoring and research infrastructure capacity.
Another intervention is much less expensive and much more scalable. Utility-value interventions (UVIs) have a remarkably long-lasting positive effect on students. In this approach, over an academic term, students in an introductory science course spend a modest amount of class time reflecting and writing about how the scientific topic just introduced relates to them personally and to their communities. UVIs benefit all students, and they result in little or no difference in STEM persistence between PEERs and non-PEERs.
Can we do more? Rather than occasionally interrupting class to allow students to connect a science concept with real-world social needs, can we change the way we present the concept itself? The UVI inspires a vision of a new STEM curriculum comprising reimagined courses. We might call the result Socially Responsive STEM, or SR-STEM. SR-STEM would be more than distribution or general education requirements, and more than learning science in the context of a liberal arts education. Instead, the overhaul would be the creation of new courses that seamlessly integrate basic science concepts with society and social justice. The courses would encourage students to think critically about the interplay between STEM and non-STEM disciplines such as history, literature, religion, and economics, and to explore how STEM affects society.
Here are a few examples from the life sciences; I think similar approaches can be developed for other STEM disciplines. When learning about evolution, students would investigate and discuss the evidence used to create the false polygenesis theory of human races. In genetics, students would evaluate the evidence for epigenetic effects resulting from the environment and poverty. In immunology, students would explore the sociology and politics of vaccine avoidance. The mechanisms of natural phenomena would be discussed from different perspectives, including Indigenous ways of knowing about nature.
Implementing SR-STEM will require a complete overhaul of the learning infrastructure, including instructor preparation, textbooks, Advanced Placement courses, GRE and other standardized exams, and accreditation (e.g., ACS and ABET) criteria. The stories of discoveries we tell in class will change, from the “founding (mostly white and dead) fathers” to contemporary heroes of many identities and from all backgrounds.
It is time to begin a movement in which academic departments, professional societies, and funding organizations build Socially Responsive STEM education so that the connection of STEM to society and social justice is simply what we do.
David J. Asai
Former Senior Director for Science Education
Howard Hughes Medical Institute
To maximize the impact of science, technology, engineering, and mathematics in society, we need to do more than attract a diverse, socially concerned cohort of students to pursue and persist through our academic programs. We need to combine the technical training of these students with social skill building.
To advance sustainability, justice, and resilience goals in the real world (not just through arguments made in consulting reports and journal papers), students need to learn how to earn the respect and trust of communities. In addition to understanding workplace culture, norms, and expectations, and cultivating negotiation skills, they need to know how to research a community’s history, interests, racial and cultural identities, equity concerns, and power imbalances before beginning their work. They need to appreciate the community’s interconnected and, at times, conflicting needs and aspirations. And they need to learn how to communicate and collaborate effectively, to build allies and coalitions, to follow through, and to neither overpromise nor prematurely design the “solution” before fully understanding the problem. They must do all this while staying within the project budget, schedule, and scope—and maintaining high quality in their work.
One of the problems is that many STEM faculty lack these skills themselves. Some may consider the social-good implications of their work only after a project has been completed. Others may be so used to a journal paper as the culmination of research that they forget to relay and interpret their technical findings to the groups who could benefit most from them. Though I agree that an increasing number of faculty appear to be motivated by equity and multidisciplinarity in research, translation of research findings into real-world recommendations is much less common. When it happens at all, it frequently oversimplifies the key logistical, institutional, cultural, legal, or regulatory factors that made the problem challenging in the first place. Both outcomes greatly limit the social value of STEM research. While faculty in many fields now use problem-based learning to tackle real-world problems in teaching, we are also notorious for attempting to address a generational problem in one semester, then shifting our attention to something else. We ask community members to enrich our classrooms by sharing their lived experiences and perspectives with our students, without giving much back in return.
Such practices must end if we, as STEM faculty, are to retain our credibility both in the community and with our students, and if we wish to see our graduates embraced by the communities they seek to serve. The formative years of today’s students have played out against a backdrop of bad news. If they chose STEM out of a belief that science has answers to these maddening challenges, these students need real evidence that their professional actions will yield tangible and positive outcomes. Just like members of the systematically disadvantaged and marginalized communities they seek to support, these students can easily spot hypocrisy, pretense, greenwashing, and superficiality.
As a socially engaged STEM researcher and teacher, I have learned that I must be prepared to follow through with what I have started—as long as it takes. I prep my students for the complex social dynamics they will encounter, without coddling or micromanaging them. I require that they begin our projects with an overview of the work’s potential practical significance, and that our research methods answer questions that are codeveloped with external partners, who themselves are financially compensated for their time whenever possible. By modeling these best practices, I try to give my students (regardless of their cultural or racial backgrounds) competency not just in STEM, but in application of their work in real contexts.
Franco Montalto
Professor, Department of Civil, Architectural, and Environmental Engineering
Drexel University
Nilanjana Dasgupta’s article inspired reflection on our approach at the Burroughs Wellcome Fund (BWF) to promoting diversity in science nationwide along with supporting science, technology, engineering, and mathematics education specifically in North Carolina. These and other program efforts have reinforced our belief in the power of collaboration and partnership to create change.
For nearly 30 years, BWF has supported organizations across North Carolina that provide hands-on, inquiry-based activities for students outside the traditional classroom day. These programs offer a wide range of STEM experiences for students. Some of the students “tinker,” which we consider a worthwhile way to experience the nuts-and-bolts of research, and others explore more socially relevant experiences. An early example is from a nonprofit in the city of Jacksonville, located near the state’s eastern coast. In the program, the city converted an old wastewater treatment plant into an environmental education center where students researched requirements for reintroducing sturgeon and shellfish into the local bay. More than 1,000 students spent their Saturdays learning about environmental science and its application to improve the quality of water in the local watershed. The students engaged their families and communities in a dialogue about environmental awareness, civic responsibility, and local issues of substantial scientific and economic interest.
For our efforts in fostering diversity in science, we have focused primarily on early-career scientists. Our Postdoctoral Diversity Enrichment Program provides professional development support for underrepresented minority postdoctoral fellows. The program places emphasis on a strong mentoring strategy and provides opportunities for the fellows to engage with a growing network of scholars.
Recently, BWF has become active in the Civic Science movement led by the Rita Allen Foundation, which describes civic science as “broad engagement with science and evidence [that] helps to inform solutions to society’s most pressing problems.” This movement is very much in its early stages, but it holds immense possibility to connect STEM to social justice. We have supported fellows in science communication, diversity in science, and the interface of arts and science.
Another of our investments in this space is through the Our Future Is Science initiative, hosted by the Aspen Institute’s Science and Society program. The initiative aims to equip young people to become leaders and innovators in pushing science toward improving the larger society. The program’s goals include sparking curiosity and passion about the connection between science and social justice among youth and young adults who identify as Black, Indigenous, or People of Color, as well as those who have low income or reside in rural communities. Another goal is to accelerate students’ participation in the sciences to equip them to link their interests to tangible educational and career STEM opportunities that may ultimately impact their communities.
This is an area ripe for exploration, and I was pleased to read the author’s amplification of this message. At the Burroughs Wellcome Fund, we welcome the opportunity to collaborate on connecting STEM and social justice work to ignite societal change. As a philanthropic organization, we strive to holistically connect the dots of STEM education, diversity in science, and scientific research.
Louis J. Muglia
President and CEO
Burroughs Wellcome Fund
As someone who works on advancing diversity, equity, and inclusion in science, technology, engineering, and mathematics higher education, I welcome Nilanjana Dasgupta’s pointed recommendation to better connect STEM research with social justice. Gone are the days of the academy being reserved for wealthy, white men to socialize and explore the unknown, largely for their own benefit. Instead, today’s academy should be rooted in addressing the challenges that the whole of society faces, whether that be how to sustain food systems, build more durable infrastructure, or identify cures for heretofore intractable diseases.
Approaching STEM research with social justice in mind is the right thing to do both morally and socially. And our educational environments will be better for it, attracting more diverse and bright minds to science. As Dasgupta demonstrates, research shows that when course content is made relevant to students’ lives, students show increases in interest, motivation, and success—and all these findings are particularly pronounced for students of color.
Despite focused attention on increasing diversity, equity, and inclusion over the past several decades, Black, Indigenous, and Latine students continue to remain underrepresented in STEM disciplines, especially in graduate education and the careers that require such advanced training. In 2020, only 24% of master’s and 16% of doctoral degrees in science and engineering went to Black, Indigenous, and Latine graduates, despite these groups collectively accounting for roughly 37% of the US population aged 18 through 34. Efforts to increase representation have also faced significant setbacks due to the recent Supreme Court ruling on the consideration of race in admissions. However, Dasgupta’s suggestion may be one way we continue to further the nation’s goal of diversifying STEM fields in legally sustainable ways, by centering individuals’ commitments to social justice rather than, say, explicitly considering race or ethnicity in admissions processes.
Moreover, while Dasgupta does well to provide examples of how we might transform STEM education for students, the underlying premise of her article—that connecting STEM to social justice is an underutilized tool—is relevant to several other aspects of academia as well.
For instance, what if universities centered faculty hiring efforts on scholars who are addressing social issues and seeking to make the world a more equitable place, rather than relying on the otherwise standard approach of hiring graduates from prestigious institutions who publish in top-tier journals? The University of California, San Diego, may serve as one such example, having hired 20 STEM faculty over the past three years whose research uses social justice frameworks, including bridging Black studies and STEM. These efforts promote diverse thought and advance institutional missions to serve society.
Science philanthropy is also well poised to prioritize social justice research. At Sloan, we have a portfolio of work that examines critical and under-explored questions related to issues of energy insecurity, distributional equity, and just energy system transitions in the United States. These efforts recognize that many historically marginalized racial and ethnic communities, as well as economically vulnerable communities, are often unable to participate in the societal transition toward low-carbon energy systems due to a variety of financial, social, and technological challenges.
In short, situating STEM in social justice should be the default, not the occasional endeavor.
Tyler Hallmark
Program Associate
Alfred P. Sloan Foundation
A Plan to Develop Open Science’s Green Shoots into a Thriving Garden
Over the past several decades, the movement for open science, which promises a more inclusive, efficient, and trustworthy way of conducting and disseminating scientific research, has grown. Driven by the belief that openly sharing knowledge in all its forms—papers, data, software, methods, and more—can help address a raft of societal quandaries (including, though not limited to, systemic inequity and public mistrust in science), the movement has pushed the adoption of open science principles increasingly into the mainstream. In the last five years, the White House Office of Science and Technology Policy; the governments of Ireland, Colombia, Spain, France, and the province of Quebec; higher education coalitions in the United States, Africa, South America, the United Kingdom, and Europe; professional societies and associations; and philanthropic funders have all taken steps toward strengthening policies for and reducing barriers to open science. Moreover, science and research ministers representing the member states of the Group of Seven and the Group of 20 have doubled down on their governments’ commitments to invest in open, equitable, and secure strategies for research and development throughout the world.
As it’s moved from an abstract set of principles about access to research and data into the realm of real-world activities, the open science movement has mirrored some of the characteristics of the open source movement: distributed, independent, with loosely coordinated actions happening in different places at different levels. Globally, many things are happening, often disconnected, but still interrelated: open science has sowed a constellation of thriving green shoots, not quite yet a garden, but all growing rapidly on arable soil.
It is now time to consider how much faster and farther the open science movement could go with more coordination. What efficiencies might be realized if disparate efforts could better harmonize across geographies, disciplines, and sectors? How would an intentional, systems-level approach to aligning incentives, infrastructure, training, and other key components of a rationally functioning research ecosystem advance the wider goals of the movement? Streamlining research processes, reducing duplication of efforts, and accelerating scientific discoveries could ensure that the fruits of open science processes and products are more accessible and equitably distributed.
In July 2023, NASA and the European Organization for Nuclear Research, known as CERN, jointly organized a week-long summit, “Accelerating the Adoption of Open Science,” in an effort to push the movement forward. For three decades, NASA has worked on data sharing, team science, and public access to knowledge. In 2021, the agency launched Transform to Open Science (TOPS) to promote an inclusive culture of open science across Earth and space sciences by investing in training, infrastructure, and advocacy. CERN has a similarly longstanding commitment to open science, supporting a range of initiatives that foster collaboration, improve the accessibility of research outputs, and promote reusability and reproducibility. Building on both this shared interest and the US federal government’s designation of 2023 as the Year of Open Science (a project that NASA is leading together with other federal agencies), the two organizations brought together 100 representatives from more than two dozen countries, including policymakers and practitioners, to explore many facets of open science, including open source hardware, protection of sensitive data, the conferral of credit, and evaluation of contributor impact.
The event demonstrated the range of creative and clever ways in which organizations are advancing the cause of open science. The Chile-based Gathering for Open Science Hardware, for example, works to foster sustainable, ethical, and democratic collaboration within dozens of countries across the open science hardware community. The Colombian Science Ministry has embedded ancestral and traditional knowledge systems into its national open science policy. DiploCientifica centers equitable open science through a science diplomacy lens for Latin America and the Caribbean. And OSS4gEO, based in Europe, is actively building components of an open, sustainable, and interoperable geospatial data infrastructure. The diversity of these approaches speaks to the organic way the ideas of open science have propagated in different contexts. Despite their heterogeneity, participants generally share a set of values and common interests in building a more cohesive and equitable approach to open science. There is also broad agreement that the movement needs to prioritize the coordination of incentives, infrastructure, and training.
So, now what? Many a good intention stimulated over the course of a conference dissipates by the time suitcases are unpacked and airport gift shop souvenirs are dispensed. As summit participants, the four of us wanted to avoid squandering the opportunity. For open science to live up to its lofty aspirations, proponents should be deliberate in designing and executing the transition away from research and science policy siloes, data opacity, and publication paywalls. The movement will need to harness the collective wisdom of various communities, organizations, disciplines, career arcs, and perspectives to share lessons and promising ideas alike. And its proponents must embrace both public scrutiny and scientific rigor in assessing whether various efforts are having their intended effect.
Failure to clear these hurdles will almost certainly lead to one of two suboptimal outcomes. The first is reverting to a closed science system that erects barriers for students, practitioners, policymakers, industry, the general public, and—increasingly—researchers outside of the best-resourced institutions. The second is a sort of “Tower of Babel” scenario in which some materials are free to read, adhere to FAIR principles (i.e., findable, accessible, interoperable, and reusable), have well-curated metadata, and provide clear licensing terms—and many materials do not. This will generate confusion, inequity, and irreproducibility at a time when the world needs as many bright minds as possible engaged in existential challenges such as climate change, pandemic preparedness, and poverty alleviation. To avoid either of these negative outcomes, we propose a three-pronged approach to coordinating activities globally and across disciplines that mirrors three of open science’s core tenets: intentionality, collaboration, and accountability.
First, participants at the 2023 summit committed in the closing statement to accelerating the transition to a more open, participatory, equitable, robust, and sustainable research ecosystem by articulating action plans—that is, practical ways participants and their communities can collaborate to advance specific open science considerations such as infrastructure, training, funding, and recognition schemes. These efforts will leverage a number of initiatives that are already off the ground: projects like the Coalition for Advancing Research Assessment and the Higher Education Leadership Initiative for Open Scholarship have begun to engage in the meticulous work of building coalitions to change research assessment policies and incentive structures. The community-developed TOPS Open Science 101 curriculum can equip researchers with the skills to prepare their research outputs in ways that are truly findable, reusable, and interoperable. Collaborative efforts such as OpenAIRE are demonstrating how to seamlessly integrate open science activities into existing workflows that researchers are already familiar with. Individually, the action plans generated by summit participants are designed to catalyze open science engagement within specific communities. Collectively, they will demonstrate the depth of interest in transitioning to an open-by-design approach to science.
Coordinating efforts across like-minded organizations is another way to magnify impact. At the summit’s conclusion, participants identified areas in which they would benefit from ongoing peer support, as well as where they could provide support and expertise to others. These areas were as diverse as sustainable and interoperable open infrastructure, incentives, equitable open science, and evidence-based policy development. By organizing standing working groups that draw from a range of sectors on these topics, the community can identify areas in which collective action and collaboration are possible. For example, the evidence-based policy working group will be developing a framework for collecting and interpreting data measuring the impact of open science policy interventions, while the infrastructure groups will identify interoperability pathways across projects and domains.
Strategic coordination will have direct, real-world ramifications for researchers because it cuts down on the cacophony of signals they receive from their funders, governments, disciplines, and institutions. Coordination also reduces the possibility of uneven or even contradictory reporting requirements. And it sends a clear signal that critical actors are aligning across sectors to make open science both more common and easier to adopt. This community-centered model has shown promise in advancing shared open science interests such as research output tracking and reproducibility.
Finally, participants will develop transparent methods for reporting on progress, both as individual organizations and across the cohort of summit participants. This includes sharing evidence and outcomes (the good, the bad, and the unexpected) on the impact of open science interventions. Taken together, these commitments are critical to injecting a sense of public responsibility into the movement, and to ensuring that this work is consistent with the underlying values of open science.
Open science has continued to gain traction for reasons that are both aspirational—building public confidence in science, adding more diverse voices to the research conversation—and practical, such as increasing the pace of discovery, enabling verification, and leveraging emerging technologies such as machine learning. Given global technical and policy developments, the movement is rapidly approaching a moment of truth. This timely coordination of open science approaches across communities and domains will both accelerate the transition and increase the likelihood that these fast-sprouting green shoots grow into a lush and verdant communal garden: well-tended, sustainable, and accessible to all.
Building the Quantum Workforce
In “Inviting Millions Into the Era of Quantum Technologies” (Issues, Fall 2023), Sean Dudley and Marisa Brazil convincingly argue that the lack of a qualified workforce is holding back this field from reaching its promising potential. We at IBM Quantum agree. Without intervention, the nation risks developing useful quantum computing alongside a scarcity of practitioners capable of using quantum computers. An IBM Institute for Business Value study found that inadequate skills are the top barrier to enterprises adopting quantum computing. The study identified a small subset of quantum-ready organizations: talent nurturers with a greater understanding of the quantum skills gap that are nearly three times more effective than their peers at workforce development.
Quantum-ready organizations are nearly five times more effective at developing internal quantum skills, nearly twice as effective at attracting talented workers in science, technology, engineering, and mathematics, and nearly three times more effective at running internship programs. At IBM Quantum, we have directly trained more than 400 interns at all levels of higher education and have seen over 8 million learner interactions with Qiskit, including a series of online seminars on using the open-source Qiskit tool kit for useful quantum computing. However, quantum-ready organizations represent only a small fraction of the organizations and industries that need to prepare for the growth of their quantum workforce.
As we enter the era of quantum utility, meaning the ability for quantum computers to solve problems at a scale beyond brute-force classical simulation, we need a focused workforce capable of discovering the problems quantum computing is best-suited to solve. As we move even further toward the age of quantum-centric supercomputing, we will need a larger workforce capable of orchestrating quantum and classical computational resources in order to address domain-specific problems.
Looking to academia, we need more quantum-ready institutions that are effective not only at teaching advanced mathematics, quantum physics, and quantum algorithms, but also at teaching domain-specific skills such as machine learning, chemistry, materials science, and optimization, along with how to use quantum computing as a tool for scientific discovery.
Critically, it is imperative to invest in talent early on. The data on physics PhDs granted by race and ethnicity in the United States paint a stark picture. Industry cannot wait until students have graduated and are knocking on company doors to begin developing a talent pipeline. IBM Quantum has made a significant investment in the IBM-HBCU Quantum Center through which we collaborate with more than two dozen historically Black colleges and universities to prepare talent for the quantum future.
Academia needs to become more effective at supporting quantum research (including cultivating student contributions) and partnering with industry, at connecting students to internships and career opportunities, and at attracting students into the quantum field. Quoting Charles Tahan, director of the National Quantum Coordination Office within the White House Office of Science and Technology Policy: “We need to get quantum computing test beds that students can learn in at a thousand schools, not 20 schools.”
Rensselaer Polytechnic Institute and IBM broke ground on the first IBM Quantum System One on a university campus in October 2023. This presents the RPI community with an unprecedented opportunity to learn and conduct research on a system powered by a utility-scale 127-qubit processor capable of tackling problems beyond the capabilities of classical computers. And as lead organizer of the Quantum Collaborative, Arizona State University—using IBM and other industry quantum computing resources—is working with other academic institutions to provide training and educational pathways across high schools and community colleges through to undergraduate and graduate studies in the quantum field.
Our hope is that these actions will prove to be only part of a broader effort to build the quantum workforce that science, industry, and the nation will need in years to come.
Bradley Holt
IBM Quantum
Program Director, Global Skills Development
Sean Dudley and Marisa Brazil advocate for mounting a national workforce development effort to address the growing talent gap in the field. This effort, they argue, should include educating and training a range of learners, including K–12 students, community college students, and workers outside of science and technology fields, such as marketers and designers. As the field will require developers, advocates, and regulators—as well as users—with varying levels of quantum knowledge, the authors’ comprehensive and inclusive approach to building a competitive quantum workforce is refreshing and justified.
At Qubit by Qubit, which was founded by The Coding School and is one of the largest quantum education initiatives, we have spent the past four years training over 25,000 K–12 and college students, educators, and members of the workforce in quantum information science and technology (QIST). In collaboration with school districts, community colleges and universities, and companies, we have found great excitement among all these stakeholders for QIST education. However, as Dudley and Brazil note, there is an urgent need for policymakers and funders to act now to turn this collective excitement into action.
The authors posit that the development of a robust quantum workforce will help position the United States as a leader of Quantum 2.0, the next iteration of the quantum revolution. Our work suggests that investing in quantum education will not only benefit the field of QIST, but will result in a much stronger workforce at large. Given the interdisciplinary nature of QIST, learners gain exposure and skills in mathematics, computer science, physics, and engineering, among other fields. Thus, even learners who choose not to pursue a career in quantum will come away with a broad set of highly sought skills that they can apply in other rewarding fields.
With the complexity of quantum technologies, there are a number of challenges in building a diverse quantum workforce. Dudley and Brazil highlight several of these, including the concentration of training programs in highly resourced institutions, and the need to move beyond the current focus on physics and adopt a more interdisciplinary approach. There are several additional challenges that need to be considered and addressed if millions of Americans are to become quantum-literate, including:
Funding efforts have focused on supporting pilot educational programs rather than scaling already successful ones, meaning that educational opportunities are not widely accessible.
Many educational programs are one-offs that leave students without clear next steps. Because of the complexity of the subject area, learning pathways need to be established for learners to continue developing critical skills.
Diversity, inclusion, and equity efforts have been minimal and will require concerted work between industry, academia, and government.
Historically, the United States has begun conversations around workforce development for emerging and deep technologies too late, and thus has failed to ensure the workforce at large is equipped with the necessary technical knowledge and skills to move these fields forward quickly. We have the opportunity to get it right this time and ensure that the United States is leading the development of responsible quantum technologies.
Kiera Peltz
Executive Director, Qubit by Qubit
Founder and CEO, The Coding School
To create an exceptional quantum workforce and give all Americans a chance to discover the beauty of quantum information science and technology, to contribute meaningfully to the nation’s economic and national security, and to create much-needed bridges with other like-minded nations across the world as a counterbalance to the balkanization of science, we have to change how we are teaching quantum. Even today, five years after the National Quantum Initiative Act became law, the word “entanglement”—the key to the way quantum particles interact that makes quantum computing possible—does not appear in physics courses at many US universities. And there are perhaps only 10 to 20 schools offering quantum engineering education at any level, from undergraduate to graduate. Imagine the howls if this were the case with computer science.
The imminence of quantum technologies has motivated physicists—at least in some places—to reinvent their teaching, listening to and working with their engineering, computer science, materials science, chemistry, and mathematics colleagues to create a new kind of course. In 2020, these early experiments in retooling led to a convening of 500 quantum scientists and engineers to debate undergraduate quantum education. Building on success stories such as the quantum concepts course at Virginia Tech, we laid out a plan, published in IEEE Transactions on Education in 2022, to bridge the gap between the excitement around quantum computing generated in high school and the kind of advanced graduate research in quantum information that is really so astounding. The good news is that as Virginia Tech showed, quantum information can be taught with pictures and a little algebra to first-year college students. It’s also true at the community college level, which means the massive cohort of diverse engineers who start their careers there have a shot at inventing tomorrow’s quantum technologies.
However, there are significant missing pieces. For one, there are almost no community college opportunities to learn quantum anything because such efforts are not funded at any significant level. For another, although we know how to teach the most speculative area of quantum information, namely quantum computing, to engineers, and even to new students, we really don’t know how to do that for quantum sensing, which allows us to do position, navigation, and timing without resorting to our fragile GPS system, and to measure new space-time scales in the brain without MRI, to name two of many applications. It is the most advanced area of quantum information, with successful field tests and products on the market now, yet we are currently implementing quantum engineering courses focused on a quantum computing outcome that may be a decade or more away.
How can we address the dearth of quantum engineers? First, universities and industry can play a major role by working together—and several such collective efforts are showing the way. Arizona State University’s Quantum Collaborative is one such example. The Quantum consortium in Colorado, New Mexico, and Wyoming recently received a preliminary grant from the US Economic Development Administration to help advance both quantum development and education programs, including at community colleges, in their regions. Such efforts should be funded and expanded, and the lessons they provide should be promulgated nationwide. Second, we need to teach engineers what actually works. This means incorporating quantum sensing from the outset in all budding quantum engineering education systems, building on already deployed technologies. And third, we need to recognize that much of the nation’s quantum physics education is badly out of date and start modernizing it, just as we are now modernizing engineering and computer science education with quantum content.
Lincoln D. Carr
Quantum Engineering Program and Department of Physics
Colorado School of Mines
Preparing a skilled workforce for emerging technologies can be challenging. Training moves at the scale of years while technology development can proceed much faster or slower, creating timing issues. Thus, Sean Dudley and Marisa Brazil deserve credit for addressing the difficult topic of preparing a future quantum workforce.
At the heart of these discussions are the current efforts to move beyond Quantum 1.0 technologies that make use of quantum mechanical properties (e.g., lasers, semiconductors, and magnetic resonance imaging) to Quantum 2.0 technologies that more actively manipulate quantum states and effects (e.g., quantum computers and quantum sensors). With this focus on ramping up a skilled workforce, it is useful to pause and look at the underlying assumption that the quantum workforce requires active management.
In their analysis, Dudley and Brazil cite a report by McKinsey & Company, a global management consulting firm, which found that three quantum technology jobs exist for every qualified candidate. While this seems like a major talent shortage, the statistic is less concerning when presented in absolute numbers. Because the field is still small, the difference is less than 600 workers. And the shortage exists only when considering graduates with explicit Quantum 2.0 degrees as qualified potential employees.
McKinsey recommended closing this gap by upskilling graduates in related disciplines. Considering that 600 workers amount to about 33% of the physics PhDs, 2% of the electrical engineers, or 1% of the mechanical engineers graduating annually in the United States, this seems a reasonable solution. However, employers tend to be rather conservative in their hiring and often ignore otherwise capable applicants who haven’t already demonstrated proficiency in desired skills. Thus, hiring “close-enough” candidates tends to occur only when employers feel substantial pressure to fill positions. Based on anecdotal quantum computing discussions, this probably isn’t happening yet, which suggests employers can still afford to be selective. As Ron Hira notes in “Is There Really a STEM Workforce Shortage?” (Issues, Summer 2022), shortages are best measured by wage growth. And if such price signals exist, one should expect that students and workers will respond accordingly.
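As a rough back-of-the-envelope check in Python, using only the shortfall and the percentages cited above (so the implied cohort sizes are approximations rather than official statistics):

# Implied annual US graduate cohorts, derived solely from the ~600-worker
# shortfall and the percentage figures cited in the text. Approximate only.
shortfall = 600
shares = {
    "physics PhDs": 0.33,
    "electrical engineers": 0.02,
    "mechanical engineers": 0.01,
}
for field, share in shares.items():
    print(f"{field}: roughly {shortfall / share:,.0f} graduates per year")
# Output: roughly 1,800 physics PhDs, 30,000 electrical engineers, and 60,000
# mechanical engineers per year, which is why the gap looks small in relative terms.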
If the current quantum workforce shortage is uncertain, the future is even more uncertain. The exact size of the needed future quantum workforce depends on how Quantum 2.0 technologies develop. For example, semiconductors and MRI machines are both mature Quantum 1.0 technologies. The global semiconductor industry is a more than $500 billion business (measured in US dollars), while the global MRI business is about 100 times smaller. If Quantum 2.0 technologies follow the specialized, lab-oriented MRI model, then the workforce requirements could be more modest than many projections. More likely is a mix of market potential where technologies such as quantum sensors, which have many applications and are closer to commercialization, have a larger near-term market while quantum computers remain a complex niche technology for many years. The details are difficult to predict but will dictate workforce needs.
When we assume that rapid expansion of the quantum workforce is essential for preventing an innovation bottleneck, we are left with the common call to actively expand diversity and training opportunities outside of elite institutions—a great idea, but maybe the right answer to the wrong question. And misreading technological trends is not without consequences. Overproducing STEM workers benefits industry and academia, but not necessarily the workers themselves. If we prematurely attempt to put quantum computer labs in every high school and college, we may be setting up less-privileged students to pursue jobs that may not develop, equipped with skills that may not be easily transferred to other fields.
Daniel J. Rozell
Research Professor
Department of Technology and Society
Stony Brook University
An Evolving Need for Trusted Information
In “Informing Decisionmakers in Real Time” (Issues, Fall 2023), Robert Groves, Mary T. Bassett, Emily P. Backes, and Malvern Chiweshe describe how scientific organizations, funders, and researchers came together to provide vital insights in a time of global need. Their actions during the COVID-19 pandemic created new ways for researchers to coordinate with one another and better ways to communicate critical scientific insights to key end users. Collectively, these actions accelerated translations of basic research to life-saving applications.
Examples such as the Societal Experts Action Network (SEAN) that the authors highlight reveal the benefits of a new approach. While at the National Science Foundation, we pitched the initial idea for this project and the name to the National Academies of Sciences, Engineering, and Medicine (NASEM). We were inspired by NASEM’s new research-to-action workflows in biomedicine and saw opportunities for thinking more strategically about how social science could help policymakers and first responders use many kinds of research more effectively.
SEAN’s operational premise is that by building communication channels where end users can describe their situations precisely, researchers can better tailor their translations to the situations. Like NASEM, we did not want to sacrifice rigor in the process. Quality control was essential. Therefore, we designed SEAN to align translations with key properties of the underlying research designs, data, and analysis. The incredible SEAN leadership team that NASEM assembled implemented this plan. They committed to careful inferences about the extent to which key attributes of individual research findings, or collections of research findings, did or did not generalize to end users’ situations. They also committed to conducting real-time evaluations of their effectiveness. With this level of commitment to rigor, to research quality filters, and to evaluations, SEAN produced translations that were rigorous and usable.
There is significant benefit to supporting approaches such as this going forward. To see why, consider that many current academic ecosystems reward the creation of research, its publication in journals, and, in some fields, connections to patents. These are all worthy activities. However, societies sometimes face critical challenges where interdisciplinary collaboration, a commitment to rigor and precision, and an advanced understanding of how key decisionmakers use scientific content are collectively the difference between life and death. Ecosystems that treat journal publications and patents as the final products of research processes will have limited impact in these circumstances. What Groves and coauthors show is the value of designing ecosystems that produce externally meaningful outcomes.
Scientific organizations can do more to place modern science’s methods of measurement and inference squarely in the service of people who can save lives. With structures such as SEAN that more deeply connect researchers to end users, we can incentivize stronger cultures of responsiveness and accountability to thousands of end users. Moreover, when organizations network these quality-control structures, and then motivate researchers to collaborate and share information effectively, socially significant outcomes are easier to stack (we can more easily build on each other’s insights) and scale (we can learn more about which practices generalize across circumstances).
To better serve people across the world, and to respect the public’s sizeable investments in federally funded scientific research, we should seize opportunities to increase the impact and social value of the research that we conduct. New research-to-action workflows offer these opportunities and deserve serious attention in years to come.
Daniel Goroff
Alfred P. Sloan Foundation
Arthur Lupia
University of Michigan
As Robert Groves, Mary T. Bassett, Emily P. Backes, and Malvern Chiweshe describe in their article, the COVID-19 pandemic highlighted the value and importance of connecting social science to on-the-ground decisionmaking and solution-building processes, which require bridging societal sectors, academic fields, communities, and levels of governance. That the National Academies of Sciences, Engineering, and Medicine and public and private funders—including at the local level—created and continue to support the Societal Experts Action Network (SEAN) is encouraging. Still, the authors acknowledge that there is much work needed to normalize and sustain support for ongoing research-practice partnerships of this kind.
In academia, for example, the pandemic provided a rallying point that encouraged cross-sector collaborations, in part by forcing a change to business-as-usual practices and incentivizing social scientists to work on projects perceived to offer limited gains in academic systems, such as tenure processes. Without large-scale reconfiguration of resources and rewards, as the pandemic crisis triggered to some extent, partnerships such as those undertaken by SEAN face numerous barriers. Building trust, fostering shared goals, and implementing new operational practices across diverse participants can be slow and expensive. Fitting these efforts into existing funding is also challenging, as long-term returns may be difficult to measure or articulate. In a post-COVID world, what incentives will remain for researchers and others to pursue necessary work like SEAN’s, spanning boundaries across sectors?
One answer comes from a broader ecosystem of efforts in “civic science,” of which we see SEAN as a part. Proponents of civic science argue that boundary-spanning work is needed in times of crisis as well as peace. In this light, we see a culture shift in which philanthropies, policymakers, community leaders, journalists, educators, and academics recognize that research-practice partnerships must be made routine rather than exceptional. This culture shift has facilitated our own work as researchers and filmmakers as we explore how research can inform filmmaking, and vice versa, to foster pro-democratic outcomes across diverse audiences. For example, how can science films enable a holistic science literacy that supports deliberation about science-related issues among conflicted groups?
At first glance, our work may seem distant from SEAN’s policy focus. However, we view communication and storytelling (in non-fiction films particularly) as creating “publics,” or people who realize they share a stake in an issue, often despite some conflicting beliefs, and who enable new possibilities in policy and society. In this way and many others, our work aligns with a growing constellation of participants in the Civic Science Fellows program and a larger collective of collaborators who are bridging sectors and groups to address key challenges in science and society.
As the political philosopher Peter Levine has said, boundary-spanning work enables us to better respond to the civic questions asking “What should we do?” that run through science and broader society. SEAN illustrates how answering such questions cannot be done well—at the level of quality and legitimacy needed—in silos. We therefore strongly support multisector collaborations like those that SEAN and the Civic Science Fellows program model. We also underscore the opportunity and need for sustained cultural and institutional progress across the ecosystem of connections between science and civic society, to reward diverse actors for investing in these efforts despite their scope and uncertainties.
Emily L. Howell
Researcher
Science Communication Lab
Nicole M. Krause
Associate
Morgridge Institute for Research
Ian Cheney
Documentary film director and producer
Wicked Delicate Films
Elliot Kirschner
Executive Producer
Science Communication Lab
Sarah S. Goodwin
Executive Director
Science Communication Lab
I read Robert Groves, Mary T. Bassett, Emily P. Backes, and Malvern Chiweshe’s essay with great interest. It is hard to remember the early times of COVID-19, when everyone was desperate for answers and questions popped up daily about what to do and what was right. As a former elected county official and former chair of a local board of health, I valued the welcome I received when appointed to the Societal Experts Action Network (SEAN) the authors highlight. I believe that as a nonacademic, I was able to bring a pragmatic on-the-ground perspective to the investigations and recommendations.
At the time, local leaders were dealing with a pressing need for scientific information when politics were becoming fraught with dissension and the public had reduced trust in science. Given such pressure, it is difficult to fully appreciate the speed at which SEAN operated—light speed compared with what I viewed as the usual standards of large organizations such as its parent, the National Academies of Sciences, Engineering, and Medicine. SEAN’s efforts were nimble and focused, allowing us to collaborate while addressing massive amounts of data.
Now, the key to addressing the evolving need for trusted and reliable information, responsive to the modern world’s speed, will be supporting and replicating the work of SEAN. Relationships across jurisdictions and institutions were formed that will continue to be imperative not only for ensuring academic rigor but also for understanding how to build the bridges of trust to support the value of science, to meet the need for resilience, and to provide the wherewithal to progress in the face of constant change.
Linda Langston
President, Langston Strategies Group
Former Member, Linn County, Iowa, Board of Supervisors
Past President, National Association of Counties
Building Community in the Bayou
At the age of 19, Monique Verdin picked up a camera and began documenting the lives of her relatives in the Mississippi Delta. Little did she know that she would spend the next two decades investigating and capturing the profound ways that climate, the fossil fuel industry, and the shifting waters of the Gulf of Mexico would transform the landscape that was once a refuge for her Houma ancestors.
Based in Louisiana, Verdin is an artist, storyteller, videographer, and photographer as well as a community builder and activist. She is also the director of the Land Memory Bank and Seed Exchange, a project that seeks to create a community record of the coastal cultures and native ecology of southeast Louisiana. Her work, which was featured in the Winter print edition of Issues, seeks to understand home and belonging after displacement and migration. Her stories are laced with environmental concerns, the shifting roles of corporate entities, and natural and human-made disasters. Verdin’s art practice creates space and gives voice to Indigenous and marginalized communities in the South while building bridges with science communities.
On this episode, Verdin joins host JD Talasek to talk about using art and science to understand a Gulf that is being reshaped by climate, industry, and more.
Is there something about science policy you’d like us to explore? Let us know by emailing us at podcast@issues.org.
JD Talasek: I’m JD Talasek, Director of Cultural Programs at the National Academy of Sciences. For this podcast, I’m joined by Monique Verdin. Monique is a citizen of the Houma Nation and director of The Land Memory Bank & Seed Exchange. Based in Louisiana, she wears many hats – artist, storyteller, videographer, and photographer as well as community builder and activist. Her work documents and conveys issues of home and belonging that result from displacement and migration. Because these stories are interconnected with environmental concerns, the use of land by corporate entities, and a string of natural and human-made disasters, Monique engages with a variety of art practices that seek to create space – giving voice to Indigenous and marginalized communities in the South. Often informed by science, her work brings into focus the complexity of the situation and often builds bridges between the local and science communities. When I spoke with Monique, she shared how events in her life drew her into this work and how she uses this work to engage with her community.
Welcome Monique.
Monique Verdin: Thanks so much for the invitation to be in conversation.
Talasek: I first became familiar with your work as a videographer when you contributed to Brandon Ballengée’s Crude Life traveling exhibit that was funded by the National Academies’ Keck Futures Initiative. I’ve just been enthralled with your work ever since. I wonder if we could start at the beginning, maybe a little bit about your origin story. You’re an artist and a storyteller. How did you find yourself going down that path?
Verdin: Wow. I think that to do a rewind of my life, there was a moment in the late nineties when I learned about a place called Grand Bois, or Big Woods, which is between Bayou Terrebonne and Bayou Lafourche in the Yakni Chitto, the Big Country, also known as Terrebonne and Lafourche Parishes in South Louisiana. And I learned that my cousins who live in this very small community along what was Bayou Blue, which connected those two bayous and has since become a highway that follows the high ground, were being poisoned by toxic oilfield waste that was being brought across state lines from Alabama to Louisiana, where it was considered non-hazardous. To this day, that waste is classified as non-hazardous, even though the material on the site has hazardous characteristics, so that facility is still up and operating. But my desire to take photographs was really to expose that story, to stop that injustice.
I was invited to get into boats with my cousins who are fishermen and taken to the ends of the bayou, where my elders once lived in places that were considered trembling prairies. I say often that where my grandmother harvested pecans, today my cousins are putting out their crab traps. And so in learning about what was a “not in my cousin’s backyard” story that I wanted to share and to stop, I came to recognize there’s a much bigger planetary problem. I have a perspective that, of course, is informed by many other experiences and stories, and I feel that a responsibility of mine really is to share the stories of my elders. They were not listened to. They were silenced. They were considered uneducated and did not have a voice to advocate for access to all the things that one needs to survive: clean water, a healthy place to live, and to be in community. My grandmother witnessed oil and gas coming into the Delta in the early 1900s, and I have lived through the consequences of those kinds of changes, which have come from decisionmaking processes far outside of the wetlands we call home.
Talasek: Well, Monique, it sounds like this pathway found you, that you simply had to respond to what was in front of you to go down this path as an artist. A thread that runs throughout your work, and that you brought up already, is this theme of home and dislocation. I’d like to dig down into that a little bit more as it reveals itself in your artwork. You mentioned in your description altars at the end of some of these riverways and passages. I wonder if you could talk a little bit more about that complexity of how you create work to raise this awareness and to raise this conversation.
Verdin: I think that a lot of the work that I’ve done has just required a real investigation, and being kind of self-taught in a sense. I mean, not to say that I haven’t had amazing teachers along the way, but there isn’t a site you can go to that feels unbiased and well-informed in regard to how the delta works, for example. Or, in my own trying to understand how we ended up at the ends of the bayou: how my father’s Houma heritage, how that history, ends up in southern Louisiana on top of black gold mines, with land essentially stolen from my elders, who were signing X’s onto pieces of paper. There’s no book you can go and check out that helps break that down. And so I think it’s been my curiosity in trying to make sense of it, and then trying to translate that back to my community.
And with every wave, whether it was the BP oil drilling disaster on the Deepwater Horizon rig, when for three months we watched as this blob of oil just couldn’t be stopped, I was recognizing how fragile our systems are, how fragile our infrastructure is, how powerful nature can be, and how small we are at times when a big storm’s coming in. And I am grateful that when I was 25 years old, Hurricane Katrina was a moment that really forced me to reckon with the reality that what you really need to survive, clean water and healthy places to be and dry land, really becomes the primary focus. I feel like a lot of my work has been to sound the alarm and to recognize that yes, we may be on the front lines, but none of us will be able to hide from the consequences of a changing climate, and that it’s important to recognize the intelligence of nature and to learn those lessons so we don’t continue to fight against nature for corporate gains in so many circumstances. The work has been schooling me, in a way, by forcing me to put a frame around things, whether those are moving images, or a photograph, or me saying words in a weird landscape, like on an earthen levee that borders a ghost forest in St. Bernard Parish, a result of saltwater intrusion and wetland loss.
Talasek: Monique, that’s really powerful. And several thoughts that you expressed have me curious. I am wondering about this idea that there’s no place to go to get this information, that these stories and these perspectives are not historically recorded. These are conversations that in some cases might be silenced. And this is what I’d like to dig down into: how does your work actually give your community a voice? I’d like to have you talk a little bit more about the work you do specifically with communities, how you help communities find voice with each other, and how you hold space to remember these issues.
Verdin: Well, I always wish that I could do more. I feel like one of the most important things that we really need is access to safe spaces where people can come together from different backgrounds, with maybe different visions of the future, but meet on neutral ground, learn from each other, and really be able to be exposed to the data in a way that doesn’t feel like you’re being manipulated by the powers that be. Being from a place like South Louisiana, I’ve seen how with every disaster, decisions get made in really rapid ways. I’ve gone to so many of these community meetings where I have learned a lot, or I have walked away with questions that have forced me to dig deeper into investigating what things mean. For example, what is a river diversion? What will that mean for my community? What does that mean for the estuaries of one of the planet’s biggest river basins?
I think that I’ve had the privilege of being able, for whatever reasons, whether I’m working as a researcher’s assistant or as an artist, to create work that is responding to these big questions: How do we remain and reclaim, or how do we retreat and return? What does that look like in these times, when folks are saying that by 2050, life in the red zone of South Louisiana will be gone? And how do we hold onto our foodways, our lifeways, our cultural traditions in these times? Some folks will be like, oh, well, you come in and talk about climate change. But this is the existential crisis that we’re all going through. I mean, I can tell you what I’ve seen, but all of us, especially here in the Delta, have been living through many, many cycles, and the warnings are coming true.
So the art, I feel like, is in a big way part of my therapy, of processing and making sense of time. Part of my art practice is taking photographs and then layering those photographs with United States Geological Survey maps of the place, or layering a couple of those maps, one from the 1930s compared to one from 2015, and then an image taken at that place on top of it, to really just show how quickly the landscape is changing. It’s also knowing that maps are a tool of colonization and domination, creating these lines and borders and barriers, but that with the satellite imagery we’re able to witness this disappearance before our eyes and to translate that into saying, oh, it’s not just a dead tree. This is a dead tree because of all of these straight lines that have been cut through the landscape, and because of what that has allowed in a short amount of time: a fresh or brackish environment becoming completely saturated with salt water. And knowing that that generational consequence has come with these short-term gains, which are tied to money that is in the pockets of people far beyond our borders.
Talasek: I appreciate you talking about your process of documenting these places and the way that you pull these different elements together for your work. But I also see your work with the community: you have actually gone into these spaces and created something like a social sculpture. That’s definitely a part of the medium that I think sets you far apart from a lot of filmmakers or storytellers, because you’re in there telling your story, you’re sharing your community’s story, you’re sharing the data that you’re aware of with your community in a way that perhaps they can hear it better than if they were hearing it from an authority. They’re also living it. I wonder if you could describe some of the more social, community-based aspects of your work.
Verdin: Yeah. After I collaborated on a project called Cry You One, I was inspired to start a project called the Land Memory Bank and Seed Exchange. The intention really is to activate sites and to welcome people in. When the idea originally came to me, it was inspired by the fact that so much is being lost, and by how we have these waves of researchers who come into our communities whenever disaster strikes. There’s all of this information gathering and storytelling, whether it be for news outlets or for researchers and universities, and that information often would just go away. So I wanted to build a repository and to have a space where the community could upload their own history, and I quickly realized that that is a monumental task; how it gets held, the bureaucracy of institutions, and even the technology of holding that information became a huge challenge.
But what I recognized in that process was that the real magic was just holding space for people to connect. Since 2015, we have been activating at the Los Isleños Cultural Complex, which is along the banks of Bayou Terre-aux-Boeufs in eastern St. Bernard Parish, about 45 minutes south of the city of New Orleans, also known as Bulbancha. The city was, and continues to be, called A Place of Babbling Languages or Babbling Tongues, but it was successfully rebranded New Orleans. This part of the world has been a place where many different kinds of people have been connecting for a really long time, and the fiesta there actually celebrates the Canary Island descendants and many other peoples who call St. Bernard Parish home. So we tend to a little medicine wheel garden there. I work collaboratively with Dr. Tammy Greer, who’s another Houma woman, and we invite our community of Indigenous folks from across the delta to come and share the baskets they make, and also jewelry.
And that’s been a really beautiful seasonal celebration. Every year in February, we build these traditional Houma structures made out of palmetto and willow, physically building with community year after year. It’s something much bigger than an art project or setting up for the fiesta. It’s this deep relationship with each other, and it has become very intertribal and interracial. It has also been a really unexpected journey, and it continues to strengthen our relationship with place too, being in St. Bernard Parish, a place that is so on the edge. As we are going through this bottleneck of biodiversity being lost, it’s a really special place to be able to get out in the palmetto forest and to recognize the beauty of those places. That practice, over so many years, has led to the building of a modern nanih. Nanih means mound or hill in Choctaw, and we are building a nanih in Bulbancha: a modern mound is currently under construction on the Lafitte Greenway, a public park in the heart of the city. We are building it as an intertribal site where all people and all languages are welcome, and we have been in community building this site for, hopefully, celebration and exchange long into the future.
Talasek: So Monique, your process as you’re describing it, is so incredibly complex and beautiful. It has so many different facets, and I just really enjoy listening to you talk about it. You had mentioned an influx of researchers that come into the area, especially after disaster, and I wonder if we could talk a little bit about that. I know you’ve collaborated with scientists. You mentioned Tammy Greer. I know that you’ve worked with Brandon Ballengée. I think you’ve also worked some with Jody Deming and the Ocean Memory Project. I wonder if you could talk a little bit about your relationship with these scientists and how they inform your work, and also maybe how you inform their work. It’s just a simple question. Nothing too much, right?
Verdin: Well, it’s funny to think of them, and also architects. Architects and scientists weave in and out of my world, and we have these ideas of what a scientist is or what an architect is, or even what an artist is. And it’s like, oh, they’re my friends and I love them, and they’ve helped me to flip my worldview, my cosmic perception, upside down, especially those who are working in the ocean sciences world. I mean, Jody Deming has been at the forefront of research on microorganisms in Arctic ice, and just thinking about the relationship of space to the deep ocean, and about geological time, has really helped allow me to take a deep breath during these kinds of traumatic days of waking up every morning and reading the news headlines about where we are. To recognize how the smallest of creatures have this ability to survive under the most extreme circumstances, and how that happens through communications and networks and sensing beyond what we can see, is really just fascinating and inspiring.
I’m a curious person, and I’ve been really blessed that by being an artist, sometimes you get invited into peculiar places. And I think that so much of my wanting to understand what’s happening in my own backyard, to my wetlands and to my family, has taken me out to sea, literally. In this time of really wanting to advocate for our survivability, I’m really focusing attention on the importance of the ocean and rivers. The Mississippi River keeps calling me, in a way that has connected me with communities all the way up near one of the many headwaters of the Mississippi. But it’s just recognizing water’s importance and water’s intelligence and how water is connecting people, and recognizing that I am water too, that my time here is really short, and that the perspective I have is so small, but I know it’s reflected in many people’s realities across the world.
And right now, I like working in the arts, but I want to put my hands in the dirt. I feel like the most important thing that I can really be focused on right now is creating safe places to be, whether it’s building this Nanih, our modern mound in Bulbancha, and connecting with the community of sweet souls who are also dedicated to creating safe spaces, or building out a land site about two hours north of the city as a safe place to retreat to, not only for myself but for my community, which is human beings and also plant friends who are finding it harder to live in my home territories closer to the coast. And just to reflect back on migrations: migration is a way of life, and being here in the delta, where so many different kinds of birds and other flying creatures and swimming beings come through, I feel really blessed to be part of that kind of PowerPoint of a womb site for the world. I have also embraced the fact that I might not always be able to be at my grandmother’s land, and maybe one day it’s going to go underwater, but I have a right to be in relationship with those places. I can’t just turn my back and run away. I belong to it, and it belongs to me.
Talasek: Wonderfully, wonderfully put. So beautifully put. I want to wrap this up by sharing with you a metaphor that I just heard. It came from a musician and community builder in Tampa, who learned it from a West African teacher of his, and it is simply stated: the longest distance is the distance between the heart and the mind. It came up in a conversation we were having about the importance of water and how water connects us, and how a lot of these issues are around water, so it all kind of comes together. Fred Johnson, who shared that phrase with me, taught me that the heart is why we do something and the mind is how we do something. And I think it’s a beautiful thing, because the distance between the heart and the mind is often the distance between thinking about something, talking about something, and doing something. And Monique, I would just share my observation of your work over the years as work that shortens the distance between the heart and the mind. I just wanted to thank you for your time. Thank you for the work that you do. Thank you for creating space for so many other people. And I’ll let you have the final word. Is there anything else that you would like to share?
Verdin: I think I feel really so incredibly grateful to be in community, to be supported by my ancestors and my family, and my family friends, and also strangers that come into my world and change the direction that I thought I was going to go in such wild and wonderful ways. I think that by being vulnerable and sharing so many of my personal challenges and questions, that has returned gifts in unimaginable ways. So I’m grateful for this time and for all of the networks that I am a part of. So thank you so much.
Talasek: Monique, thank you. And thank you for responding to what is presented to you. That’s truly an artist’s path, and it’s something that we can all learn from. So thank you, Monique. Thanks for being here.
Verdin: Thank you.
Talasek: If you would like to learn more about Monique Verdin’s work, check out the resources in our show notes.
You can subscribe to The Ongoing Transformation wherever you get your podcasts. Thanks to our podcast producers, Sydney O’Shaughnessy and Kimberly Quach, and our audio engineer, Shannon Lynch. I’m J.D. Talasek, Director of Cultural Programs at the National Academy of Sciences. Thank you for listening.