Harvesting Insights From Crop Data

In “When Farmland Becomes the Front Line, Satellite Data and Analysis Can Fight Hunger” (Issues, Winter 2024), Inbal Becker-Reshef and Mary Mitkish outline how a standing facility using the latest satellite and machine learning technology could help to monitor the impacts of unexpected events on food supply around the world. They do an excellent job describing the current dearth of public real-time information and, through the example of Ukraine, demonstrating the potential power of such a monitoring system. I want to highlight three points the authors did not emphasize.

First, a standing facility of the type they describe would be incredibly low-cost relative to the benefit. A robust facility could likely be established for $10–20 million per year. This assumes that it would be based on a combination of public satellite data and commercial data accessed through larger government contracts that are now common. Given the potential national security benefits of having accurate information on production shortfalls around the world, the cost of the facility is extremely small, well below 0.1% of the national security spending of most developed countries.

Second, the benefits of the facility will likely grow quickly, because the number of unexpected events each year is very likely to increase. One well-understood reason is that climate change is making severe events such as droughts, heat waves, and flooding more common. Less appreciated is the continued drag that climate trends are having on global agricultural productivity, which puts upward pressure on prices of food staples. The impacts of geopolitical events such as the Ukraine invasion then occur on top of an already stressed food system, magnifying their effects on global food markets and social stability. The ability to quickly assess and respond to shocks around the world should be viewed as an essential part of climate adaptation, even if every individual shock is not traceable to climate change. Again, even the facility’s upper-end price tag is small relative to the overall adaptation needs, which are estimated at over $200 billion for developing countries alone.

Third, a common refrain is that the private sector (e.g., food companies, commodity traders) and national security outfits are already monitoring the global food supply in real time. My experience is that they are not doing it with the sophistication and scope that a public facility would have. But even if they could, having estimates in the public domain is critical to achieving the public benefit. This is why the US Department of Agriculture regularly releases both its domestic and foreign production assessments.

The era of Earth observations arguably began roughly 50 years ago with the launch of the original Landsat satellite in 1972. That same year, the United States was caught by surprise by a large shortfall in Russian wheat production, a surprise that recurred five years later. By the end of the decade the quest to monitor food supply was a key motivation for further investment in Earth observations. We are now awash in satellite observations of Earth’s surface, yet we have still not realized the vision of real-time, public insight on food supply around the world. The facility that Becker-Reshef and Mitkish propose would help to finally realize that vision, and it has never been more needed than now.

Professor, Department of Earth System Science

Director, Center on Food Security and the Environment

Stanford University

Member, National Academy of Sciences

Given the current global food situation, the importance of the work that Inbal Becker-Reshef and Mary Mitkish describe cannot be emphasized enough. In 2024, some 309 million people are estimated to be acutely food insecure in the 72 countries with World Food Program operations and where data are available. Though lower than the 2023 estimate of 333 million, this marks a massive increase from pre-pandemic levels. The number of acutely hungry people in the world has more than doubled in the last five years.

Conflict is one of the key drivers of food insecurity. State-based armed conflicts have increased sharply over the past decade, from 33 conflicts in 2012 to 55 conflicts in 2022. Seven out of 10 people who are acutely food insecure currently live in fragile or conflict-affected settings. Food production in these settings is usually disrupted, making it difficult to estimate how much food such areas are likely to produce. While Becker-Reshef and Mitkish focus on “crop production data aggregated from local to global levels,” having local-level data is critical for any group trying to provide humanitarian aid. It is this close link between conflict and food insecurity that makes satellite-based techniques for estimating the extent of croplands and their production so vital.

This underpins the important potential of the facility the authors propose for monitoring the impacts of unexpected events on food supply around the world. Data collected by the facility could lead to a faster and more comprehensive assessment of crop production shortfalls in complex emergencies. Importantly, the facility should take a consensual, collaborative approach involving a variety of stakeholder institutions, such as the World Food Program, that not only have direct operational interest in the facility’s results, but also frequently possess critical ancillary datasets that can help analysts better understand the situation.

While satellite data is an indispensable component of modern agricultural assessments, estimation of cropland area (particularly by type) still faces considerable challenges, especially regarding smallholder farming systems that underpin the livelihoods of the most vulnerable rural populations. The preponderance of small fields with poorly defined boundaries, wide use of mixed cropping with local varieties, and shifting agricultural patterns make analyzing food production in these areas notoriously difficult. Research into approaches that can overcome these limitations will take on ever greater importance in helping the proposed facility’s output have the widest possible application.
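For readers less familiar with these methods, here is a minimal sketch of the kind of vegetation-index calculation that underlies many satellite-based crop assessments; the reflectance values and greenness threshold are invented for illustration and are not part of the facility the authors propose.

    import numpy as np

    def ndvi(red, nir, eps=1e-6):
        """Normalized Difference Vegetation Index from red and near-infrared reflectance."""
        red = np.asarray(red, dtype=float)
        nir = np.asarray(nir, dtype=float)
        return (nir - red) / (nir + red + eps)

    # Invented 2x2 "scene" of surface reflectance values, purely for demonstration.
    red_band = np.array([[0.08, 0.30], [0.07, 0.25]])
    nir_band = np.array([[0.45, 0.32], [0.50, 0.28]])

    greenness = ndvi(red_band, nir_band)
    likely_vegetated = greenness > 0.3  # a common, though context-dependent, threshold

    print(greenness)
    print(likely_vegetated)

In practice, analysts track how such indices evolve over a growing season and compare them against historical norms; the hard part, as noted above, is doing this reliably over small, irregular, mixed-cropped fields.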

In order to maximize the impact of the proposed facility and turn the evidence from rapid satellite-based assessments into actionable recommendations for humanitarians, close integration of its results with other streams of evidence and analysis is vital. Crop production alone does not determine whether people go hungry. Other important factors that can influence local food availability include a country’s stocks of basic foodstuffs or the availability of foreign exchange reserves to allow importation of food from international markets. And even when food is available, lack of access to food, for either economic or physical reasons, or inability to properly utilize it can push people into food insecurity. By combining evidence on a country’s capacity to handle production shortfalls with data on various other factors that influence food security, rapid assessment of crop production will be able to fully unfold its power.

Head, Market and Economic
Analysis Unit

Head, Climate and Earth
Observation Unit

World Food Program

Rome, Italy

Inbal Becker-Reshef and Mary Mitkish use Ukraine to reveal an often-overlooked impact of warfare on the environment. But it is important to remember that soil, particularly the topsoil of productive farmlands, can be lost or diminished in other equally devastating ways.

Globally, there are about 18,000 distinct types of soil. Soils have their own taxonomy, with each type sorted into one of 12 orders, and no two types are alike. Ukraine’s agricultural belt serves as a “breadbasket” for wheat and other crops. This region sustains its productivity in large part because of its particular soil base, called chernozem, which is rich in humus, contains high percentages of phosphorus and ammonia, and has a high moisture storage capacity—all factors that promote crop productivity.

Even with so many types of soil in the world, the pressures on soil are remarkably consistent across the globe. Among the major sources of pressure, urbanization is devouring farmland, as such areas are typically flat and easy to design upon, making them widely marketable. Soil is lost to erosion, which can be gradual and go almost unrecognized, or sudden, as after a natural disaster. And soil is lost or degraded through salinization and desertification.

So rather than waiting for a war to inflict damage to soils and flash warning signs about soil health, are there not things that can be done now? As Becker-Reshef and Mitkish mention, “severe climate-related events and armed conflicts are expected to increase.” And while managing such food disruptions is key to ensuring food security, forward-looking policies and enforcement to protect the planet’s foundation for agriculture would seem to be an important part of food security planning.

In the United States, farmland is being lost at an alarming rate; one reported study found that 11 million acres were lost or paved over between 2001 and 2016. Based on those calculations, it is estimated that another 18.4 million acres could be lost between 2016 and 2040. As for topsoil, researchers agree that it can take from 200 to 1,000 years to form an additional inch of depth, which means that topsoil is disappearing faster than it can be replenished.

While the authors clearly show the loss of cultivated acreage from warfare, fully capturing the story would require equivalent projections for agricultural land lost to urbanization and to erosion or runoff. This would paint a fuller picture of how one vital resource, topsoil, is faring during this time of farmland reduction, coupled with greater expectations for what each acre can produce.

Visiting Scholar, Nicholas School of the Environment

Duke University

Forks in the Road to Sustainable Chemistry

In “A Road Map for Sustainable Chemistry” (Issues, Winter 2024), Joel Tickner and Ben Dunham convincingly argue that coordinated government action involving all federal funding agencies is needed for realizing the goal of a sustainable chemical industry that eliminates adverse impacts on the environment and human health. But any road map should be examined to make sure it heads us in the right direction.

At the outset, it is important to clear up misinterpretations about the definition of sustainable chemistry stated in the Sustainable Chemistry Report the authors examine. They opine that the definition is “too permissive in failing to exclude activities that create risks to human health and environment.” On the contrary, the definition is quite clear in including only processes and products that “do not adversely impact human health and the environment” across the overall life cycle. Further, the report’s conclusions align with the United Nations Sustainable Development Goals, against which progress and impacts of sustainable chemistry and technologies are often assessed.

The nation’s planned transition in the energy sector toward net-zero emissions of carbon dioxide, spurred by the passage of several congressional acts during the Biden administration, is likely to cause major shifts in many industry sectors. While the exact nature of these shifts and their ramifications are difficult to predict, it is nevertheless vital to consider them in road-mapping efforts aimed at an effective transition to a sustainable chemical industry. Although some of these shifts could be detrimental to one industry sector, they could give rise to entirely new and sustainable industry sectors.

As an example, as consumers increasingly switch to electric cars, the government-subsidized bioethanol industry will face challenges as demand for ethanol as a fuel additive for combustion-engine vehicles erodes. But bioethanol may be repurposed as a renewable chemical feedstock to make a variety of platform chemicals with significantly more value than ethanol commands as a fuel. Agricultural leftovers such as corn stover and corn cobs can also be harnessed as alternate feedstocks to make renewable chemicals and materials, further boosting ethanol biorefinery economics. Such biorefineries can spur thriving agro-based economies.

Although some of these shifts could be detrimental to one industry sector, they could give rise to entirely new and sustainable industry sectors.

Another major development in decarbonizing the energy sector involves the government’s recent investments in hydrogen hubs. The hydrogen produced from carbon-free energy sources is expected to decarbonize fertilizer production, now a significant source of carbon emissions. The hydrogen can also find other outlets, including its reaction with carbon dioxide captured in removal operations to produce green methanol as either a fuel or a platform chemical. Carbon-free oxygen, a byproduct of electrolytic hydrogen production in these hubs, can be a valuable reagent for processing biogenic feedstocks to make renewable chemicals.
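For reference, the green methanol route mentioned here is the catalytic hydrogenation of captured carbon dioxide, CO2 + 3 H2 → CH3OH + H2O, which consumes three molecules of hydrogen for every molecule of carbon dioxide converted.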

Another untapped and copious source of chemical feedstock is end-of-use plastics. For example, technologies are being developed to convert used polyolefin plastics into a hydrocarbon crude that can be processed as a chemical feedstock in conventional refineries. In other words, the capital assets in existing petroleum refineries may be repurposed to process recycled carbon sources into chemical feedstocks, thereby converting them into circular refineries. There could well be other paradigm-shifting possibilities for a sustainable chemical industry that could emerge from a carefully coordinated road-mapping strategy that involves essential stakeholders across the chemical value chain.

Dan F. Servey Distinguished Professor, Department of Chemical and Petroleum Engineering

Director, Center for Environmentally Beneficial Catalysis

University of Kansas

Joel Tickner and Ben Dunham describe the current opportunity “to better coordinate federal and private sector investments in sustainable chemistry research and development, commercialization, and scaling” through the forthcoming federal strategic plan to advance sustainable chemistry. They highlight the unfortunate separation in many federal efforts between “decarbonization” of the chemical industry (reducing and eliminating the sector’s massive contribution to climate change) and “detoxification” (ending the harm to people and the environment caused by the industry’s reliance on toxic chemistries).

The impacts and opportunities at stake are no small matters. The petrochemical industry produces almost one-fifth of industrial carbon dioxide emissions globally and is on track to account for one-third of the growth in oil demand by 2030 and almost half by 2050. Health, social, and economic costs due to chemical exposures worldwide may already exceed 10% of global gross domestic product.

As Tickner and Dunham note, transformative change is urgently needed, and will not result from voluntary industry measures or greenwashing efforts. So-called chemical recycling (which is simply a fancy name for incineration of plastic waste, with all the toxic emissions and climate harm that implies), and other false solutions (such as carbon capture and sequestration) that don’t change the underlying toxic chemistry and production models of the US chemical industry will fail to deliver real change and a sustainable industry that isn’t poisoning people and the planet.

Transformative change is urgently needed, and will not result from voluntary industry measures or greenwashing efforts.

Government and commercial efforts to advance sustainable chemistry must be guided by and accountable to the needs and priorities of the most impacted communities and workers, and measured against the vision and platform contained in The Louisville Charter for Safer Chemicals: A Platform for Creating a Safe and Healthy Environment Through Innovation.

The 125-plus diverse organizations that have endorsed the Louisville Charter would agree with Tickner and Dunham. As the Charter states: “Fundamental reform is possible. We can protect children, workers, communities, and the environment. We can shift market and government actions to phase out fossil fuels and the most dangerous chemicals. We can spur the economy by developing safer alternatives. By investing in safer chemicals, we will protect peoples’ health and create healthy, sustainable jobs.”

Among other essential policy directions to advance sustainable chemistry and transform the chemical industry so that it is no longer a source of harm, the Charter calls for:

  • preventing disproportionate and cumulative impacts that harm environmental justice communities;
  • addressing the significant impacts of chemical production and use on climate change;
  • acting quickly on early warnings of harm;
  • taking urgent action to stop the harms occurring now, and to protect and restore impacted communities;
  • ensuring that the public and workers have full rights to know, participate, and decide;
  • ending subsidies for toxic, polluting industries, and replacing them with incentives for safe and sustainable production; and
  • building an equitable and health-based economy.

Federal leadership on sustainable chemistry that advances the vision and policy recommendations of the Louisville Charter would be a welcome addition to ongoing efforts for chemical industry transformation.

Program Director

Coming Clean

Joel Tickner and Ben Dunham offer welcome and long-overdue support for sustainable chemistry, but the article only scratches the surface of societal concerns we should have about toxicants that result from exposure to fossil fuel emissions, to plastics and other products derived from petrochemicals, and to toxic molds or algal blooms. Their proposals continue to rely on the current classical dose-response approach to regulating chemical exposures. But contemporary governmental standards and industrial policies built on this model are inadequate for protecting us from a variety of compounds that can disrupt the endocrine system or act epigenetically to modify specific genes or gene-associated proteins. And critically, present practices ignore a mechanism of toxicity called toxicant-induced loss of tolerance (TILT), which Claudia Miller and I first described a quarter-century ago.

TILT involves the alteration, likely epigenetically, of the immune system’s “first responders”—mast cells. Mast cells evolved 500 million years ago to protect the internal milieu from the external chemical environment. In contrast, our exposures to fossil fuels are new since the Industrial Revolution, a mere 300 years ago. Once mast cells have been altered and sensitized by substances foreign to our bodies, tiny quantities (parts per billion or less) of formerly tolerated chemicals, foods, and drugs trigger their degranulation, resulting in multisystem symptoms. TILT and mast cell sensitization offer an expanded understanding of toxicity occurring at far lower levels than those arrived at by customary dose-response estimates (usually in the parts per million range). Evidence is emerging that TILT modifications of mast cells explain seemingly unrelated health conditions such as autism, attention deficit hyperactivity disorder (ADHD), chronic fatigue syndrome, and long COVID, as well as chronic symptoms resulting from exposure to toxic molds, burn pits, breast implants, volatile organic compounds (VOCs) in indoor air, and pesticides.

Most concerning is evidence from a recent peer-reviewed study suggesting transgenerational transmission of epigenetic alterations in parents’ mast cells, which may lead to previously unexplained conditions such as autism and ADHD in their children and future generations. The two-stage TILT mechanism is illustrated in the figure below, drawn from the study cited. We cannot hope to make chemistry sustainable until we recognize the results of this and other recent studies, including by our group, that go beyond classical dose-response models of harm and acknowledge the complexity of multistep causation.

Professor of Technology and Policy

Director, Technology and Law Program

Massachusetts Institute of Technology

What the Energy Transition Means for Jobs

In “When the Energy Transition Comes to Town” (Issues, Fall 2023), Jillian E. Miles, Christophe Combemale, and Valerie J. Karplus highlight critical challenges to transitioning US fossil fuel workers to green jobs. Improved data on workers’ skills, engagement with fossil fuel communities, and increasingly sophisticated models for labor outcomes are each critical steps to inform prudent policy. However, while policymakers and researchers focus on workers’ skills, the larger issue is that fossil fuel communities will not experience green job growth without significant policy intervention.

A recent article I coauthored in Nature Communications looked at data from the US Bureau of Labor Statistics and the US Census Bureau to track the skills of fossil fuel workers and how they have moved between industries and states historically. The study found that fossil fuel workers’ skills are actually well-matched to green industry jobs, and that skill matching has been an important factor in their past career mobility. However, the bigger problem is that fossil fuel workers are historically unwilling to relocate to the regions where green jobs will emerge over the next decade. Policy interventions, such as the Inflation Reduction Act, could help by incentivizing job growth in fossil fuel communities, but success requires that policy be informed by the people who live in those communities.
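To make the idea of skill matching concrete, here is a minimal sketch of one common way to score the overlap between two occupations’ skill profiles: cosine similarity over skill-intensity vectors. The occupations and skill categories below are hypothetical placeholders, not the data or method of the study cited.

    import numpy as np

    def cosine_similarity(a, b):
        """Cosine similarity between two skill-intensity vectors (1.0 = identical mix)."""
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical skill intensities over [mechanical, electrical, safety, data analysis].
    coal_mining_technician = [0.8, 0.3, 0.9, 0.1]
    wind_turbine_technician = [0.7, 0.6, 0.8, 0.2]
    software_developer = [0.1, 0.3, 0.2, 0.9]

    print(cosine_similarity(coal_mining_technician, wind_turbine_technician))  # ~0.96, high overlap
    print(cosine_similarity(coal_mining_technician, software_developer))       # ~0.36, low overlap

A high score suggests retraining costs may be modest; the geographic mismatch described above is a separate hurdle that no amount of skill overlap resolves.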

While policymakers and researchers focus on workers’ skills, the larger issue is that fossil fuel communities will not experience green job growth without significant policy intervention.

Even with this large-scale federal data, it’s still unclear what the precise demands of emerging green jobs will be. For example, will emerging green jobs be stable long-term career opportunities? Or will they be temporary jobs that emerge to support an initial wave of green infrastructure but then fade once the infrastructure is established? We need better models of skills and green industry labor demands to distinguish between these two possibilities.

It’s also hard to describe the diversity of workers in “fossil fuel” occupations. The blanket term encompasses coal mining, natural gas extraction, and offshore drilling, each of which varies in the skills and spatial mobility required of workers. Coal miners typically live near the mine, while offshore drilling workers are on-site for weeks at a time before returning to homes anywhere in the country.

Federal data may represent the entire US economy, but new alternative data offer more nuanced insights into real-time employer demands and workers’ career trajectories. Recent studies of technology and the future of work utilize job postings and workers’ resumes from online job platforms, such as Indeed and LinkedIn. Job postings enable employers to list preferred skills as they shift to reflect economic dynamics—even conveying shifting skill demands for the same job title over time. While federal labor data represent a population at a given time, resumes enable the tracking of individuals over their careers, detailing things such as career mobility between industries, spatial mobility between labor markets, and seniority and tenure at each job. Although these data sources may fail to represent the whole population of fossil fuel workers, they have the potential to complement traditional federal data so that we can pinpoint the exact workers and communities that need policy interventions.

Assistant Professor

Department of Informatics and Networked Systems

University of Pittsburgh

A Focus on Diffusion Capacity

In “No, We Don’t Need Another ARPA” (Issues, Fall 2023), John Paschkewitz and Dan Patt argue that the current US innovation ecosystem does not lack for use-inspired research organizations and should instead focus its attention on diffusing innovations with potential to amplify the nation’s competitive advantage. Diffusion encompasses multiple concepts, including broad consumption of innovation; diverse career trajectories for innovators; multidisciplinary collaboration among researchers; improved technology transition; and modular technology “stacks” that enable components to be invented, developed, and used interoperably to diversify supply chains and reduce barriers to entry for new solutions.

Arguably, Advanced Research Projects Agencies (ARPAs) are uniquely equipped to enable many aspects of diffusion. They currently play an important role in promoting multidisciplinary collaborations and in creating new paths for technology transition by investing at the seam between curiosity-driven research and product development. They could be the unique organizations that establish the needed strategic frameworks for modular technology stacks, both by helping define the frameworks and by investing in building and maintaining them.

Perhaps a gap, however, is that ARPAs were initially confined to investing in technologies that aid the mission of the community they support. The target “use” for the use-inspired research was military or intelligence missions, and any broader dual-use impact was secondary. But today the United States faces unique challenges in techno-economic security and must act quickly to fortify its global leadership in critical emerging technologies (CETs), including semiconductors, quantum, advanced telecommunications, artificial intelligence, and biotechnology. We need ARPA-like entities to advance CETs independent of a particular federal mission.

Arguably, ARPAs are uniquely equipped to enable many aspects of diffusion.

The CHIPS and Science Act addresses this issue in a fragmented way. The new ARPAs being established in health and transportation have some of these attributes, but lack direct alignment with CETs. In semiconductors, the National Semiconductor Technology Center could tackle this role. In quantum, the National Quantum Initiative has the needed cross-agency infrastructure and during its second five-year authorization seeks to expand to more applied research. The Public Wireless Supply Chain Innovation Fund is advancing 5G communications by investing in Open Radio Access Network technology that allows interoperation between cellular network equipment provided by different vendors. However, both artificial intelligence and biotechnology remain homeless. Much attention is focused on the National Institute of Standards and Technology to lead these areas, but it lacks the essential funding and extramural research infrastructure of an ARPA.

The CHIPS and Science Act also created the Directorate for Technology, Innovation, and Partnerships (TIP) at the National Science Foundation, with the explicit mission of investing in CETs through its Regional Innovation Engines program, among others. The act additionally established the Tech Hubs program within the Economic Development Administration. Both the Engines and Tech Hubs programs lean heavily into the notion of place-based innovation, where regions of the nation will select their best technology area and build an ecosystem of universities, start-ups, incubators, accelerators, venture investors, and state economic development agencies. While this structure may address aspects of diffusion, it lacks the efficiency of a more directed, use-inspired ARPA.

Arguably the missing piece of the puzzle is an ARPA for critical emerging technologies that can undertake the strategic planning necessary to more deliberately advance US techno-economic needs. Other nations have applied research agencies that strategically execute the functions that the United States distributes across the Economic Development Administration, the TIP directorate, various ARPAs, and state-level economic development and technology agencies. This could be a new agency within the Department of Commerce; a new function executed by TIP within its existing mission; or a shift within the existing ARPAs to ensure that their mission includes investing in CETs, not only because they are dual-use technologies that advance their parent department’s mission but also to advance US techno-economic competitiveness.

Chief Technology Officer, MITRE

Senior Vice President and General Manager of MITRE Labs

Cofounder of five venture-backed start-ups in cybersecurity, telecommunications, space, and artificial intelligence

John Paschkewitz and Dan Patt make a strong argument that the biggest bottleneck in the US innovation ecosystem is in technology “diffusion capacity” rather than new ideas out of labs; that there are several promising approaches to solving this problem; and that the nation should implement these solutions. The implicit argument is that another ARPA isn’t needed because the model was created in the world of the 1950s and ’60s where diffusion was all but guaranteed by America’s strong manufacturing ecosystem, and as a result is not well-suited to address modern diffusion bottlenecks.

In my view, however, the need to face problems that the ARPA model wasn’t originally designed for doesn’t necessarily mean that we don’t need another ARPA, for three reasons:

1. While it’s not as common as it could be, there are examples of ARPAs doing great diffusion work. The authors highlight the Semiconductor Manufacturing Technology consortium as a positive example of what we should be doing—but SEMATECH was in fact spearheaded by DARPA, the progenitor of the ARPA model.

2. New ARPAs can modify the model to help diffusion in their specific domains. ARPA-E in the energy sector has “tech to market advisors” who work alongside program directors to strategize how technology will get out into the world. DARPA has created a transition team.

The need to face problems that the ARPA model wasn’t originally designed for doesn’t necessarily mean that we don’t need another ARPA.

3. At the core, the powerful thing about ARPAs is that they give program managers the freedom and power to take whatever actions they need to accomplish the mission of creating radically new technologies and getting them out into the world. There is no inherent reason that program managers can’t focus more on manufacturability, partnerships with large organizations, tight coordination to build systems, and other actions that can enable diffusion in today’s evolving world.

Still, it may be true that we don’t need another government ARPA. Over time, the way that DARPA and its cousins do things has been increasingly codified: they are under constant scrutiny from legislators, they can write only specific kinds of contracts, they must follow set procedures regarding solicitations and applications, and they may show a bias toward established organizations such as universities or prime contractors as performers. These bureaucratic restrictions will make it hard for government ARPAs to make the creative “institutional moves” necessary to address current and future ecosystem problems.

Government ARPAs run into a fundamental tension: taxpayers in a democracy want the government to spend money responsibly. However, creating new technology and getting it out into the world often requires acting in ways that, at the time, seem a bit irrational. There is no reason an ARPA necessarily needs to be run by the government. Private ARPAs such as Actuate and Speculative Technologies may offer a way for the ARPA model to address technology diffusion problems of the twenty-first century.

CEO, Speculative Technologies

John Paschkewitz and Dan Patt make some fantastic points about America’s innovation ecosystem. I might suggest, however, a different framing for the article. It could instead have been called “Tough Tech is… Tough; Let’s Make it Easier.” As the authors note, America’s lab-to-market continuum in fields such as biotech, medical devices, and software isn’t perfect. But it is far from broken. In fact, it is the envy of the rest of the world.

Still, it is undeniably true that bringing innovations in materials science, climate, and information technology hardware from the lab to the market is deeply challenging. These innovations are often extremely capital intensive; they take many years to bring to market; venture-backable entrepreneurs with relevant experience are scarce; many innovations are components of a larger system, not stand-alone products; massive investments are required for manufacturing and scale-up; and margins are often thin for commercialized products. For these and various other reasons, many great innovations fail to reach the market.

Bringing innovations in materials science, climate, and information technology hardware from the lab to the market is deeply challenging.

The solutions that Paschkewitz and Patt suggest are excellent—in particular, ensuring that fundamental research is happening in modular components and developing alternative financing arrangements such as “capital stacks” for late-stage development. However, I don’t believe they are the only options, nor are they sufficient on their own to close the gap.

More support and reengineered processes are needed across the entire technology commercialization continuum: from funding for research labs, to support for tech transfer, to securing intellectual property rights, to accessing industry data sets and prototyping equipment for validating the commercial viability of products, to entrepreneurial education and incentives for scientists, to streamlined start-up deal term negotiations, to expanding market-pull mechanisms, and more. This will require concerted efforts across federal agencies focused on commercializing the nation’s amazing scientific innovations. Modularity and capital are part of the solution, but not all of it.

The good news is that we are at the start of a breathtaking experiment centered on investing beyond (but not in lieu of) the curiosity-driven research that has been the country’s mainstay for more than 70 years. The federal government has launched a variety of bold efforts to re-envision how its agencies promote innovation and commercialization that will generate good jobs, tax revenues, and exports across the country (not just in the existing start-up hubs). Notable efforts include the National Science Foundation’s new Directorate for Technology, Innovation and Partnerships and its Regional Innovation Engines program, the National Institutes of Health’s ARPA-H, the Commerce Department’s National Semiconductor Technology Center and its Tech Hubs program, and the Department of Treasury’s State Small Business Credit Initiative. Foundations are doing their part as well, including Schmidt Futures (where I am an Innovation Fellow working on some of these topics), the Lemelson Foundation, the Walmart Family Foundation, and many more.

As a final note, let me propose that the authors may have an outdated view of the role that US research universities play in this puzzle. Over the past decade, there has been a near-total reinvention of support for innovation and entrepreneurship. At Columbia alone, we offer proof-of-concept funding for promising projects; dozens of entrepreneurship classes; coaching and mentorship from serial entrepreneurs, industry executives, and venture capitalists; matching programs for venture-backable entrepreneurs; support for entrepreneurs wanting to apply to federal assistance programs; connections to venture capitalists for emerging start-ups; access for start-ups to core facilities; and so much more. Such efforts here and elsewhere hopefully will lead to even more successes in years to come.

Senior Vice President for Applied Innovation and Industry Partnerships, Columbia University

Executive Director, Columbia Technology Ventures

Embracing Intelligible Failure

In “How I Learned to Stop Worrying and Love Intelligible Failure” (Issues, Fall 2023), Adam Russell asks the important and provocative questions: With the growth of “ARPA-everything,” what makes the model succeed, and when and why doesn’t it? What is the secret of success for a new ARPA? Is it the mission? Is it the money? Is it the people? Is it the sponsorship? Or is it just dumb luck and then a virtuous cycle of building on early success?

I have had the privilege of a six-year term at the Defense Advanced Research Projects Agency (DARPA), the forerunner of these new efforts, along with a couple of years helping to launch the Department of Homeland Security’s HSARPA and then 15 years at the Bill & Melinda Gates Foundation running and partnering with international development-focused innovation programs. In the ARPA world, I have joined ongoing success, contributed to failure, and then helped launch new successful ARPA-like organizations in the international development domain.

During my time at the Gates Foundation, we frequently asked and explored with partners the question, What does it take for an organization to be truly good at identifying and nurturing new innovation? To answer, it is necessary to separate the process of finding, funding, and managing new innovations through proof-of-concept from the equally challenging task of taking a partially proven innovative new concept or product through development and implementation to achieve impact at scale. I tend to believe that Russell’s “aliens” (described in his Prediction 6 about “Alienabling”) are required for the early innovation management tasks, but I also believe that they are seldom well suited to the tasks of development and scaling. Experts are good at avoiding mistakes, but it is a different challenge to take a risk that is likely to fail and is in your own field of expertise, where you “should have known better” and where failure might be seen as a more direct reflection of your skills.

What does it take for an organization to be truly good at identifying and nurturing new innovation?

Adding my own predictions to the author’s, here are some other things that it takes for an organization to be good at innovation. Some are obvious, such as having sufficient human capital and financial resources, along with operational flexibility. Others are more nuanced, including:

  • An appetite for risk and a tolerance for failure.
  • Patience. Having a willingness to bet on long timelines (and possibly the ability to celebrate success that was not intended and that you do not directly benefit from).
  • Being involved with a network that provides deeper understanding of problems that need to be and are worth solving, and having an understanding of the landscape of potential solutions.
  • Recognition as a trusted brand that attracts new talent, is valued as a partner in creating unusual new collaborations, and is known for careful handling of confidential information.
  • Engaged and effective problem-solving in managing projects, and especially nimble oversight in managing the managers at an ARPA (whether that be congressional and administrative oversight in government or donor and board oversight in philanthropy).
  • Parent organization engagement downstream in “making markets,” or adding a “prize element” for success (and to accelerate impact).

To a large degree, these organizational attributes align well with many of Russell’s predictions. But I will make one more prediction that is perhaps less welcome. A bit like Tolstoy’s view of happy and unhappy families in Anna Karenina, there are many ways for a new ARPA to fail, but “happy ARPAs” likely share—and need—all of the attributes listed above.

Principal, Bermuda Associates

Adam Russell is correct: studying the operations of groups built on the Advanced Research Projects Agency model, applying the lessons learned, and enshrining intelligible failure paradigms could absolutely improve outcomes and ensure that ARPAs stay on track. But not all of the author’s predictions require study to know that they need to be addressed directly. For example, efforts from entrenched external interests to steer ARPA agencies can corrode culture and, ultimately, impact. We encountered this when my colleague Geoff Ling and I proposed the creation of the health-focused ARPA-H. Disease advocacy groups and many universities refused to support creation of the agency unless language was inserted to steer it toward their interests. Indeed, the Biden administration has been actively pushing ARPA-H to invest heavily in cancer projects rather than keeping its hands off. Congress is likely to fall into the same trap.

But there is a larger point as well: if you take a fifty-thousand-foot view of the research enterprise, you can easily see that the principle Russell is espousing—that we should study how ARPAs operate—should also be more aggressively applied to all agencies funding research and development.

Efforts from entrenched external interests to steer ARPA agencies can corrode culture and, ultimately, impact.

There is another element of risk that was out of scope for Russell’s article, and that rarely gets discussed: commercialization. DARPA, developed to serve the Department of Defense, and IARPA, developed to serve the government’s intelligence agencies, have built-in federal customers—yet they still encounter commercialization challenges. Newer ARPAs such as ARPA-H and the energy-focused ARPA-E are in a more difficult position because they do not necessarily have a means to ensure that the technologies they are supporting can make it to market. Again, this is also true for all R&D agencies and is the elephant in the room for most technology developers and funders.

While there have been more recent efforts to boost translation and commercialization of technologies developed with federal funding—through, for example, the National Science Foundation’s Directorate for Technology, Innovation, and Partnerships—there is a real need to measure and de-risk commercialization across the R&D enterprise in a more concerted and outcomes-focused manner. Frankly, one of the wisest investments the government could make with its R&D dollars would be dedicating some of them toward commercialization of small and mid-cap companies that are developing products that would benefit society but are still too risky to attract private capital investors.

The government is well-positioned to shoulder risk through the entire innovation cycle, from R&D through commercialization. Otherwise, nascent technological advances are liable to die before making it across the infamous “valley of death.” Federal support would ensure that the innovation enterprise is not subject to the economy or whims of private capital. The challenge is that R&D agencies are not staffed with people who understand business risk, and thus initiatives such as the Small Business Innovation Research program are often managed by people with no private-sector experience and are so cumbersome and limiting that many companies simply do not bother applying for funding. There are myriad reasons why this is the case, but it is definitely worth establishing an entity designed to understand and take calculated commercialization risk … intelligibly.

President

Science Advisors

As Adam Russell insightfully suggests, the success of the Advanced Research Projects Agency model hinges not only on technical prowess but also on a less tangible element: the ability to fail. No technological challenge worth taking will be guaranteed to work. As Russell points out, having too high a success rate should indicate that the particular agency is not orienting itself toward ambitious “ARPA-hard problems.”

But failing is inherently fraught when spending taxpayer dollars. Politicians have been quick to publicly kneecap science funding agencies for high-profile failures. It is notable that two of the most successful agencies in this mold have come from the national security community: the original Defense Advanced Research Projects Agency (DARPA) and the Intelligence Advanced Research Projects Activity (IARPA). The Pentagon is famously tight-lipped about its failures, which provides some shelter from the political winds for an ambitious, risk-taking research and development enterprise. Fewer critics will readily pounce on a “shrimp on a treadmill” story when four-star generals say it is an important area of research for national security.

Having too high a success rate should indicate that the particular agency is not orienting itself toward ambitious “ARPA-hard problems.”

There are reasons to be concerned about the political sustainability of frequent failure in ARPAs, especially as they move from a vehicle for defense-adjacent research into “normal” R&D areas such as health care, energy, agriculture, and infrastructure. Traditional federal funders already live in fear of selecting the next “Solyndra.” And although Tesla was a success story from the same federal loan portfolio, the US political system has a way of making the failures loom larger than the successes. I’ve personally heard federal funders cite the political maelstrom following the failed Solyndra solar panel company as a reason to be more conservative in their grantmaking and program selection. And it is difficult to put the breakthroughs we neglected to fund on posterboard—missed opportunities don’t motivate political crusades.

As a society and a political system, we need to develop a better set of antibodies to the opportunism that leaps on each failure and thereby smothers success. We need the political will to fail. Finding stories of success will help, yes, but at a deeper level we need to valorize stories of intelligible failure. One idea might be to launch a prestigious award for program managers who took a high-upside bet that nonetheless failed, and give them a public platform to discuss why the opportunity was worth taking a shot on and what they learned from the process.

None of this is to say that federal science and technology funders should be immune from critique. But that criticism should be grounded in precisely the kind of empiricism and desire for iterative improvement that Russell’s article embodies. In the effort to avoid critique, we can sometimes risk turning the ARPA model into a cargo cult phenomenon, copied and pasted wholesale without thoughtful consideration of the appropriateness of each piece. It was a refreshing change of pace, then, to see that Russell, when starting up the health-oriented ARPA-H, added several new questions, centered on technological diffusion and misuse, to the famous Heilmeier Catechism questions that a proposed ARPA project must satisfy to be funded. Giving the ARPA model the room to change, grow, and fail is perhaps the most important lesson of all.

Cofounder and co-CEO

Institute for Progress

A key obsession for many scientists and policymakers is how to fund more “high-risk” research—the kind for which the Defense Advanced Research Projects Agency (DARPA) is justifiably famous. There are no fewer than four lines of high-risk research awards at the National Institutes of Health, for example, and many agencies have launched their own version of an ARPA for [fill-in-the-blank].

Despite all of this interest in high-risk research, it is puzzling that “there is no consensus on what constitutes risk in science nor how it should be measured,” to quote Pierre Azoulay, an MIT professor who studies innovation and entrepreneurship. Similarly, the economics scholars Chiara Franzoni and Paula Stephan have reported in a paper for the National Bureau of Economic Research that the discussion about high-risk research “often occurs in the absence of well-defined and developed concepts of what risk and uncertainty mean in science.” As a result, meta-scientists who study this issue often use proxies that are not necessarily measures of risk at all (e.g., rates of “disruption” in citation patterns).

I suggest looking to terminology that investors use to disaggregate various forms of risk:

Execution risk is the risk that a given team won’t be able to complete a project due to incompetence, lack of skill, infighting, or any number of reasons for dysfunctionality. ARPA or not, no science funding agency should try to fund research with high execution risk.

Despite all of this interest in high-risk research, it is puzzling that “there is no consensus on what constitutes risk in science nor how it should be measured,” to quote Pierre Azoulay.

Market risk is the risk that even if a project works, the rest of the market (or in this case, other scientists) won’t think that it is worthwhile or useful. Notably, market risk isn’t a static and unchanging attribute of a given line of research. The curious genome sequences found in a tiny marine organism, reported in a 1993 paper and later named CRISPR, had a lot of market risk at the time (hardly anyone cared about the result when first published), but the market risk of this type of research wildly changed as CRISPR’s potential as a precise gene-editing tool became known. In other words, the reward to CRISPR research went up and the market risk went down (the opposite of what one would expect if risk and reward are positively correlated).

Technical risk is the risk that a project is not technically possible at the time. For example, in 1940, a proposal to decipher the structure of DNA would have had a high degree of technical risk. What makes the ARPA model distinct, I would argue, is selecting research programs that could be highly rewarding (and therefore have little market risk) and are at the frontier of a difficult problem (and therefore have substantial technical risk, but not so much as to be impossible).

Adam Russell’s thoughtful and inventive article points us in the right direction by arguing that, above all, we need to make research failures more intelligible. (I expect to see this and some of his other terms on future SAT questions!) After all, one of the key problems with any attempt to fund high-risk research is that when a research project “fails” (as many do), we often don’t know or even have the vocabulary to discuss whether it was because of poor execution, technical challenges, or any other source of risk. Nor, as Russell points out, do we ask peer reviewers and program managers to estimate the probability of failure, although we could easily do so (including disaggregated by various types of risk). As Russell says, ARPAs (any funding agency, for that matter) could improve only if they put more effort into actually enabling the right kind of risk-taking while learning from intelligible failures. More metascience could point the way forward here.
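As a rough illustration of what such disaggregated estimates could look like, the sketch below combines separate execution, market, and technical risk guesses into an overall probability of failure, under the simplifying (and debatable) assumption that the three are independent; the numbers are invented for demonstration.

    def overall_failure_probability(p_execution, p_market, p_technical):
        """Probability that at least one type of risk materializes, assuming independence."""
        p_success = (1 - p_execution) * (1 - p_market) * (1 - p_technical)
        return 1 - p_success

    # Invented reviewer estimates for a hypothetical ARPA-style program.
    print(overall_failure_probability(p_execution=0.05, p_market=0.10, p_technical=0.60))
    # -> 0.658; most of the risk is technical, which is arguably where an ARPA wants it.

Even crude numbers like these would let a funder ask, after the fact, which kind of risk actually materialized, turning a vague "it failed" into an intelligible failure.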

Executive Director, Good Science Project

Former Vice President of Research, Arnold Ventures

Adam Russell discusses the challenge of setting up the nascent Advanced Research Projects Agency for Health (ARPA-H), meant to transform health innovation. Being charged with building an organization that builds the future would make anyone gulp. Undeterred, Russell drank from a firehose of opinion on what makes an ARPA tick, and distilled from it the concept of intelligible failure.

As Russell points out, ARPA programs fail—a lot. In fact, failure is expected, and demonstrates that the agency is being sufficiently ambitious in its goals. ARPA-H leadership has explicitly stated that it intends to pursue projects “that cannot otherwise be pursued within the health funding ecosystem due to the nature of the technical risk”—in other words, projects with revolutionary or unconventional approaches that other agencies may avoid as too likely to fail. Failure is not usually a winning strategy. But paired with this willingness to fail, Russell says, is the mindset that “a technical failure is different from a mistake.”

By building feedback loops, technical failures can ultimately turn into insight regarding which approaches truly work. We absolutely agree that intelligible technical failure is crucial to any ARPA’s success, and find Russell’s description of it brilliantly apt. However, we believe Russell could have added one more note about failure. Aside from technical failure, ARPAs face other types of failure as they pursue cutting-edge technology. Failures stemming from unanticipated accidents, misuse, or misperception also need to be worried about.

By building feedback loops, technical failures can ultimately turn into insight regarding which approaches truly work.

The history of DARPA technologies demonstrates the “dual use” nature of transformative innovation, which can unlock useful new applications as well as unintentional harmful consequences. DARPA introduced Agent Orange as a defoliation compound during the Vietnam War, despite warnings of its health harms. This is the type of failure we believe any modern ARPA would wish to avoid. Harmful accidents and misuses are best proactively anticipated and avoided, rather than learned from only after the disaster has occurred.

In fact, we believe the most ambitious technologies often prove the safest ones: we should aim to create the health equivalent of the safe and comfortable passenger jet, not simply a spartan aircraft prone to failure. To do this, ARPAs should pursue both technical intelligible failure and catastrophobia: an anticipation of, and commitment to avoiding, accidental and misuse failures of their technologies.

With regard to ARPA-H in particular, the agency has signaled its awareness of misuse and misperception risks of its technologies, and has solicited outside input into structures, strategies, and approaches to mitigating these risks. We hope consideration of accidental risks will also be included. With health technologies in particular, useful applications can be a mere step away from harmful outcomes. Technicians developing x-ray technology initially used their bare hands to calibrate the machines, resulting in cancers requiring amputation. Now, a modern hospital is incomplete without radiographic imaging tools. ARPA-H should lead the world in both transformative innovation and pioneering safety.

Resident Physician, Stanford University School of Medicine

Executive Director, Blueprint Biosecurity

FLOE: A Climate of Risk

STEPHEN TALASNIK, Glacial Mapping, 2023; Digitally printed vinyl wall print, 10’ x 14’ (h x w)

Imagination can be a fundamental tool for driving change. Through creative narratives, we can individually and collectively imagine a better future and, potentially, take actions to move toward it. For instance, science fiction writers have, at times, seemed to predict new technologies or situations in society—raising the question of whether narratives can create empathy around an issue and help us imagine and work toward a desirable outcome.

Philadelphia-born artist Stephen Talasnik takes this question of narratives seriously. He is a sculptor and installation artist whose exhibition, FLOE: A Climate of Risk, is on display at the Museum for Art in Wood in Philadelphia, Pennsylvania, from November 3, 2023, through February 18, 2024. Talasnik’s work is informed by time, place, and the complex relationship between ideas that form a kind of “functional fiction.” Through FLOE, Talasnik tells the story of a fictitious shipwreck that was carried to Philadelphia by the glacier in which it was buried. As global temperatures warmed, the glacier melted and surrendered the ship’s remains, which were discovered by mischievous local children. The archaeological remains and reconstructions are presented in this exhibition, alongside a sculptural representation of the ice floe that carried the ship to its final resting place. Talasnik uses architectural designs to create intricate wood structures from treated basswood. By building a large wooden model to represent the glacier, the artist evokes a shadowy memory of the iceberg and reminds visitors of the sublime power of nature and its constant, often destructive, search for equilibrium.

STEPHEN TALASNIK, A Climate of Risk – Debris Field (detail)

 “FLOE emerged from the imagination of Stephen Talasnik, an artist known worldwide for his hand-built structures installed in natural settings,” writes Jennifer-Navva Milliken, executive director and chief curator at the Museum for Art in Wood. “The exhibition is based on a story created by the artist but touches on the realities of climate change, a problem that exposes the vulnerability of the world’s most defenseless populations, including the impoverished, houseless, and stateless. Science helps us understand the impact through data, but the impact to humanity is harder to quantify. Stephen’s work, through his complex storytelling and organic, fragmented sculptures, helps us understand this loss on the human scale.”

For more information about the exhibition and a mobile visitors’ guide, visit the Museum for Art in Wood website.

STEPHEN TALASNIK, Glacier, 2023. Pine stick infrastructure with bamboo flat reed, 12 ft tall with a footprint of 500 sq ft (approx.)
STEPHEN TALASNIK, Leaning Globe, 1998 – 2023; Painted basswood with metallic pigment, 28 x 40 x 22 inches (h x w x d)
STEPHEN TALASNIK, Tunneling, 2007 – 2008; Wood in resin, 4 x 8 x 12 inches

AI-Assisted Biodesign

AMY KARLE, BioAI Mycelium Grown Into the Form of Insulators, 2023

Amy Karle is a contemporary artist who uses artificial intelligence as both a medium and a subject in her work. Karle has been deeply engaged with AI, artificial neural networking, machine learning, and generative design since 2015. She poses critical questions about AI, illuminates future visions, and encourages us to actively shape the future we desire.

One of Karle’s projects focuses on how AI can help design and grow biomaterials and biosubstrates, including guiding the growth of mycelium-based materials. Her approach uses AI to identify, design, and develop diverse bioengineered and bioinspired structures and forms and to refine and improve the structure of biomaterials for greater functionality and sustainability. Another project is inspired by the seductive form of corals. Karle’s speculative biomimetic corals leverage AI-assisted biodesign in conjunction with what she terms “computational ecology” to capture, transport, store, and use carbon dioxide. Her goal with this series is to help mitigate carbon dioxide emissions from industrial sources such as power plants and refineries and to clean up highly polluted areas.  

AMY KARLE, BioAI-Formed Mycelium, 2023

AMY KARLE, BioAI-Formed Mycelium, 2023

Rethinking Engineering Education

We applaud Idalis Villanueva Alarcón’s essay, “How to Build Engineers for Life” (Issues, Fall 2023). As the leaders of an organization that has for 36 years sought to inspire, support, and sustain the next generation of professionals in science, technology, engineering, mathematics, and medicine (STEMM), we support her desire to improve the content and delivery of engineering education. One of us (Fortenberry) has previously commented in Issues (September 13, 2021) on the challenges in this regard.

We agree with her observation that education should place an emphasis on learning how to learn in order to support lifelong learning and an individual’s ability for continual adaptation and reinvention. We believe that in an increasingly technological society there is a need for professionals trained in STEMM to work in a variety of fields. Therefore, there is a need for a significant increase in the number of STEMM professionals graduating from certificate programs, community colleges, baccalaureate programs, and graduate programs. Thus, we agree that basic engineering courses should be pumps and not filters in the production of these future STEMM professionals.

We strongly support the author’s call for “stackable” certificates leading to degrees. We likewise support the growing trend toward moving experiential learning activities (including laboratories, design-build contests, internships, and co-ops) earlier in engineering curricula.

We need to ensure that underserved students have opportunities for rigorous STEMM instruction in pre-college education.

Over the past 40 years, a number of organizations and individuals have worked to greatly improve engineering education. Various industrial leaders, the nongovernmental accrediting group ABET, the National Academies of Sciences, Engineering, and Medicine, the National Science Foundation, and the National Aeronautics and Space Administration, among others, have helped engineering move to a focus on competencies, recognize the urgency of interdisciplinary approaches, and emphasize the utility of situating problem-solving in systems thinking. But much work remains to be done.

Most particularly, significant work remains in engaging underserved populations. And these efforts must begin in the earliest years. The author begins her essay with her own story of being inspired to engineering by her father. We need to reach students whose caregivers and relatives have not had that opportunity. We need to provide exposure and reinforcement through early and sustained hands-on opportunities. We need to ensure that underserved students have opportunities for rigorous STEMM instruction in pre-college education. We need to remove financial barriers to attendance at high-quality collegiate STEMM programs. And for the precious 5–7% of high school graduates who enter collegiate STEMM majors, we must retain more than the approximately 50% who currently persist, on national average, in engineering through baccalaureate graduation. We need to ensure that once people enter a STEMM profession, supports are in place for retention and professional advancement. The nation’s current legal environment has caused great concern about our ability to target high-potential individuals from underserved communities for programmatic, financial, professional, and social support activities. We must develop creative solutions that allow us to continue and expand our efforts.

Great Minds in STEM is focused on contributing to the attainment of the needed changes and looks forward to collaborating with others in this effort.

Chair of Board of Directors, Great Minds in STEM

Retired Director of Mission 1 Advanced Technologies and Applications Space Systems, Aerospace Systems Sector, Northrop Grumman Corporation

Chief Executive Officer, Great Minds in STEM

In her essay, Idalis Villanueva Alarcón outlines ways to improve engineering students’ educational experience and outcomes. As leaders of the American Society for Engineering Education, we endorse her suggestions. ASEE is already actively working to strengthen engineering education with many of the strategies the author describes.

A system in which “weeder courses” are used to remove “defective products” from the educational pipeline is both outdated and counterproductive in today’s world. We can do better, and we must.

As Villanueva explains, a system in which “weeder courses” are used to remove “defective products” from the educational pipeline is both outdated and counterproductive in today’s world. We can do better, and we must. To help improve this system, ASEE is conducting the Weaving In, Not Weeding Out project, sponsored by the National Science Foundation, under the leadership of ASEE’s immediate past president, Jenna Carpenter, and in collaboration with the National Academy of Engineering (NAE). This project is focused on identifying and sharing best practices known to support student success, in order to replace outdated approaches.

Villanueva emphasizes that “barriers are integrated into engineering culture and coursework and grounded in assumptions about how engineering education is supposed to work, who is supposed to take part, and how engineers should behave.” This sentiment is well aligned with ASEE’s Mindset Project, developed in collaboration with NAE and sponsored by the National Science Foundation. Two leaders of this initiative, Sheryl Sorby and Gary Bertoline, reviewed its goals in the Fall 2021 Issues article “Stuck in 1955, Engineering Education Needs a Revolution.”

The project has five primary objectives:

  • Teach problem solving rather than specific tools
  • End the “pipeline mindset”
  • Recognize the humanity of engineering faculty
  • Emphasize instruction
  • Make graduate education more fair, accessible, and pragmatic

In addition, the ASEE Faculty Teaching Excellence Task Force, under the leadership of University of Akron’s Donald Visco, has developed a framework to guide professional development in engineering and engineering technology instruction. Conceptualized by educators for educators and also funded by NSF, the framework will enable ASEE recognition for levels of teaching excellence.

We believe these projects are helping transform engineering education for the future, making the field more inclusive, flexible, supportive, and multidisciplinary. Such changes will help bring about Villanueva’s vision, and they will benefit not only engineering students and the profession but also the nation and world.

Executive Director,

American Society for Engineering Education

2023–2024 President,

American Society for Engineering Education

The compelling insights in Idalis Villanueva Alarcón’s essay deeply resonate with my own convictions about the essence of engineering education and workforce development. She masterfully articulates a vision where engineering transcends its traditional academic confines to embrace an enduring voyage of learning and personal growth. This vision aligns with my philosophy that engineering is a lifelong journey, one that is continually enriched by a diversity of experiences and cultural insights.

I propose a call to action for all involved in the engineering education ecosystem to embrace and champion the cultural and experiential wealth that defines our society.

The narrative the author shares emphasizes the importance of informal learning, which often takes place outside the classroom and is equally crucial in shaping the engineering mindset. It is a call to action for educational systems to integrate a broader spectrum of knowledge sources, thus embracing the wealth of experiences that individuals bring to the table. This inclusive approach to education is essential for cultivating a dynamic workforce that is innovative, versatile, and responsive to the complex challenges of our time. I propose a call to action for all involved in the engineering education ecosystem to embrace and champion the cultural and experiential wealth that defines our society.

Fostering lifelong learning in engineering must be a collective endeavor that spans the entire arc of an engineer’s career, necessitating a unified effort from every learning partner who influences their journey—from educators instilling the foundations of science and mathematics to mentors guiding seasoned professionals. This collaborative call to action is to actively dismantle the barriers to inclusivity, ensuring that our educational and work cultures not only value but celebrate the diverse “funds of knowledge” each individual brings. By creating platforms where every voice is heard and every experience is valued, we can nurture an engineering profession marked by continual exploration, mutual respect, and a commitment to societal betterment—a profession that is as culturally adept and empathetic as it is technically proficient.

Also central to this partnership is the role of the student as an active participant in their learning journey. Students must be encouraged to take ownership of their continuous development, understanding that the field of engineering is one of perpetual evolution. This empowerment is fostered by learning partners at all life stages instilling in students and professionals the belief that their growth extends beyond formal education and work to include the myriad learning opportunities that life offers.

Inclusive leadership practices and models are the scaffolding that supports this philosophy. Leaders across the spectrum of an engineer’s life—from educators in primary schools to mentors in professional settings—are tasked with creating environments that foster inclusivity and encourage the exchange of ideas. Such leadership is not confined to policymaking; it is embodied in the day-to-day interactions that inspire students and professionals to push the boundaries of their understanding and capabilities.

Finally, we must advocate for frameworks and models that drive systemic change through collaborative leadership. The engineering journey is a tapestry woven from the threads of diverse experiences, continuous learning, and inclusive leadership. Let us, as educators and leaders, learning partners at all levels and stages, commit to empowering engineers to embark on this journey with the confidence and support they need to succeed.

What steps are we willing to take today to ensure that inclusivity and lifelong learning become the enduring legacy we leave for future engineers? Let us pledge to create a future where every engineer is a constant learner, fully equipped to contribute to a world that is richly diverse, constantly evolving, and increasingly interconnected.

Associate Dean for Workforce Development

Herbert Wertheim College of Engineering

University of Florida

Idalis Villanueva Alarcón calls deserved attention to new initiatives to enhance engineering education, while also reminding us of a failure of the profession to keep up with the changes it keeps causing. Engineering is the dynamic core of the technological changes and innovations that are mass producing a paradoxical societal fallout: glamorous prosperity and psychopolitical disorder. It’s driving us into an engineered world that is, in aggregate, wealthy and powerful beyond the ability to measure or imagine, yet in which a gap between those who call it home and those who struggle to do so ever widens.

It’s also unclear how much curriculum reform might contribute to the deeper political challenges deriving from the gap between the rich and powerful and those who have been uprooted from destroyed communities.

Villanueva’s call for the construction of a broader engineering curriculum and lifelong learning is certainly desirable; it is also something we’ve heard many times, with only marginal results. It’s also unclear how much curriculum reform might contribute to the deeper political challenges deriving from the gap between the rich and powerful and those who have been uprooted from destroyed communities. For many people, creative destruction is much more destruction than creation.

Should we nevertheless ask why such a salutary ideal has gotten so little traction? The answer is complex and not all the causes are clear, but it’s hard not to suspect that just as there is a hidden curriculum in the universities that undermines the ideal, there is another in the capitalist economy to which engineering is so largely in thrall. And what are the hidden curricular consequences of not requiring a bachelor’s degree before enrollment in an engineering school, as schools of law and medicine require? If engineering were made a truly professional degree, some of Villanueva’s proposals might not even be necessary.

Professor Emeritus of Humanities, Arts, and Social Sciences

Colorado School of Mines

Idalis Villanueva Alarcón aptly describes the dichotomy within the US engineering education system between the driving need for innovation and an antiquated and disconnected educational process for “producing” engineers. Engineers walk into their fields knowing that what they will learn will be obsolete in a matter of years, yet the curricula remain the same. This dissonance, the author notes, stifles passion and perhaps, critically, the very thing that industry and academia are purportedly seeking—innovation and creative problem-solving. This “hidden curriculum” is one of the insidious forces that dehumanize engineering, casting it as no place for those who want to innovate, to help others, and to be connected to a sustainable environment. Enrollments continue to decline nationally—are any of us surprised? Engineering is out of step with the values of US students and the needs of industry.

Engineering is out of step with the values of US students and the needs of industry.

Parallel to this discussion are data from the latest Business Enterprise Research and Development Survey showing that US businesses spent over $602 billion on research and development in 2021. This was a key driver for many engineering colleges and universities to expand “new” partnerships that were more responsive to developmental and applied research. While many of the businesses involved were small and medium-size, the majority of this spending came from large corporations with more than 1,000 employees. Underlying Villanueva’s discussion are classic questions in engineering education: Are we developing innovative thinkers who can problem solve in engineering? Conversely, are we producing widgets who are paying their tuition, getting their paper, interviewing, getting hired, and logging into a terminal? Assembly lines are not typically for innovative development; they are the hallmarks of product development. No one believes that working with students is a form of assembly line production, yet why does it feel like it is? As access to information increases outside academia, new skills, sources of expertise, and experience arise for students, faculty, and industry to tap. If the fossilization of curricula and behaviors within the academy persists, then other avenues of accessing engineering education will evolve. These may be divergent pathways driven by factors surrounding industry and workforce development.

Villanueva suggests considering a more holistic and integrated approach that seeks to actively engage students’ families and social circles. No one is a stand-alone operation. Engineering needs to account for all of the variables impacting students. I wholeheartedly agree, and would add that by leveraging social capital and clarifying the schema for pathways for students (especially first-generation students), working engineers, educators, and other near peers can help connect the budding engineers to a network of potential support when the courses become challenging or the resources are not obvious. Not only would we begin to build capacity within underrepresented populations, but we also would enable the next-generation workforce to realize their dreams and help provide a community with some basic tools to mentor and support the ones they cherish and want to see succeed.

Research Program Manager

Oregon State University

Making Graduate Fellowships More Inclusive

In “Fifty Years of Strategies for Equal Access to Graduate Fellowships” (Issues, Fall 2023), Gisèle Muller-Parker and Jason Bourke suggest that examining the National Science Foundation’s efforts to increase the representation of racially minoritized groups in science, technology, engineering, and mathematics “may offer useful lessons” to administrators at colleges and universities seeking to “broaden access and participation” in the aftermath of the US Supreme Court’s 2023 decision limiting the use of race as a primary factor in student admissions.

Perhaps the most important takeaway from the authors’ analysis—one that also aligns with the court’s decision—is that there are no shortcuts to achieving inclusion. Despite its rejection of race as a category in the admissions process, the court’s decision does not bar universities from considering race on an individualized basis. Chief Justice John Roberts maintained that colleges can, for instance, constitutionally consider a student’s racial identity and race-based experience, be it “discrimination, inspiration or otherwise,” if aligned with a student’s unique abilities and skills, such as “courage, determination” or “leadership”—all of which “must be tied to that student’s unique ability to contribute to the university.” This individualized approach to race implies a more qualitatively focused application and review process.

The NSF experience, as Muller-Parker and Bourke show, also underscores the significance of qualitative applications and review processes for achieving more inclusive outcomes. Although fellowship awards to racially minoritized groups declined starting in 1999, when the foundation ended its initial race-targeted fellowships, the awards picked up and even surpassed previous levels of inclusion as the foundation shifted from numeric criteria to holistic qualitative evaluation and review, for instance, by eliminating summary scores and GRE results and placing more importance on reference letters.

The individualized approach to race will place additional burdens on students of color to effectively make their case for how race has uniquely qualified them and made them eligible for admission.

Importantly, the individualized approach to race will place additional burdens on students of color to effectively make their case for how race has uniquely qualified them and made them eligible for admission, and on administrators to reconceptualize, reimagine, and reorganize the admissions process as a whole. Students, particularly from underserved high schools, will need even more institutional help and clearer instructions when writing their college essays, to know how to tie race and their racial experience to their academic eligibility.

In the context of college admissions, enhancing equal access in race-neutral ways will require significant changes in reconceptualizing applicants—as people rather than numbers or categories—and in connecting student access more closely to student participation. This will require significant resources and organizational change: admissions’ access goals would need to be closely integrated with participation goals of other offices such as student life, residence life, student careers, as well as with academic units; and universities would need to regularly conduct campus climate surveys, assessing not just the quantity of diverse students in the student body but also the quality of their experiences and the ways by which their inclusion enhances the quality of education provided by the university.

These holistic measures are easier said than done, especially among smaller teaching-centered or decentralized colleges and universities, and a measurable commitment to diversity will be even more patchy than is currently achieved across higher education, given the existence of numerous countervailing forces (political, social, financial) that differentially impact public and private institutions and vary significantly from state to state. However, as Justice Sotomayor wrote in closing in her dissenting opinion, “Although the court has stripped almost all uses of race in college admissions…universities can and should continue to use all available tools to meet society’s needs for diversity in education.” The NSF’s story provides some hope that this can be achieved if administrators are able and willing to reimagine (and not just obliterate) racial inclusion as a crucial goal for academic excellence.

Professor of Politics

Cochair, College of Arts and Sciences, Diversity, Equity and Inclusion Committee

Fairfield University

Gisèle Muller-Parker and Jason Bourke’s discussion of what we might learn from the forced closure of the National Science Foundation’s Minority Graduate Fellowship Program and subsequent work to redesign the foundation’s Graduate Research Fellowship Program (GRFP) succinctly illustrates the hard work required to construct programs that identify and equitably promote talent development. As the authors point out, GRFP, established in 1952, has awarded fellowships to more than 70,000 students, paving the way for at least 40 of those fellows to become Nobel laureates and more than 400 to become members of the National Academy of Sciences.

The program provides a $37,000 annual stipend for three years and a $12,000 cost of education allowance with no postgraduate service requirement. It is a phenomenal fellowship, yet the program’s history demonstrates how criteria, processes, and structures can make opportunities disproportionally unavailable to talented persons based on their gender, racial identities, socioeconomic status, and where they were born and lived.

This is the great challenge that education, workforce preparation, and talent development leaders must confront: how to parse concepts of talent and opportunity such that we are able to equitably leverage the whole capacity of the nation.

This is the great challenge that education, workforce preparation, and talent development leaders must confront: how to parse concepts of talent and opportunity such that we are able to equitably leverage the whole capacity of the nation. This work must be undertaken now for America to meet its growing workforce demands in science, technology, engineering, mathematics, and medicine—the STEMM fields. This is the only way we will be able to rise to the grandest challenges threatening the world, such as climate change, food and housing instability, and intractable medical conditions.

By and large, most institutions of higher education are shamefully underperforming in meeting those challenges. Here, I point to the too-often overlooked and underfunded regional colleges and universities that were barely affected by the US Supreme Court’s recent decision to end the use of race-conscious admissions policies. Most regional institutions, by nature of their missions and the students they serve, have never used race as a factor in enrollment, and yet they still serve more students from minoritized backgrounds than their Research-1 peers, as demonstrated by research from the Brookings Institution. Higher education leaders must undertake the difficult work of examining the ways in which historic and contemporaneous bias has created exclusionary structures, processes, and policies that helped reproduce social inequality instead of increasing access and opportunity for all parts of the nation.

The American Association for the Advancement of Science’s SEA Change initiative cultivates that exact capacity building among an institution’s leaders, enabling them to make data-driven, law-attentive, and people-focused change to meet their institutional goals. Finally, I must note one correction to the authors’ otherwise fantastic article: the Supreme Court’s pivotal decision in Students for Fair Admissions v. Harvard and Students for Fair Admissions v. University of North Carolina did not totally eliminate race and ethnicity as a factor in college admissions. Rather, the decision removed the opportunity for institutions to use race as a “bare consideration” and instead reinforced that a prospective student’s development of specific knowledge, skills, and character traits as they related to race, along with the student’s other lived experiences, can and should be used in the admissions process.

Director, Inclusive STEMM Ecosystems for Equity & Diversity

American Association for the Advancement of Science

The US Supreme Court’s 2023 rulings on race and admissions have required universities to closely review their policies and practices for admitting students. While the rulings focused on undergraduate admissions, graduate institutions face distinct challenges as they work to comply with the new legal standards. Notably, graduate education tends to be highly decentralized, representing a variety of program cultures and admissions processes. This variety may lead to uncertainty about legally sound practice and, in some cases, a tendency to overcorrect or to default to standards of academic merit that seem “safe” only because they have gone uncontested.

Gisèle Muller-Parker and Jason Bourke propose that examining the history of the National Science Foundation’s Graduate Research Fellowship Program (GRFP) can provide valuable information for university leaders and faculty in science, technology, engineering, and mathematics working to reevaluate graduate admissions. The authors demonstrate the potential impact of admission practices often associated with the type of holistic review that NSF currently uses for selecting its fellows: among them, reducing emphasis on quantitative measures, notably GRE scores and undergraduate GPA, and giving careful consideration to personal experiences and traits associated with success. In 2014, for example, the GRFP replaced a requirement for a “Previous Research” statement, which privileged students with access to traditional research opportunities, with an essay that “allows applicants flexibility in the types of evidence they provide about their backgrounds, scientific ability, and future potential.”

These changes made a real difference in the participation of underrepresented students in the GRFP and made it possible for students from a broader range of educational institutions to have a shot at this prestigious fellowship.

There is no compelling evidence to support the idea that traditional criteria for admitting students are the best.

Critics of these changes may say that standards were lowered. But the education community at large must unequivocally challenge this view. There is no compelling evidence to support the idea that traditional criteria for admitting students are the best. Scientists must be prepared to study the customs of their field, examining assumptions (“Are experiences in well-known laboratories the only way to prepare undergraduates for research?”) and asking new questions (“To what extent does a diversity of perspectives and problem-solving strategies affect programs and research?”).

As we look to the future, collecting evidence on the effects of new practices, we will need to give special consideration to the following issues:

  • First, in introducing new forms of qualitative materials, we must not let bias in the back door. Letters and personal statements need careful consideration, both in their construction and in their evaluation.
  • Second, we must clearly articulate the ways that diversity and inclusion relate to program goals. The evaluation of personal and academic characteristics is more meaningful, and legally sound, when these criteria are transparent to all.
  • Finally, we must think beyond the admissions process. In what ways can institutions make diversity, equity, and inclusion integral to their cultures and to the social practices supporting good science?

As the history of the GRFP shows, equity-minded approaches to graduate education bring us closer to finding and supporting what the National Science Board calls the “Missing Millions” in STEM. We must question what we know about academic merit and rigorously test the impact of new practices—on individual students, on program environments, and on the health and integrity of science.

Vice President, Best Practices and Strategic Initiatives

Council of Graduate Schools

Native Voices in STEM

Circular Tables, 2022, digital photograph, 11 x 14 inches.

“Many of the research meetings I have participated in take place at long rectangular tables where the power and primary conversation participants are at one end. I don’t experience this hierarchical power differential in talking circles. Talking circles are democratic and inclusive. There is still a circle at the rectangular table, just a circle that does not include everyone at the table. I find this to be representative of experiences I have had in my STEM discipline, in which it was difficult to find a place in a community or team or in which I did not feel valued or included.”

Native Voices in STEM: An Exhibition of Photographs and Interviews is a collection of photographs and texts created by Native scientists and funded by the National Science Foundation. It grew from a mixed-methods study conducted by researchers from TERC, the University of Georgia, and the American Indian Science and Engineering Society (AISES). According to the exhibition creators, the artworks speak to the photographers’ experiences of “Two-Eyed Seeing,” or the tensions and advantages from braiding together traditional Native and Western ways of knowing. The exhibition was shown at the 2022 AISES National Conference.

Getting the Most From New ARPAs

The Fall 2023 Issues included three articles—“No, We Don’t Need Another ARPA” by John Paschkewitz and Dan Patt, “Building a Culture of Risk-Taking” by Jennifer E. Gerbi, and “How I Learned to Stop Worrying and Love Intelligible Failure” by Adam Russell—discussing several interesting dimensions of new civilian organizations modeled on the Advanced Research Projects Agency at the Department of Defense. One dimension that could use further elucidation starts with the observation that ARPAs are meant to deliver innovative technology to be utilized by some end customer. The stated mission of the original DARPA is to bridge between “fundamental discoveries and their military use.” The mission of ARPA-H, the newest proposed formulation, is to “deliver … health solutions,” presumably to the US population.

When an ARPA is extraordinarily successful, it delivers an entirely new capability that can be adopted by its end customer. For example, DARPA delivered precursor technology (and prototype demonstrations) for stealth aircraft and GPS. Both were very successfully adopted.

When an ARPA is extraordinarily successful, it delivers an entirely new capability that can be adopted by its end customer.

Such adoption requires that the new capability coexist or operate within the existing processes, systems, and perhaps even culture of the customer. Understanding the very real constraints on adoption is best achieved when the ARPA organization has accurate insight into specific, high-priority needs, as well as the operations or lifestyle, of the customer. This requires more than expertise in the relevant technology.

DARPA uses several mechanisms to attain that insight: technology-savvy military officers take assignments in DARPA, then return to their military branch; military departments partner, via co-funding, on projects; and often the military evaluates a DARPA prototype to determine effectiveness. These relations with the end customer are facilitated because DARPA is housed in the same department as its military customer, the Department of Defense.

The health and energy ARPAs face a challenge: attaining comparable insight into their end customers. The Department of Health and Human Services does not deliver health solutions to the US population; the medical-industrial complex does. The Department of Energy does not deliver electric power or electrical appliances; the energy utilities and private industry do. ARPA-H and ARPA-E are organizationally removed from those end customers, both businesses (for profit or not) and the citizen consumer.

Technology advancement enables. But critical to innovating an adoptable solution is identification of the right problem, together with a clear understanding of the real-world constraints that will determine adoptability of the solution. Because civilian ARPAs are removed from many end customers, ARPAs would seem to need management processes and organizational structures that increase the probability of producing an adoptable solution from among the many alternative solutions that technology enables.

Former Director of Defense Research and Engineering

Department of Defense

University Professor Emerita

University of Virginia

Connecting STEM with Social Justice

The United States faces a significant and stubbornly unyielding racialized persistence gap in science, technology, engineering, and mathematics. Nilanjana Dasgupta sums up one needed solution in the title of her article: “To Make Science and Engineering More Diverse, Make Research Socially Relevant” (Issues, Fall 2023).

Among the students who enter college intending to study STEM, persons excluded because of ethnicity or race (PEERs), a group that includes students identifying as Black, Indigenous, and Latine, are twice as likely to leave these disciplines as non-PEERs. While we know what does not cause the racialized gap—lack of interest or preparation—we largely don’t know how to close it effectively. We know that engaging undergraduates in mentored, authentic scientific research raises their self-efficacy and feeling of belonging. However, effective research experiences are difficult to scale because they require significant investments in mentoring and research infrastructure capacity.

Another intervention is much less expensive and much more scalable. Utility-value interventions (UVIs) provide a remarkably long-lasting positive effect on students. In this approach, over an academic term students in an introductory science course spend a modest amount of class time reflecting and writing about how the scientific topic just introduced is personally related to them and their communities. The UVIs benefit all students, resulting in little or no difference in STEM persistence between PEERs and non-PEERs.

The overhaul will be the creation of new courses that seamlessly integrate basic science concepts with society and social justice.

Can we do more? Rather than occasionally interrupting class to allow students to connect a science concept with real-world social needs, can we change the way we present the concept? The UVI inspires a vision of a new STEM curriculum comprising reimagined courses. We might call the result Socially Responsive STEM, or SR-STEM. SR-STEM would be more than distribution or general education requirements, and more than learning science in the context of a liberal arts education. Instead, the overhaul will be the creation of new courses that seamlessly integrate basic science concepts with society and social justice. The courses would encourage students to think critically about the interplay between STEM and non-STEM disciplines such as history, literature, religion, and economics, and explore how STEM affects society.

Here are a few examples from the life sciences; I think similar approaches can be developed for other STEM disciplines. When learning about evolution, students would investigate and discuss the evidence used to create the false polygenesis theory of human races. In genetics, students would evaluate the evidence of epigenetic effects resulting from the environment and poverty. In immunology, students would explore the sociology and politics of vaccine avoidance. The mechanisms of natural phenomena would be discussed from different perspectives, including Indigenous ways of knowing about nature.

Implementing SR-STEM will require a complete overhaul of the learning infrastructure, including instructor preparation, textbooks, Advanced Placement courses, GRE and other standardized exams, and accreditation (e.g., ACS and ABET) criteria. The stories of discoveries we tell in class will change, from the “founding (mostly white and dead) fathers” to contemporary heroes of many identities and from all backgrounds.

It is time to begin a movement in which academic departments, professional societies, and funding organizations build Socially Responsive STEM education so that the connection of STEM to society and social justice is simply what we do.

Former Senior Director for Science Education

Howard Hughes Medical Institute

To maximize the impact of science, technology, engineering, and mathematics in society, we need to do more than attract a diverse, socially concerned cohort of students to pursue and persist through our academic programs. We need to combine the technical training of these students with social skill building.

To advance sustainability, justice, and resilience goals in the real world (not just through arguments made in consulting reports and journal papers), students need to learn how to earn the respect and trust of communities. In addition to understanding workplace culture, norms, and expectations, and cultivating negotiation skills, they need to know to research a community’s history, interests, racial and cultural identities, equity concerns, and power imbalances before beginning their work. They need to appreciate the community’s interconnected and, at times, conflicting needs and aspirations. And they need to learn how to communicate and collaborate effectively, to build allies and coalitions, to follow through, and to neither overpromise nor prematurely design the “solution” before fully understanding the problem. They must do all this while staying within the project budget, schedule, and scope—and maintaining high quality in their work.

One of the problems is that many STEM faculty lack these skills themselves. Some may consider the social good implications only after a project has been completed. Others may be so used to a journal paper as the culmination of research that they forget to relay and interpret their technical findings to the groups who could benefit most from them. Though I agree that an increasing number of faculty appear to be motivated by equity and multidisciplinarity in research, translation of research findings into real world recommendations is much less common. If it happens at all, it frequently oversimplifies key logistical, institutional, cultural, legal, or regulatory factors that made the problem challenging in the first place. Both outcomes greatly limit the social value of STEM research. While faculty in many fields now use problem-based learning to tackle real world problems in teaching, we are also notorious for attempting to address a generational problem in one semester, then shifting our attention to something else. We request that community members enrich our classrooms by sharing their lived experiences and perspectives with our students without giving much back in return.

Such practices must end if we, as STEM faculty, are to retain our credibility both in the community and with our students, and if we wish to see our graduates embraced by the communities they seek to serve.

Such practices must end if we, as STEM faculty, are to retain our credibility both in the community and with our students, and if we wish to see our graduates embraced by the communities they seek to serve. The formative years of today’s students unfolded against a backdrop of bad news. If they chose STEM because of a belief that science has answers to these maddening challenges, these students need real evidence that their professional actions will yield tangible and positive outcomes. Just like members of the systematically disadvantaged and marginalized communities they seek to support, these students can easily spot hypocrisy, pretense, greenwashing, and superficiality.

As a socially engaged STEM researcher and teacher, I have learned that I must be prepared to follow through with what I have started—as long as it takes. I prep my students for the complex social dynamics they will encounter, without coddling or micromanaging them. I require that they begin our projects with an overview of the work’s potential practical significance, and that our research methods answer questions that are codeveloped with external partners, who themselves are financially compensated for their time whenever possible. By modeling these best practices, I try to give my students (regardless of their cultural or racial backgrounds) competency not just in STEM, but in application of their work in real contexts.

Professor, Department of Civil, Architectural, and Environmental Engineering

Drexel University

Nilanjana Dasgupta’s article inspired reflection on our approach at the Burroughs Wellcome Fund (BWF) to promoting diversity in science nationwide along with supporting science, technology, engineering, and mathematics education specifically in North Carolina. These and other program efforts have reinforced our belief in the power of collaboration and partnership to create change.

These and other program efforts have reinforced our belief in the power of collaboration and partnership to create change.

For nearly 30 years, BWF has supported organizations across North Carolina that provide hands-on, inquiry-based activities for students outside the traditional classroom day. These programs offer a wide range of STEM experiences for students. Some of the students “tinker,” which we consider a worthwhile way to experience the nuts-and-bolts of research, and others explore more socially relevant experiences. An early example is from a nonprofit in the city of Jacksonville, located near the state’s eastern coast. In the program, the city converted an old wastewater treatment plant into an environmental education center where students researched requirements for reintroducing sturgeon and shellfish into the local bay. More than 1,000 students spent their Saturdays learning about environmental science and its application to improve the quality of water in the local watershed. The students engaged their families and communities in a dialogue about environmental awareness, civic responsibility, and local issues of substantial scientific and economic interest.

For our efforts in fostering diversity in science, we have focused primarily on early-career scientists. Our Postdoctoral Diversity Enrichment Program provides professional development support for underrepresented minority postdoctoral fellows. The program places emphasis on a strong mentoring strategy and provides opportunities for the fellows to engage with a growing network of scholars.

Recently, BWF has become active in the Civic Science movement led by the Rita Allen Foundation, which describes civic science as “broad engagement with science and evidence [that] helps to inform solutions to society’s most pressing problems.” This movement is very much in its early stages, but it holds immense possibility to connect STEM to social justice. We have supported fellows in science communication, diversity in science, and the interface of arts and science.

Another of our investments in this space is through the Our Future Is Science initiative, hosted by the Aspen Institute’s Science and Society program. The initiative aims to equip young people to become leaders and innovators in pushing science toward improving the larger society. The program’s goals include sparking curiosity and passion about the connection between science and social justice among youth and young adults who identify as Black, Indigenous, or People of Color, as well as those who have low income or reside in rural communities. Another goal is to accelerate students’ participation in the sciences to equip them to link their interests to tangible educational and career STEM opportunities that may ultimately impact their communities.

This is an area ripe for exploration, and I was pleased to read the author’s amplification of this message. At the Burroughs Wellcome Fund, we welcome the opportunity to collaborate on connecting STEM and social justice work to ignite societal change. As a philanthropic organization, we strive to holistically connect the dots of STEM education, diversity in science, and scientific research.

President and CEO

Burroughs Wellcome Fund

As someone who works on advancing diversity, equity, and inclusion in science, technology, engineering, and mathematics higher education, I welcome Nilanjana Dasgupta’s pointed recommendation to better connect STEM research with social justice. Gone are the days of the academy being reserved for wealthy, white men to socialize and explore the unknown, largely for their own benefit. Instead, today’s academy should be rooted in addressing the challenges that the whole of society faces, whether that be how to sustain food systems, build more durable infrastructure, or identify cures for heretofore intractable diseases.

Approaching STEM research with social justice in mind is the right thing to do both morally and socially. And our educational environments will be better for it, attracting more diverse and bright minds to science. As Dasgupta demonstrates, research shows that when course content is made relevant to students’ lives, students show increases in interest, motivation, and success—and all these findings are particularly pronounced for students of color.

Despite focused attention on increasing diversity, equity, and inclusion over the past several decades, Black, Indigenous, and Latine students continue to remain underrepresented in STEM disciplines, especially in graduate education and the careers that require such advanced training. In 2020, only 24% of master’s and 16% of doctoral degrees in science and engineering went to Black, Indigenous, and Latine graduates, despite these groups collectively accounting for roughly 37% of the US population aged 18 through 34. Efforts to increase representation have also faced significant setbacks due to the recent Supreme Court ruling on the consideration of race in admissions. However, Dasgupta’s suggestion may be one way we continue to further the nation’s goal of diversifying STEM fields in legally sustainable ways, by centering individuals’ commitments to social justice rather than, say, explicitly considering race or ethnicity in admissions processes.

What if universities centered faculty hiring efforts on scholars who are addressing social issues and seeking to make the world a more equitable place, rather than relying on the otherwise standard approach of hiring graduates from prestigious institutions who publish in top-tier journals?

Moreover, while Dasgupta does well to provide examples of how we might transform STEM education for students, the underlying premise of her article—that connecting STEM to social justice is an underutilized tool—is relevant to several other aspects of academia as well.

For instance, what if universities centered faculty hiring efforts on scholars who are addressing social issues and seeking to make the world a more equitable place, rather than relying on the otherwise standard approach of hiring graduates from prestigious institutions who publish in top-tier journals? The University of California, San Diego, may serve as one such example, having hired 20 STEM faculty over the past three years whose research uses social justice frameworks, including bridging Black studies and STEM. These efforts promote diverse thought and advance institutional missions to serve society.

Science philanthropy is also well poised to prioritize social justice research. At Sloan, we have a portfolio of work that examines critical and under-explored questions related to issues of energy insecurity, distributional equity, and just energy system transitions in the United States. These efforts recognize that many historically marginalized racial and ethnic communities, as well as economically vulnerable communities, are often unable to participate in the societal transition toward low-carbon energy systems due to a variety of financial, social, and technological challenges.

In short, situating STEM in social justice should be the default, not the occasional endeavor.

Program Associate

Alfred P. Sloan Foundation

Building the Quantum Workforce

In “Inviting Millions Into the Era of Quantum Technologies” (Issues, Fall 2023), Sean Dudley and Marisa Brazil convincingly argue that the lack of a qualified workforce is holding this field back from reaching its full potential. We at IBM Quantum agree. Without intervention, the nation risks developing useful quantum computing alongside a scarcity of practitioners who are capable of using quantum computers. An IBM Institute for Business Value study found that inadequate skills are the top barrier to enterprises adopting quantum computing. The study identified a small subset of quantum-ready organizations that are talent nurturers with a greater understanding of the quantum skills gap, and that are nearly three times more effective than their peers at workforce development.

Quantum-ready organizations are nearly five times more effective at developing internal quantum skills, nearly twice as effective at attracting talented workers in science, technology, engineering, and mathematics, and nearly three times more effective at running internship programs. At IBM Quantum, we have directly trained more than 400 interns at all levels of higher education and have seen over 8 million learner interactions with Qiskit, including a series of online seminars on using the open-source Qiskit tool kit for useful quantum computing. However, quantum-ready organizations represent only a small fraction of the organizations and industries that need to prepare for the growth of their quantum workforce.

As we enter the era of quantum utility, meaning the ability for quantum computers to solve problems at a scale beyond brute-force classical simulation, we need a focused workforce capable of discovering the problems quantum computing is best-suited to solve. As we move even further toward the age of quantum-centric supercomputing, we will need a larger workforce capable of orchestrating quantum and classical computational resources in order to address domain-specific problems.

Looking to academia, we need more quantum-ready institutions that are effective not only at teaching advanced mathematics, quantum physics, and quantum algorithms, but also at teaching domain-specific skills such as machine learning, chemistry, materials science, and optimization, along with how to use quantum computing as a tool for scientific discovery.

As we enter the era of quantum utility, meaning the ability for quantum computers to solve problems at a scale beyond brute-force classical simulation, we need a focused workforce capable of discovering the problems quantum computing is best-suited to solve.

Critically, it is imperative to invest in talent early on. The data on physics PhDs granted by race and ethnicity in the United States paint a stark picture. Industry cannot wait until students have graduated and are knocking on company doors to begin developing a talent pipeline. IBM Quantum has made a significant investment in the IBM-HBCU Quantum Center through which we collaborate with more than two dozen historically Black colleges and universities to prepare talent for the quantum future.

Academia needs to become more effective in supporting quantum research (including cultivating student contributions) and partnering with industry, in connecting students into internships and career opportunities, and in attracting students into the field of quantum. Quoting Charles Tahan, director of the National Quantum Coordination Office within the White House Office of Science and Technology Policy: “We need to get quantum computing test beds that students can learn in at a thousand schools, not 20 schools.”

Rensselaer Polytechnic Institute and IBM broke ground on the first IBM Quantum System One on a university campus in October 2023. This presents the RPI community with an unprecedented opportunity to learn and conduct research on a system powered by a utility-scale 127-qubit processor capable of tackling problems beyond the capabilities of classical computers. And as the lead organizer of the Quantum Collaborative, Arizona State University—using IBM and other industry quantum computing resources—is working with other academic institutions to provide training and educational pathways from high school and community college through undergraduate and graduate study in the quantum field.

Our hope is that these actions will prove to be only part of a broader effort to build the quantum workforce that science, industry, and the nation will need in years to come.

Program Director, Global Skills Development

IBM Quantum

Sean Dudley and Marisa Brazil advocate for mounting a national workforce development effort to address the growing talent gap in the field. This effort, they argue, should include educating and training a range of learners, including K–12 students, community college students, and workers outside of science and technology fields, such as marketers and designers. As the field will require developers, advocates, and regulators—as well as users—with varying levels of quantum knowledge, the authors’ comprehensive and inclusive approach to building a competitive quantum workforce is refreshing and justified.

At Qubit by Qubit, founded by the Coding School and one of the largest quantum education initiatives, we have spent the past four years training over 25,000 K–12 and college students, educators, and members of the workforce in quantum information science and technology (QIST). In collaboration with school districts, community colleges and universities, and companies, we have found great excitement among all these stakeholders for QIST education. However, as Dudley and Brazil note, there is an urgent need for policymakers and funders to act now to turn this collective excitement into action.

The authors posit that the development of a robust quantum workforce will help position the United States as a leader of Quantum 2.0, the next iteration of the quantum revolution. Our work suggests that investing in quantum education will not only benefit the field of QIST, but will result in a much stronger workforce at large. Given the interdisciplinary nature of QIST, learners gain exposure and skills in mathematics, computer science, physics, and engineering, among other fields. Thus, even learners who choose not to pursue a career in quantum will leave with a broad set of highly sought skills that they can apply to another field offering a rewarding future.

With the complexity of quantum technologies, there are a number of challenges in building a diverse quantum workforce. Dudley and Brazil highlight several of these, including the concentration of training programs in highly resourced institutions, and the need to move beyond the current focus on physics and adopt a more interdisciplinary approach. There are several additional challenges that need to be considered and addressed if millions of Americans are to become quantum-literate, including:

  • Funding efforts have been focused on supporting pilot educational programs instead of scaling already successful programs, meaning that educational opportunities are not widely accessible.
  • Many educational programs are one-offs that leave students without clear next steps. Because of the complexity of the subject area, learning pathways need to be established for learners to continue developing critical skills.
  • Diversity, inclusion, and equity efforts have been minimal and will require concerted work between industry, academia, and government.

Historically, the United States has begun conversations around workforce development for emerging and deep technologies too late, and thus has failed to ensure the workforce at large is equipped with the necessary technical knowledge and skills to move these fields forward quickly. We have the opportunity to get it right this time and ensure that the United States is leading the development of responsible quantum technologies.

Executive Director, Qubit by Qubit

Founder and CEO, The Coding School

To create an exceptional quantum workforce and give all Americans a chance to discover the beauty of quantum information science and technology, to contribute meaningfully to the nation’s economic and national security, and to create much-needed bridges with other like-minded nations across the world as a counterbalance to the balkanization of science, we have to change how we are teaching quantum. Even today, five years after the National Quantum Initiative Act became law, the word “entanglement”—the key to the way quantum particles interact that makes quantum computing possible—does not appear in physics courses at many US universities. And there are perhaps only 10 to 20 schools offering quantum engineering education at any level, from undergraduate to graduate. Imagine the howls if this were the case with computer science.

The imminence of quantum technologies has motivated physicists—at least in some places—to reinvent their teaching, listening to and working with their engineering, computer science, materials science, chemistry, and mathematics colleagues to create a new kind of course. In 2020, these early experiments in retooling led to a convening of 500 quantum scientists and engineers to debate undergraduate quantum education. Building on success stories such as the quantum concepts course at Virginia Tech, we laid out a plan, published in IEEE Transactions on Education in 2022, to bridge the gap between the excitement around quantum computing generated in high school and the kind of advanced graduate research in quantum information that is so astounding. The good news is that, as Virginia Tech showed, quantum information can be taught with pictures and a little algebra to first-year college students. The same is true at the community college level, which means the massive cohort of diverse engineers who start their careers there has a shot at inventing tomorrow’s quantum technologies.

However, there are significant missing pieces. For one, there are almost no community college opportunities to learn quantum anything because such efforts are not funded at any significant level. For another, although we know how to teach the most speculative area of quantum information, namely quantum computing, to engineers, and even to new students, we really don’t know how to do that for quantum sensing, which allows us to do position, navigation, and timing without resorting to our fragile GPS system, and to measure new space-time scales in the brain without MRI, to name two of many applications. It is the most advanced area of quantum information, with successful field tests and products on the market now, yet we are currently implementing quantum engineering courses focused on a quantum computing outcome that may be a decade or more away.

How can we solve the dearth of quantum engineers? First, universities and industry can play a major role by working together—and several such collective efforts are showing the way. Arizona State University’s Quantum Collaborative is one such example. The quantum consortium in Colorado, New Mexico, and Wyoming recently received a preliminary grant from the US Economic Development Administration to help advance both quantum development and education programs, including at community colleges, in their regions. Such efforts should be funded and expanded, and the lessons they provide should be promulgated nationwide. Second, we need to teach engineers what actually works. This means incorporating quantum sensing from the outset in all budding quantum engineering education systems, building on already deployed technologies. And third, we need to recognize that much of the nation’s quantum physics education is badly out of date and start modernizing it, just as we are now modernizing engineering and computer science education with quantum content.

Quantum Engineering Program and Department of Physics

Colorado School of Mines

Preparing a skilled workforce for emerging technologies can be challenging. Training moves at the scale of years while technology development can proceed much faster or slower, creating timing issues. Thus, Sean Dudley and Marisa Brazil deserve credit for addressing the difficult topic of preparing a future quantum workforce.

At the heart of these discussions are the current efforts to move beyond Quantum 1.0 technologies that make use of quantum mechanical properties (e.g., lasers, semiconductors, and magnetic resonance imaging) to Quantum 2.0 technologies that more actively manipulate quantum states and effects (e.g., quantum computers and quantum sensors). With this focus on ramping up a skilled workforce, it is useful to pause and look at the underlying assumption that the quantum workforce requires active management.

In their analysis, Dudley and Brazil cite a report by McKinsey & Company, a global management consulting firm, which found that three quantum technology jobs exist for every qualified candidate. While this seems like a major talent shortage, the statistic is less concerning when presented in absolute numbers. Because the field is still small, the difference is less than 600 workers. And the shortage exists only when considering graduates with explicit Quantum 2.0 degrees as qualified potential employees.

McKinsey recommended closing this gap by upskilling graduates in related disciplines. Considering that 600 workers amounts to about 33% of the physics PhDs, 2% of the electrical engineers, or 1% of the mechanical engineers graduating annually in the United States, this seems a reasonable solution. However, employers tend to be rather conservative in their hiring and often ignore otherwise capable applicants who haven’t already demonstrated proficiency in desired skills. Thus, hiring “close-enough” candidates tends to occur only when employers feel substantial pressure to fill positions. Based on anecdotal quantum computing discussions, this probably isn’t happening yet, which suggests employers can still afford to be selective. As Ron Hira notes in “Is There Really a STEM Workforce Shortage?” (Issues, Summer 2022), shortages are best measured by wage growth. And if such price signals exist, one should expect that students and workers will respond accordingly.
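
A rough back-of-the-envelope check, sketched in Python, using only the figures cited above (a shortfall of roughly 600 workers and the stated percentages); the derived pool sizes are implications of those figures, not official statistics:

    # Back-of-the-envelope sketch based solely on the numbers cited above.
    gap = 600  # approximate shortfall of qualified quantum workers

    # Implied annual US graduate pools, working backward from the percentages.
    implied_pools = {
        "physics PhDs (33%)": gap / 0.33,         # roughly 1,800 per year
        "electrical engineers (2%)": gap / 0.02,  # roughly 30,000 per year
        "mechanical engineers (1%)": gap / 0.01,  # roughly 60,000 per year
    }

    for pool, size in implied_pools.items():
        print(f"{pool}: about {round(size):,} graduates per year")

Even under these rough assumptions, the gap is small relative to the adjacent talent pools, which is the point of the upskilling recommendation.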

If the current quantum workforce shortage is uncertain, the future is even more uncertain. The exact size of the needed future quantum workforce depends on how Quantum 2.0 technologies develop. For example, semiconductors and MRI machines are both mature Quantum 1.0 technologies. The global semiconductor industry is a more than $500 billion business (measured in US dollars), while the global MRI business is about 100 times smaller. If Quantum 2.0 technologies follow the specialized, lab-oriented MRI model, then the workforce requirements could be more modest than many projections. More likely is a mix of market potential where technologies such as quantum sensors, which have many applications and are closer to commercialization, have a larger near-term market while quantum computers remain a complex niche technology for many years. The details are difficult to predict but will dictate workforce needs.

When we assume that rapid expansion of the quantum workforce is essential for preventing an innovation bottleneck, we are left with the common call to actively expand diversity and training opportunities outside of elite institutions—a great idea, but maybe the right answer to the wrong question. And misreading technological trends is not without consequences. Overproducing STEM workers benefits industry and academia, but not necessarily the workers themselves. If we prematurely attempt to put quantum computer labs in every high school and college, we may be setting up less-privileged students to pursue jobs that may not develop, equipped with skills that may not be easily transferred to other fields.

Research Professor

Department of Technology and Society

Stony Brook University

An Evolving Need for Trusted Information

In “Informing Decisionmakers in Real Time” (Issues, Fall 2023), Robert Groves, Mary T. Bassett, Emily P. Backes, and Malvern Chiweshe describe how scientific organizations, funders, and researchers came together to provide vital insights in a time of global need. Their actions during the COVID-19 pandemic created new ways for researchers to coordinate with one another and better ways to communicate critical scientific insights to key end users. Collectively, these actions accelerated translations of basic research to life-saving applications.

Examples such as the Societal Experts Action Network (SEAN) that the authors highlight reveal the benefits of a new approach. While at the National Science Foundation, we pitched the initial idea for this project and the name to the National Academies of Sciences, Engineering, and Medicine (NASEM). We were inspired by NASEM’s new research-to-action workflows in biomedicine and saw opportunities for thinking more strategically about how social science could help policymakers and first responders use many kinds of research more effectively.

SEAN’s operational premise is that by building communication channels where end users can describe their situations precisely, researchers can better tailor their translations to the situations. Like NASEM, we did not want to sacrifice rigor in the process. Quality control was essential. Therefore, we designed SEAN to align translations with key properties of the underlying research designs, data, and analysis. The incredible SEAN leadership team that NASEM assembled implemented this plan. They committed to careful inferences about the extent to which key attributes of individual research findings, or collections of research findings, did or did not generalize to end users’ situations. They also committed to conducting real-time evaluations of their effectiveness. With this level of commitment to rigor, to research quality filters, and to evaluations, SEAN produced translations that were rigorous and usable.

There is significant benefit to supporting approaches such as this going forward. To see why, consider that many current academic ecosystems reward the creation of research, its publication in journals, and, in some fields, connections to patents. These are all worthy activities. However, societies sometimes face critical challenges where interdisciplinary collaboration, a commitment to rigor and precision, and an advanced understanding of how key decisionmakers use scientific content are collectively the difference between life and death. Ecosystems that treat journal publications and patents as the final products of research processes will have limited impact in these circumstances. What Groves and coauthors show is the value of designing ecosystems that produce externally meaningful outcomes.

Scientific organizations can do more to place modern science’s methods of measurement and inference squarely in the service of people who can save lives. With structures such as SEAN that more deeply connect researchers to end users, we can incentivize stronger cultures of responsiveness and accountability to thousands of end users. Moreover, when organizations network these quality-control structures, and then motivate researchers to collaborate and share information effectively, socially significant outcomes are easier to stack (we can more easily build on each other’s insights) and scale (we can learn more about which practices generalize across circumstances).

To better serve people across the world, and to respect the public’s sizeable investments in federally funded scientific research, we should seize opportunities to increase the impact and social value of the research that we conduct. New research-to-action workflows offer these opportunities and deserve serious attention in years to come.

Alfred P. Sloan Foundation

University of Michigan

As Robert Groves, Mary T. Bassett, Emily P. Backes, and Malvern Chiweshe describe in their article, the COVID-19 pandemic highlighted the value and importance of connecting social science to on-the-ground decisionmaking and solution-building processes, which require bridging societal sectors, academic fields, communities, and levels of governance. That the National Academies of Sciences, Engineering, and Medicine and public and private funders—including at the local level—created and continue to support the Societal Experts Action Network (SEAN) is encouraging. Still, the authors acknowledge that there is much work needed to normalize and sustain support for ongoing research-practice partnerships of this kind.

In academia, for example, the pandemic provided a rallying point that encouraged cross-sector collaborations, in part by forcing a change to business-as-usual practices and incentivizing social scientists to work on projects perceived to offer limited gains in academic systems, such as tenure processes. Without large-scale reconfiguration of resources and rewards, as the pandemic crisis triggered to some extent, partnerships such as those undertaken by SEAN face numerous barriers. Building trust, fostering shared goals, and implementing new operational practices across diverse participants can be slow and expensive. Fitting these efforts into existing funding is also challenging, as long-term returns may be difficult to measure or articulate. In a post-COVID world, what incentives will remain for researchers and others to pursue necessary work like SEAN’s, spanning boundaries across sectors?

One answer comes from a broader ecosystem of efforts in “civic science,” of which we see SEAN as a part. Proponents of civic science argue that boundary-spanning work is needed in times of crisis as well as peace. In this light, we see a culture shift in which philanthropies, policymakers, community leaders, journalists, educators, and academics recognize that research-practice partnerships must be made routine rather than being exceptional. This culture shift has facilitated our own work as researchers and filmmakers as we explore how research informing filmmaking, and vice versa, might foster pro-democratic outcomes across diverse audiences. For example, how can science films enable holistic science literacy that supports deliberation about science-related issues among conflicted groups?

At first glance, our work may seem distant from SEAN’s policy focus. However, we view communication and storytelling (in non-fiction films particularly) as creating “publics,” or people who realize they share a stake in an issue, often despite some conflicting beliefs, and who enable new possibilities in policy and society. In this way and many others, our work aligns with a growing constellation of participants in the Civic Science Fellows program and a larger collective of collaborators who are bridging sectors and groups to address key challenges in science and society.

As the political philosopher Peter Levine has said, boundary-spanning work enables us to better respond to the civic question “What should we do?” that runs through science and broader society. SEAN illustrates how answering such questions cannot be done well—at the level of quality and legitimacy needed—in silos. We therefore strongly support multisector collaborations like those that SEAN and the Civic Science Fellows program model. We also underscore the opportunity and need for sustained cultural and institutional progress across the ecosystem of connections between science and civic society, to reward diverse actors for investing in these efforts despite their scope and uncertainties.

Researcher

Science Communication Lab

Associate

Morgridge Institute for Research

Documentary film director and producer

Wicked Delicate Films

Executive Producer

Science Communication Lab

Executive Director

Science Communication Lab

I read Robert Groves, Mary T. Bassett, Emily P. Backes, and Malvern Chiweshe’s essay with great interest. It is hard to remember the early times of COVID-19, when everyone was desperate for answers and questions popped up daily about what to do and what was right. As a former elected county official and former chair of a local board of health, I valued the welcome I received when appointed to the Societal Experts Action Network (SEAN) the authors highlight. I believe that as a nonacademic, I was able to bring a pragmatic on-the-ground perspective to the investigations and recommendations.

At the time, local leaders were dealing with a pressing need for scientific information when politics were becoming fraught with dissension and the public had reduced trust in science. Given such pressure, it is difficult to fully appreciate the speed at which SEAN operated—light speed compared with what I viewed as the usual standards of large organizations such as its parent, the National Academies of Sciences, Engineering, and Medicine. SEAN’s efforts were nimble and focused, allowing us to collaborate while addressing massive amounts of data.

Now, the key to addressing the evolving need for trusted and reliable information, responsive to the modern world’s speed, will be supporting and replicating the work of SEAN. Relationships across jurisdictions and institutions were formed that will continue to be imperative not only for ensuring academic rigor but also for understanding how to build the bridges of trust to support the value of science, to meet the need for resilience, and to provide the wherewithal to progress in the face of constant change.

President, Langston Strategies Group

Former member of the Linn County, Iowa, Board of Supervisors

Supervisor and President, National Association of Counties

Rebuilding Public Trust in Science

As Kevin Finneran noted in “Science Policy in the Spotlight” (Issues, Fall 2023), “In the mid-1950s, 88% of Americans held a favorable attitude toward science.” But the story was even better back then. When the American National Election Study began asking about trust in government in 1948, about three-quarters of people said they trusted the federal government to do the right thing almost always or most of the time (now under one-third and dropping, especially among Generation Z and millennials). Increasing public trust in science is important, but transforming new knowledge into societal impacts at scale will require much more. It will require meaningful public engagement and trust-building across the entire innovation cycle, from research and development to scale-up, commercialization, and successful adoption and use. Public trust in this system can break down at any point—as the COVID-19 pandemic made painfully clear, robbing the world of at least 20 million years of human life.

For over a decade, I had the opportunity to support dozens of focus groups and national surveys exploring public perceptions of scientific developments in areas such as nanotechnology, synthetic biology, cellular agriculture, and gene editing. Each of these exercises provided new insights and an appreciation for the often-maligned public mind. As the physicist Richard Feynman once noted, believing that “the average person is unintelligent is a very dangerous idea.”

The exercises found that, when confronted with the emergence of novel technologies, people were remarkably consistent regarding their concerns and demands. For instance, there was little support for halting scientific and technological progress, with some noting, “Continue to go forward, but please be careful.” Being careful was often framed around three recurring themes.

First, there was a desire for increased transparency, from both government and businesses. Second, people often asked for more pre-market research and risk assessment. In other words, don’t test new technologies on us—but unfortunately this now seems the default business model for social media and generative artificial intelligence. People voiced valid concerns that long-term risks would be overlooked in the rush to move products into the marketplace, and there was confusion about who exactly was responsible for such assessments, if anybody. Finally, many echoed the need for independent, third-party verification of both the risks and the benefits of new technologies, driven by suspicions of industry self-regulation and decreased trust in government oversight.

Taken as a whole, these public concerns sound reasonable, but remain a heavy lift. There is, unfortunately, very little “public” in the nation’s public policies, and we have entered an era where distrust is the default mode. Given this state of affairs, one should welcome the recent recommendations proposed to the White House by the President’s Council of Advisors on Science and Technology: to “develop public policies that are informed by scientific understanding and community values [creating] a dialogue … with the American people.” The question is whether these efforts go far enough and can occur fast enough to bend the trust curve back before the next pandemic, climate-related catastrophe, financial meltdown, geopolitical crisis, or arrival of artificial general intelligence.

Visiting Scholar

Environmental Law Institute

Coping in an Era of Disentangled Research

In “An Age of Disentangled Research?” (Issues, Fall 2023), Igor Martins and Sylvia Schwaag Serger raise interesting questions about the changing nature of international cooperation in science and about the engagement of Chinese scientists with researchers in other countries. The authors rightly call attention to the rapid expansion of cooperation as measured in particular by bibliometric analyses. But as they point out, we may be seeing “signs of a potential new era of research in which global science is divided into geopolitical blocs of comparable economic, scientific, and innovative strength.”

While bibliometric data can give us indicators of such a trend, we have to look deeper to fully understand what is happening. Clearly, significant geopolitical forces are at work, generating heightened concerns for national security and, by extension, information security pertaining to scientific research. The fact that many areas of cutting-edge science also have direct implications for economic competitiveness and military capabilities further reinforces the security concerns raised by geopolitical competition, raising barriers to cooperation.

Competition and discord in international scientific activities are certainly not new. Yet forms of cooperation remain, continuing to give science a sense of community and common purpose. That cooperative behavior is often quite subtle and indirect, as a result of multiple modalities of contact and communication. Direct international cooperation among scientists, relations among national and international scientific organizations, the international roles of universities, and the various ways that numerous corporations engage scientists and research centers around the world illustrate the plethora of modes and platforms.

From the point of view of political authorities, devising policies for this mix of modalities is no small challenge. Concerns about maintaining national security often lead to government intrusions into the professional interactions of the scientific community. There are no finer examples of this than the security policy initiatives being implemented in the United States and China, the results of which appear in the bibliometric data presented by the authors. At the same time, we might ask whether scientific communication continues in a variety of other forms, raising hopes that political realities will change. In addition, what should we make of the development of new sites for international cooperation such as the King Abdullah University of Science and Technology in Saudi Arabia and Singapore’s emergence as an important international center of research? Further examination of such questions is warranted as we try to understand the trends suggested by Martins and Schwaag Serger.

In addition, there is more to be learned about the underlying norms and motivations that constitute the “cultures” of science, in China and elsewhere. Research integrity, evaluation practices, research ethics, and science-state relations, among other issues, all involve the norms of science and pertain to its governance. In today’s world, that governance clearly involves a fusion of the policies of governments with the cultures of science. As with geopolitical tensions, matters of governance also hold the potential for producing the bifurcated world of international scientific cooperation the authors suggest. At the same time, we are not without evidence that norms diffuse, supporting cooperative behavior.

We are thus at an interesting moment in our efforts to understand international research cooperation. While signs of “disentanglement” are before us, we are also faced with complex patterns of personal and institutional interactions. It is tempting to discuss this moment in terms of the familiar “convergence-divergence” distinction, but such a binary formulation does not do justice to enduring “community” interests among scientists globally, even as government policies and intellectual traditions may make some forms of cooperation difficult.

Professor Emeritus, Political Science

University of Oregon

In Australia, the quality and impact of research is built upon uncommonly high levels of international collaboration. Compared with the global average of almost 25% cited by Igor Martins and Sylvia Schwaag Serger, over 60% of Australian research now involves international collaboration. So the questions the authors raise are essential for the future of Australian universities, research, and innovation.

While there are some early signs of “disentanglement” in Australian research—such as the recent mapping of a decline in collaboration with Chinese partners in projects funded by the Australian Research Council—the overall picture is still one of increasing international engagement. In 2022, Australian researchers coauthored more papers with Chinese colleagues than with American colleagues (but only just). This is the first time in Australian history that our major partner for collaborative research has been a country other than a Western military ally. But the fastest growth in Australia’s international research collaboration over the past decade was actually with India, not China.

At the same time, the connection between research and national and economic security is being drawn more clearly. At a major symposium at the Australian Academy of Science in Canberra in November 2023, Australia’s chief defense scientist talked about a “paradigm shift,” where the definition of excellent science was changing from “working with the best in the world” to “working with the best in the world who share our values.”

Navigating these shifts in global knowledge production, collaboration, and innovation is going to require new strategies and an improved evidence base to inform the decisions of individual researchers, institutions, and governments in real time. Martins and Schwaag Serger are asking critical questions and bringing better data to the table to help us answer them.

As a country with a relatively small population (producing 4% of the world’s published research), Australia has succeeded over recent decades by being an open and multicultural trading nation, with high levels of international engagement, particularly in our Indo-Pacific region.

Increasing geostrategic competition is creating new risks for international research collaboration, and we need to manage these. In Australia in the past few years, universities and government agencies have established a joint task force for collaboration in addressing foreign interference, and there is also increased screening and government review of academic collaborations. But to balance the increased focus on the downsides of international research, we also need better evidence and analysis of the upsides—the benefits that accrue to Australia from being connected to the global cutting edge. While managing risk, we should also be alert to the risk of missing out.

Executive Director, Innovative Research Universities

Canberra, Australia

The commentary on Igor Martins and Sylvia Schwaag Serger’s article is closely in tune with recent reports published by the Policy Institute at King’s College London. Most recently, in Stumbling Bear, Soaring Dragon and The China Question Revisited, we drew attention to the extraordinary rising research profile of China, which has disrupted the G7’s dominance of the global science network. This is a reality that scientists in other countries cannot ignore, not least because it is only by working with colleagues at the laboratory bench that we develop a proper understanding of the aims, methods, and outcomes of their work. If China is now producing as many highly cited research papers as the United States and the European Union, then knowing only by reading is blind folly.

These considerations need to be set in a context of international collaboration, rising over the past four decades as travel got cheaper and communications improved. In 1980, less than 10% of articles and reviews published in the United Kingdom had an international coauthor; that now approaches 70% and is greatest among the leading research-intensive universities. A similar pattern occurs across the European Union. The United States is somewhat less international, having the challenge of a continent to span domestically. However, a strong, interconnected global network underpins the vast majority of highly cited papers that signal change and innovation. How could climate science, epidemiology, and health management work without such links?

The spread across disciplines is lumpy. Much of the trans-Atlantic research trade is biomedical and molecular biology. The bulk of engagement with China has been in technology and the physical sciences. That is unsurprising since this is where China had historical strength and where Western researchers were more open to new collaborations. Collaboration in social sciences and in humanities is sparse because many priority topics are regional or local. But collaboration is growing in almost every discipline and is shifting from bilateral to multilateral. Constraining this to certain subjects and politically correct partners would be a disaster for global knowledge horizons.

Visiting Professor at the Policy Institute, King’s College London

Chief Scientist at the Institute for Scientific Information, Clarivate

Founder and Director of Education Insight

Visiting Professor at the Policy Institute, King’s College London

Former UK Minister of State for Universities, Science, Research and Innovation

Lessons from Ukraine for Civil Engineering

The resilience of Ukraine’s infrastructure in the face of both conventional and cyber warfare, as well as attacks on the knowledge systems that underpin its operations, is no doubt rooted in the country’s history. Ukraine has been living with the prospect of warfare and chaos for over a century. This “normal” appears to have produced an agile and flexible infrastructure system that every day shows impressive capacity to adapt.

In “What Ukraine Can Teach the World About Resilience and Civil Engineering,” Daniel Armanios, Jonas Skovrup Christensen, and Andriy Tymoshenko leverage concepts from sociology to explain how the country is building agility and flexibility into its infrastructure system. They identify key tenets that provide resilience: a shared threat that unites and motivates, informal supply networks, decentralized management, learning from recent crises (namely COVID-19), and modular and distributed systems. Resilience naturally requires coupled social, ecological, and technological systems assessment, recognizing that sustained and expedited adaptation is predicated on complex dynamics that occur within and across these systems. As such, there is much to learn from sociology, but also from other disciplines, as we unpack what’s at the foundation of these tenets.

Agile and flexible infrastructure systems ultimately produce a repertoire of responses as large as or greater than the variety of conditions produced in their environments. This is known as requisite complexity. Thriving under a shared threat is rooted in the notion that systems can do a lot of innovation at the edge of chaos (complexity theory), if resources including knowledge are available and there is flexibility to reorganize as stability wanes. The informal networks Ukraine has used to source resources exist because formal networks are likely unavailable or unreliable. We often ignore ad hoc networks in stable situations, and even during periods of chaos such as extreme weather events, because the organization is viewed as unable to fail—and therefore too often falls back to its siloed and rigid structures to ineffectively deal with prevailing conditions.

Ukraine didn’t have this luxury. Management and leadership science describes how informal networks are more adept at finding balance than are rigid and siloed organizations. Relatedly, the proposition of decentralized management is akin to imbuing those closest to the chaos, who are better attuned to the specifics of what is unfolding, with greater decisionmaking authority. This is related to the concept of near decomposability (complexity science). This decentralized model works well during periods of instability, but can lead to inefficiencies during stable times. During rebuilding, you may not want decentralization as you try to efficiently use limited resources.

Lastly, modularity and distributed systems are often touted as resilience solutions, and indeed they can have benefits under the right circumstances. Network science teaches us that decentralized systems shift the nature of the system from one big producer supplying many consumers (vulnerable to attack) to many small producers supplying many consumers (resilient). Distributed systems link decentralized and modular assets together so that greater cognition and functionality are achieved. But caution should be used in moving toward purely decentralized systems for resilience, as there are situations where resilience is more closely realized with centralized configurations.

Fundamentally, as the authors note, Ukraine is showing us how to build and operate infrastructure in rapidly changing and chaotic environments. But it is also important to recognize that infrastructure in regions not facing warfare is likely to experience shifts between chaotic (e.g., extreme weather events, cyberattacks, failure due to aging) and stable conditions. This cycling necessitates being able to pivot infrastructure organizations and their technologies between chaos and non-chaos innovation. The capabilities produced from these innovation sets become the cornerstone for agile and flexible infrastructure to respond at pace and scale to known challenges and perhaps, most importantly, to surprise.

Professor of Civil, Environmental, and Sustainable Engineering

Arizona State University

Coauthor, with Braden Allenby, of The Rightful Place of Science: Infrastructure and the Anthropocene

In their essay, Daniel Armanios, Jonas Skovrup Christensen, and Andriy Tymoshenko provide insightful analysis of the Ukraine conflict and how the Ukrainian people are able to manage the crisis. Their recounting reminds me of an expression frequently used in the US Marines: improvise, adapt, and overcome. Having lived and worked for many years in Ukraine, and having returned for multiple visits since the Russian invasion, I am convinced that while the conflict will be long, Ukraine will succeed in the end. The five propositions the authors lay out as the key to success are spot on.

Ukraine’s common goal of bringing its people together (authors’ Proposition 1), along with the Slavic culture and a particular legacy of the Soviet system, combine to form the fundamental core of why the Ukrainian people not only survive but often flourish during times of crisis. Slavic people are, by my observation, tougher and more resilient than the average. Some will call it “grit,” some may call it “stoic”—but make no mistake, a country that has experienced countless invasions, conflicts, famines, and other hardships imbues its people with a special character. It is this character that serves as the cornerstone of their attitude and in the end their response. Unified hard people can endure hard things.

A point to remember is that Ukraine, like most of the former Soviet Union, benefits from a legacy infrastructure based on redundancy and simplicity. This is complementary to the authors’ Proposition 5 (a modular, distributed, and renewable energy infrastructure is more resilient in time of crisis). It was Vladimir Lenin who said, “Communism equals Soviet power plus the electrification of the whole country.” As a consequence, the humblest village in Ukraine has some form of electricity, and given each system’s robust yet simple connection, it is easily repaired when broken. Combine this with distributed generation (be it gensets or wind, solar, or some other type of renewable energy) and you have built-in redundancy.

During Soviet times, everyone needed to develop a “work-around” to source what they sometimes needed or wanted. Waiting for the Soviet state to supply something could take forever, if it ever happened at all. As a consequence, there were microentrepreneurs everywhere who could source, build, or repair just about everything, either for themselves or their neighbors. This system continues to flourish in Ukraine, and the nationalistic sentiment pervading the country makes it easier to recover from infrastructure damages. As the authors point out in Proposition 3, decentralized management allows for a more agile response.

The “lessons learned” from the ongoing conflict, as the authors describe, include, perhaps most importantly, that learning from previous incidents can help develop a viable incident response plan. Such planning, however, should be realistic and focus on the “probable” and not so much on the “possible,” since every situation and plan is resource-constrained to some degree. The weak link in any society is the civilian infrastructure, and failure to ensure redundancy and rapid restoration is not an option. Ukraine is showing the world how it can be accomplished.

Supervisory Board Member

Ukrhydroenergo

Ground Truths Are Human Constructions

Artificial intelligence algorithms are human-made, cultural constructs, something I saw first-hand as a scholar and technician embedded with AI teams for 30 months. Among the many concrete practices and materials these algorithms need in order to come into existence are sets of numerical values that enable machine learning. These referential repositories are often called “ground truths,” and when computer scientists construct or use these datasets to design new algorithms and attest to their efficiency, the process is called “ground-truthing.”

Understanding how ground-truthing works can reveal inherent limitations of algorithms—how they enable the spread of false information, pass biased judgments, or otherwise erode society’s agency—and this could also catalyze more thoughtful regulation. As long as ground-truthing remains clouded and abstract, society will struggle to prevent algorithms from causing harm and to optimize algorithms for the greater good.

Ground-truth datasets define AI algorithms’ fundamental goal of reliably predicting and generating a specific output—say, an image with requested specifications that resembles other input, such as web-crawled images. In other words, ground-truth datasets are deliberately constructed. As such, they, along with their resultant algorithms, are limited and arbitrary and bear the sociocultural fingerprints of the teams that made them.

Ground-truth datasets fall into at least two subsets: input data (what the algorithm should process) and output targets (what the algorithm should produce). In supervised machine learning, computer scientists start by building new algorithms using one part of the output targets annotated by human labelers, before evaluating their built algorithms on the remaining part. In the unsupervised (or “self-supervised”) machine learning that underpins most generative AI, output targets are used only to evaluate new algorithms.
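
To make this division concrete, here is a minimal, hypothetical sketch in Python; the dataset, names, and split are invented for illustration and are not drawn from any particular team’s practice:

    # Hypothetical sketch: one ground-truth dataset, two ways of using it.
    import random

    # Ground truth: inputs paired with human-annotated output targets.
    ground_truth = [(f"input_{i}", f"target_{i}") for i in range(1000)]
    random.shuffle(ground_truth)

    # Supervised learning: fit a model on one part of the annotated targets,
    # then evaluate it on the held-out remainder.
    train_pairs, eval_pairs = ground_truth[:800], ground_truth[800:]

    # Self-supervised learning: training sees only the inputs; the annotated
    # targets are reserved for evaluating the finished model.
    train_inputs = [x for x, _ in ground_truth]
    eval_targets = eval_pairs

In both cases, what counts as a correct answer is fixed in advance by whoever assembled the pairs, which is why the resulting algorithms carry their makers’ choices with them.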

Most production-grade generative AI systems are assemblages of algorithms built from both supervised and self-supervised machine learning. For example, an AI image generator depends on self-supervised diffusion algorithms (which create a new set of data based on a given set) and supervised noise reduction algorithms. In other words, generative AI is thoroughly dependent on ground truths and their socioculturally oriented nature, even if it is often presented—and rightly so—as a significant application of self-supervised learning.

Why does that matter? Much of AI punditry asserts that we live in a post-classification, post-socially constructed world in which computers have free access to “raw data,” which they refine into actionable truth. Yet data are never raw, and consequently actionable truth is never totally objective.

Algorithms do not create so much as retrieve what has already been supplied and defined—albeit repurposed and with varying levels of human intervention. This observation rebuts certain promises around AI and may sound like a disadvantage, but I believe that it could instead be an opportunity for social scientists to begin new collaborations with computer scientists. This could take the form of a professional social activity in which people work together to describe the ground-truthing processes that underpin new algorithms, and so help make them more accountable and worthy.

AI Lacks Ethics Checks for Human Experimentation

Following Nazi medical experiments in World War II and outrage over the US Public Health Service’s four-decade-long Tuskegee syphilis study, bioethicists laid out frameworks, such as the 1947 Nuremberg Code and the 1979 Belmont Report, to regulate medical experimentation on human subjects. Today social media—and, increasingly, generative artificial intelligence—are constantly experimenting on human subjects, but without institutional checks to prevent harm.

In fact, over the last two decades, individuals have become so used to being part of large-scale testing that society has essentially been configured to produce human laboratories for AI. Examples include experiments with biometric and payment systems in refugee camps (designed to investigate use cases for blockchain applications), urban living labs where families are offered rent-free housing in exchange for serving as human subjects in a permanent marketing and branding experiment, and a mobile money research and development program where mobile providers offer their African consumers to firms looking to test new biometric and fintech applications. Originally put forward as a simpler way to test applications, the convention of software as “continual beta” rather than more discrete releases has enabled business models that depend on the creation of laboratory populations whose use of the software is observed in real time.

This experimentation on human populations has become normalized, and forms of AI experimentation are touted as a route to economic development. The Digital Europe Programme launched AI testing and experimentation facilities in 2023 to support what the program calls “regulatory sandboxes,” where populations will interact with AI deployments in order to produce information for regulators on harms and benefits. The goal is to allow some forms of real-world testing for smaller tech companies “without undue pressure from industry giants.” It is unclear, however, what can pressure the giants and what constitutes a meaningful sandbox for generative AI; given that it is already being incorporated into the base layers of applications we would be hard-pressed to avoid, the boundaries between the sandbox and the world are unclear.

Generative AI is an extreme case of unregulated experimentation-as-innovation, with no formal mechanism for considering potential harms. These experiments are already producing unforeseen ruptures in professional practice and knowledge: students are using ChatGPT to cheat on exams, and lawyers are filing AI-drafted briefs with fabricated case citations. Generative AI also undermines the public’s grip on the notion of “ground truth” by hallucinating false information in subtle and unpredictable ways.

These two breakdowns constitute an abrupt removal of what philosopher Regina Rini has termed “the epistemic backstop”—that is, the benchmark for considering something real. Generative AI subverts information-seeking practices that professional domains such as law, policy, and medicine rely on; it also corrupts the ability to draw on common truth in public debates. Ironically, that disruption is being classed as success by the developers of such systems, emphasizing that this is not an experiment we are conducting but one that is being conducted upon us.

This is problematic from a governance point of view because much of current regulation places the responsibility for AI safety on individuals, whereas in reality they are the subjects of an experiment being conducted across society. The challenge this creates for researchers is to identify the kinds of rupture generative AI can cause and at what scales, and then translate the problem into a regulatory one. Then authorities can formalize and impose accountability, rather than creating diffuse and ill-defined forms of responsibility for individuals. Getting this right will guide how the technology develops and set the risks AI will pose in the medium and longer term.

Much like what happened with biomedical experimentation in the twentieth century, the work of defining boundaries for AI experimentation goes beyond “AI safety” to AI legitimacy, and this is the next frontier of conceptual social scientific work. Sectors, disciplines, and regulatory authorities must work to update the definition of experimentation so that it includes digitally enabled and data-driven forms of testing. It can no longer be assumed that experimentation is a bounded activity with impacts only on a single, visible group of people. Experimentation at scale is frequently invisible to its subjects, but this does not render it any less problematic or absolve regulators from creating ways of scrutinizing and controlling it.

Generative AI Is a Crisis for Copyright Law

Generative artificial intelligence is driving copyright into a crisis. More than a dozen copyright cases about AI were filed in the United States last year, up severalfold from all filings from 2020 to 2022. In early 2023, the US Copyright Office launched the most comprehensive review of the entire copyright system in 50 years, with a focus on generative AI. Simply put, the widespread use of AI is poised to force a substantial reworking of how, where, and to whom copyright should apply.

Starting with the 1710 British statute, “An Act for the Encouragement of Learning,” Anglo-American copyright law has provided a framework around creative production and ownership. Copyright is even embedded in the US Constitution as a tool “to promote the Progress of Science and useful Arts.” Now generative AI is destabilizing the foundational concepts of copyright law as it was originally conceived.

Typical copyright lawsuits focus on a single work and a single unauthorized copy, or “output,” to determine if infringement has occurred. When it comes to the capture of online data to train AI systems, the sheer scale and scope of these datasets overwhelms traditional analysis. The LAION-5B dataset, used to train the AI image generator Stable Diffusion, contains 5 billion images and text captions harvested from the internet, while CommonPool (a collection of datasets released by the nonprofit LAION in April to democratize machine learning) offers 12.8 billion images and captions. Generative AI systems have used datasets like these to produce billions of outputs.

For many artists and designers, this feels like an existential threat. Their work is being used to train AI systems, which can then create images and texts that replicate their artistic style. But to date, no court has considered AI training to be copyright infringement: following the Google Books case in 2015, which assessed scanning books to create a searchable index, US courts are likely to find that training AI systems on copyrighted works is acceptable under the fair use exemption, which allows for limited use of copyrighted works without permission in some cases when the use serves the public interest. It is also permitted in the European Union under the text and data mining exception of EU digital copyright law.

Copyright law has also struggled with authorship by AI systems. Anglo-American law presumes that a work has an “author” somewhere. To encourage human creativity, some authors need the economic incentive of a time-limited monopoly on making, selling, and showing their work. But algorithms don’t need incentives. So, according to the US Copyright Office, they aren’t entitled to copyright. The same reasoning has applied in other cases involving nonhuman authors, including one in which a macaque took selfies using a nature photographer’s camera. Generative AI is the latest in a line of nonhumans deemed unfit to hold copyright.

Nor are human prompters likely to have copyrights in AI-generated work. The algorithms and neural net architectures behind generative AI produce outputs that are inherently unpredictable, and any human prompter has less control over a creation than the model does.

Where does this leave us? For the moment, in limbo. The billions of works produced by generative AI are unowned and can be used anywhere, by anyone, for any purpose. Whether a ChatGPT novella or a Stable Diffusion artwork, output now exists as unclaimable content in the commercial workings of copyright itself. This is a radical moment in creative production: a stream of works without any legally recognizable author.

There is an equivalent crisis in proving copyright infringement. Historically, proving infringement has been relatively straightforward, but when a generative AI system produces infringing content, be it an image of Mickey Mouse or Pikachu, courts will struggle with the question of who is initiating the copying. The AI researchers who gathered the training dataset? The company that trained the model? The user who prompted the model? It’s unclear where agency and accountability lie, so how can courts order an appropriate remedy?

Copyright law was developed by eighteenth-century capitalists to intertwine art with commerce. In the twenty-first century, it is being used by technology companies to allow them to exploit all the works of human creativity that are digitized and online. But the destabilization around generative AI is also an opportunity for a more radical reassessment of the social, legal, and cultural frameworks underpinning creative production.

What expectations of consent, credit, or compensation should human creators have going forward, when their online work is routinely incorporated into training sets? What happens when humans make works using generative AI that cannot have copyright protection? And how does our understanding of the value of human creativity change when it is increasingly mediated by technology, be it the pen, paintbrush, Photoshop, or DALL-E?

It may be time to develop concepts of intellectual property with a stronger focus on equity and creativity as opposed to economic incentives for media corporations. We are seeing early prototypes emerge from the recent collective bargaining agreements for writers, actors, and directors, many of whom lack copyrights but are nonetheless at the creative core of filmmaking. The lessons we learn from them could set a powerful precedent for how to pluralize intellectual property. Making a better world will require a deeper philosophical engagement with what it is to create, who has a say in how creations can be used, and who should profit.

Science Lessons from an Old Coin

In “What a Coin From 1792 Reveals About America’s Scientific Enterprise” (Issues, Fall 2023), Michael M. Crow, Nicole K. Mayberry, and Derrick M. Anderson make an adroit analogy between the origins of the Birch Cent and the two sides of the nation’s research endeavors, namely democracy and science. The noise and seeming dysfunction in the way science is adjudicated and revealed is, they say, a feature and not a bug.

I agree. I have written extensively about how scientists should embrace their humanity. That means we express emotions when we are ignored by policymakers, we have strong convictions and therefore are subject to motivated reasoning, and we make both intentional and inadvertent errors. Efforts to curb this humanity have all failed. We are not going to silence those who are passionate about science—nor should we. Why would someone study climate change unless they are passionate about the fact that it’s an existential crisis? We want and need that passion to drive effort and creativity. Does this make scientists outspoken and subject to—at least initially—looking for evidence that supports their passion? Of course. And does that same humanity mean that errors can appear in scientific papers that were missed by the authors, editors, and reviewers? Also yes.

We are not going to silence those who are passionate about science—nor should we. Why would someone study climate change unless they are passionate about the fact that it’s an existential crisis?

There’s a solution to this that also embraces the messy and glorious vision presented by Crow et al. And that is not to quell scientists’ passion and humanity, but rather to better explain and demonstrate that science operates within a system that ultimately corrects for human frailty. This requires explaining that scientists are competitive—another human trait—and that this competition leads to arguments about data and papers that converge on the right answer, even when motivated reasoning may have been there to start with. It also requires courageous and forthright correction of the scientific record when errors have been made for any reason. Science is seriously falling short on this right now. The correction and retraction of scientific papers have become far too contentious—often publicly—and stigma is associated with these actions. This stigma arises from the perception that all errors are due to deliberate misconduct, even when journals are explicit that correction of the record does not imply fraud.

This must change. The public must experience—and perceive—that science is honorably self-correcting. That will require hard changes in scientists’ attitude and execution when concerns are raised about published papers. But fixing this is going to be a lot easier than lowering the noise level. And as the authors point out, that noise is a feature, not a bug, and therefore should be celebrated.

Editor-in-Chief of Science

Professor of Chemistry and Medicine

George Washington University

In their engaging article, Michael M. Crow, Nicole K. Mayberry, and Derrick M. Anderson rightly point to the centrality of science in US history—and to how much “centrality” has meant entanglement in controversy, not clarity of purpose.

The motto on the Birch Cent, “Liberty, Parent of Science and Industry,” brings out the importance of freedom of inquiry. This is not readily separable from freedom of interpretation and even freedom to disregard. The authors quote the slogan “follow the science,” which attempts to counter the recent waves of distrust and denial. But while science may inform policy, it doesn’t dictate it. Liberty also signals the importance of political debate over whether and how to follow science.

Science and technology developed in a dialectical relationship between centralization and decentralization, large-scale and small, elite domination and democratic opportunities.

In 1792, science was largely a small-scale craft enterprise. Over time, universities, corporations, government agencies, and markets all became crucial. A science and technology system developed, greatly increasing support for science but also shaping which possible advances in knowledge were pursued. Potential usefulness was privileged, as were certain sectors, such as defense and health, and potential for profit. Different levels of risk and “disutility” were tolerated. The patent system developed not only to share useful knowledge but, as Crow and his coauthors emphasize, to secure private property rights. All this complicated and limited the liberty of scientific inquiry.

Comparing the United States to the United Kingdom, the authors sensibly emphasize the contrast of egalitarian to aristocratic norms. But the United States was not purely egalitarian. The Constitution is full of protections for inequality and protections from too much equality. Conversely, UK science was not all aristocratic nor entirely top-down and managed. Though the Royal Society secured formal recognition under King Charles II, it was created in the midst of (and influenced by) the English Civil War. Bottom-up self-organization among scientists was important. Most were elite, but not all statist. And the same went for a range of other self-organized groups, such as Birmingham’s Lunar Men, who shared a common interest in experiment and invention. These groups joined in creating “invisible colleges” that contributed to state power but were not controlled by it. Even more basically, perhaps, the authors’ contrast of egalitarian to aristocratic norms implies a contrast of common men to elites that obscures the rising industrial middle class. It was no accident the Lunar Men were in the English Midlands.

Crow and his coauthors correctly stress that neither scientific knowledge nor technological innovation has simply progressed along a linear path. In both the United States and the United Kingdom, science and technology developed in a dialectical relationship between centralization and decentralization, large-scale and small, elite domination and democratic opportunities. Universities, scientific societies, and indeed business corporations all cut both ways. They were upscaling and centralizing compared with autonomous, local craft workshops. They worked partly for honor and partly for profit. But they also formed intermediate associations in relation to the state and brought dynamism to local communities and regions. Universities joined science productively to critical and humanistic inquiry. Liberty remained the parent of science and industry because institutional supports remained diverse, allowing for creativity, debate, and exploration of different possible futures. There are lessons here for today.

University Professor of Social Sciences

Arizona State University

Securing Semiconductor Supply Chains

Global supply chains, particularly in technologies of strategic value, are undergoing a remarkable reevaluation as geopolitical events weigh on the minds of decisionmakers across government and industry. The rise of an aggressive and revisionist China, a devastating global pandemic, and the rapid churn of technological advancement are among the factors prompting a dramatic rethinking of the value of lean, globally distributed supply chains.

These complex supply networks evolved over several decades of relative geopolitical stability to capture the efficiency gains of specialization and trade on a global scale. Yet in today’s world, efficiency must be recast in terms of reliable and resilient supply chains better adapted to geopolitical uncertainties rather than purely on the basis of lowest cost.

Indeed, nations worldwide have belatedly discovered a crippling lack of redundancy in supply chains necessary to produce and distribute products essential to their economies and welfare, including such diverse goods as vaccines and medical supplies, semiconductors and other electronic components, and the wide variety of technologies reliant on semiconductors. A drive to “rewire” these networks must balance the manifest advantages of globally connected innovation and production with the need for improved national and regional resiliency. This would include more investment in traditional technologies—for example, a more robust regional electrical grid in Texas, whose failure contributed to the supply disruption of automotive chips that Abigail Berger, Hassan Khan, Andrew Schrank, and Erica R. H. Fuchs describe in “A New Policy Toolbox for Semiconductor Supply Chains” (Issues, Summer 2023).

Efficiency must be recast in terms of reliable and resilient supply chains better adapted to geopolitical uncertainties rather than purely on the basis of lowest cost.

Of course, given its globalized operations, the semiconductor industry is at the forefront of these challenges. In particular, there is a need to distribute risks of single-point failures, such as those found in the global concentration of semiconductor manufacturing in East Asia. Taiwan and South Korea, which together account for roughly half of global semiconductor fabrication capacity, sit astride major geopolitical and geological fault lines, with the dangers of the latter often underestimated.

Recent investments to renew semiconductor manufacturing capacity in the United States are a key element of this rewiring. Through the CHIPS for America Act of 2021, lawmakers have authorized $52 billion to support restoring US capacity in advanced chip manufacturing, with $39 billion in subsidies for the construction of fabrication plants, or “fabs,” backed by substantial tax credits, and roughly $12 billion for related advanced chip research and development initiatives.

Berger and her colleagues argue cogently that it may also be possible to design greater resiliency directly into semiconductor chips. In some cases, greater standardization in chip architecture may allow some chips to be built at multiple fabs, reducing “foundry lock-in.” Such gains will depend on trusted networks among multiple firms as well as governments of US allies and strategic partners—although sorting the practical realities of commercial and national competition in a rapidly innovating industry that marches to the cadence of Moore’s Law will be challenging. The authors rightly point out that focusing on distinct market segments with similar use cases may offer win-win opportunities, but these, too, will require incentives to drive cooperation.

It is clear that global supply chains need a greater level of resiliency, not least through greater geographic dispersion of production across the supply chain. But whether generated by greater standardization, stronger trusted relationships, or the redistribution of assets, the continued national economic security of the United States and its allies depends on a comprehensive, cooperative, and steady implementation of this rewiring. The authors propose a novel approach that should be pursued, but the broader rewiring will not happen quickly or easily. We still need to move forward with ongoing incentives for industry, more cooperative relationships, and major new investments in talent. We are not done. We need to treat semiconductors as we do nuclear energy: an enterprise that demands sustained and substantial commitments of funds and policy attention.

Senior Fellow and Director, Renewing American Innovation

Center for Strategic and International Studies

Time for an Engineering Biennale

Guru Madhavan’s “Creative Intolerance” (Issues, Summer 2023) is exceptionally rich in compelling metaphors and potent messages. I am enthusiastic about the idea of an engineering biennale to showcase innovations and provoke discussions about specific projects and the methods of engineering.

I am enthusiastic about the idea of an engineering biennale to showcase innovations and provoke discussions about specific projects and the methods of engineering.

The Venice Biennale Architettura 2023 that the author highlights, which impressed me with its scale and diversity, provides an excellent model for an Engineering Biennale. Could the US Department of Energy Solar Decathlon be scaled up? Could the Design Museum in London play a role? Maybe multiple design showcases—such as those at the University of British Columbia; the Jacobs Institute for Design Innovation at the University of California, Berkeley; or the MIT Design Lab—could grow into something larger?

The support for design could expand the role of the National Academies of Sciences, Engineering, and Medicine by building bridges with an increasing number of researchers in this field, ultimately leading to a National Academy of Design.

Professor Emeritus

University of Maryland

Member, National Academy of Engineering

The Strength of Weak Ties

“It was the best of times; it was the worst of times,” Charles Dickens famously began in A Tale of Two Cities. So it was for scientific research in early 2020 as a number of forces came together to create a unique set of opportunities and challenges.

First, the COVID-19 pandemic itself. The disease was so contagious and so serious that physical, human-to-human proximity was canceled except for interactions essential to life. Laboratories closed; lecture theaters and libraries lay empty; people barely left their homes.

Second, the emergence of technology-mediated collaboration. Video conferencing became the new meeting space; social media were repurposed for exchanging real-time information and ideas; and digital architects put their skills to building bespoke platforms.

Third, the scientific world united around a common purpose: generating the evidence base that would end the pandemic. Goodwill and reciprocity ruled. We forgot about academic league tables, promotion bottlenecks, h-indices, or longstanding rivalries. We switched gear from competing to collaborating. We pooled our data and our expertise for the good of humanity (and, perhaps, with a view to saving ourselves and our loved ones). And not to be overlooked, the red tape of research governance was cut. Our institutions and funders allowed us—indeed, required us—to divert our funds, equipment, and brainpower to the only work that now mattered. Journal paywalls were torn down. It became possible to build best teams from across the world, to get fast-track ethics approval within hours rather than weeks, to generate and test bold hypotheses, to publish almost instantly, and to replicate studies quickly when the science required it. The downside, of course, was the haystack of preprints that nobody had time to peer-review, but that’s a subject for another day.

The scientific world united around a common purpose: generating the evidence base that would end the pandemic.

In “How to Catalyze a Collaboration” (Issues, Summer 2023), Annamaria Carusi, Laure-Alix Clerbaux, and Clemens Wittwehr describe one initiative that emerged from those strange, wonderful, and terrifying times. The project, dubbed CIAO—short for Modelling COVID-19 Using the Adverse Outcome Pathway Framework—happened because a handful of toxicologists and virologists came together, on a platform designed for exchanging pictures of kittens, to solve an urgent puzzle. Through 280-character tweets and judiciously pitched hashtags, they began to learn each other’s language, reasoned collectively and abductively, and brought in others with different skills as the initiative unfolded.

Online collaborative groups need two things to thrive: a strong sense of common purpose, and a tight central administration (to do the inevitable paperwork, for example). In addition, as the sociologist Mark Granovetter has observed, such groups offer “the strength of weak ties”—people we hardly know are often more useful to us than people we are close to (because we already have too much in common with the latter). An online network tends to operate both through weak ties (the “town square,” where scientists from different backgrounds get to know each other a bit better) and through stronger ties (the “clique,” where scientists who find they have a lot in common peel off to share data and write a paper together).

The result, Carusi and her colleagues say, was 11 peer-reviewed papers and explanations of some scientific mysteries—such as why people with COVID-19 lose their sense of smell. Congratulations to the CIAO team for making the best of the “worst of times.”

Professor of Primary Care Health Sciences

University of Oxford, United Kingdom

Annamaria Carusi, Laure-Alix Clerbaux, and Clemens Wittwehr candidly and openly describe their technical and soft-skill experiences in fostering a global collaboration to address COVID-19 during the pandemic, drawing from an existing Adverse Outcome Pathway approach developed within the Organisation for Economic Co-operation and Development. The collaborative, nicknamed CIAO (by the Italian members who would like to say, “Ciao COVID!”), found much-needed structure in the integrative construct of adverse outcome pathways (AOPs), or structured representations of biological events. In particular, one tool the researchers adopted—the AOP-Wiki—provided an increasingly agile web-based application that offered contributors a place and space to work on project tasks regardless of time zone. In addition, the AOP structure and AOP-Wiki both have predefined (and globally accepted) standards that obviate the need for semantics debates.

Yet the technical challenges were meager compared with the social challenges of people “being human” and the practical challenges of bringing people together when the world was essentially closed and physical interactions very limited. Carusi, Clerbaux, Wittwehr and their colleagues stepped up during this time of crisis by exercising not only scientific ingenuity but also social and emotional intelligence. They helped bring about, in essence, a paradigm shift. There was no choice but to abandon traditional in-person approaches that were no longer feasible and to embrace virtual and web-based applications. Collaborative leads leveraged their own social networks in virtual space to rapidly make connections that critically helped the AOP framework become the proverbial (and virtual) “campfire” for bringing the collaborative together.

Carusi, Clerbaux, Wittwehr and their colleagues stepped up during this time of crisis by exercising not only scientific ingenuity but also social and emotional intelligence. They helped bring about, in essence, a paradigm shift.

Importantly, this work was not constrained by geography or language. For instance, the AOP-Wiki allowed for asynchronous project management by people living across 20 countries, breaking down language barriers through the incorporation of a globally accepted lingua franca for documenting and reporting COVID-19 biological pathways. Data entered into the AOP-Wiki were controlled using globally accepted standards and data management practices, such as controlled data extraction fields, vocabularies and ontologies, and FAIR (findable, accessible, interoperable, and reusable) data standards. These ingredients provided the collaborative its perfect campfire for cooking up COVID-19 pathways. All that was needed were the “enzymes” to get it all digested. That’s where the authors stepped in, gently “simmering” the collaborative toward a banquet of digitally documented COVID-19 web-based applications.

The collaborative’s resultant work was the embodiment of the adage “where there is a will, there is a way.” The group’s way was greatly facilitated by a willingness to accept and leverage new(er) technology and methods (i.e., web applications and digital data) that give humans—and their computers—flexibility and efficiency across the globe. Novel virtual/digital models enhanced the collaborative’s experience. Notably, the collaborative’s acceptance and use of the AOP framework and the AOP-Wiki’s data management interface means the COVID-19 AOPs are digitally documented, readable by both machines and humans, and globally accessible. The AOP framework has not only catalyzed the collaboration but also prospectively catalyzes the ability to use generative artificial intelligence to find and refine additional data with similar characteristics. This means the COVID-19 AOPs may evolve with the virus, updating over time as new information is automatically ingested.

Toxicologist

US Environmental Protection Agency

Centering Equity and Inclusion in STEM

As the United States seeks to tap every available resource for talent and innovation to keep pace with global competition, institutional leadership in building research capacity at historically Black colleges and universities (HBCUs) and other minority-serving institutions (MSIs) is essential, as Fay Cobb Payton and Ann Quiroz Gates explain in “The Role of Institutional Leaders in Driving Lasting Change in the STEM Ecosystem” (Issues, Summer 2023). Transformational leadership, such as that displayed by Chancellor Harold Martin as North Carolina Agricultural and Technical State University elevates itself to the Carnegie R1 designation of “very high research activity,” and by former President Diana Natalicio in positioning the University of Texas at El Paso as an R1 institution, provides role models for other institutions.

Payton and Gates argue elegantly for utilization of the National Science Foundation’s Eddie Bernice Johnson INCLUDES Theory of Change model. For fullest effect, I suggest that this model must include two additional elements for institutional leaders to consider: the role of institutional board members and the role of minority technical organizations (MTOs). To achieve improved and lasting research capacity, the boards at HBCUs and MSIs must view research as part of the institutional DNA. Many of these institutions are in the midst of transforming from primarily teaching institutions to both teaching and research universities. For public institutions, the governors or oversight authorities should appoint board members with research experience as well as members who have significant influence in the business community, since one outcome of university research is technology commercialization. HBCUs and MSIs need board members with “juice”—because, as the saying goes, “if you’re not at the table, you’re on the menu.”

To achieve improved and lasting research capacity, the boards at HBCUs and MSIs must view research as part of the institutional DNA.

Finally, as the nation witnesses increasing enrollments at HBCUs and MSIs, the role of minority technical organizations cannot be overstated. If we are to achieve the National Science Board’s Vision 2030 of a robust, diverse, domestic workforce in science, technology, engineering, and mathematics—the STEM fields—these organizations are crucial. MTOs such as the National Organization for the Professional Advancement of Black Chemists and Chemical Engineers and the Society for the Advancement of Chicanos/Hispanics and Native Americans in Science provide role models for STEM students, hold annual conferences for students and professionals, and foster retention of Black and brown students in the STEM fields. As part of the NSF INCLUDES ecosystem, let’s also not forget the major events that recognize outstanding individuals at HBCUs and MSIs, such as the Black Engineer of the Year awards and the Great Minds in STEM annual conferences.

Vice President for Research, University of the District of Columbia

Vice Chair, National Science Board

As the president of a national foundation focused exclusively on postsecondary education, I was especially intrigued with Fay Cobb Payton and Ann Quiroz Gates’s ambitious recommendations for the philanthropic community. The authors challenge traditional foundations to make bigger and lengthier investments in higher education, especially minority-serving institutions (MSIs). At ECMC Foundation, we do just this. By making large, multi-year investments in projects led by public two- and four-year colleges and universities, intermediaries and even start-ups through our program-related investments, we aim to advance wholesale change for broad swaths of students, particularly those who come from underserved backgrounds.

One project worth noting is the Transformational Partnerships Fund. Along with support from Ascendium Education Group, the Kresge Foundation, and the Michael and Susan Dell Foundation, we have created a fund that provides support to higher education leaders interested in recalibrating the strategic trajectory of their institutions in service to students. Such recalibrations might be mergers, course sharing, or collaborations that streamline back-end administrative functions. Although this fund does not offer large grants or long-term support, it nonetheless helps higher education leaders understand more deeply how they need to respond to the challenges that lie ahead for their colleges and universities.

Barring incentives that might make significant change possible, college leaders often stick with the status quo, preferring tactical, stop-gap measures rather than strategic reform.

Payton and Gates advance a compelling moral argument about the need to better support MSIs and the students they serve, especially in STEM-related majors. What they do not emphasize, however, are specific institutional incentives that will drive lasting improvements in diversity and inclusion. Presidents and chancellors report to trustees, whose primary fiduciary obligation is to keep their institutions in business. Barring incentives that might make significant change possible, college leaders often stick with the status quo, preferring tactical, stop-gap measures rather than strategic reform.

Arguments for institutional change that appeal to our better angels, although earnest and well-intentioned, have failed thus far to significantly alter the postsecondary education landscape for our most vulnerable students. The consequence is that too many students choose to leave before completing their degree. According to the National Student Clearinghouse Research Center, the population of students with some college and no credential has reached 40.4 million. The loss of talent in STEM-related and other disciplines is staggering, and a reversal of institutional inertia is required to alter course.

Still, the authors offer a theory of change that makes a positive, forward-looking contribution to our thinking about institutional transformation. I eagerly await the authors’ future work as they translate their powerful worldview into a bold set of recommendations that offer up key incentives for higher education leaders to employ as they address the challenges their institutions face in postpandemic America.

President, ECMC Foundation

Fay Cobb Payton and Ann Quiroz Gates summarize the critical challenges and opportunities ahead for science, technology, engineering, and mathematics education. The STEM ecosystem is vast, complex, and stubbornly anchored in inertia. The authors present a compelling vision for the future: institutional excellence will be defined by inclusion, actions will be centered on accountability, and the effectiveness of leadership will be measured by the ability to drive systemic and sustained culture change.

Achieving inclusive excellence begins with a commitment to change the STEM culture. Here is a to-do list requiring skillful leadership:

  • Redefine the STEM curriculum, especially at the introductory level.
  • Resist the impulse to require STEM students to go too deep too soon. Instead, encourage them to explore the arts, humanities, and social sciences.
  • Review admissions criteria and STEM course prerequisites.
  • Reward instructors and advisers who practice the skills of equitable and inclusive teaching and mentoring.
  • Increase representation of persons heretofore excluded from STEM by valuing relevant lived experiences more than pedigree.

We yearn for leaders with the vision, strength, and patience to drive lasting culture change. We must nurture the next generation of leaders so that today’s modest changes will be amplified and sustained.

Inclusive excellence, already challenging, is made more difficult because of the pressures exerted by powerful forces. Many institutions succumb to the false promise of external validation based on criteria that are contradictory to the values of equity and inclusion. The current system selects for competition instead of community, exclusion instead of inclusion, a white-centered culture instead of equity. The “very high research activity” (R1) classification for institutions is based on external funding, the number of PhD degrees and postdoctoral researchers, and citations to published work. In the current US News and World Report “Best Colleges” ranking, half of an institution’s score is based on just four (of 24) criteria: six-year graduation rates, reputation, standardized test scores, and faculty salaries.

Many institutions succumb to the false promise of external validation based on criteria that are contradictory to the values of equity and inclusion.

It is time to disrupt the incentives system, as the medical scholar Simon Grassmann recently argued in Jacobin magazine. It is wrong to believe that quantitative metrics such as the selectivity of admissions and the number of research grants are an accurate measure of the quality of an institution. Instead, let us develop the means to recognize institutions that make a genuine difference for their students and employees—call it an “Institutional Delta.” Students will learn and instructors will thrive when the learning environment is centered on belonging and the campus commits to the success of everyone. Finding reliable ways to measure the Institutional Delta and assess institutional culture will require new qualitative approaches and courageous leadership. An important lever is the accreditation process, in which accrediting organizations can explicitly evaluate how well an institution’s governing board understands and encourages equity and inclusion.

The STEM culture must be disrupted so that it is centered on equity and inclusion. This requires committed leaders with the courage to battle the contradictions of an outdated rewards system. Culture disruptors must be supported by governing boards and accreditation agencies. Let leaders lead!

Senior Director, Center for the Advancement of Science Leadership and Culture

Howard Hughes Medical Institute

Fay Cobb Payton and Ann Quiroz Gates emphasize that systemic change to raise attainment of scientists from historically underrepresented backgrounds must engage stakeholders at multiple levels and from multiple organizations. These stakeholders include positional and grassroots leaders in postsecondary institutions, industry leaders, and public and private funders. The authors posit that “revisiting theories of change, understanding the way STEM academic ecosystems work, and fully accounting for the role that leadership plays in driving change and accountability are all necessary to transform a system built upon historical inequities.”

The National Academies of Sciences, Engineering, and Medicine report Minority Serving Institutions: America’s Underutilized Resource for Strengthening the STEM Workforce, released in 2019, highlighted that such institutions graduate disproportionately high shares of students from minoritized backgrounds in STEM fields. The report found that minority-serving institutions (MSIs) receive significantly less federal funding than other institutions and recommended increased investment in MSIs for their critical work in educating minoritized STEM students. To reinforce this work, the report also called for expanding “mutually beneficial partnerships” between MSIs and other higher education institutions, industry stakeholders, and public and private funders.

Even research that has attempted to link higher education organizational studies with STEM education reform has primarily been conducted in highly selective, historically and predominantly white institutions that are predicated on exclusion.

Payton and Gates rightfully recommend that strengthening the ecosystem to diversify science should “build initiatives on MSIs’ successes.” Yet the National Academies report on MSIs noted that research on why and how some MSIs are so successful in educating minoritized STEM students has been scant. Instead, most research on this topic has been conducted in highly selective, historically white institutions. Paradoxically, then, most of this research has neglected the institutional contexts that many racially minoritized STEM students navigate, including the MSI contexts in which they are often more likely to succeed.

The authors also call to revisit organizational theories of change as a step toward transforming STEM ecosystems in more equitable directions. Yet the social science research on higher education organizational change has historically been disconnected from research on improving STEM education. The American Association for the Advancement of Science report Levers for Change, released in 2019, highlighted this very disconnection as a key barrier to reform in undergraduate STEM education.

Even research that has attempted to link higher education organizational studies with STEM education reform has primarily been conducted in highly selective, historically and predominantly white institutions that are predicated on exclusion. Limited organizational knowledge about how MSIs educate minoritized students, and about how that knowledge can be adapted to different institutional contexts, has hindered the development of a STEM ecosystem predicated on inclusion. Enacting Payton and Gates’s recommendation to revisit organizational theories of change to transform STEM ecosystems will require that scholarly communities and funders generate more incentives and opportunities to conduct research that integrates higher education organizational change, STEM reform approaches, and the very MSI institutional contexts that can offer models of inclusive excellence in STEM. Such social science research can yield the most promising leadership tools to transform STEM ecosystems toward inclusive excellence.

Executive Director, Diana Natalicio Institute for Hispanic Student Success

Distinguished Centennial Professor, Educational Leadership and Foundations

The University of Texas at El Paso

Fay Cobb Payton and Ann Quiroz Gates highlight the role of leadership in transforming the academic system built upon the nation’s historical inequities. Women, African Americans, Hispanic Americans, and Native Americans remain inadequately represented in science, technology, engineering, and mathematics relative to their proportions in the larger population.

For the United States to maintain leadership in and keep up with expected growth of STEM-related jobs, academic institutions must envision and embrace strategies to educate the future diverse workforce. At the same time, federal funding agencies need to support strategies to encourage universities to pursue strategic alliances with the private sector to recruit, train, and retain a diverse workforce. We need visionary strategies and intentionality to make changes, with accountability frameworks for assessing progress.

Leadership is one key element in strategies of change. Thus, Payton and Gates perspicuously illustrate the role of leadership in advancing the STEM research enterprise at minority-serving institutions. At the University of Texas at El Paso, the president established new academic programs, offered open admissions to students, recruited faculty members from diverse groups, and built the necessary infrastructure to support research and learning. As a result, within a few years the university achieved the Carnegie “R1” designation signifying “very high research activity” while transforming the university community to reflect the diverse community it serves. Thanks to such visionary leadership, it now leads R1 universities in STEM graduate degrees awarded to Hispanic students.

We need visionary strategies and intentionality to make changes, with accountability frameworks for assessing progress.

At North Carolina Agricultural and Technical State University, the chancellor has likewise transformed the institution, increasing student enrollment by nearly 30% in 12 years and doubling undergraduate graduation. Because of the strategies intentionally implemented by the chancellor, the university during the past decade experienced a more than 60% increase in its research enterprise supported by new graduate programs. Similarly, the president of Southwestern Indian Polytechnic Institute, a community college for Native Americans, has led in forging partnerships with Tribal colleges, universities, and the private sector to ensure that graduates can develop successful careers or pursue advanced studies.

As Payton and Gates note, the private sector has a major role to play in training a diverse STEM workforce, citing as exemplar the $1.5 billion Freeman Hrabowski Scholars Program established in 2022 by the Howard Hughes Medical Institute. Every other year, the program will appoint 30 early-career scientists from diverse groups, supporting a total of 150 scientists over a decade. This long-term project will likely yield outcomes to transform the diversity of the nation’s biomedical workforce.

It is clear, then, that the nation needs to embrace sustained and multipronged strategies involving academic institutions, government agencies, private enterprises, and even families to achieve an equitable level of diversity in STEM fields. It is also clear that investments in leadership development and academic infrastructure can help foster the growth of a more capable and diverse workforce and advance the nation’s overall innovation capability. The good news is that Payton and Gates provide proof positive that institutions and partnerships can achieve the desired outcomes.

Professor of Atmospheric Science

Pennsylvania State University

The author chairs the Committee on Equal Opportunities in Science and Engineering, chartered by Congress to advise the National Science Foundation on achieving diversity across the nation’s STEM enterprise.

Fay Cobb Payton and Ann Quiroz Gates remind us that despite some positive movement, the United States has substantially more to do in broadening participation in science, technology, engineering, and mathematics—the STEM fields. The authors promote two often overlooked contributions to change: the key role of institutional leaders and the importance of minority-serving institutions. Even with their additions, however, I believe there is a significant deficiency in building out an appropriate theory of change to address the overall challenges we face in STEM.

The authors recount that the National Science Foundation’s Eddie Bernice Johnson INCLUDES Initiative was established to leverage the many discrete efforts underway. They note that “episodic efforts, or those that are not coordinated, intentional, or mutually reinforcing, have not proven effective.” They advocate revisiting theories of change, understanding how STEM academic ecosystems work, and fully accounting for the role that leadership plays in driving change and accountability. But while I strongly agree with their case—as far as it goes—I believe there is considerably more that ought to be added to the theory of change embraced by the INCLUDES Initiative to make it more useful and impactful.

I posit that to successfully guide STEM systems change at scale, a theory of change ought to incorporate at least three (simplified) dimensions:

  • Institution. At its core, change is local. Classroom, department, and institution levels are where policies, practices, and culture have to change.
  • Institution/national interface. Initiatives must have bidirectional interaction. National initiatives influence institutions, and a change by an institution reflects back to a national initiative, hopefully multiplying its success through adaptation by other network members.
  • Multiple dimensions of change. Changes in policy and culture must be translated into specific changes in pedagogy, student belonging, and faculty professional development. We also need better ways to track the translation of these changes into the STEM ecosystem, such as graduating a more diverse class of engineers.

The INCLUDES theory of change focuses almost exclusively on the second dimension. It presents an important progression for initiatives from building collaborative infrastructure to fostering networks, then leveraging allied efforts. It captures the institution/national interface with a box on expansion and adaptation of better-aligned policies, practices, and culture, but only alludes to the institutional change on which such advances rest. Payton and Gates add to the theory by focusing mostly on the missing role of leadership in fostering institutional change. They describe examples of key leaders who have been critical drivers of specific changes. They also devote attention to multiple dimensions of change by describing important successes that minority-serving institutions have had in increasing student graduation in STEM and to the policy and program changes by leadership that made such change possible.

I believe there is considerably more that ought to be added to the theory of change embraced by the INCLUDES Initiative to make it more useful and impactful.

Even after Payton and Gates’s critical additions, I’m left with deep discomfort over a major omission: in the theory of change they offer for the STEM ecosystem, there is virtually nothing specific to STEM activity in it. While well-conceived, it appears entirely process-oriented and doesn’t directly translate to metrics enabling an assessment of progress toward broadening participation in STEM. Surely increased collaboration and changes in policy and culture are imperative, but they can apply to virtually any societal policy shift. What makes the INCLUDES theory of change applicable to whether the United States can produce a more diverse engineering graduating class?

Having offered this challenge—stay tuned for more from this quarter.

Senior Vice President for STEM Education and Research Policy

Association of Public and Land-grant Universities

Creating transformative (not transactional), intentional, and lasting change in higher education—specifically in a STEM ecosystem—requires continuity, commitment, and lived experiences from leaders who are not afraid to lead change and disrupt inefficient policies and practices that do not support the success of all students in an equitable context. The long-standing work of higher education presidents or chancellors such as Diana Natalicio at the University of Texas at El Paso, Harold Martin at North Carolina Agricultural and Technical State University, Freeman Hrabowski at the University of Maryland, Baltimore County, and Tamarah Pfeiffer at the Southwestern Indian Polytechnic Institute would not have materialized had they been conflict-averse.

What do these dynamic leaders have in common? They were responsible for leading minority-serving institutions (MSIs) of higher education, which they transformed through deliberate actions. More importantly, their deliberate actions were intentionally grounded in understanding the mission of the institution, understanding the historically minoritized populations that the institution served (among others), and understanding that a long-term commitment to doing the work would be required, even if that meant disrupting “business as usual” for the institution and setting a trajectory toward accelerating systemic change.

Fay Cobb Payton and Ann Quiroz Gates make a compelling case for what is required of institutional leaders to harness and mobilize systemic change in the STEM ecosystem by using the National Science Foundation’s INCLUDES model as a case study. Payton and Gates argue that “higher education leaders (e.g., presidents, provosts, and deans) set the tone for inclusion through their behaviors and expectations.” This argument is a testament to the individual leaders’ strengths, strategies, and successes at the types of institutions highlighted in the article. Moreover, Payton and Gates point out that “leaders can hold themselves and the organization accountable by identifying measures of excellence to determine whether improvements in access, equity, and excellence are being achieved.”

As a former dean of a college of liberal arts and a college of arts and sciences at two MSIs (Jackson State University, an urban, historically Black college and university—HBCU—and the University of La Verne, a Hispanic serving institution, respectively) and now serving as the chief academic officer and provost at the only public HBCU and exclusively urban land-grant university in the United States—the University of the District of Columbia—I know firsthand the role that institutional leaders must play in moving the needle to “broaden participation” and the need for urgent inclusion of historically minoritized participants in the STEM ecosystem. As leaders operating within the MSI spaces, we recognize that meeting students where they are is crucial to developing the skilled technical workforce that our country so desperately needs.

We must do more to address the barriers that prevent individuals from embarking on or completing a STEM education that prepares them for the workforce. According to a 2017 National Academies of Sciences, Engineering, and Medicine report, by 2022 “the percentage of skilled technical job openings is likely to exceed the percentage of skilled technical workers in the labor force by 1.3 percentage points or about 3.4 million technical jobs.” The report finds that the number of skilled technical workers will likely fall short of demand, even when accounting for how technological innovation may change workforce needs (e.g., shortages of electricians, welders, and programmers).

As leaders operating within the MSI spaces, we recognize that meeting students where they are is crucial to developing the skilled technical workforce that our country so desperately needs.

At the same time, economic shifts toward jobs that put a premium on science and engineering knowledge and skills across many lines of work are leaving behind too many Americans. Therefore, as institutional leaders, we must harness the power of partnerships with industry, nonprofits, and community and technical colleges to increase awareness and understanding of skilled technical workforce careers and employment opportunities. Balancing traditional and emerging research will be an enduring challenge. In the long term, this is how we demonstrate what Payton and Gates argue is necessary for lasting change—a change that affects multiple courses, departments, programs, and/or divisions and alters policies, procedures, norms, cultures, and/or structures.

Suggestions for next steps:

  • The challenge for MSIs in the twenty-first century is to figure out how to collaborate among institutions to renew, reform, and expand programs to ensure students have the opportunity for educational and career success.
  • As we think about MSI collaborations, there needs to be a broader discussion to include efforts that will yield high levels of public-private collaboration in STEM education, advocating policies and budgets focused on maximizing investments to increase student access and engagement in active, rigorous STEM learning experiences.
  • If we are to reimagine a twenty-first century where we have fewer HBCU mergers and closures, we must recognize that leadership at the top of our organizations must also come together to learn best practices for leading change. The old mindsets, habits, and practices of running our colleges and universities must be reset.
  • Through collaboration, HBCUs can pool resources and extend their reach. Collaboration opens communication channels, knowledge-sharing, and community-building between HBCUs and MSIs.

Chief Academic Officer

University of the District of Columbia

Fay Cobb Payton and Ann Quiroz Gates effectively conceptualize how inclusive STEM ecosystems are developed and sustained over time. The Eddie Bernice Johnson INCLUDES initiative at the National Science Foundation (NSF), which the authors write about, is a significant investment in moving the needle of underrepresentation in STEM. After thirty years as a STEM scholar, practitioner, and administrator in academia, industry, and government, I believe we are finally at an inflection point, although inflection can go both ways: negative or positive—and possibly only incrementally positive. For me, Payton and Gates’s framework triggered thoughts on the meaning of inclusion and why leadership is instrumental in building STEM ecosystems.

Inclusion has many different meanings, and those meanings have shifted over the years depending on context and purpose. Without consistent linguistic framing, inclusion can be decontextualized—rendered a passive concept rather than an action to be taken or an engine used to drive culture and climate. NSF’s INCLUDES program emphasizes collaboration, alliances, and connectors. The program is designed to inspire investigators to actively engage in inclusive change, a mechanism that requires us to use both inclusively informed and inclusively responsive approaches.

After thirty years as a STEM scholar, practitioner, and administrator in academia, industry, and government, I believe we are finally at an inflection point, although inflection can go both ways: negative or positive—and possibly only incrementally positive.

Diversifying STEM is challenged by the lack of a shared concept. Although the concept of “inclusion” does not have to be identical among institutions, it should be semantically aligned. As an example, Kaja A. Brix, Olivia A. Lee, and Sorina G. Stalla used a crowd-sourced approach to capture the meaning of “inclusion within diversity.” Their grounded theory methodology yielded four shared concepts: (1) access and participation; (2) embracing diverse perspectives; (3) welcoming participation; and (4) team belonging. For those of us who have advised doctoral students, inclusion is sort of like a good dissertation: there is no real formula for a high-quality dissertation, but you know it when you see it.

Another point made by Payton and Gates relates to sustained leadership and accountability. When she was president of the University of Texas at El Paso (UTEP), Diana Natalicio was highly effective in framing diversity, equity, inclusion, and accessibility (DEIA) to support action. Given the historical disadvantages experienced by UTEP, President Natalicio never seemed to waver on UTEP’s right to become an R1 university in a collaborative DEIA context. Her degree in linguistics may have facilitated her skill in framing ideas that move people to real action.

Effective leadership in support of inclusion must be boldly voiced in multiple ways for multiple audiences. This form of institutional voice matters to all stakeholders, both within the institution and externally, because failure to voice DEIA says something as well: it means a leader is not really committed to change. Giving voice means leaders must consult with groups on their own campus and in their own communities to understand how to elevate DEIA using multipronged, systems-wide actions.

Payton and Gates also highlight Harold Martin, an electrical engineer who is recognized for his effective leadership of the North Carolina Agricultural and Technical State University. Among other accomplishments, Chancellor Martin’s leadership and practice have established an institution that strategically applies data-informed methods to advance excellence. Application of data-informed approaches is not a panacea, but metrics and measures serve to find the “proof in the pudding” regarding inclusive change. Inclusive change management is facilitated by thoughtful creation, elicitation, review, and interpretation of data in quantitative and qualitative forms. Without data, institutions will only check anecdotal boxes around inclusion, leading to no real or lasting change.

We must pay attention to shared meanings and effective leadership when leading inclusive change in STEM. Ecosystems thrive because of successful interaction and interconnection. Unfortunately, many leaders focus only on culture. While key to lasting change, culture is grounded in shared meaning, values, beliefs, etc. But culture change without climate change is ineffective. In organizational research, culture is what we say; climate is what we do. It is high time we are all about the “doing” because full reliance on the “saying” may not move diversity in a positive direction.

Provost and Executive Vice Chancellor for Academic Affairs

North Carolina Agricultural and Technical State University

Fay Cobb Payton and Ann Quiroz Gates shed light on the critical role of leadership in addressing historical inequities in the STEM fields, particularly in higher education. One of the key takeaways is the importance of visionary and committed leadership in fostering lasting change. Although their article provides valuable insights into the importance of leadership in promoting STEM equity, there are a couple of areas that could use additional examination.

First, their argument would benefit from further exploration of systemic challenges and proven strategies. Payton and Gates focus primarily on leadership within educational institutions but do not address external factors that can influence STEM diversity. For example, they don’t discuss the role of government policies, industry partnerships, K–12 preparation, or societal attitudes in shaping STEM demographics. Understanding the specific obstacles faced by underrepresented groups and how leadership can address them will add value to the discussion. While the article mentions the importance of inclusive excellence, it would be helpful to provide specific strategies that college and university leadership can implement immediately to create lasting change in STEM.

It would be helpful to provide specific strategies that college and university leadership can implement immediately to create lasting change in STEM.

Second, there should be a wider discussion of intersectionality. The article primarily discusses diversity in terms of race and ethnicity but does not adequately address other dimensions of diversity, such as gender, disability, or socioeconomic background. Recognizing the intersectionality of identities and experiences is crucial for creating inclusive STEM environments.

To create lasting change in STEM, college and university leadership can take several additional steps, including collecting and analyzing data on the representation of underrepresented groups in STEM programs and faculty positions. These data can help identify disparities and inform targeted interventions. Leadership also needs to review and revise the curriculum to ensure it reflects diverse perspectives and contributions in STEM fields. Faculty must be encouraged and rewarded for incorporating inclusive teaching and research practices.

Creating lasting change in STEM demographics is a long-term commitment. Institutions must maintain their dedication to diversity and inclusion even when faced with challenges and changes in institutional leadership. Payton and Gates beautifully articulate the case that college and university leadership can create lasting change in STEM by implementing data-driven initiatives, fostering local and national collaborations, and maintaining a long-term institutional commitment to diversity and inclusion.

Associate Professor of Mathematics

Mathematics Clinic Program Director

Harvey Mudd College

Agricultural Research for the Public Good

Norman Borlaug succeeded at something that no one had done before—applying wide area adaptation for a specific trait in a specific crop to enhance yield. This worked beyond all expectations in field trials conducted in environments that favored the expression of the new genetic material, which in this case had been developed from a type of semidwarf wheat native to Japan. Of course, wide area adaptation must be put in perspective with respect to the trait, the farmer, the crop genetics, and the environment in which such a package is intended to be used.

The more traditional “local adaptation” typically happens in a farmer’s fields. In these and other microenvironments, wheat such as Borlaug developed, or other new wheats, can be tested and, if successful, bred locally for such situations. This has been exactly the type of applied “bread and butter” work done by national program scientists and local seed companies. However, this specialized knowledge for each area of the country and a given crop is being lost, as is the ability of national program scientists to conduct multilocation trials.

This loss of talent and support has come about not because of the work of Borlaug, but because of the consolidation of agricultural research entities into four large agricultural/pharmaceutical companies. No more are there local seed companies; no more is there a robust plant breeding community in the public sector; no longer is there a focused effort on the “public good” of agriculture. Losing this publicly supported pool of expertise is especially a concern when local needs do not align with those of commercial providers.

This is true for India, Mexico, the United States, and Canada—and one can keep on going.

Consequently, the work that Borlaug did, all conducted in the public arena for the public good, is all the more important to replicate today. Science and farming are two ends of the same rope, and while one continues to be privatized, the other cannot benefit. Thus, improving farmers’ education and “infrastructure,” however loosely defined, will not keep a given farming sector free from globalized pressures or from a shortage of public-minded, publicly based extension agents.

The separation of plant breeding—which Marci R. Baranski’s book classifies as a capital-intensive technological approach—from farming speaks of a divorce that simply should not come to pass. Instead, depending on trait, genotypes, environment, and farmers’ needs, the two should be brought closer together to ensure that what is developed serves those in need, not just those who have the currency and farming practices that are compatible with commercial agriculture.

Visiting Scholar, Nicholas School of the Environment

Duke University

Beyond Stereotypes and Caricatures

In “Chinese Academics Are Becoming a Force for Good Governance” (Issues, Summer 2023), Joy Y. Zhang, Sonia Ben Ouagrham-Gormley, and Kathleen M. Vogel provide a thoughtful exploration of how bioethicists, scientists, legal scholars, and others are making important contributions to ethical debates and policy discussions in China. They are addressing such topics as what constitutes research misconduct and how it should be addressed by scientific institutions and oversight bodies, how heritable human genome editing should be regulated, and what societal responses to unethical practices are warranted when they are not proscribed by existing laws. Their essay also addresses several issues with implications that extend beyond China to global conversations about ethical, legal, and social dimensions of emerging technologies in the life sciences and other domains.

Given the growing role that academics in China are playing in shaping oversight of scientific technologies, individuals who express dissent from official government doctrine risk, in at least some cases, censorship and pressure to withdraw from public engagement. As tempting as it might be to highlight differences between public discourse under China’s Communist Party and public debate in liberal democratic societies, academics in democracies where various forms of right-wing populism have taken root are also at risk of being subjected to political orthodoxies and punishment for expressions of dissent. One important role transnational organizations can play is to promote and protect critical, thoughtful analyses of emerging technologies. They can also offer solidarity, support, and deliberative spaces to individuals subjected to censorship and political pressure.

The authors also note the challenges that scholars in China have had in advocating for more robust ethical review and regulatory oversight of scientific research funded by industry and other private-sector sources. This issue extends to other countries with stringent review of research funded by government agencies and conducted at government-supported institutions, and with comparatively lax oversight of research funded by private sources and conducted at private-sector institutions. This disparity in regulatory models is a recipe for future research scandals involving a variety of powerful technologies. In the biomedical sciences, for example, these discrepancies in governance frameworks are becoming increasingly concerning when longevity research is funded by private industry or even individual billionaires who may have well-defined objectives regarding what they hope to achieve and sometimes a willingness to challenge the authority of national regulatory bodies.

Finally, we need to move beyond the facile use of national stereotypes and caricatures when discussing China and other countries with evolving policies for responsible research. China, as the authors point out, is sometimes depicted as a “Wild East” environment in which “rogue scientists” can flourish. However, research scandals are a global phenomenon. Likewise, inadequate oversight of clinical facilities is an issue in many countries, including nations with what often are assumed to be well-resourced and effective regulatory bodies. For example, academics used to write about “stem cell tourism” to such countries as China, India, and Mexico, but clinics engaged in direct-to-consumer marketing of unlicensed and unproven stem cell interventions are now proliferating in the United States as well. Our old models of the global economy, with well-regulated countries versus out-of-control marketplaces, often have little to do with current realities. Engagement with academics in China needs to occur without the use of self-serving and patronizing narratives about where elite science occurs, where research scandals are likely to take place, and which countries have well-regulated environments for scientific research and clinical practice.

Executive Director, UCI Bioethics Program

Professor, Department of Health, Society, & Behavior

Program in Public Health

University of California, Irvine

Fairer Returns on Public Investments

The US government has a strong track record of funding innovative health technologies, including the Human Genome Project, the EpiPen, prescription drugs, and lifesaving vaccines.

In “ARPA-H Could Offer Taxpayers a Fairer Shake” (Issues, Summer 2023), Travis Whitfill and Mariana Mazzucato accurately describe three strategies for how the Advanced Research Projects Agency for Health (ARPA-H) could structure its grant program to ensure that taxpayers receive a fairer return for their high-risk public investments in research and development to solve society’s most pressing health challenges. One of their core ideas is repurposing a successful venture capital model of converting early-stage investments into equity ownership if a product progresses successfully in the development process.

As patients face challenges in accessing affordable prescription drugs and health technologies, we believe it is imperative for policymakers and ARPA-H leaders to address two fundamental questions: How does the proposed grant program strategy directly help patients, and how will ARPA-H (or any government agency) implement and enforce this specific strategy?

The first question concerns what patients ultimately care about—how will this policy impact them and their loved ones? For example, if the government receives equity ownership in a successful company that generates revenue for the US Treasury, that has limited direct benefit for a family that cannot afford the health technology.

There should be a strong emphasis on ensuring that all patients, regardless of their demographic background or insurance status, can access innovative health technologies developed with public funding at a fair price. For example, in September 2023 the Biden administration announced a $326 million contract with Regeneron to develop a monoclonal antibody for COVID-19 prevention. This contract included a pricing provision that requires the list price in the United States to be equal to or lower than the price in other major countries. Maintaining this focus will lead policymakers to address how we pay for these health technologies and consider practical steps to achieve equitable access. That may include price negotiation or reinvesting sales revenue directly into public health and the social determinants of health.

The effectiveness of any policy depends strongly on its implementation and enforcement. As Whitfill and Mazzucato mention, the US government has the legal authority to seek lower prescription drug prices through the Bayh-Dole Act for inventions with federally funded patents. However, the National Institutes of Health, which houses ARPA-H as an independent agency, has refused to exercise its license or other statutory powers, most recently with enzalutamide (Xtandi), a prostate cancer drug.

The government also has the existing legal authority under 28 US Code §1498 to make or use a patent-protected product while giving the patent owners “reasonable and entire compensation” when doing so, but it has not implemented this policy in the case of prescription drugs for many decades.

It is no secret that corporations in the US pharmaceutical market are incentivized by various forces to pursue profit maximization. In the case of public funding to support pharmaceutical innovation, we need to ensure that when taxpayers de-risk research and development, they also share more directly in the financial benefits of that investment.

Primary Care Physician, Brigham and Women’s Hospital

Health Policy Researcher, Harvard Medical School

Program On Regulation, Therapeutics, and Law

Professor of Medicine, Department of Medicine, Division of Pharmacoepidemiology and Pharmacoeconomics

Brigham and Women’s Hospital and Harvard Medical School

Director, Program On Regulation, Therapeutics, and Law

Travis Whitfill and Mariana Mazzucato make a case that demands the attention of both leaders of the Advanced Research Projects Agency for Health (ARPA-H) and policymakers: the agency’s innovation must focus not only on technology but also on finance. Breaking from decades of public finance for science and technology with few strings attached, new policy thinking is needed, they argue, if taxpayers are to get a dynamic and fair return on their ARPA-H investments. To pursue this goal, the authors make three promising proposals: capturing returns through public sector equity, curbing shareholder profiteering by promoting reinvestment in innovation, and setting conditions for access and affordability.

As the technology scholar Bhaven N. Sampat chronicled in Issues in 2020, debates over the structure of public financing for scientific research and development have been around since the dawn of the postwar era. But amid reassessments of long-standing orthodoxy about public and private roles in innovation, Whitfill and Mazzucato’s argument lands in at least two intriguing streams of policy rethinking.

First, debates over ARPA-H’s design could connect biomedical R&D policy to the wider “industrial strategy” paradigm being shaped across spheres of technology, from semiconductors to green energy. Passage of the CHIPS and Science Act, the Inflation Reduction Act, and the Bipartisan Infrastructure Deal has invigorated government efforts to shift from a laissez-faire posture to proactively shaping markets in pursuit of specific national security, economic, and social goals. Yet biomedical research has been noticeably absent from these policy discussions, perhaps in part because of the strong grip of vested interests and narratives about the division of labor between government and industry. Seeing ARPA-H through this industrial strategy lens could instead invite a wider set of fresh proposals about its design and implementation.

Second, bringing an industrial strategy view to ARPA-H would take advantage of new momentum to reconfigure government’s relationship with the biopharmaceutical industry, which has recently focused on drug pricing. The introduction of Medicare drug pricing negotiation in the Inflation Reduction Act for a limited set of drugs is a landmark measure for pharmaceutical affordability, yet it directs government policy to ex-post negotiations after an innovation has been developed. ARPA-H, by contrast, can act earlier: if done right, the agency’s investments would “crowd in” the right kind of patient private capital under the ex-ante conditions described by Mazzucato and Whitfill. In the process, ARPA-H could serve as an unprecedented “public option” for biomedical innovation, building public capacity for later stages of R&D prioritized for achieving public health goals.

Whether the authors’ ideas will find traction, however, remains uncertain. Why might change happen now, decades after the initial postwar debates settled into contemporary orthodoxies? Beyond the nascent rethinking of the prevailing neoliberal economic paradigm in policy circles, a critical factor might well be the evolution of a two-decade-old network of smart and strategic lawyers, organizers, and patient groups that make up the “access to medicines” movements. These movements are pushing for bold changes across multiple domains, including better patenting and licensing practices, public manufacturing, and globally equitable technology transfer. Ultimately, ARPA-H’s success may well rest on citizen-led action that helps decisionmakers understand the stakes of doing public enterprise differently.

Postdoctoral Fellow, Veterans Affairs Scholar

National Clinician Scholars Program, Yale School of Medicine

Author of Capitalizing a Cure: How Finance Controls the Price and Value of Medicines (University of California Press, January 2023)

Travis Whitfill and Mariana Mazzucato’s proposal to utilize the new Advanced Research Projects Agency for Health (ARPA-H) to stimulate innovation in pharmaceuticals while reducing the net costs to taxpayers is very important and welcome.

Innovation can indeed provide major benefits and is greatly stimulated by the opportunity to make profits. Yet pharmaceuticals (including vaccines and medical devices) are unlike most other products, even basics such as food and clothing. Their primary aim is not to increase people’s pleasure, as with better tasting food or more stylish clothing, but to improve their health and longevity. Also, the need for and choice of medications is usually determined not by patients but by their physicians.

Another major difference is that the National Institutes of Health and related agencies—that is, taxpayers—finance much of basic medical research. In addition, when patients use medical products, most costs are usually borne not by them but by members of the public (who support public insurance) or by other insurance holders (whose premiums are raised to cover the costs of expensive products). Furthermore, pharmaceuticals are protected from competition by patents, which are manipulated to extend for many years. Pharmaceutical companies should not, therefore, be treated like other private enterprises and be permitted to make huge profits at the expense of the public and patients.

Stimulating beneficial innovations and reducing, if not eliminating, excessive profits are far better than accepting the status quo.

In Whitfill and Mazzucato’s proposal, public monies provided to private companies and researchers would become equity, just like venture capital funds and other private investments, and taxpayers would thus become shareholders in the pharmaceutical companies. The funding could extend beyond research to clinical trials and even marketing and patient follow-up. This would create ongoing public-private collaboration that could reward the taxpayers as well as the companies and their other shareholders. In addition, the new ARPA-H could “encourage or require” companies to reinvest profits into research and development and look for other ways to restrict profit-taking, and could insist on accessible prices for the drugs it helped to finance.

But their proposal’s feasibility and impact are uncertain. To what extent would ARPA-H have to expand its current funding—$1.5 billion in 2023 and $2.5 billion requested for 2024, in contrast to $187 billion spent by the NIH to enable new drug approvals between 2010 and 2019—to make a substantial impact on the development of new, high-value pharmaceuticals? What degree of price and profit restriction would companies be willing to accept? Could the benefit of higher prices to taxpayers as shareholders be used to justify the excessive prices that benefit company executives and other shareholders even more? Should not the burden on those who pay for the pharmaceuticals by financing public and private insurances be taken into account? Finally, would politicians be willing, in the face of fierce lobbying by pharma, to provide ARPA-H with the required funds and authority?

Nonetheless, expanding an already-existing (even if newly created) agency is clearly more feasible than more radical restructuring, such as my colleagues and I have proposed. Stimulating beneficial innovations and reducing, if not eliminating, excessive profits are far better than accepting the status quo. I strongly support, therefore, implementing Whitfill and Mazzucato’s proposal.

Professor Emeritus

Departments of Internal Medicine and Pediatrics

Albany Medical College

Chaosmosis: Assigning Rhythm to the Turbulent

Roman De Giuli, “Sense of Scale,” 2022, video still.

Chaosmosis: Assigning Rhythm to the Turbulent is an art exhibition inspired by fluid dynamics, a discipline that describes the flow of liquids and gases. The exhibition draws from past submissions to the American Physical Society’s Gallery of Fluid Motion, an annual program that serves as a visual record of the aesthetic and science of contemporary fluid dynamics. For the first time, a selection of these past submissions has been curated into an educational art exhibition to engage viewers’ senses.

The creators of these works, which range from photography and video to sculpture and sound, are scientists and artists. Their work enables us to see the invisible and understand the ever-moving elements surrounding and affecting us. Contributors to the exhibition include artists Rafael Lozano-Hemmer and Roman De Giuli, along with physicists Georgios Matheou, Alessandro Ceci, Philippe Bourrianne, Manouk Abkarian, Howard Stone, Christopher Clifford, Devesh Ranjan, Virgile Thievenaz, Yahya Modarres-Sadeghi, Alvaro Marin, Christophe Almarcha, Bruno Denet, Emmanuel Villermaux, Arpit Mishra, and Paul Branson.

Magnified frozen water droplets resemble shattered glass in a series of photographs. A video simulation depicts the friction that develops as liquid flows through a confined pipe. In other works, the fluid motions portrayed are produced by human bodies: a video sheds light on the airflow of an opera singer while singing, and a 3D-printed sculpture reveals the flow of human breath using sound from the first dated recording of human speech. Gases and liquids are in constant motion, advancing in seemingly chaotic ways, yet the works offer a closer look, revealing elegant and poetic patterns amid atmospheric turbulence.

The term chaosmosis, coined by the philosopher Félix Guattari in the 1990s, conveys the idea of transforming chaos into complexity. It assigns rhythm to the turbulent, linking breathing with the subjective perception of time, and concluding that respiration is what unites us all.

Stephen R. Johnston, Jessica B. Imgrund, Dan Fries, Rafael Lozano-Hemmer, Stephan Schulz, Kyle C. Johnson, Johnathan T. Bolton, Christopher J. Clifford, Brian S. Thurow, Enrico Fonda, Katepalli R. Sreenivasan, and Devesh Ranjan, “Volute 1: Au Clair De La Lune,” 2016, 3D-printed filament, sound, 26 x 7 x 8 inches.

Roman De Giuli, “Sense of Scale,” 2022, video still.

Chaosmosis runs from October 2, 2023, through February 23, 2024, at the National Academy of Sciences building in Washington, DC. The exhibition is curated by Natalia Almonte and Nicole Economides in coordination with Azar Panah and the American Physical Society’s Division of Fluid Dynamics.

Transforming Research Participation

In “From Bedside to Bench and Back” (Issues, Summer 2023), Tania Simoncelli highlights patients moving from being subjects of biomedical research to leading that research. Patients and their families no longer simply participate in research led by others and advocate for resources. Together they design and implement research agendas, taking the practice of science into their own hands. As Simoncelli details, initiatives such as the Chan Zuckerberg Initiative’s Rare as One Project—and the patient-led partnerships it funds—are challenging longstanding power dynamics in biomedical research.

Opportunities to center the public’s questions, priorities, and values throughout the research lifecycle are not limited to research on health outcomes. And certainly, the promise of participatory approaches is not new. Yet demand for these activities is pressing.

Today’s global challenges are urgent, local, and interconnected. They require all-hands-on-deck solutions in such diverse areas as climate resilience and ecosystem protection, pandemic prevention, and the ethical deployment of artificial intelligence. Benefits of engaging the public in these collective undertakings and of centering societal considerations in research are being recognized by those who hold power in US innovation systems, including by Congress in the CHIPS and Science Act.

On August 29, 2023, the President’s Council of Advisors on Science and Technology (PCAST) issued a letter on “Recommendations for Advancing Public Engagement with the Sciences.” PCAST finds, “We must, as a country, create an ecosystem in which scientists collaborate with the public, from the identification of initial questions, to the review and analysis of new findings, to their dissemination and translation into policies.”

To some observers outside the research enterprise, this charge is long overdue. To those already operating at the boundaries of science and communities, it is a welcome door-opener. And to entrenched interests concerned about movement away from a social contract supporting curiosity-driven fundamental research and toward solutions-oriented research that directs scientific processes at the public good, PCAST may be shaking bedrock.

Increased federal demand can move scientific organizations toward participatory practices. For greatest impact, more on-the-ground capacity is needed, including training of practitioners who can connect communities with research tools and collaborators. Similarly essential is continued equity work within research institutions grappling with their history of exclusionary practices.

Boundary organizations that bridge the scientific enterprise with communities of shared interest or place are connecting the public with researchers and putting data, tools, and open science hardware into the hands of more people. The Association of Science and Technology Centers, which I led from 2018 through 2020, issued a community-science framework and suite of resources to build capacity among science-engagement practitioners. The American Geophysical Union’s Thriving Earth Exchange supports community science by helping communities find resources to address their pressing concerns. Public Lab is pursuing environmental justice through community science and open technology. The Expert and Citizen Assessment of Science and Technology (ECAST) Network developed a participatory technology assessment method to support democratic science policy decisionmaking.

I applaud these patient-led partnerships and community-science collaborations, and I look forward to the solutions they produce.

Former Senior Advisor for Management at the Office of Management and Budget

President Emerita of the Association of Science and Technology Centers

Former Chief of Staff of the Office of Science and Technology Policy

Tania Simoncelli paints a powerful picture of the increasingly central role of patients and patient communities in driving medical research. The many success stories she describes of the Chan Zuckerberg Initiative’s Rare as One project provide an assertive counternarrative to the rarely explicated but deeply held presumption that only health professionals with decades of training in science and medicine can and should drive the agenda in health research. These successes confirm that those who continue to treat patient engagement in research as a box-checking exercise do themselves and the patients they claim to serve a grave disservice.

However, these narratives do more than just celebrate accomplishments. They also highlight the limitations of our current systems of funding and prioritizing health research, which require herculean efforts from patients and families already facing their own personal medical challenges. Patient communities have clearly demonstrated that they can achieve the impossible, but they do so because our current systems for funding research provide limited alternatives. How would federal funding for health research need to change such that patients and families would not have to also become scientists, clinicians, drug developers, and fundraisers for their disease to receive attention from the scientific community?

Medical research—and rare disease research in particular—urgently needs substantial investment in shared infrastructure and tools to increase efficiency, reduce costs, and facilitate engagement of diverse patients and families with variable time and resources to contribute. These investments will not only increase efficiency; they will also increase equity insomuch as they reduce the likelihood that progress in a given disease will depend on the financial resources and social capital of a particular patient community. This concern is not just hypothetical; a 2020 study of research funding in the United States for sickle cell disease, which predominantly affects Black patients, compared with cystic fibrosis, which predominantly affects white patients, found an average of $7,690 in annual foundation spending per patient affected with cystic fibrosis compared with only $102 in sickle cell disease, with predictable differences in the numbers of studies conducted and therapies developed. The investments made by the Chan Zuckerberg Initiative have been critical in leveling the playing field, but developing an efficient, equitable, and sustainable approach to rare disease research in the United States will require a commitment on the part of federal policymakers and funders as well.

To achieve this seismic shift, I see few stakeholders better situated to advise policymakers and funders than the patient communities themselves. While federal funders may support patient engagement in individual research efforts, there is also the need to move this engagement upstream, allowing patients a voice in setting research funding priorities. Of course, implementing increased patient engagement in federal research funding allocation will require a careful examination of whose voices ultimately represent the many, inherently diverse patient communities. Attention to questions of representation and generalizability within and across patient communities is an ongoing challenge in all patient engagement activities, and the responsibility for addressing this challenge lies with all of us—funders, researchers, industry partners, regulators, and patient communities alike. However, it would be a mistake to treat this challenge as impossible: patient communities will undoubtedly prove otherwise.

Senior Research Scholar

Center for Biomedical Ethics

Stanford University School of Medicine

Biomedical research has blind spots that can be reduced, as Tania Simoncelli writes, by “centering the largest stakeholders in medicine—the patients.” By focusing on rare diseases, the Chan Zuckerberg Initiative is partnering with the most daring rebels of the patient-led movement. These pioneers are breaking new paths forward in clinical research, health policy, and data rights management.

But it’s not only people living with rare diseases whose needs are not being met by the current approach to health care delivery and innovation. Equally exciting is the broader coalition of people who are trying to improve their lives by optimizing diet or sleep routines based on self-tracking or building their own mobility or disease-management tools. They, too, are driving research forward, often outside the view of mainstream leaders because the investigations are focused on personal health journeys.

For example, 8 in 10 adults in the United States track some aspect of their health, according to a survey by Rock Health and Stanford University’s Center for Digital Health. These personal scientists are solving their own health mysteries, managing chronic conditions, or finding ways to achieve their goals using clinical-grade digital tools that are now available. How might we create a biomedical research intake valve for those insights and findings?

Patients know their bodies better than anyone and, with training and support, are able to accurately report any changes to their care teams, who can then respond and nip issues in the bud. In a study conducted at Memorial Sloan Kettering Cancer Center, patients being treated with routine chemotherapy who tracked their own symptoms during treatment both lived longer and felt better. Why are we not helping everyone learn how to track their symptoms?

Hardware innovation is another front in the patient-led revolution.

People living with disability ingeniously adapt home health equipment to meet their needs. By improving their own mobility, making a home safer, and creatively solving everyday problems, they and their care partners save themselves and the health care system money. We should invest in ways to lift up and publicize the best ideas related to home care, just as we celebrate advances in laboratory research.

Insulin-requiring diabetes requires constant vigilance and, for some people, that work is aided by continuous glucose monitors and insulin pumps. But medical device companies lock down the data generated by people’s own bodies, ignoring the possibility that patients and caregivers could contribute to innovation to improve their own lives. Happily, the diabetes rebel alliance, whose motto is #WeAreNotWaiting, found a way to not only get access to the data, but also build a do-it-yourself open-source artificial pancreas system. This, by the way, is just one example of how the diabetes community has risen up to demand—or invent—better tools.

Finally, since any conversation about biomedical innovation is now not complete without a reference to artificial intelligence, I will point to evidence that patients, survivors, and caregivers are essential partners in reducing bias on that front as well. For example, when creating an algorithm to measure the severity of osteoarthritis in knee X-rays, a team of academic and tech industry researchers fed it both clinical and patient-reported data. The result was a more accurate estimate of pain, particularly among underserved populations, whose testimony had been ignored or dismissed by human clinicians.

The patient-led revolutionaries are at the gate. Let’s let them in.

Tania Simoncelli provides a thoughtful reminder of the reality faced by many families with someone who has a rare disease. The term “rare disease” is often misunderstood. Such diseases affect an estimated 1 in 10 Americans, which means each of us likely knows someone with one of the 7,000 rare diseases that have a diagnosis. As the former executive director of FasterCures, a center of the Milken Institute, and an executive in a rare disease biotech, I have met many of these families. They see scientific advances reported every day in the news. And yet they may be part of a patient community where there are no options. As Simoncelli points out, fewer than 5% of rare diseases have a treatment approved by the US Food and Drug Administration.

The author’s personal journey is a reminder that champions exist who are dedicated to finding models that can change the system. The Chan Zuckerberg Initiative that Simoncelli works for, which has donated $75 million through its Rare as One program, believes that its funded organizations can establish enough scientific evidence and research infrastructure—and leverage the power of their voices—to attract additional investment from government and the life sciences community. Successful organizations such as the Cystic Fibrosis Foundation have leveraged their research leadership to tap into the enormous capital, talent, and sense of urgency of the private sector to transform the lives of families through the development of treatments, and they have advocated for policies that support patient access. Rare as One organizations are a beacon of light for families forging new paths on behalf of their communities.

As Simoncelli also highlights, the role of philanthropy is powerful, but it does not equate to the roles government and the private sector can play. Since 1983, the Orphan Drug Act has been a major driver spurring the development of therapeutic advances in rare disease, and one study estimates that the FDA approved 599 orphan medications between 1983 and 2020. In August 2022, Congress passed the Inflation Reduction Act authorizing the Medicare program to begin negotiating the prices of drugs that have been on the market for several years. Congress believed that tackling drug prices was a key to ensuring patient affordability. However, critics have pointed to the law’s potential impact on innovation, citing specifically how it could disincentivize research into rare disease. The implementation of the law is ongoing, so it is too early to understand the consequences. But the patient community does not need to wait to advance new innovative models to address any disincentives that may surface.

Every Cure is one of these models that may help address the consequences that new Medicare drug negotiation may have on continuing investments in specific types of research programs. Its mission is to unlock the full potential of existing medicines to treat every disease and every patient possible. Every Cure is building an artificial intelligence-enabled, comprehensive, open-source database of drug-repurposing opportunities. The goal is to create an efficient infrastructure that enables research for rare diseases as well as more common conditions. By working in partnership with the patient community, clinical trials organizations, data scientists, and funders, Every Cure hopes to be a catalyst in advancing new treatment options for patients who currently lack options. Innovation can’t wait—because patients won’t.

Vice Chair of the Board

Every Cure

Tania Simoncelli illuminates a powerful transformation in medical research: patients and families are moving to center stage. No longer passive recipients and participants, they are passionate drivers of innovation, teamwork, focus, and results. Science systematically and rigorously approaches truth through cycles of hypothesis and experimentation. Yet science is a human endeavor, and scientists differ in their knowledge, tribal affinities rooted in cultural and scientific backgrounds, bias, creativity, open-mindedness, ambition, and many other critical factors, but often lack “skin in the cure game.”

Medicine prides itself as a science, but it is a social science, as humans are both the observed and the observers. Less “soft” a science than sociology or psychology, medicine is far closer to physics or chemistry in rigor and reproducibility. Patients were traditionally viewed as biased, physicians as objective. Double-blind studies revolutionized medicine by explicitly recognizing the bias of physician scientists. Biases run deep; humans are their reservoirs, vectors, and victims. Paradoxically, patients and families with skin in the game are exceptional collaborators who are immune to academic biases. They have revolutionized medical science.

Academics may myopically measure success by papers published in high-impact journals, prestigious grants, promotions, and honors. With success, the idealistic and iconoclastic views of youth give way to perpetuating a new status quo that, blinded by bias, reinforces their theories and tribe.

True scientists, masters of doubt about their own beliefs, and people with serious medical disorders and their families seek improved outcomes and cures. Teamwork magnifies medical science’s awesome power.

Dichotomies endlessly divide the road of discovery. What is the best path? Fund basic science, untethered from therapy, answering fundamental questions in biology? Or fund translational science, laser focused on new therapies? How often are biomarkers critical, distractions, or misinformation? What leads to more seminal advances—top-down, planned, A-bomb-building Manhattan Projects, or the serendipity that propelled Fleming’s discovery of penicillin? The answer depends on your smarts, team-building skills, and luck. Who can best decide how medical research funds should be allocated? Are those with seniority in politics, science, and medicine best? Should those affected have a say? Why can’t science shine its potent lens on the science of discovery itself, instead of defaulting to what is established and “works based” but not evidence-based?

A new paradigm has arrived. Families with skin in the game have a seat at the decision table. Their motivation is pure, and although no one knows the best path before embarking to discover, choices should be guided by the desire to improve health outcomes, not protect the status quo.

Professor of Neurology and Neuroscience

New York University Grossman School of Medicine

Students seeking a meaningful career in science policy that effects real-world change could do worse than look to the career of Tania Simoncelli. Her account in Issues of how the Chan Zuckerberg Initiative (CZI) is helping build the infrastructure that can speed the development and effectiveness of treatments for rare diseases is just her most recent contribution. It follows her instrumental role in bringing the lawsuit against the drug company Myriad Genetics that ultimately ended in a unanimous US Supreme Court decision invalidating patent claims on genes, as well as her productive stints at several institutions near the centers of power in biomedicine and government.

Rare diseases are rare only in isolation. In aggregate they are not so uncommon. But because they are individually rare, they face a difficult collective action problem. There are few advocates relative to cancer, heart disease, or Alzheimer’s disease, although each of those conditions also languished in neglect at points in their history before research institutions incorporated their conquest into their missions. But rare diseases can fall between the categorical institutes of the National Institutes of Health, or find research on them distributed among multiple institutes, no one of which has sufficient heft to be a champion.

The Chan Zuckerberg team that Simoncelli leads has taken a patient-driven approach. Mary Lasker and Florence Mahoney, who championed cancer and heart disease research by lobbying Congress, giving rise to the modern NIH, might well be proud of this legacy. Various other scientific and policy leaders at the time opposed Lasker and Mahoney’s approach, especially during the run-up to the National Cancer Act of 1971, favoring instead NIH’s scientist-driven research, responding to scientific opportunity. But patient-driven research is a closer proxy to social need. Whether health needs or scientific opportunity should guide research priorities has been the hardy perennial question facing biomedical research policy as it grew ten thousandfold in scale since the end of World War II.

CZI and its Rare as One project are not starting from scratch. They are building on research and advocacy movements that have arisen for chordoma, amyotrophic lateral sclerosis, Castleman disease, and many other conditions. And they are drawing on the strategies of HIV/AIDS activists and breast cancer research advocates who directly influenced national research priorities by systematic attention to science, communication, and disciplined priority-setting from outside government.

Where the Howard Hughes Medical Institute and many other research funders have built on the broad base of NIH research by selecting particularly promising investigators or seizing on emerging scientific opportunities, which is indeed an effective cherry-picking strategy, the CZI is instead building capacity for many organizations to get up to speed on science, think through the challenges and resource needs required to address their particular condition, and develop a research strategy to address it. The scientific elite and grass-roots fertilization strategies are complements, but the resources devoted to the patient-driven side of the scale are far less well established, financed, and institutionalized. That makes the effort all the more intriguing.

The Chan Zuckerberg Initiative is at once helping address the collective action problem of small constituencies, many of which cannot easily harness all the knowledge and tools they need, and also building a network of expertise and experts who mutually reinforce one another. It is a potentially powerful new approach, and a promising frontier of philanthropy.

Professor, School for the Future of Innovation in Society and the Consortium for Science, Policy & Outcomes

Arizona State University