In “Fifty Years of Strategies for Equal Access to Graduate Fellowships” (Issues, Fall 2023), Gisèle Muller-Parker and Jason Bourke suggest that examining the National Science Foundation’s efforts to increase the representation of racially minoritized groups in science, technology, engineering, and mathematics “may offer useful lessons” to administrators at colleges and universities seeking to “broaden access and participation” in the aftermath of the US Supreme Court’s 2023 decision limiting the use of race as a primary factor in student admissions.
Perhaps the most important takeaway from the authors’ analysis—one that also aligns with the court’s decision—is that there are no shortcuts to achieving inclusion. Despite its rejection of race as a category in the admissions process, the court’s decision does not bar universities from considering race on an individualized basis. Chief Justice John Roberts maintained that colleges can, for instance, constitutionally consider a student’s racial identity and race-based experience, be it “discrimination, inspiration or otherwise,” if aligned with a student’s unique abilities and skills, such as “courage, determination” or “leadership”—all of which “must be tied to that student’s unique ability to contribute to the university.” This individualized approach to race implies a more qualitatively focused application and review process.
The NSF experience, as Muller-Parker and Bourke show, also underscores the significance of qualitative application and review processes for achieving more inclusive outcomes. Although fellowship awards to racially minoritized groups declined starting in 1999, when the foundation ended its initial race-targeted fellowships, awards later recovered and even surpassed previous levels of inclusion as the foundation shifted from numeric criteria to holistic qualitative evaluation and review, for instance by eliminating summary scores and GRE results and placing more weight on reference letters.
Importantly, the individualized approach to race will place additional burdens on students of color to effectively make their case for how race has uniquely qualified them and made them eligible for admission, and on administrators to reconceptualize, reimagine, and reorganize the admissions process as a whole. Students, particularly those from underserved high schools, will need even more institutional help and clearer instructions when writing their college essays, so they know how to tie race and their racial experience to their academic eligibility.
In the context of college admissions, enhancing equal access in race-neutral ways will require reconceptualizing applicants—as people rather than numbers or categories—and connecting student access more closely to student participation. This demands substantial resources and organizational change: access goals in admissions would need to be closely integrated with the participation goals of other offices, such as student life, residence life, and career services, as well as with academic units; and universities would need to regularly conduct campus climate surveys, assessing not just the number of diverse students in the student body but also the quality of their experiences and the ways their inclusion enhances the quality of the education the university provides.
These holistic measures are easier said than done, especially for smaller teaching-centered or decentralized colleges and universities, and a measurable commitment to diversity will be even patchier than what higher education currently achieves, given the numerous countervailing forces (political, social, financial) that differentially affect public and private institutions and vary significantly from state to state. However, as Justice Sotomayor wrote in closing her dissenting opinion, “Although the court has stripped almost all uses of race in college admissions…universities can and should continue to use all available tools to meet society’s needs for diversity in education.” The NSF’s story provides some hope that this can be achieved if administrators are able and willing to reimagine (and not just obliterate) racial inclusion as a crucial goal for academic excellence.
Gwendoline Alphonso
Professor of Politics
Cochair, College of Arts and Sciences Diversity, Equity and Inclusion Committee
Fairfield University
Gisèle Muller-Parker and Jason Bourke’s discussion of what we might learn from the forced closure of the National Science Foundation’s Minority Graduate Fellowship Program and subsequent work to redesign the foundation’s Graduate Research Fellowship Program (GRFP) succinctly illustrates the hard work required to construct programs that identify and equitably promote talent development. As the authors point out, GRFP, established in 1952, has awarded fellowships to more than 70,000 students, paving the way for at least 40 of those fellows to become Nobel laureates and more than 400 to become members of the National Academy of Sciences.
The program provides a $37,000 annual stipend for three years and a $12,000 cost-of-education allowance with no postgraduate service requirement. It is a phenomenal fellowship, yet the program’s history demonstrates how criteria, processes, and structures can make opportunities disproportionately unavailable to talented persons based on their gender, racial identities, socioeconomic status, and where they were born and lived.
This is the great challenge that education, workforce preparation, and talent development leaders must confront: how to parse concepts of talent and opportunity such that we are able to equitably leverage the whole capacity of the nation. This work must be undertaken now for America to meet its growing workforce demands in science, technology, engineering, mathematics, and medicine—the STEMM fields. This is the only way we will be able to rise to the grandest challenges threatening the world, such as climate change, food and housing instability, and intractable medical conditions.
By and large, most institutions of higher education are shamefully underperforming in meeting those challenges. Here, I point to the too-often overlooked and underfunded regional colleges and universities that were barely affected by the US Supreme Court’s recent decision to end the use of race-conscious admissions policies. Most regional institutions, by nature of their missions and the students they serve, have never used race as a factor in enrollment, and yet they still serve more students from minoritized backgrounds than their Research-1 peers, as demonstrated by research from the Brookings Institution. Higher education leaders must undertake the difficult work of examining the ways in which historical and contemporary bias has created exclusionary structures, processes, and policies that helped reproduce social inequality instead of increasing access and opportunity for all parts of the nation.
The American Association for the Advancement of Science’s SEA Change initiative cultivates that exact capacity building among an institution’s leaders, enabling them to make data-driven, law-attentive, and people-focused change to meet their institutional goals. Finally, I must note one correction to the authors’ otherwise fantastic article: the Supreme Court’s pivotal decision in Students for Fair Admissions v. Harvard and Students for Fair Admissions v. University of North Carolina did not totally eliminate race and ethnicity as a factor in college admissions. Rather, the decision removed the opportunity for institutions to use race as a “bare consideration” and instead reinforced that a prospective student’s development of specific knowledge, skills, and character traits as they relate to race, along with the student’s other lived experiences, can and should be used in the admissions process.
Travis T. York
Director, Inclusive STEMM Ecosystems for Equity & Diversity
American Association for the Advancement of Science
The US Supreme Court’s 2023 rulings on race and admissions have required universities to closely review their policies and practices for admitting students. While the rulings focused on undergraduate admissions, graduate institutions face distinct challenges as they work to comply with the new legal standards. Notably, graduate education tends to be highly decentralized, representing a variety of program cultures and admissions processes. This variety may lead to uncertainty about legally sound practice and, in some cases, a tendency to overcorrect or to default to standards of academic merit that seem “safe” because they have gone uncontested.
Gisèle Muller-Parker and Jason Bourke propose that examining the history of the National Science Foundation’s Graduate Research Fellowship Program (GRFP) can provide valuable information for university leaders and faculty in science, technology, engineering, and mathematics working to reevaluate graduate admissions. The authors demonstrate the potential impact of admission practices often associated with the type of holistic review that NSF currently uses for selecting its fellows: among them, reducing emphasis on quantitative measures, notably GRE scores and undergraduate GPA, and giving careful consideration to personal experiences and traits associated with success. In 2014, for example, the GRFP replaced a requirement for a “Previous Research” statement, which privileged students with access to traditional research opportunities, with an essay that “allows applicants flexibility in the types of evidence they provide about their backgrounds, scientific ability, and future potential.”
These changes made a real difference in the participation of underrepresented students in the GRFP and made it possible for students from a broader range of educational institutions to have a shot at this prestigious fellowship.
Critics of these changes may say that standards were lowered. But the education community at large must unequivocally challenge this view. There is no compelling evidence to support the idea that traditional criteria for admitting students are the best. Scientists must be prepared to study the customs of their field, examining assumptions (“Are experiences in well-known laboratories the only way to prepare undergraduates for research?”) and asking new questions (“To what extent does a diversity of perspectives and problem-solving strategies affect programs and research?”).
As we look to the future, collecting evidence on the effects of new practices, we will need to give special consideration to the following issues:
First, in introducing new forms of qualitative materials, we must not let bias in the back door. Letters and personal statements need careful consideration, both in their construction and in their evaluation.
Second, we must clearly articulate the ways that diversity and inclusion relate to program goals. The evaluation of personal and academic characteristics is more meaningful, and legally sound, when these criteria are transparent to all.
Finally, we must think beyond the admissions process. In what ways can institutions make diversity, equity, and inclusion integral to their cultures and to the social practices supporting good science?
As the history of the GRFP shows, equity-minded approaches to graduate education bring us closer to finding and supporting what the National Science Board calls the “Missing Millions” in STEM. We must question what we know about academic merit and rigorously test the impact of new practices—on individual students, on program environments, and on the health and integrity of science.
Julia D. Kent
Vice President, Best Practices and Strategic Initiatives
Council of Graduate Schools
Native Voices in STEM
“Many of the research meetings I have participated in take place at long rectangular tables where the power and primary conversation participants are at one end. I don’t experience this hierarchical power differential in talking circles. Talking circles are democratic and inclusive. There is still a circle at the rectangular table, just a circle that does not include everyone at the table. I find this to be representative of experiences I have had in my STEM discipline, in which it was difficult to find a place in a community or team or in which I did not feel valued or included.”
Native Voices in STEM: An Exhibition of Photographs and Interviews is a collection of photographs and texts created by Native scientists and funded by the National Science Foundation. It grew from a mixed-methods study conducted by researchers from TERC, the University of Georgia, and the American Indian Science and Engineering Society (AISES). According to the exhibition creators, the artworks speak to the photographers’ experiences of “Two-Eyed Seeing,” or the tensions and advantages of braiding together traditional Native and Western ways of knowing. The exhibition was shown at the 2022 AISES National Conference.
Getting the Most From New ARPAs
The Fall 2023 Issues included three articles—“No, We Don’t Need Another ARPA” by John Paschkewitz and Dan Patt, “Building a Culture of Risk-Taking” by Jennifer E. Gerbi, and “How I Learned to Stop Worrying and Love Intelligible Failure” by Adam Russell—discussing several interesting dimensions of new civilian organizations modeled on the Advanced Research Projects Agency at the Department of Defense. One dimension that could use further elucidation starts with the observation that ARPAs are meant to deliver innovative technology to be utilized by some end customer. The stated mission of the original DARPA is to bridge the gap between “fundamental discoveries and their military use.” The mission of ARPA-H, the newest formulation, is to “deliver … health solutions,” presumably to the US population.
When an ARPA is extraordinarily successful, it delivers an entirely new capability that can be adopted by its end customer. For example, DARPA delivered precursor technology (and prototype demonstrations) for stealth aircraft and GPS. Both were very successfully adopted.
Such adoption requires that the new capability coexist or operate within the existing processes, systems, and perhaps even culture of the customer. Understanding the very real constraints on adoption is best achieved when the ARPA organization has accurate insight into the customer’s specific, high-priority needs, as well as its operations or lifestyle. This requires more than expertise in the relevant technology.
DARPA uses several mechanisms to attain that insight: technology-savvy military officers take assignments in DARPA, then return to their military branch; military departments partner, via co-funding, on projects; and often the military evaluates a DARPA prototype to determine effectiveness. These relations with the end customer are facilitated because DARPA is housed in the same department as its military customer, the Department of Defense.
The health and energy ARPAs face a challenge: attaining comparable insight into their end customers. The Department of Health and Human Services does not deliver health solutions to the US population; the medical-industrial complex does. The Department of Energy does not deliver electric power or electrical appliances; the energy utilities and private industry do. ARPA-H and ARPA-E are organizationally removed from those end customers, both businesses (for profit or not) and the citizen consumer.
Technology advancement enables. But critical to innovating an adoptable solution is identification of the right problem, together with a clear understanding of the real-world constraints that will determine adoptability of the solution. Because civilian ARPAs are removed from many end customers, they would seem to need management processes and organizational structures that increase the probability of producing an adoptable solution from among the many alternative solutions that technology enables.
Anita Jones
Former Director of Defense Research and Engineering
Department of Defense
University Professor Emerita
University of Virginia
Connecting STEM with Social Justice
The United States faces a significant and stubbornly unyielding racialized persistence gap in science, technology, engineering, and mathematics. Nilanjana Dasgupta sums up one needed solution in the title of her article: “To Make Science and Engineering More Diverse, Make Research Socially Relevant” (Issues, Fall 2023).
Among the students who enter college intending to study STEM, persons excluded because of ethnicity or race (PEERs), a group that includes students identifying as Black, Indigenous, and Latine, have a twofold greater likelihood of leaving these disciplines than do non-PEERs. While we know what does not cause the racialized gap—lack of interest or preparation—we largely don’t know how to close it effectively. We know that engaging undergraduates in mentored, authentic scientific research raises their self-efficacy and sense of belonging. However, effective research experiences are difficult to scale because they require significant investments in mentoring and research infrastructure capacity.
Another intervention is much less expensive and much more scalable. Utility-value interventions (UVIs) have a remarkably long-lasting positive effect on students. In this approach, over an academic term, students in an introductory science course spend a modest amount of class time reflecting and writing about how the scientific topic just introduced relates to them personally and to their communities. UVIs benefit all students, resulting in little or no difference in STEM persistence between PEERs and non-PEERs.
Can we do more? Rather than occasionally interrupting class to allow students to connect a science concept with real-world social needs, can we change the way we present the concept itself? The UVI inspires a vision of a new STEM curriculum comprising reimagined courses. We might call the result Socially Responsive STEM, or SR-STEM. SR-STEM would be more than distribution or general education requirements, and more than learning science in the context of a liberal arts education. Instead, the overhaul would create new courses that seamlessly integrate basic science concepts with society and social justice. The courses would encourage students to think critically about the interplay between STEM and non-STEM disciplines such as history, literature, religion, and economics, and to explore how STEM affects society.
Here are a few examples from the life sciences; I think similar approaches can be developed for other STEM disciplines. When learning about evolution, students would investigate and discuss the evidence used to create the false polygenesis theory of human races. In genetics, students would evaluate the evidence for epigenetic effects resulting from the environment and poverty. In immunology, students would explore the sociology and politics of vaccine avoidance. The mechanisms of natural phenomena would be discussed from different perspectives, including Indigenous ways of knowing about nature.
Implementing SR-STEM will require a complete overhaul of the learning infrastructure, including instructor preparation, textbooks, Advanced Placement courses, the GRE and other standardized exams, and accreditation criteria (e.g., those of ACS and ABET). The stories of discoveries we tell in class will change, from the “founding (mostly white and dead) fathers” to contemporary heroes of many identities and from all backgrounds.
It is time to begin a movement in which academic departments, professional societies, and funding organizations build Socially Responsive STEM education so that the connection of STEM to society and social justice is simply what we do.
David J. Asai
Former Senior Director for Science Education
Howard Hughes Medical Institute
To maximize the impact of science, technology, engineering, and mathematics in society, we need to do more than attract a diverse, socially concerned cohort of students to pursue and persist through our academic programs. We need to combine the technical training of these students with social skill building.
To advance sustainability, justice, and resilience goals in the real world (not just through arguments made in consulting reports and journal papers), students need to learn how to earn the respect and trust of communities. In addition to understanding workplace culture, norms, and expectations, and cultivating negotiation skills, they need to know how to research the history, interests, racial and cultural identities, equity concerns, and power imbalances of communities before beginning their work. They need to appreciate the community’s interconnected and, at times, conflicting needs and aspirations. And they need to learn how to communicate and collaborate effectively, to build allies and coalitions, to follow through, and to neither overpromise nor prematurely design the “solution” before fully understanding the problem. They must do all this while staying within the project budget, schedule, and scope—and maintaining high quality in their work.
One of the problems is that many STEM faculty lack these skills themselves. Some may consider the social good implications only after a project has been completed. Others may be so used to a journal paper as the culmination of research that they forget to relay and interpret their technical findings to the groups who could benefit most from them. Though I agree that an increasing number of faculty appear to be motivated by equity and multidisciplinarity in research, translation of research findings into real-world recommendations is much less common. If it happens at all, it frequently oversimplifies key logistical, institutional, cultural, legal, or regulatory factors that made the problem challenging in the first place. Both outcomes greatly limit the social value of STEM research. While faculty in many fields now use problem-based learning to tackle real-world problems in teaching, we are also notorious for attempting to address a generational problem in one semester, then shifting our attention to something else. We request that community members enrich our classrooms by sharing their lived experiences and perspectives with our students without giving much back in return.
Such practices must end if we, as STEM faculty, are to retain our credibility both in the community and with our students, and if we wish to see our graduates embraced by the communities they seek to serve. The formative years of today’s students unfolded against a backdrop of bad news. If they chose STEM out of a belief that science has answers to these maddening challenges, these students need real evidence that their professional actions will yield tangible and positive outcomes. Just like members of the systematically disadvantaged and marginalized communities they seek to support, these students can easily spot hypocrisy, pretense, greenwashing, and superficiality.
As a socially engaged STEM researcher and teacher, I have learned that I must be prepared to follow through with what I have started—as long as it takes. I prep my students for the complex social dynamics they will encounter, without coddling or micromanaging them. I require that they begin our projects with an overview of the work’s potential practical significance, and that our research methods answer questions that are codeveloped with external partners, who themselves are financially compensated for their time whenever possible. By modeling these best practices, I try to give my students (regardless of their cultural or racial backgrounds) competency not just in STEM, but in applying their work in real contexts.
Franco Montalto
Professor, Department of Civil, Architectural, and Environmental Engineering
Drexel University
Nilanjana Dasgupta’s article inspired reflection on our approach at the Burroughs Wellcome Fund (BWF) to promoting diversity in science nationwide along with supporting science, technology, engineering, and mathematics education specifically in North Carolina. These and other program efforts have reinforced our belief in the power of collaboration and partnership to create change.
For nearly 30 years, BWF has supported organizations across North Carolina that provide hands-on, inquiry-based activities for students outside the traditional classroom day. These programs offer a wide range of STEM experiences for students. Some of the students “tinker,” which we consider a worthwhile way to experience the nuts-and-bolts of research, and others explore more socially relevant experiences. An early example is from a nonprofit in the city of Jacksonville, located near the state’s eastern coast. In the program, the city converted an old wastewater treatment plant into an environmental education center where students researched requirements for reintroducing sturgeon and shellfish into the local bay. More than 1,000 students spent their Saturdays learning about environmental science and its application to improve the quality of water in the local watershed. The students engaged their families and communities in a dialogue about environmental awareness, civic responsibility, and local issues of substantial scientific and economic interest.
For our efforts in fostering diversity in science, we have focused primarily on early-career scientists. Our Postdoctoral Diversity Enrichment Program provides professional development support for underrepresented minority postdoctoral fellows. The program places emphasis on a strong mentoring strategy and provides opportunities for the fellows to engage with a growing network of scholars.
Recently, BWF has become active in the Civic Science movement led by the Rita Allen Foundation, which describes civic science as “broad engagement with science and evidence [that] helps to inform solutions to society’s most pressing problems.” This movement is very much in its early stages, but it holds immense possibility to connect STEM to social justice. We have supported fellows in science communication, diversity in science, and the interface of arts and science.
Another of our investments in this space is through the Our Future Is Science initiative, hosted by the Aspen Institute’s Science and Society program. The initiative aims to equip young people to become leaders and innovators in pushing science toward improving the larger society. The program’s goals include sparking curiosity and passion about the connection between science and social justice among youth and young adults who identify as Black, Indigenous, or People of Color, as well as those who have low incomes or reside in rural communities. Another goal is to accelerate students’ participation in the sciences, enabling them to link their interests to tangible educational and career opportunities in STEM that may ultimately benefit their communities.
This is an area ripe for exploration, and I was pleased to read the author’s amplification of this message. At the Burroughs Wellcome Fund, we welcome the opportunity to collaborate on connecting STEM and social justice work to ignite societal change. As a philanthropic organization, we strive to holistically connect the dots of STEM education, diversity in science, and scientific research.
Louis J. Muglia
President and CEO
Burroughs Wellcome Fund
As someone who works on advancing diversity, equity, and inclusion in science, technology, engineering, and mathematics higher education, I welcome Nilanjana Dasgupta’s pointed recommendation to better connect STEM research with social justice. Gone are the days of the academy being reserved for wealthy, white men to socialize and explore the unknown, largely for their own benefit. Instead, today’s academy should be rooted in addressing the challenges that the whole of society faces, whether that be how to sustain food systems, build more durable infrastructure, or identify cures for heretofore intractable diseases.
Approaching STEM research with social justice in mind is the right thing to do both morally and socially. And our educational environments will be better for it, attracting more diverse and bright minds to science. As Dasgupta demonstrates, research shows that when course content is made relevant to students’ lives, students show increases in interest, motivation, and success—and all these findings are particularly pronounced for students of color.
Despite focused attention on increasing diversity, equity, and inclusion over the past several decades, Black, Indigenous, and Latine students continue to remain underrepresented in STEM disciplines, especially in graduate education and the careers that require such advanced training. In 2020, only 24% of master’s and 16% of doctoral degrees in science and engineering went to Black, Indigenous, and Latine graduates, despite these groups collectively accounting for roughly 37% of the US population aged 18 through 34. Efforts to increase representation have also faced significant setbacks due to the recent Supreme Court ruling on the consideration of race in admissions. However, Dasgupta’s suggestion may be one way we continue to further the nation’s goal of diversifying STEM fields in legally sustainable ways, by centering individuals’ commitments to social justice rather than, say, explicitly considering race or ethnicity in admissions processes.
Moreover, while Dasgupta does well to provide examples of how we might transform STEM education for students, the underlying premise of her article—that connecting STEM to social justice is an underutilized tool—is relevant to several other aspects of academia as well.
For instance, what if universities centered faculty hiring efforts on scholars who are addressing social issues and seeking to make the world a more equitable place, rather than relying on the otherwise standard approach of hiring graduates from prestigious institutions who publish in top-tier journals? The University of California, San Diego, may serve as one such example, having hired 20 STEM faculty over the past three years whose research uses social justice frameworks, including bridging Black studies and STEM. These efforts promote diverse thought and advance institutional missions to serve society.
Science philanthropy is also well poised to prioritize social justice research. At Sloan, we have a portfolio of work that examines critical and under-explored questions related to issues of energy insecurity, distributional equity, and just energy system transitions in the United States. These efforts recognize that many historically marginalized racial and ethnic communities, as well as economically vulnerable communities, are often unable to participate in the societal transition toward low-carbon energy systems due to a variety of financial, social, and technological challenges.
In short, situating STEM in social justice should be the default, not the occasional endeavor.
Tyler Hallmark
Program Associate
Alfred P. Sloan Foundation
Building the Quantum Workforce
In “Inviting Millions Into the Era of Quantum Technologies” (Issues, Fall 2023), Sean Dudley and Marisa Brazil convincingly argue that the lack of a qualified workforce is holding the field back from reaching its promising potential. We at IBM Quantum agree. Without intervention, the nation risks developing useful quantum computing alongside a scarcity of practitioners capable of using quantum computers. An IBM Institute for Business Value study found that inadequate skills are the top barrier to enterprises adopting quantum computing. The study identified a small subset of quantum-ready organizations that are talent nurturers with a greater understanding of the quantum skills gap, and that are nearly three times more effective than their peers at workforce development.
Quantum-ready organizations are nearly five times more effective at developing internal quantum skills, nearly twice as effective at attracting talented workers in science, technology, engineering, and mathematics, and nearly three times more effective at running internship programs. At IBM Quantum, we have directly trained more than 400 interns at all levels of higher education and have seen over 8 million learner interactions with Qiskit, including through a series of online seminars on using the open-source Qiskit tool kit for useful quantum computing. However, quantum-ready organizations represent only a small fraction of the organizations and industries that need to prepare for the growth of their quantum workforce.
As we enter the era of quantum utility, meaning the ability for quantum computers to solve problems at a scale beyond brute-force classical simulation, we need a focused workforce capable of discovering the problems quantum computing is best-suited to solve. As we move even further toward the age of quantum-centric supercomputing, we will need a larger workforce capable of orchestrating quantum and classical computational resources in order to address domain-specific problems.
Looking to academia, we need more quantum-ready institutions that are effective not only at teaching advanced mathematics, quantum physics, and quantum algorithms, but also at teaching domain-specific skills such as machine learning, chemistry, materials science, and optimization, along with how to utilize quantum computing as a tool for scientific discovery.
Critically, it is imperative to invest in talent early on. The data on physics PhDs granted by race and ethnicity in the United States paint a stark picture. Industry cannot wait until students have graduated and are knocking on company doors to begin developing a talent pipeline. IBM Quantum has made a significant investment in the IBM-HBCU Quantum Center through which we collaborate with more than two dozen historically Black colleges and universities to prepare talent for the quantum future.
Academia needs to become more effective at supporting quantum research (including cultivating student contributions) and partnering with industry, at connecting students to internships and career opportunities, and at attracting students into the quantum field. Quoting Charles Tahan, director of the National Quantum Coordination Office within the White House Office of Science and Technology Policy: “We need to get quantum computing test beds that students can learn in at a thousand schools, not 20 schools.”
Rensselaer Polytechnic Institute and IBM broke ground on the first IBM Quantum System One on a university campus in October 2023. This presents the RPI community with an unprecedented opportunity to learn and conduct research on a system powered by a utility-scale 127-qubit processor capable of tackling problems beyond the capabilities of classical computers. And as lead organizer of the Quantum Collaborative, Arizona State University—using IBM and other industry quantum computing resources—is working with other academic institutions to provide training and educational pathways from high schools and community colleges through undergraduate and graduate studies in quantum.
Our hope is that these actions will prove to be only part of a broader effort to build the quantum workforce that science, industry, and the nation will need in years to come.
Bradley Holt
Program Director, Global Skills Development
IBM Quantum
Sean Dudley and Marisa Brazil advocate for mounting a national workforce development effort to address the growing talent gap in the quantum field. This effort, they argue, should include educating and training a range of learners, including K–12 students, community college students, and workers outside of science and technology fields, such as marketers and designers. As the field will require developers, advocates, and regulators—as well as users—with varying levels of quantum knowledge, the authors’ comprehensive and inclusive approach to building a competitive quantum workforce is refreshing and justified.
At Qubit by Qubit, founded by The Coding School and one of the largest quantum education initiatives, we have spent the past four years training over 25,000 K–12 and college students, educators, and members of the workforce in quantum information science and technology (QIST). In collaboration with school districts, community colleges and universities, and companies, we have found great excitement among all these stakeholders for QIST education. However, as Dudley and Brazil note, there is an urgent need for policymakers and funders to act now to turn this collective excitement into action.
The authors posit that the development of a robust quantum workforce will help position the United States as a leader of Quantum 2.0, the next iteration of the quantum revolution. Our work suggests that investing in quantum education will not only benefit the field of QIST, but will result in a much stronger workforce at large. Because of the interdisciplinary nature of QIST, learners gain exposure and skills in mathematics, computer science, physics, and engineering, among other fields. Thus, even learners who choose not to pursue a career in quantum will come away with a broad set of highly sought skills that they can apply to other fields offering rewarding futures.
With the complexity of quantum technologies, there are a number of challenges in building a diverse quantum workforce. Dudley and Brazil highlight several of these, including the concentration of training programs in highly resourced institutions, and the need to move beyond the current focus on physics and adopt a more interdisciplinary approach. There are several additional challenges that need to be considered and addressed if millions of Americans are to become quantum-literate, including:
Funding efforts have been focused on supporting pilot educational programs instead of scaling already successful programs, meaning that educational opportunities are not widely accessible.
Many educational programs are one-offs that leave students without clear next steps. Because of the complexity of the subject area, learning pathways need to be established for learners to continue developing critical skills.
Diversity, inclusion, and equity efforts have been minimal and will require concerted work between industry, academia, and government.
Historically, the United States has begun conversations around workforce development for emerging and deep technologies too late, and thus has failed to ensure the workforce at large is equipped with the necessary technical knowledge and skills to move these fields forward quickly. We have the opportunity to get it right this time and ensure that the United States is leading the development of responsible quantum technologies.
Kiera Peltz
Executive Director, Qubit by Qubit
Founder and CEO, The Coding School
To create an exceptional quantum workforce, give all Americans a chance to discover the beauty of quantum information science and technology, contribute meaningfully to the nation’s economic and national security, and create much-needed bridges with other like-minded nations across the world as a counterbalance to the balkanization of science, we have to change how we are teaching quantum. Even today, five years after the National Quantum Initiative Act became law, the word “entanglement”—the key to the way quantum particles interact that makes quantum computing possible—does not appear in physics courses at many US universities. And there are perhaps only 10 to 20 schools offering quantum engineering education at any level, from undergraduate to graduate. Imagine the howls if this were the case with computer science.
The imminence of quantum technologies has motivated physicists—at least in some places—to reinvent their teaching, listening to and working with their engineering, computer science, materials science, chemistry, and mathematics colleagues to create a new kind of course. In 2020, these early experiments in retooling led to a convening of 500 quantum scientists and engineers to debate undergraduate quantum education. Building on success stories such as the quantum concepts course at Virginia Tech, we laid out a plan, published in IEEE Transactions on Education in 2022, to bridge the gap between the excitement around quantum computing generated in high school and the kind of advanced graduate research in quantum information that is really so astounding. The good news is that, as Virginia Tech showed, quantum information can be taught with pictures and a little algebra to first-year college students. The same is true at the community college level, which means the massive cohort of diverse engineers who start their careers there have a shot at inventing tomorrow’s quantum technologies.
However, there are significant missing pieces. For one, there are almost no community college opportunities to learn quantum anything because such efforts are not funded at any significant level. For another, although we know how to teach the most speculative area of quantum information, namely quantum computing, to engineers, and even to new students, we really don’t know how to do that for quantum sensing, which allows us to perform position, navigation, and timing without resorting to our fragile GPS system, and to measure new space-time scales in the brain without MRI, to name two of many applications. It is the most advanced area of quantum information, with successful field tests and products on the market now, yet we are currently implementing quantum engineering courses focused on a quantum computing outcome that may be a decade or more away.
How can we solve the dearth of quantum engineers? First, universities and industry can play a major role by working together—and several such collective efforts are showing the way. Arizona State University’s Quantum Collaborative is one such example. The quantum consortium spanning Colorado, New Mexico, and Wyoming recently received a preliminary grant from the US Economic Development Administration to help advance both quantum development and education programs, including at community colleges, in their regions. Such efforts should be funded and expanded, and the lessons they provide should be promulgated nationwide. Second, we need to teach engineers what actually works. This means incorporating quantum sensing from the outset in all budding quantum engineering education systems, building on already deployed technologies. And third, we need to recognize that much of the nation’s quantum physics education is badly out of date and start modernizing it, just as we are now modernizing engineering and computer science education with quantum content.
Lincoln D. Carr
Quantum Engineering Program and Department of Physics
Colorado School of Mines
Preparing a skilled workforce for emerging technologies can be challenging. Training moves at the scale of years while technology development can proceed much faster or slower, creating timing issues. Thus, Sean Dudley and Marisa Brazil deserve credit for addressing the difficult topic of preparing a future quantum workforce.
At the heart of these discussions are the current efforts to move beyond Quantum 1.0 technologies that make use of quantum mechanical properties (e.g., lasers, semiconductors, and magnetic resonance imaging) to Quantum 2.0 technologies that more actively manipulate quantum states and effects (e.g., quantum computers and quantum sensors). With this focus on ramping up a skilled workforce, it is useful to pause and look at the underlying assumption that the quantum workforce requires active management.
In their analysis, Dudley and Brazil cite a report by McKinsey & Company, a global management consulting firm, which found that three quantum technology jobs exist for every qualified candidate. While this seems like a major talent shortage, the statistic is less concerning when presented in absolute numbers. Because the field is still small, the difference is less than 600 workers. And the shortage exists only when considering graduates with explicit Quantum 2.0 degrees as qualified potential employees.
McKinsey recommended closing this gap by upskilling graduates in related disciplines. Considering that 600 workers is about 33% of physics PhDs, 2% of electrical engineers, or 1% of mechanical engineers graduated annually in the United States, this seems a reasonable solution. However, employers tend to be rather conservative in their hiring and often ignore otherwise capable applicants who haven’t already demonstrated proficiency in desired skills. Thus, hiring “close-enough” candidates tends to occur only when employers feel substantial pressure to fill positions. Based on anecdotal quantum computing discussions, this probably isn’t happening yet, which suggests employers can still afford to be selective. As Ron Hira notes in “Is There Really a STEM Workforce Shortage?” (Issues, Summer 2022), shortages are best measured by wage growth. And if such price signals exist, one should expect that students and workers will respond accordingly.
If the current quantum workforce shortage is uncertain, the future is even more so. The exact size of the needed future quantum workforce depends on how Quantum 2.0 technologies develop. For example, semiconductors and MRI machines are both mature Quantum 1.0 technologies. The global semiconductor industry is a more than $500 billion business (measured in US dollars), while the global MRI business is about 100 times smaller. If Quantum 2.0 technologies follow the specialized, lab-oriented MRI model, then the workforce requirements could be more modest than many projections. More likely is a mix of market outcomes in which technologies such as quantum sensors, which have many applications and are closer to commercialization, capture a larger near-term market while quantum computers remain a complex niche technology for many years. The details are difficult to predict but will dictate workforce needs.
When we assume that rapid expansion of the quantum workforce is essential for preventing an innovation bottleneck, we are left with the common call to actively expand diversity and training opportunities outside of elite institutions—a great idea, but maybe the right answer to the wrong question. And misreading technological trends is not without consequences. Overproducing STEM workers benefits industry and academia, but not necessarily the workers themselves. If we prematurely attempt to put quantum computer labs in every high school and college, we may be setting up less-privileged students to pursue jobs that may not develop, equipped with skills that may not be easily transferred to other fields.
Daniel J. Rozell
Research Professor
Department of Technology and Society
Stony Brook University
An Evolving Need for Trusted Information
In “Informing Decisionmakers in Real Time” (Issues, Fall 2023), Robert Groves, Mary T. Bassett, Emily P. Backes, and Malvern Chiweshe describe how scientific organizations, funders, and researchers came together to provide vital insights in a time of global need. Their actions during the COVID-19 pandemic created new ways for researchers to coordinate with one another and better ways to communicate critical scientific insights to key end users. Collectively, these actions accelerated the translation of basic research into life-saving applications.
Examples such as the Societal Experts Action Network (SEAN) that the authors highlight reveal the benefits of a new approach. While at the National Science Foundation, we pitched the initial idea for this project and the name to the National Academies of Sciences, Engineering, and Medicine (NASEM). We were inspired by NASEM’s new research-to-action workflows in biomedicine and saw opportunities for thinking more strategically about how social science could help policymakers and first responders use many kinds of research more effectively.
SEAN’s operational premise is that by building communication channels through which end users can describe their situations precisely, researchers can better tailor their translations to those situations. Like NASEM, we did not want to sacrifice rigor in the process. Quality control was essential. Therefore, we designed SEAN to align translations with key properties of the underlying research designs, data, and analysis. The incredible SEAN leadership team that NASEM assembled implemented this plan. They committed to careful inferences about the extent to which key attributes of individual research findings, or collections of research findings, did or did not generalize to end users’ situations. They also committed to conducting real-time evaluations of their effectiveness. With this level of commitment to rigor, to research quality filters, and to evaluations, SEAN produced translations that were rigorous and usable.
There is significant benefit to supporting approaches such as this going forward. To see why, consider that many current academic ecosystems reward the creation of research, its publication in journals, and, in some fields, connections to patents. These are all worthy activities. However, societies sometimes face critical challenges where interdisciplinary collaboration, a commitment to rigor and precision, and an advanced understanding of how key decisionmakers use scientific content are collectively the difference between life and death. Ecosystems that treat journal publications and patents as the final products of research processes will have limited impact in these circumstances. What Groves and coauthors show is the value of designing ecosystems that produce externally meaningful outcomes.
Scientific organizations can do more to place modern science’s methods of measurement and inference squarely in the service of people who can save lives. With structures such as SEAN that more deeply connect researchers to end users, we can incentivize stronger cultures of responsiveness and accountability to thousands of end users. Moreover, when organizations network these quality-control structures, and then motivate researchers to collaborate and share information effectively, socially significant outcomes are easier to stack (we can more easily build on each other’s insights) and scale (we can learn more about which practices generalize across circumstances).
To better serve people across the world, and to respect the public’s sizeable investments in federally funded scientific research, we should seize opportunities to increase the impact and social value of the research that we conduct. New research-to-action workflows offer these opportunities and deserve serious attention in years to come.
Daniel Goroff
Alfred P. Sloan Foundation
Arthur Lupia
University of Michigan
As Robert Groves, Mary T. Bassett, Emily P. Backes, and Malvern Chiweshe describe in their article, the COVID-19 pandemic highlighted the value and importance of connecting social science to on-the-ground decisionmaking and solution-building processes, which require bridging societal sectors, academic fields, communities, and levels of governance. That the National Academies of Sciences, Engineering, and Medicine and public and private funders—including at the local level—created and continue to support the Societal Experts Action Network (SEAN) is encouraging. Still, the authors acknowledge that there is much work needed to normalize and sustain support for ongoing research-practice partnerships of this kind.
In academia, for example, the pandemic provided a rallying point that encouraged cross-sector collaborations, in part by forcing a change to business-as-usual practices and incentivizing social scientists to work on projects perceived to offer limited gains within academic reward systems, such as tenure processes. Without a large-scale reconfiguration of resources and rewards, such as the pandemic crisis triggered to some extent, partnerships like those undertaken by SEAN face numerous barriers. Building trust, fostering shared goals, and implementing new operational practices across diverse participants can be slow and expensive. Fitting these efforts into existing funding structures is also challenging, as long-term returns may be difficult to measure or articulate. In a post-COVID world, what incentives will remain for researchers and others to pursue necessary work like SEAN’s, spanning boundaries across sectors?
One answer comes from a broader ecosystem of efforts in “civic science,” of which we see SEAN as a part. Proponents of civic science argue that boundary-spanning work is needed in times of crisis as well as peace. In this light, we see a culture shift in which philanthropies, policymakers, community leaders, journalists, educators, and academics recognize that research-practice partnerships must be made routine rather than exceptional. This culture shift has facilitated our own work as researchers and filmmakers as we explore how research informing filmmaking, and vice versa, might foster pro-democratic outcomes across diverse audiences. For example, how can science films enable holistic science literacy that supports deliberation about science-related issues among conflicted groups?
At first glance, our work may seem distant from SEAN’s policy focus. However, we view communication and storytelling (in non-fiction films particularly) as creating “publics,” or people who realize they share a stake in an issue, often despite some conflicting beliefs, and who enable new possibilities in policy and society. In this way and many others, our work aligns with a growing constellation of participants in the Civic Science Fellows program and a larger collective of collaborators who are bridging sectors and groups to address key challenges in science and society.
As the political philosopher Peter Levine has said, boundary-spanning work enables us to better respond to the civic question “What should we do?” that runs through science and broader society. SEAN illustrates how answering such questions cannot be done well—at the level of quality and legitimacy needed—in silos. We therefore strongly support multisector collaborations like those that SEAN and the Civic Science Fellows program model. We also underscore the opportunity and need for sustained cultural and institutional progress across the ecosystem of connections between science and civil society, to reward diverse actors for investing in these efforts despite their scope and uncertainties.
Emily L. Howell
Researcher
Science Communication Lab
Nicole M. Krause
Associate
Morgridge Institute for Research
Ian Cheney
Documentary film director and producer
Wicked Delicate Films
Elliot Kirschner
Executive Producer
Science Communication Lab
Sarah S. Goodwin
Executive Director
Science Communication Lab
I read Robert Groves, Mary T. Bassett, Emily P. Backes, and Malvern Chiweshe’s essay with great interest. It is hard to remember the early times of COVID-19, when everyone was desperate for answers and questions popped up daily about what to do and what was right. As a former elected county official and former chair of a local board of health, I valued the welcome I received when I was appointed to the Societal Experts Action Network (SEAN) that the authors highlight. I believe that as a nonacademic, I was able to bring a pragmatic, on-the-ground perspective to the investigations and recommendations.
At the time, local leaders were dealing with a pressing need for scientific information when politics were becoming fraught with dissension and the public had reduced trust in science. Amid such pressure, the speed at which SEAN operated was remarkable—light speed compared with what I viewed as the usual standards of large organizations such as its parent, the National Academies of Sciences, Engineering, and Medicine. SEAN’s efforts were nimble and focused, allowing us to collaborate while addressing massive amounts of data.
Now, the key to addressing the evolving need for trusted and reliable information, responsive to the modern world’s speed, will be supporting and replicating the work of SEAN. The relationships formed across jurisdictions and institutions will remain imperative not only for ensuring academic rigor but also for understanding how to build bridges of trust that support the value of science, meet the need for resilience, and provide the wherewithal to progress in the face of constant change.
Linda Langston
President, Langston Strategies Group
Former member of the Linn County, Iowa, Board of Supervisors
Former President, National Association of Counties
Rebuilding Public Trust in Science
As Kevin Finneran noted in “Science Policy in the Spotlight” (Issues, Fall 2023), “In the mid-1950s, 88% of Americans held a favorable attitude toward science.” But the story was even better back then. When the American National Election Study began asking about trust in government in 1948, about three-quarters of people said they trusted the federal government to do the right thing almost always or most of the time (the share is now under one-third and dropping, especially among Generation Z and millennials). Increasing public trust in science is important, but transforming new knowledge into societal impacts at scale will require much more. It will require meaningful public engagement and trust-building across the entire innovation cycle, from research and development to scale-up, commercialization, and successful adoption and use. Public trust in this system can break down at any point—as the COVID-19 pandemic, which robbed the world of at least 20 million years of human life, made painfully clear.
For over a decade, I had the opportunity to support dozens of focus groups and national surveys exploring public perceptions of scientific developments in areas such as nanotechnology, synthetic biology, cellular agriculture, and gene editing. Each of these exercises provided new insights and an appreciation for the often-maligned public mind. As the physicist Richard Feynman once noted, believing that “the average person is unintelligent” is a very dangerous idea.
The exercises found that, when confronted with the emergence of novel technologies, people were remarkably consistent in their concerns and demands. For instance, there was little support for halting scientific and technological progress, with some noting, “Continue to go forward, but please be careful.” Being careful was often framed around three recurring themes.
First, there was a desire for increased transparency, from both government and businesses. Second, people often asked for more pre-market research and risk assessment. In other words, don’t test new technologies on us—but unfortunately this now seems the default business model for social media and generative artificial intelligence. People voiced valid concerns that long-term risks would be overlooked in the rush to move products into the marketplace, and there was confusion about who exactly was responsible for such assessments, if anybody. Finally, many echoed the need for independent, third-party verification of both the risks and the benefits of new technologies, driven by suspicions of industry self-regulation and decreased trust in government oversight.
Taken as a whole, these public concerns sound reasonable, but remain a heavy lift. There is, unfortunately, very little “public” in the nation’s public policies, and we have entered an era where distrust is the default mode. Given this state of affairs, one should welcome the recent recommendations proposed to the White House by the President’s Council of Advisors on Science and Technology: to “develop public policies that are informed by scientific understanding and community values [creating] a dialogue … with the American people.” The question is whether these efforts go far enough and can occur fast enough to bend the trust curve back before the next pandemic, climate-related catastrophe, financial meltdown, geopolitical crisis, or arrival of artificial general intelligence.
David Rejeski
Visiting Scholar
Environmental Law Institute
Coping in an Era of Disentangled Research
In “An Age of Disentangled Research?” (Issues, Fall 2023), Igor Martins and Sylvia Schwaag Serger raise interesting questions about the changing nature of international cooperation in science and about the engagement of Chinese scientists with researchers in other countries. The authors rightly call attention to the rapid expansion of cooperation as measured in particular by bibliometric analyses. But as they point out, we may be seeing “signs of a potential new era of research in which global science is divided into geopolitical blocs of comparable economic, scientific, and innovative strength.”
While bibliometric data can give us indicators of such a trend, we have to look deeper to fully understand what is happening. Clearly, significant geopolitical forces are at work, generating heightened concerns for national security and, by extension, information security pertaining to scientific research. The fact that many areas of cutting-edge science also have direct implications for economic competitiveness and military capabilities further reinforces the security concerns raised by geopolitical competition, raising barriers to cooperation.
Competition and discord in international scientific activities are certainly not new. Yet forms of cooperation remain, continuing to give science a sense of community and common purpose. That cooperative behavior is often quite subtle and indirect, as a result of multiple modalities of contact and communication. Direct international cooperation among scientists, relations among national and international scientific organizations, the international roles of universities, and the various ways that numerous corporations engage scientists and research centers around the world illustrate the plethora of modes and platforms.
From the point of view of political authorities, devising policies for this mix of modalities is no small challenge. Concerns about maintaining national security often lead to government intrusions into the professional interactions of the scientific community. There are no finer examples of this than the security policy initiatives being implemented in the United States and China, the results of which appear in the bibliometric data presented by the authors. At the same time, we might ask whether scientific communication continues in a variety of other forms, raising hopes that political realities will change. In addition, what should we make of the development of new sites for international cooperation such as the King Abdullah University of Science and Technology in Saudi Arabia and Singapore’s emergence as an important international center of research? Further examination of such questions is warranted as we try to understand the trends suggested by Martins and Schwaag Serger.
In addition, there is more to be learned about the underlying norms and motivations that constitute the “cultures” of science, in China and elsewhere. Research integrity, evaluation practices, research ethics, and science-state relations, among other issues, all involve the norms of science and pertain to its governance. In today’s world, that governance clearly involves a fusion of the policies of governments with the cultures of science. As with geopolitical tensions, matters of governance also hold the potential for producing the bifurcated world of international scientific cooperation the authors suggest. At the same time, we are not without evidence that norms diffuse, supporting cooperative behavior.
We are thus at an interesting moment in our efforts to understand international research cooperation. While signs of “disentanglement” are before us, we are also faced with complex patterns of personal and institutional interactions. It is tempting to discuss this moment in terms of the familiar “convergence-divergence” distinction, but such a binary formulation does not do justice to enduring “community” interests among scientists globally, even as government policies and intellectual traditions may make some forms of cooperation difficult.
Richard “Pete” Suttmeier
Professor Emeritus, Political Science
University of Oregon
In Australia, the quality and impact of research is built upon uncommonly high levels of international collaboration. Compared with the global average of almost 25% cited by Igor Martins and Sylvia Schwaag Serger, over 60% of Australian research now involves international collaboration. So the questions the authors raise are essential for the future of Australian universities, research, and innovation.
While there are some early signs of “disentanglement” in Australian research—such as the recent mapping of a decline in collaboration with Chinese partners in projects funded by the Australian Research Council—the overall picture is still one of increasing international engagement. In 2022, Australian researchers coauthored more papers with Chinese colleagues than with American colleagues (but only just). This is the first time in Australian history that our major partner for collaborative research has been a country other than a Western military ally. But the fastest growth in Australia’s international research collaboration over the past decade was actually with India, not China.
At the same time, the connection between research and national and economic security is being drawn more clearly. At a major symposium at the Australian Academy of Science in Canberra in November 2023, Australia’s chief defense scientist talked about a “paradigm shift,” where the definition of excellent science was changing from “working with the best in the world” to “working with the best in the world who share our values.”
Navigating these shifts in global knowledge production, collaboration, and innovation is going to require new strategies and an improved evidence base to inform the decisions of individual researchers, institutions, and governments in real time. Martins and Schwaag Serger are asking critical questions and bringing better data to the table to help us answer them.
As a country with a relatively small population (producing 4% of the world’s published research), Australia has succeeded over recent decades by being an open and multicultural trading nation, with high levels of international engagement, particularly in our Indo-Pacific region.
Increasing geostrategic competition is creating new risks for international research collaboration, and we need to manage these. In Australia in the past few years, universities and government agencies have established a joint task force for collaboration in addressing foreign interference, and there is also increased screening and government review of academic collaborations. But to balance the increased focus on the downsides of international research, we also need better evidence and analysis of the upsides—the benefits that accrue to Australia from being connected to the global cutting edge. While managing risk, we should also be alert to the risk of missing out.
Paul Harris
Executive Director, Innovative Research Universities
Canberra, Australia
The commentary on Igor Martins and Sylvia Schwaag Serger’s article is closely in tune with recent reports published by the Policy Institute at King’s College London. Most recently, in Stumbling Bear, Soaring Dragon and The China Question Revisited, we drew attention to the extraordinary rising research profile of China, which has disrupted the G7’s dominance of the global science network. This is a reality that scientists in other countries cannot ignore, not least because it is only by working with colleagues at the laboratory bench that we develop a proper understanding of the aims, methods, and outcomes of their work. If China is now producing as many highly cited research papers as the United States and the European Union, then knowing its work only by reading is blind folly.
These considerations need to be set in a context of international collaboration, rising over the past four decades as travel got cheaper and communications improved. In 1980, less than 10% of articles and reviews published in the United Kingdom had an international coauthor; that now approaches 70% and is greatest among the leading research-intensive universities. A similar pattern occurs across the European Union. The United States is somewhat less international, having the challenge of a continent to span domestically. However, a strong, interconnected global network underpins the vast majority of highly cited papers that signal change and innovation. How could climate science, epidemiology, and health management work without such links?
The spread across disciplines is lumpy. Much of the trans-Atlantic research trade is in biomedicine and molecular biology. The bulk of engagement with China has been in technology and the physical sciences. That is unsurprising since this is where China had historical strength and where Western researchers were more open to new collaborations. Collaboration in social sciences and in humanities is sparse because many priority topics are regional or local. But collaboration is growing in almost every discipline and is shifting from bilateral to multilateral. Constraining this to certain subjects and politically correct partners would be a disaster for global knowledge horizons.
Jonathan Adams
Visiting Professor at the Policy Institute, King’s College London
Chief Scientist at the Institute for Scientific Information, Clarivate
Janet Ilieva
Founder and Director of Education Insight
Jo Johnson
Visiting Professor at the Policy Institute, King’s College London
Former UK Minister of State for Universities, Science, Research and Innovation
Lessons from Ukraine for Civil Engineering
The resilience of Ukraine’s infrastructure in the face of both conventional and cyber warfare, as well as attacks on the knowledge systems that underpin its operations, is no doubt rooted in the country’s history. Ukraine has been living with the prospect of warfare and chaos for over a century. This “normal” appears to have produced an agile and flexible infrastructure system that every day shows impressive capacity to adapt.
In “What Ukraine Can Teach the World About Resilience and Civil Engineering,” Daniel Armanios, Jonas Skovrup Christensen, and Andriy Tymoshenko leverage concepts from sociology to explain how the country is building agility and flexibility into its infrastructure system. They identify key tenets that provide resilience: a shared threat that unites and motivates, informal supply networks, decentralized management, learning from recent crises (namely COVID-19), and modular and distributed systems. Resilience naturally requires coupled social, ecological, and technological systems assessment, recognizing that sustained and expedited adaptation is predicated on complex dynamics that occur within and across these systems. As such, there is much to learn from sociology, but also other disciplines as we unpack what’s at the foundation of these tenets.
Agile and flexible infrastructure systems ultimately produce a repertoire of responses as large as or larger than the variety of conditions produced in their environments. This is known as requisite complexity. Thriving under a shared threat is rooted in the notion that systems can do a lot of innovation at the edge of chaos (complexity theory), if resources including knowledge are available and there is flexibility to reorganize as stability wanes. The informal networks Ukraine has used to source resources exist because formal networks are likely unavailable or unreliable. We often ignore ad hoc networks in stable situations, and even during periods of chaos such as extreme weather events, because the organization is viewed as unable to fail—and therefore it too often falls back on its siloed and rigid structures, which deal ineffectively with prevailing conditions.
Ukraine didn’t have this luxury. Management and leadership science describe how informal networks are more adept at finding balance than are rigid and siloed organizations. Relatedly, the proposition of decentralized management is akin to imbuing those closest to the chaos, who are better attuned to the specifics of what is unfolding, with greater decisionmaking authority. This is related to the concept of near decomposability (complexity science). This decentralized model works well during periods of instability, but can lead to inefficiencies during stable times. During rebuilding, you may not want decentralization as you try to efficiently use limited resources.
Lastly, modularity and distributed systems are often touted as resilience solutions, and indeed they can have benefits under the right circumstances. Network science teaches us that decentralized systems shift the nature of the system from one big producer supplying many consumers (vulnerable to attack) to many small producers supplying many consumers (resilient). Distributed systems link decentralized and modular assets together so that greater cognition and functionality are achieved. But caution should be used in moving toward purely decentralized systems for resilience, as there are situations where resilience is more closely realized with centralized configurations.
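To make that contrast concrete, consider a minimal simulation sketch. It is my own illustration, not the author’s model, and the node names and counts are arbitrary assumptions: removing the lone producer of a centralized star network strands every consumer, while removing one of several producers in a distributed network degrades supply only slightly.

```python
# Hypothetical illustration (not from the commentary): compare how a
# centralized and a distributed supply network survive losing a producer.
import random

def reachable_consumers(edges, producers, removed):
    """Count consumers still connected to any surviving producer (BFS)."""
    graph = {}
    for a, b in edges:
        if a in removed or b in removed:
            continue  # failed nodes take their links down with them
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    seen, frontier = set(), [p for p in producers if p not in removed]
    while frontier:
        node = frontier.pop()
        if node not in seen:
            seen.add(node)
            frontier.extend(graph.get(node, ()))
    return sum(1 for n in seen if n.startswith("c"))

consumers = [f"c{i}" for i in range(20)]

# Centralized: one big producer supplies every consumer.
star = [("p0", c) for c in consumers]
# Distributed: five small producers; each consumer draws on two of them.
producers = [f"p{i}" for i in range(5)]
mesh = [(random.choice(producers), c) for c in consumers for _ in range(2)]

print(reachable_consumers(star, ["p0"], {"p0"}))     # 0: total collapse
print(reachable_consumers(mesh, producers, {"p0"}))  # ~20: graceful degradation
```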
Fundamentally, as the authors note, Ukraine is showing us how to build and operate infrastructure in rapidly changing and chaotic environments. But it is also important to recognize that infrastructure in regions not facing warfare is likely to experience shifts between chaotic (e.g., extreme weather events, cyberattacks, failure due to aging) and stable conditions. This cycling necessitates being able to pivot infrastructure organizations and their technologies between chaos and non-chaos innovation. The capabilities produced from these innovation sets become the cornerstone for agile and flexible infrastructure to respond at pace and scale to known challenges and perhaps, most importantly, to surprise.
Mikhail Chester
Professor of Civil, Environmental, and Sustainable Engineering
Arizona State University
Coauthor, with Braden Allenby, of The Rightful Place of Science: Infrastructure and the Anthropocene
In their essay, Daniel Armanios, Jonas Skovrup Christensen, and Andriy Tymoshenko provide insightful analysis of the Ukraine conflict and how the Ukrainian people are able to manage the crisis. Their recounting reminds me of an expression frequently used in the US Marines: improvise, adapt, and overcome. Having lived and worked for many years in Ukraine, and having returned for multiple visits since the Russian invasion, I am convinced that while the conflict will be long, Ukraine will succeed in the end. The five propositions the authors lay out as the key to success are spot on.
Ukraine’s common goal of bringing its people together (authors’ Proposition 1), along with the Slavic culture and a particular legacy of the Soviet system, combines to form the fundamental core of why the Ukrainian people not only survive but often flourish during times of crisis. Slavic people are, by my observation, tougher and more resilient than average. Some will call it “grit,” some may call it “stoic”—but make no mistake, a country that has experienced countless invasions, conflicts, famines, and other hardships imbues its people with a special character. It is this character that serves as the cornerstone of their attitude and, in the end, their response. Unified hard people can endure hard things.
A point to remember is that Ukraine, like most of the former Soviet Union, benefits from a legacy infrastructure based on redundancy and simplicity. This is complementary to the authors’ Proposition 5 (a modular, distributed, and renewable energy infrastructure is more resilient in time of crisis). It was Vladimir Lenin who said, “Communism equals Soviet power plus the electrification of the whole country.” As a consequence, the humblest village in Ukraine has some form of electricity, and given each system’s robust yet simple connection, it is easily repaired when broken. Combine this with distributed generation (be it gensets or wind, solar, or some other type of renewable energy) and you have built-in redundancy.
During Soviet times, everyone needed to develop a “work-around” to source what they needed or wanted. Waiting for the Soviet state to supply something could take forever, if it ever happened at all. As a consequence, there were microentrepreneurs everywhere who could source, build, or repair just about anything, either for themselves or their neighbors. This system continues to flourish in Ukraine, and the nationalistic sentiment pervading the country makes it easier to recover from infrastructure damage. As the authors point out in Proposition 3, decentralized management allows for a more agile response.
The “lessons learned” from the ongoing conflict, as the authors describe, include, perhaps most importantly, that learning from previous incidents can help develop a viable incident response plan. Such planning, however, should be realistic and focus on the “probable” and not so much on the “possible,” since every situation and plan is resource-constrained to some degree. The weak link in any society is the civilian infrastructure, and failure to ensure redundancy and rapid restoration is not an option. Ukraine is showing the world how it can be accomplished.
Steve Walsh
Supervisory Board Member
Ukrhydroenergo
Ground Truths Are Human Constructions
Artificial intelligence algorithms are human-made, cultural constructs, something I saw first-hand as a scholar and technician embedded with AI teams for 30 months. Among the many concrete practices and materials these algorithms need in order to come into existence are sets of numerical values that enable machine learning. These referential repositories are often called “ground truths,” and when computer scientists construct or use these datasets to design new algorithms and attest to their efficiency, the process is called “ground-truthing.”
Understanding how ground-truthing works can reveal inherent limitations of algorithms—how they enable the spread of false information, pass biased judgments, or otherwise erode society’s agency—and this could also catalyze more thoughtful regulation. As long as ground-truthing remains clouded and abstract, society will struggle to prevent algorithms from causing harm and to optimize algorithms for the greater good.
Ground-truth datasets define AI algorithms’ fundamental goal of reliably predicting and generating a specific output—say, an image with requested specifications that resembles other input, such as web-crawled images. In other words, ground-truth datasets are deliberately constructed. As such, they, along with their resultant algorithms, are limited and arbitrary and bear the sociocultural fingerprints of the teams that made them.
Ground-truth datasets fall into at least two subsets: input data (what the algorithm should process) and output targets (what the algorithm should produce). In supervised machine learning, computer scientists build new algorithms using one part of the human-annotated output targets, before evaluating the resulting algorithms on the remaining part. In the unsupervised (or “self-supervised”) machine learning that underpins most generative AI, output targets are used only to evaluate new algorithms.
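A minimal sketch of the supervised case may help. This is my own illustration, assuming scikit-learn and using a stock dataset as a stand-in for a human-annotated ground truth; it is not an example drawn from the essay.

```python
# Hypothetical sketch: a ground truth of human-defined targets is split so
# that one part trains the algorithm and the held-out part attests to it.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # inputs plus human-defined output targets

# Build the algorithm on one part of the ground truth...
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ...and attest to its "efficiency" on the remaining part. The reported
# accuracy is only as objective as the ground truth behind it.
print(f"accuracy on held-out targets: {model.score(X_test, y_test):.2f}")
```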
Most production-grade generative AI systems are assemblages of algorithms built from both supervised and self-supervised machine learning. For example, an AI image generator depends on self-supervised diffusion algorithms (which create a new set of data based on a given set) and supervised noise reduction algorithms. In other words, generative AI is thoroughly dependent on ground truths and their socioculturally oriented nature, even if it is often presented—and rightly so—as a significant application of self-supervised learning.
Why does that matter? Much of AI punditry asserts that we live in a post-classification, post-socially constructed world in which computers have free access to “raw data,” which they refine into actionable truth. Yet data are never raw, and consequently actionable truth is never totally objective.
Algorithms do not create so much as retrieve what has already been supplied and defined—albeit repurposed and with varying levels of human intervention. This observation rebuts certain promises around AI and may sound like a disadvantage, but I believe that it could instead be an opportunity for social scientists to begin new collaborations with computer scientists. This could take the form of a professional social activity: people working together to describe the ground-truthing processes that underpin new algorithms, and so help make them more accountable and trustworthy.
AI Lacks Ethics Checks for Human Experimentation
Following Nazi medical experiments in World War II and outrage over the US Public Health Service’s four-decade-long Tuskegee syphilis study, bioethicists laid out frameworks, such as the 1947 Nuremberg Code and the 1979 Belmont Report, to regulate medical experimentation on human subjects. Today social media—and, increasingly, generative artificial intelligence—are constantly experimenting on human subjects, but without institutional checks to prevent harm.
In fact, over the last two decades, individuals have become so used to being part of large-scale testing that society has essentially been configured to produce human laboratories for AI. Examples include experiments with biometric and payment systems in refugee camps (designed to investigate use cases for blockchain applications), urban living labs where families are offered rent-free housing in exchange for serving as human subjects in a permanent marketing and branding experiment, and a mobile money research and development program where mobile providers offer their African consumers to firms looking to test new biometric and fintech applications. Originally put forward as a simpler way to test applications, the convention of software as “continual beta” rather than more discrete releases has enabled business models that depend on the creation of laboratory populations whose use of the software is observed in real time.
This experimentation on human populations has become normalized, and forms of AI experimentation are touted as a route to economic development. The Digital Europe Programme launched AI testing and experimentation facilities in 2023 to support what the program calls “regulatory sandboxes,” where populations will interact with AI deployments in order to produce information for regulators on harms and benefits. The goal is to allow some forms of real-world testing for smaller tech companies “without undue pressure from industry giants.” It is unclear, however, what can pressure the giants and what constitutes a meaningful sandbox for generative AI; given that it is already being incorporated into the base layers of applications we would be hard-pressed to avoid, the boundaries between the sandbox and the world are unclear.
Generative AI is an extreme case of unregulated experimentation-as-innovation, with no formal mechanism for considering potential harms. These experiments are already producing unforeseen ruptures in professional practice and knowledge: students are using ChatGPT to cheat on exams, and lawyers are filing AI-drafted briefs with fabricated case citations. Generative AI also undermines the public’s grip on the notion of “ground truth” by hallucinating false information in subtle and unpredictable ways.
These two breakdowns constitute an abrupt removal of what the philosopher Regina Rini has termed “the epistemic backstop”: the benchmark for considering something real. Generative AI subverts information-seeking practices that professional domains such as law, policy, and medicine rely on; it also corrupts the ability to draw on common truth in public debates. Ironically, that disruption is being classed as success by the developers of such systems, emphasizing that this is not an experiment we are conducting but one that is being conducted upon us.
This is problematic from a governance point of view because much of current regulation places the responsibility for AI safety on individuals, whereas in reality they are the subjects of an experiment being conducted across society. The challenge this creates for researchers is to identify the kinds of rupture generative AI can cause and at what scales, and then translate the problem into a regulatory one. Then authorities can formalize and impose accountability, rather than creating diffuse and ill-defined forms of responsibility for individuals. Getting this right will guide how the technology develops and set the risks AI will pose in the medium and longer term.
Much like what happened with biomedical experimentation in the twentieth century, the work of defining boundaries for AI experimentation goes beyond “AI safety” to AI legitimacy, and this is the next frontier of conceptual social scientific work. Sectors, disciplines, and regulatory authorities must work to update the definition of experimentation so that it includes digitally enabled and data-driven forms of testing. It can no longer be assumed that experimentation is a bounded activity with impacts only on a single, visible group of people. Experimentation at scale is frequently invisible to its subjects, but this does not render it any less problematic or absolve regulators from creating ways of scrutinizing and controlling it.
Generative AI Is a Crisis for Copyright Law
Generative artificial intelligence is driving copyright into a crisis. More than a dozen copyright cases about AI were filed in the United States last year, up severalfold from all filings from 2020 to 2022. In early 2023, the US Copyright Office launched the most comprehensive review of the entire copyright system in 50 years, with a focus on generative AI. Simply put, the widespread use of AI is poised to force a substantial reworking of how, where, and to whom copyright should apply.
Starting with the 1710 British statute, “An Act for the Encouragement of Learning,” Anglo-American copyright law has provided a framework around creative production and ownership. Copyright is even embedded in the US Constitution as a tool “to promote the Progress of Science and useful Arts.” Now generative AI is destabilizing the foundational concepts of copyright law as it was originally conceived.
Typical copyright lawsuits focus on a single work and a single unauthorized copy, or “output,” to determine if infringement has occurred. When it comes to the capture of online data to train AI systems, the sheer scale and scope of these datasets overwhelm traditional analysis. The LAION-5B dataset, used to train the AI image generator Stable Diffusion, contains 5 billion images and text captions harvested from the internet, while CommonPool (a collection of datasets released by the nonprofit LAION in April to democratize machine learning) offers 12.8 billion images and captions. Generative AI systems have used datasets like these to produce billions of outputs.
For many artists and designers, this feels like an existential threat. Their work is being used to train AI systems, which can then create images and texts that replicate their artistic style. But to date, no court has considered AI training to be copyright infringement: following the Google Books case in 2015, which assessed scanning books to create a searchable index, US courts are likely to find that training AI systems on copyrighted works is acceptable under the fair use exemption, which allows for limited use of copyrighted works without permission in some cases when the use serves the public interest. It is also permitted in the European Union under the text and data mining exception of EU digital copyright law.
Copyright law has also struggled with authorship by AI systems. Anglo-American law presumes that a work has an “author” somewhere. To encourage human creativity, authors need the economic incentive of a time-limited monopoly on making, selling, and showing their work. But algorithms don’t need incentives. So according to the US Copyright Office they aren’t entitled to copyright. The same reasoning has applied in other cases involving nonhuman authors, including one in which a macaque took selfies using a nature photographer’s camera. Generative AI is the latest in a line of nonhumans deemed unfit to hold copyright.
Nor are human prompters likely to have copyrights in AI-generated work. The algorithms and neural net architectures behind generative AI algorithms produce outputs that are inherently unpredictable, and any human prompter has less control over a creation than the model does.
Where does this leave us? For the moment, in limbo. The billions of works produced by generative AI are unowned and can be used anywhere, by anyone, for any purpose. Whether a ChatGPT novella or a Stable Diffusion artwork, output now exists as unclaimable content in the commercial workings of copyright itself. This is a radical moment in creative production: a stream of works without any legally recognizable author.
There is an equivalent crisis in proving copyright infringement. Historically, this has been easy, but when a generative AI system produces infringing content, be it an image of Mickey Mouse or Pikachu, courts will struggle with the question of who is initiating the copying. The AI researchers who gathered the training dataset? The company that trained the model? The user who prompted the model? It’s unclear where agency and accountability lie, so how can courts order an appropriate remedy?
Copyright law was developed by eighteenth-century capitalists to intertwine art with commerce. In the twenty-first century, it is being used by technology companies to allow them to exploit all the works of human creativity that are digitized and online. But the destabilization around generative AI is also an opportunity for a more radical reassessment of the social, legal, and cultural frameworks underpinning creative production.
What expectations of consent, credit, or compensation should human creators have going forward, when their online work is routinely incorporated into training sets? What happens when humans make works using generative AI that cannot have copyright protection? And how does our understanding of the value of human creativity change when it is increasingly mediated by technology, be it the pen, paintbrush, Photoshop, or DALL-E?
It may be time to develop concepts of intellectual property with a stronger focus on equity and creativity as opposed to economic incentives for media corporations. We are seeing early prototypes emerge from the recent collective bargaining agreements for writers, actors, and directors, many of whom lack copyrights but are nonetheless at the creative core of filmmaking. The lessons we learn from them could set a powerful precedent for how to pluralize intellectual property. Making a better world will require a deeper philosophical engagement with what it is to create, who has a say in how creations can be used, and who should profit.
Science Lessons from an Old Coin
In “What a Coin From 1792 Reveals About America’s Scientific Enterprise” (Issues, Fall 2023), Michael M. Crow, Nicole K. Mayberry, and Derrick M. Anderson make an adroit analogy between the origins of the Birch Cent and the two sides of the nation’s research endeavors, namely democracy and science. The noise and seeming dysfunction in the way science is adjudicated and revealed is, they say, a feature and not a bug.
I agree. I have written extensively about how scientists should embrace their humanity. That means we express emotions when we are ignored by policymakers, we have strong convictions and therefore are subject to motivated reasoning, and we make both intentional and inadvertent errors. Efforts to curb this humanity have all failed. We are not going to silence those who are passionate about science—nor should we. Why would someone study climate change unless they are passionate about the fact that it’s an existential crisis? We want and need that passion to drive effort and creativity. Does this make scientists outspoken and subject to—at least initially—looking for evidence that supports their passion? Of course. And does that same humanity mean that errors can appear in scientific papers that were missed by the authors, editors, and reviewers? Also yes.
There’s a solution to this that also embraces the messy and glorious vision presented by Crow et al. And that is not to quell scientists’ passion and humanity, but rather to better explain and demonstrate that science operates within a system that ultimately corrects for human frailty. This requires better explaining the fact that scientists are competitive—another human trait—and that this competition leads to arguments about data and papers that converge on the right answer, even when motivated reasoning may have been there to start with. It also requires courageous and forthright correction of the scientific record when errors have been made for any reason. Science is seriously falling short on this right now. The correction and retraction of scientific papers have become far too contentious—often publicly—and stigma is associated with these actions. This stigma arises from the perception that all errors are due to deliberate misconduct, even when journals are explicit that correction of the record does not imply fraud.
This must change. The public must experience—and perceive—that science is honorably self-correcting. That will require hard changes in scientists’ attitude and execution when concerns are raised about published papers. But fixing this is going to be a lot easier than lowering the noise level. And as the authors point out, that noise is a feature, not a bug, and therefore should be celebrated.
H. Holden Thorp
Editor-in-Chief of Science
Professor of Chemistry and Medicine
George Washington University
In their engaging article, Michael M. Crow, Nicole K. Mayberry, and Derrick M. Anderson rightly point to the centrality of science in US history—and to how much “centrality” has meant entanglement in controversy, not clarity of purpose.
The motto on the Birch Cent, “Liberty, Parent of Science and Industry,” brings out the importance of freedom of inquiry. This is not readily separable from freedom of interpretation and even freedom to disregard. The authors quote the slogan “follow the science,” which attempts to counter the recent waves of distrust and denial. But while science may inform policy, it doesn’t dictate it. Liberty also signals the importance of political debate over whether and how to follow science.
In 1792, science was largely a small-scale craft enterprise. Over time, universities, corporations, government agencies, and markets all became crucial. A science and technology system developed, greatly increasing support for science but also shaping which possible advances in knowledge were pursued. Potential usefulness was privileged, as were certain sectors, such as defense and health, and potential for profit. Different levels of risk and “disutility” were tolerated. The patent system developed not only to share useful knowledge but, as Crow and his coauthors emphasize, to secure private property rights. All this complicated and limited the liberty of scientific inquiry.
Comparing the United States to the United Kingdom, the authors sensibly emphasize the contrast of egalitarian to aristocratic norms. But the United States was not purely egalitarian. The Constitution is full of protections for inequality and protections from too much equality. Conversely, UK science was not all aristocratic nor entirely top-down and managed. Though the Royal Society secured formal recognition under King Charles II, its origins lay in the midst of (and were influenced by) the English Civil War. Bottom-up self-organization among scientists was important. Most were elite, but not all statist. And the same went for a range of other self-organized groups, such as Birmingham’s Lunar Men, who shared a common interest in experiment and invention. These groups joined in creating “invisible colleges” that contributed to state power but were not controlled by it. Even more basically, perhaps, the authors’ contrast of egalitarian to aristocratic norms implies a contrast of common men to elites that obscures the rising industrial middle class. It was no accident the Lunar Men were in the English Midlands.
Crow and his coauthors correctly stress that neither scientific knowledge nor technological innovation has simply progressed along a linear path. In both the United States and the United Kingdom, science and technology developed in a dialectical relationship between centralization and decentralization, large-scale and small, elite domination and democratic opportunities. Universities, scientific societies, and indeed business corporations all cut both ways. They were upscaling and centralizing compared with autonomous, local craft workshops. They worked partly for honor and partly for profit. But they also formed intermediate associations in relation to the state and brought dynamism to local communities and regions. Universities joined science productively to critical and humanistic inquiry. Liberty remained the parent of science and industry because institutional supports remained diverse, allowing for creativity, debate, and exploration of different possible futures. There are lessons here for today.
Craig Calhoun
University Professor of Social Sciences
Arizona State University
Securing Semiconductor Supply Chains
Global supply chains, particularly in technologies of strategic value, are undergoing a remarkable reevaluation as geopolitical events weigh on the minds of decisionmakers across government and industry. The rise of an aggressive and revisionist China, a devastating global pandemic, and the rapid churn of technological advancement are among the factors prompting a dramatic rethinking of the value of lean, globally distributed supply chains.
These complex supply networks evolved over several decades of relative geopolitical stability to capture the efficiency gains of specialization and trade on a global scale. Yet in today’s world, efficiency must be recast in terms of reliable and resilient supply chains better adapted to geopolitical uncertainties rather than purely on the basis of lowest cost.
Indeed, nations worldwide have belatedly discovered a crippling lack of redundancy in supply chains necessary to produce and distribute products essential to their economies and welfare, including such diverse goods as vaccines and medical supplies, semiconductors and other electronic components, and the wide variety of technologies reliant on semiconductors. A drive to “rewire” these networks must balance the manifest advantages of globally connected innovation and production with the need for improved national and regional resiliency. This would include more investment in traditional technologies—for example, a more robust regional electrical grid in Texas, whose failure contributed to the supply disruption of automotive chips that Abigail Berger, Hassan Khan, Andrew Schrank, and Erica R. H. Fuchs describe in “A New Policy Toolbox for Semiconductor Supply Chains” (Issues, Summer 2023).
Of course, given its globalized operations, the semiconductor industry is at the forefront of these challenges. In particular, there is a need to distribute risks of single-point failures, such as those found in the global concentration of semiconductor manufacturing in East Asia. Taiwan and South Korea, which together account for roughly half of global semiconductor fabrication capacity, sit astride major geopolitical and geological fault lines, with the dangers of the latter often underestimated.
Recent investments to renew semiconductor manufacturing capacity in the United States are a key element of this rewiring. Through the CHIPS for America Act of 2021, lawmakers have authorized $52 billion to support restoring US capacity in advanced chip manufacturing, with $39 billion in subsidies for the construction of fabrication plants, or “fabs,” backed by substantial tax credits, and roughly $12 billion for related advanced chip research and development initiatives.
Berger and her colleagues argue cogently that it may also be possible to design greater resiliency directly into semiconductor chips. In some cases, greater standardization in chip architecture may allow some chips to be built at multiple fabs, reducing “foundry lock-in.” Such gains will depend on trusted networks among multiple firms as well as governments of US allies and strategic partners—although sorting the practical realities of commercial and national competition in a rapidly innovating industry that marches to the cadence of Moore’s Law will be challenging. The authors rightly point out that focusing on distinct market segments with similar use cases may offer win-win opportunities, but these, too, will require incentives to drive cooperation.
It is clear that global supply chains need a greater level of resiliency, not least through greater geographic dispersion of production across the supply chain. But whether generated by greater standardization, stronger trusted relationships, or the redistribution of assets, the continued national economic security of the United States and its allies depends on a comprehensive, cooperative, and steady implementation of this rewiring. The authors propose a novel approach that should be pursued, but the broader rewiring will not happen quickly or easily. We still need to move forward with ongoing incentives for industry, more cooperative relationships, and major new investments in talent. We are not done. We need to think of semiconductors as we think of nuclear energy: an enterprise demanding sustained and substantial commitments of funds and policy attention.
Sujai Shivakumar
Senior Fellow and Director, Renewing American Innovation
Center for Strategic and International Studies
Time for an Engineering Biennale
Guru Madhavan’s “Creative Intolerance” (Issues, Summer 2023) is exceptionally rich in compelling metaphors and potent messages. I am enthusiastic about the idea of an engineering biennale to showcase innovations and provoke discussions about specific projects and the methods of engineering.
The Venice Biennale Architettura 2023 that the author highlights, which impressed me with its scale and diversity, provides an excellent model for an Engineering Biennale. Could the US Department of Energy Solar Decathlon be scaled up? Could the Design Museum in London play a role? Maybe multiple design showcases—such as those at the University of British Columbia; the Jacobs Institute for Design Innovation at the University of California, Berkeley; or the MIT Design Lab—could grow into something larger?
The support for design could expand the role of the National Academies of Sciences, Engineering, and Medicine by building bridges with an increasing number of researchers in this field, ultimately leading to a National Academy of Design.
Ben Shneiderman
Professor Emeritus
University of Maryland
Member, National Academy of Engineering
The Strength of Weak Ties
“It was the best of times, it was the worst of times,” Charles Dickens famously began A Tale of Two Cities. So it was for scientific research in early 2020, as a number of forces came together to create a unique set of opportunities and challenges.
First, the COVID-19 pandemic itself. The disease was so contagious and so serious that physical, human-to-human proximity was canceled except for interactions essential to life. Laboratories closed; lecture theaters and libraries lay empty; people barely left their homes.
Second, the emergence of technology-mediated collaboration. Video conferencing became the new meeting space; social media were repurposed for exchanging real-time information and ideas; and digital architects put their skills to building bespoke platforms.
Third, the scientific world united around a common purpose: generating the evidence base that would end the pandemic. Goodwill and reciprocity ruled. We forgot about academic league tables, promotion bottlenecks, h-indices, or longstanding rivalries. We switched gear from competing to collaborating. We pooled our data and our expertise for the good of humanity (and, perhaps, with a view to saving ourselves and our loved ones). And not to be overlooked, the red tape of research governance was cut. Our institutions and funders allowed us—indeed, required us—to divert our funds, equipment, and brainpower to the only work that now mattered. Journal paywalls were torn down. It became possible to build the best teams from across the world, to get fast-track ethics approval within hours rather than weeks, to generate and test bold hypotheses, to publish almost instantly, and to replicate studies quickly when the science required it. The downside, of course, was the haystack of preprints that nobody had time to peer-review, but that’s a subject for another day.
In “How to Catalyze a Collaboration” (Issues, Summer 2023), Annamaria Carusi, Laure-Alix Clerbaux, and Clemens Wittwehr describe one initiative that emerged from those strange, wonderful, and terrifying times. The project, dubbed CIAO—short for Modelling COVID-19 Using the Adverse Outcome Pathway Framework—happened because a handful of toxicologists and virologists came together, on a platform designed for exchanging pictures of kittens, to solve an urgent puzzle. Through 280-character tweets and judiciously pitched hashtags, they began to learn each other’s language, reasoned collectively and abductively, and brought in others with different skills as the initiative unfolded.
Online collaborative groups need two things to thrive: a strong sense of common purpose, and a tight central administration (to do the inevitable paperwork, for example). In addition, as the sociologist Mark Granovetter has observed, such groups offer “the strength of weak ties”—people we hardly know are often more useful to us than people we are close to (because we already have too much in common with the latter). An online network tends to operate both through weak ties (the “town square,” where scientists from different backgrounds get to know each other a bit better) and through stronger ties (the “clique,” where scientists who find they have a lot in common peel off to share data and write a paper together).
The result, Carusi and her colleagues say, was 11 peer-reviewed papers and explanations of some scientific mysteries—such as why people with COVID-19 lose their sense of smell. Congratulations to the CIAO team for making the best of the “worst of times.”
Trisha Greenhalgh
Professor of Primary Care Health Sciences
University of Oxford, United Kingdom
Annamaria Carusi, Laure-Alix Clerbaux, and Clemens Wittwehr candidly and openly describe their technical and soft-skill experiences in fostering a global collaboration to address COVID-19 during the pandemic, drawing from an existing Adverse Outcome Pathway approach developed within the Organisation for Economic Co-operation and Development. The collaborative, nicknamed CIAO (by the Italian members who would like to say, “Ciao COVID!”), found much-needed structure in the integrative construct of adverse outcome pathways (AOPs), or structured representations of biological events. In particular, one tool the researchers adopted—the AOP-Wiki—provided an increasingly agile web-based application that offered contributors a place and space to work on project tasks regardless of time zone. In addition, the AOP structure and AOP-Wiki both have predefined (and globally accepted) standards that obviate the need for semantics debates.
Yet the technical challenges were meager compared with the social challenges of people “being human” and the practical challenges of bringing people together when the world was essentially closed and physical interactions very limited. Carusi, Clerbaux, Wittwehr, and their colleagues stepped up during this time of crisis by exercising not only scientific ingenuity but also social and emotional intelligence. They helped bring about, in essence, a paradigm shift. There was no choice but to abandon traditional in-person approaches that were no longer feasible and to embrace virtual and web-based applications. Collaborative leads leveraged their own social networks in virtual space to rapidly make connections that critically helped the AOP framework become the proverbial (and virtual) “campfire” for bringing the collaborative together.
Importantly, this work was not constrained by geography or language. For instance, the AOP-Wiki allowed for asynchronous project management by people living across 20 countries, breaking down language barriers through incorporation of a globally accepted lingua franca for documenting and reporting COVID-19 biological pathways. Data entered into the AOP-Wiki were controlled using globally accepted standards and data management practices, such as controlled data extraction fields, vocabularies and ontologies, and FAIR (findable, accessible, interoperable, and reusable) data standards. These ingredients provided the collaborative its perfect campfire for cooking up COVID-19 pathways. All that was needed were the “enzymes” to get it all digested. That’s where the authors stepped in, gently “simmering” the collaborative toward a banquet of digitally documented COVID-19 web-based applications.
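As a concrete illustration of what such controlled fields buy a distributed team, here is a minimal sketch. The field names and vocabulary are hypothetical stand-ins of my own, not the AOP-Wiki’s actual schema: entries that stray from the agreed terms are flagged, so contributors in any time zone record pathways consistently.

```python
# Hypothetical sketch: entries are validated against a shared controlled
# vocabulary, keeping asynchronous contributions consistent and machine-readable.
CONTROLLED_VOCAB = {
    "event_type": {"molecular initiating event", "key event", "adverse outcome"},
    "evidence": {"high", "moderate", "low"},
}

def validate_entry(entry):
    """Return violations of the controlled vocabulary (empty list = accepted)."""
    errors = []
    for field, allowed in CONTROLLED_VOCAB.items():
        value = entry.get(field)
        if value not in allowed:
            errors.append(f"{field}={value!r} is not an accepted term")
    return errors

entry = {"event_type": "key event", "evidence": "anecdotal"}
print(validate_entry(entry))  # ["evidence='anecdotal' is not an accepted term"]
```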
The collaborative’s resultant work was the personification of the adage “where there is a will, there is a way.” The group’s way was greatly facilitated by a willingness to accept and leverage new(er) technology and methods (i.e., web applications and digital data) that give humans—and their computers—flexibility and efficiency across the globe. Novel virtual/digital models enhanced the collaborative’s experience. Notably, the collaborative’s acceptance and use of the AOP framework and AOP-Wiki’s data management interface means the COVID-19 AOPs are digitally documented, readable by both machines and humans, and globally accessible. The AOP framework has not only catalyzed the collaboration, but prospectively catalyzes the ability to use generative artificial intelligence to find and refine additional data with similar characteristics. This means the COVID-19 AOPs may evolve with the virus, updating over time as new information is automatically ingested.
Michelle Angrish
Toxicologist
US Environmental Protection Agency
Centering Equity and Inclusion in STEM
As the United States seeks to tap every available resource for talent and innovation to keep pace with global competition, institutional leadership in building research capacity at historically Black colleges and universities (HBCUs) and other minority-serving institutions (MSIs) is essential, as Fay Cobb Payton and Ann Quiroz Gates explain in “The Role of Institutional Leaders in Driving Lasting Change in the STEM Ecosystem” (Issues, Summer 2023). Transformational leadership, such as that displayed by Chancellor Harold Martin and North Carolina Agricultural and Technical State University as it elevates itself to the Carnegie R1 designation of “very high research activity,” and by former President Diana Natalicio in positioning the University of Texas at El Paso as an R1 institution, provides role models for other institutions.
Payton and Gates argue elegantly for utilization of the National Science Foundation’s Eddie Bernice Johnson INCLUDES Theory of Change model. For fullest effect, I suggest that this model must include two additional elements for institutional leaders to consider: the role of institutional board members and the role of minority technical organizations (MTOs). To achieve improved and lasting research capacity, the boards at HBCUs and MSIs must view research as part of the institutional DNA. Many of these institutions are in the midst of transforming from primarily teaching institutions to both teaching and research universities. For public institutions, the governors or oversight authorities should appoint board members with research experience and members who have large influence in the business community, as one outcome from university research is technology commercialization. HBCUs and MSIs need board members with “juice”—because, as the saying goes, “if you’re not at the table, you’re on the menu.”
Finally, as the nation witnesses increasing enrollments at HBCUs and MSIs, the role of minority technical organizations cannot be overstated. If we are to achieve the National Science Board’s Vision 2030 of a robust, diverse, domestic workforce in science, technology, engineering, and mathematics—the STEM fields—these organizations are crucial. MTOs such as the National Organization for the Professional Advancement of Black Chemists and Chemical Engineers and the Society for the Advancement of Chicanos/Hispanics and Native Americans in Science provide role models for STEM students, hold annual conferences for students and professionals, and foster retention of Black and brown students in the STEM fields. As part of the NSF INCLUDES ecosystem, let’s also not forget the major events that recognize outstanding individuals at HBCUs and MSIs, such as the Black Engineer of the Year awards and the Great Minds in STEM annual conferences.
Victor McCrary
Vice President for Research, University of the District of Columbia
Vice Chair, National Science Board
As the president of a national foundation focused exclusively on postsecondary education, I was especially intrigued with Fay Cobb Payton and Ann Quiroz Gates’s ambitious recommendations for the philanthropic community. The authors challenge traditional foundations to make bigger and lengthier investments in higher education, especially minority-serving institutions (MSIs). At ECMC Foundation, we do just this. By making large, multi-year investments in projects led by public two- and four-year colleges and universities, in intermediaries, and even in start-ups through our program-related investments, we aim to advance wholesale change for broad swaths of students, particularly those who come from underserved backgrounds.
One project worth noting is the Transformational Partnerships Fund. Along with support from Ascendium Education Group, the Kresge Foundation, and the Michael and Susan Dell Foundation, we have created a fund that provides support to higher education leaders interested in recalibrating the strategic trajectory of their institutions in service to students. Such recalibrations might be mergers, course sharing, or collaborations that streamline back-end administrative functions. Although this fund does not offer large grants or long-term support, it nonetheless helps higher education leaders understand more deeply how they need to respond to the challenges that lie ahead for their colleges and universities.
Payton and Gates advance a compelling moral argument about the need to better support MSIs and the students they serve, especially in STEM-related majors. What they do not emphasize, however, are the specific institutional incentives that will drive lasting improvements in diversity and inclusion. Presidents and chancellors report to trustees, whose primary fiduciary obligation is to keep their institutions in business. Barring incentives that make significant change possible, college leaders often stick with the status quo, preferring tactical stop-gap measures to strategic reform.
Arguments for institutional change that appeal to our better angels, although earnest and well-intentioned, have failed thus far to significantly alter the postsecondary education landscape for our most vulnerable students. The consequence is that too many students choose to leave before completing their degree. According to the National Student Clearinghouse Research Center, the population of students with some college and no credential has reached 40.4 million. The loss of talent in STEM-related and other disciplines is staggering, and a reversal of institutional inertia is required to alter course.
Still, the authors offer a theory of change that makes a positive, forward-looking contribution to our thinking about institutional transformation. I eagerly await the authors’ future work as they translate their powerful worldview into a bold set of recommendations that offer up key incentives for higher education leaders to employ as they address the challenges their institutions face in postpandemic America.
Jacob Fraire
President, ECMC Foundation
Fay Cobb Payton and Ann Quiroz Gates summarize the critical challenges and opportunities ahead for science, technology, engineering, and mathematics education. The STEM ecosystem is vast, complex, and stubbornly anchored in inertia. The authors present a compelling vision for the future: institutional excellence will be defined by inclusion, actions will be centered on accountability, and the effectiveness of leadership will be measured by the ability to drive systemic and sustained culture change.
Achieving inclusive excellence begins with a commitment to change the STEM culture. Here is a to-do list requiring skillful leadership:
Redefine the STEM curriculum, especially at the introductory level.
Resist the impulse of requiring STEM students to go too deep too soon. Instead, encourage them to explore the arts, humanities, and social sciences.
Review admissions criteria and STEM course prerequisites.
Reward instructors and advisers who practice the skills of equitable and inclusive teaching and mentoring.
Increase representation of persons heretofore excluded from STEM by valuing relevant lived experiences more than pedigree.
We yearn for leaders with the vision, strength, and patience to drive lasting culture change. We must nurture the next generation of leaders so that today’s modest changes will be amplified and sustained.
Inclusive excellence, already challenging, is made more difficult because of the pressures exerted by powerful forces. Many institutions succumb to the false promise of external validation based on criteria that contradict the values of equity and inclusion. The current system selects for competition instead of community, exclusion instead of inclusion, a white-centered culture instead of equity. The “very high research activity” (R1) classification for institutions is based on external funding, the number of PhD degrees and postdoctoral researchers, and citations to published work. In the current US News and World Report “Best Colleges” ranking, half of an institution’s score is based on just four (of 24) criteria: six-year graduation rates, reputation, standardized test scores, and faculty salaries.
It is time to disrupt the incentives system, as the medical scholar Simon Grassmann recently argued in Jacobin magazine. It is wrong to believe that quantitative metrics such as the selectivity of admissions and the number of research grants are an accurate measure of the quality of an institution. Instead, let us develop the means to recognize institutions that make a genuine difference for their students and employees—call it an “Institutional Delta.” Students will learn and instructors will thrive when the learning environment is centered on belonging and the campus commits to the success of everyone. Finding reliable ways to measure the Institutional Delta and assess institutional culture will require new qualitative approaches and courageous leadership. An important lever is the accreditation process, in which accrediting organizations can explicitly evaluate how well an institution’s governing board understands and encourages equity and inclusion.
The STEM culture must be disrupted so that it is centered on equity and inclusion. This requires committed leaders with the courage to battle the contradictions of an outdated rewards system. Culture disruptors must be supported by governing boards and accreditation agencies. Let leaders lead!
David J. Asai
Senior Director, Center for the Advancement of Science Leadership and Culture
Howard Hughes Medical Institute
Fay Cobb Payton and Ann Quiroz Gates emphasize that systemic change to raise attainment of scientists from historically underrepresented backgrounds must engage stakeholders at multiple levels and from multiple organizations. These stakeholders include positional and grassroots leaders in postsecondary institutions, industry leaders, and public and private funders. The authors posit that “revisiting theories of change, understanding the way STEM academic ecosystems work, and fully accounting for the role that leadership plays in driving change and accountability are all necessary to transform a system built upon historical inequities.”
The National Academies of Sciences, Engineering, and Medicine report Minority Serving Institutions: America’s Underutilized Resource for Strengthening the STEM Workforce, released in 2019, highlighted that such institutions graduate disproportionately high shares of students from minoritized backgrounds in STEM fields. The report found that minority-serving institutions (MSIs) receive significantly less federal funding than other institutions and recommended increased investment in MSIs for their critical work in educating minoritized STEM students. To reinforce this work, the report also called for expanding “mutually beneficial partnerships” between MSIs and other higher education institutions, industry stakeholders, and public and private funders.
Payton and Gates rightfully recommend that strengthening the ecosystem to diversify science should “build initiatives on MSIs’ successes.” Yet the National Academies report on MSIs noted that research on why and how some MSIs are so successful in educating minoritized STEM students has been scant. Instead, most research on this topic has been conducted in highly selective, historically white institutions. Paradoxically, then, most of this research has neglected the institutional contexts that many racially minoritized STEM students navigate, including the MSI contexts in which they are often more likely to succeed.
The authors also call to revisit organizational theories of change as a step toward transforming STEM ecosystems in more equitable directions. Yet the social science research on higher education organizational change has historically been disconnected from research on improving STEM education. The American Association for the Advancement of Science report Levers for Change, released in 2019, highlighted this very disconnection as a key barrier to reform in undergraduate STEM education.
Even research that has attempted to link higher education organizational studies with STEM education reform has primarily been conducted in highly selective, historically and predominantly white institutions that are predicated on exclusion. Limited organizational knowledge about how MSIs educate minoritized students, and about how that knowledge can be adapted to different institutional contexts, has hindered the development of a STEM ecosystem predicated on inclusion. Enacting Payton and Gates’s recommendation to revisit organizational theories of change will require that scholarly communities and funders generate more incentives and opportunities for research that integrates higher education organizational change, STEM reform approaches, and the very MSI institutional contexts that can offer models of inclusive excellence in STEM. Such social science research can yield the most promising leadership tools to transform STEM ecosystems toward inclusive excellence.
Anne-Marie Núñez
Executive Director, Diana Natalicio Institute for Hispanic Student Success
Distinguished Centennial Professor, Educational Leadership and Foundations
The University of Texas at El Paso
Fay Cobb Payton and Ann Quiroz Gates highlight the role of leadership in transforming the academic system built upon the nation’s historical inequities. Women, African Americans, Hispanic Americans, and Native Americans remain inadequately represented in science, technology, engineering, and mathematics relative to their proportions in the larger population.
For the United States to maintain leadership in and keep up with expected growth of STEM-related jobs, academic institutions must envision and embrace strategies to educate the future diverse workforce. At the same time, federal funding agencies need to support strategies to encourage universities to pursue strategic alliances with the private sector to recruit, train, and retain a diverse workforce. We need visionary strategies and intentionality to make changes, with accountability frameworks for assessing progress.
Leadership is one key element in strategies of change. Thus, Payton and Gates perspicuously illustrate the role of leadership in advancing the STEM research enterprise at minority-serving institutions. At the University of Texas at El Paso, the president established new academic programs, offered open admissions to students, recruited faculty members from diverse groups, and built the infrastructure necessary to support research and learning. As a result, in a matter of years the university achieved the Carnegie “R1” designation signifying “very high research activity” while transforming the university community to reflect the diverse community it serves. Thanks to such visionary leadership, it now leads R1 universities in STEM graduate degrees awarded to Hispanic students.
At North Carolina Agricultural and Technical State University, the chancellor has likewise transformed the institution, increasing student enrollment by nearly 30% in 12 years and doubling undergraduate degree completion. Because of the strategies intentionally implemented by the chancellor, the university experienced over the past decade a more than 60% increase in its research enterprise, supported by new graduate programs. Similarly, the president of Southwestern Indian Polytechnic Institute, a community college for Native Americans, has led the way in forging partnerships with Tribal colleges, universities, and the private sector to ensure that graduates can develop successful careers or pursue advanced studies.
As Payton and Gates note, the private sector has a major role to play in training a diverse STEM workforce; they cite as an exemplar the $1.5 billion Freeman Hrabowski Scholars Program established in 2022 by the Howard Hughes Medical Institute. Every other year, the program will appoint 30 early-career scientists from diverse groups, supporting a total of 150 scientists over a decade. This long-term project will likely yield outcomes that help transform the diversity of the nation’s biomedical workforce.
It is clear, then, that the nation needs to embrace sustained and multipronged strategies involving academic institutions, government agencies, private enterprises, and even families to achieve an equitable level of diversity in STEM fields. It is also clear that investments in leadership development and academic infrastructure can help foster the growth of a more capable and diverse workforce and advance the nation’s overall innovation capability. The good news is that Payton and Gates provide proof positive that institutions and partnerships can achieve the desired outcomes.
Jose D. Fuentes
Professor of Atmospheric Science
Pennsylvania State University
The author chairs the Committee on Equal Opportunities in Science and Engineering, chartered by Congress to advise the National Science Foundation on achieving diversity across the nation’s STEM enterprise.
Fay Cobb Payton and Ann Quiroz Gates remind us that despite some positive movement, the United States has substantially more to do in broadening participation in science, technology, engineering, and mathematics—the STEM fields. The authors promote two often overlooked contributions to change: the key role of institutional leaders and the importance of minority-serving institutions. Even with their additions, however, I believe there is a significant deficiency in building out an appropriate theory of change to address the overall challenges we face in STEM.
The authors recount that the National Science Foundation’s Eddie Bernice Johnson INCLUDES Initiative was established to leverage the many discrete efforts underway. They note that “episodic efforts, or those that are not coordinated, intentional, or mutually reinforcing, have not proven effective.” They advocate revisiting theories of change, understanding how STEM academic ecosystems work, and fully accounting for the role that leadership plays in driving change and accountability. But while I strongly agree with their case—as far as it goes—I believe there is considerably more that ought to be added to the theory of change embraced by the INCLUDES Initiative to make it more useful and impactful.
I posit that to successfully guide STEM systems change at scale, a theory of change ought to incorporate at least three (simplified) dimensions:
Institution. At its core, change is local. Classroom, department, and institution levels are where policies, practices, and culture have to change.
Institution/national interface. Initiatives must have bidirectional interaction. National initiatives influence institutions, and a change by an institution reflects back to a national initiative, hopefully multiplying its success through adaptation by other network members.
Multiple dimensions of change. Changes in policy and culture must be translated into specific changes in pedagogy, student belonging, and faculty professional development. We also need better ways to track the translation of these changes into the STEM ecosystem, such as graduating a more diverse class of engineers.
The INCLUDES theory of change focuses almost exclusively on the second dimension. It presents an important progression for initiatives from building collaborative infrastructure to fostering networks, then leveraging allied efforts. It captures the institution/national interface with a box on expansion and adaptation of better-aligned policies, practices, and culture, but only alludes to the institutional change on which such advances rest. Payton and Gates add to the theory by focusing mostly on the missing role of leadership in fostering institutional change. They describe examples of key leaders who have been critical drivers of specific changes. They also devote attention to multiple dimensions of change by describing important successes that minority-serving institutions have had in increasing student graduation in STEM and to the policy and program changes by leadership that made such change possible.
Even after Payton and Gates’s critical additions, I’m left with deep discomfort over a major omission: in the theory of change they offer for the STEM ecosystem, there is virtually nothing specific to STEM activity in it. While well-conceived, it appears entirely process-oriented and doesn’t directly translate to metrics enabling an assessment of progress toward broadening participation in STEM. Surely increased collaboration and changes in policy and culture are imperative, but they can apply to virtually any societal policy shift. What makes the INCLUDES theory of change applicable to whether the United States can produce a more diverse engineering graduating class?
Having offered this challenge—stay tuned from this quarter.
Howard Gobstein
Senior Vice President for STEM Education and Research Policy
Association of Public and Land-grant Universities
Creating transformative (not transactional), intentional, and lasting change in higher education—specifically in a STEM ecosystem—requires continuity, commitment, and lived experience from leaders who are not afraid to lead change and to disrupt inefficient policies and practices that do not support the success of all students in an equitable context. The long-standing work of higher education presidents and chancellors such as Diana Natalicio at the University of Texas at El Paso, Harold Martin at North Carolina Agricultural and Technical State University, Freeman Hrabowski at the University of Maryland Baltimore County, and Tamarah Pfeiffer at the Southwestern Indian Polytechnic Institute would not have materialized had these leaders been conflict-averse.
What do these dynamic leaders have in common? They were responsible for leading minority-serving institutions (MSIs) of higher education, which they transformed through deliberate actions. More importantly, those actions were intentionally grounded in understanding the mission of the institution, understanding the historically minoritized populations the institution served (among others), and understanding that a long-term commitment to the work would be required, even if that meant disrupting “business as usual” and setting a trajectory toward accelerated systemic change.
Fay Cobb Payton and Ann Quiroz Gates make a compelling case for what is required of institutional leaders to harness and mobilize systemic change in the STEM ecosystem, using the National Science Foundation’s INCLUDES model as a case study. Payton and Gates argue that “higher education leaders (e.g., presidents, provosts, and deans) set the tone for inclusion through their behaviors and expectations.” This argument is borne out by the individual leaders’ strengths, strategies, and successes at the types of institutions highlighted in the article. Moreover, Payton and Gates point out that “leaders can hold themselves and the organization accountable by identifying measures of excellence to determine whether improvements in access, equity, and excellence are being achieved.”
As a former dean of a college of liberal arts and a college of arts and sciences at two MSIs (Jackson State University, an urban, historically Black college and university—HBCU—and the University of La Verne, a Hispanic-serving institution, respectively), and now serving as the chief academic officer and provost at the only public HBCU and exclusively urban land-grant university in the United States—the University of the District of Columbia—I know firsthand the role that institutional leaders must play in moving the needle to “broaden participation” and the urgent need to include historically minoritized participants in the STEM ecosystem. As leaders operating within MSI spaces, we recognize that meeting students where they are is crucial to developing the skilled technical workforce that our country so desperately needs.
We must do more to address the barriers that prevent individuals from embarking on or completing a STEM education that prepares them for the workforce. According to a 2017 National Academies of Sciences, Engineering, and Medicine report, by 2022 “the percentage of skilled technical job openings is likely to exceed the percentage of skilled technical workers in the labor force by 1.3 percentage points or about 3.4 million technical jobs.” The report finds that the number of skilled technical workers will likely fall short of demand, even when accounting for how technological innovation may change workforce needs (e.g., shortages of electricians, welders, and programmers).
At the same time, economic shifts toward jobs that put a premium on science and engineering knowledge and skills are leaving too many Americans behind. Therefore, as institutional leaders, we must harness the power of partnerships with industry, nonprofits, and community and technical colleges to increase awareness and understanding of skilled technical workforce careers and employment opportunities. Balancing traditional and emerging research will be an enduring challenge. In the long term, we demonstrate what Payton and Gates argue is necessary for lasting change—a change that affects multiple courses, departments, programs, and/or divisions and alters policies, procedures, norms, cultures, and/or structures.
Suggestions for next steps:
The challenge for MSIs in the twenty-first century is to figure out how to collaborate among institutions to renew, reform, and expand programs to ensure students have the opportunity for educational and career success.
As we think about MSI collaborations, there needs to be a broader discussion that includes efforts to yield high levels of public-private collaboration in STEM education and that advocates policies and budgets focused on maximizing investments to increase student access and engagement in active, rigorous STEM learning experiences.
If we are to reimagine a twenty-first century where we have fewer HBCU mergers and closures, we must recognize that leadership at the top of our organizations must also come together to learn best practices for leading change. The old mindsets, habits, and practices of running our colleges and universities must be reset.
Through collaboration, HBCUs can pool resources and extend their reach. Collaboration opens channels for communication, knowledge-sharing, and community-building between HBCUs and other MSIs.
Lawrence T. Potter
Chief Academic Officer
University of the District of Columbia
Fay Cobb Payton and Ann Quiroz Gates effectively conceptualize how inclusive STEM ecosystems are developed and sustained over time. The Eddie Bernice Johnson INCLUDES initiative at the National Science Foundation (NSF), which the authors write about, is a significant investment in moving the needle on underrepresentation in STEM. After thirty years as a STEM scholar, practitioner, and administrator in academia, industry, and government, I believe we are finally at an inflection point, although inflection can go both ways: negative or positive—and possibly only incrementally positive. For me, Payton and Gates’s framework triggered thoughts on the meaning of inclusion and why leadership is instrumental in building STEM ecosystems.
Inclusion has many different meanings, and those meanings have shifted over the years depending on context and purpose. Without consistent linguistic framing, inclusion can be decontextualized—rendered a passive concept rather than an action to be taken or an engine driving culture and climate. NSF’s INCLUDES program emphasizes collaboration, alliances, and connectors. The program is designed to inspire investigators to actively engage in inclusive change, a mechanism that requires us to use both inclusively informed and inclusively responsive approaches.
Diversifying STEM is challenged by the lack of a shared concept. Although the concept of “inclusion” does not have to be identical among institutions, it should be semantically aligned. As an example, Kaja A. Brix, Olivia A. Lee, and Sorina G. Stalla used a crowd-sourced approach to capture the meaning of “inclusion within diversity.” Their grounded theory methodology yielded four shared concepts: (1) access and participation; (2) embracing diverse perspectives; (3) welcoming participation; and (4) team belonging. For those of us who have advised doctoral students, inclusion is sort of like a good dissertation: there is no real formula for a high-quality dissertation, but you know it when you see it.
Another point made by Payton and Gates relates to sustained leadership and accountability. When she was president of the University of Texas at El Paso (UTEP), Diana Natalicio was highly effective in framing diversity, equity, inclusion, and accessibility (DEIA) to support action. Given the historical disadvantages experienced by UTEP, President Natalicio never seemed to waver on UTEP’s right to become an R1 university in a collaborative DEIA context. Her degree in linguistics may have facilitated her skill in framing ideas that move people to real action.
Effective leadership in support of inclusion must be boldly voiced in multiple ways for multiple audiences. This form of institutional voice matters to all stakeholders, both within the institution and externally, because failure to voice DEIA says something as well: it means a leader is not really committed to change. Giving voice means leaders must consult with groups on their own campus and in their own communities to understand how to elevate DEIA using multipronged, systems-wide actions.
Payton and Gates also highlight Harold Martin, an electrical engineer who is recognized for his effective leadership of the North Carolina Agricultural and Technical State University. Among other accomplishments, Chancellor Martin’s leadership and practice have established an institution that strategically applies data-informed methods to advance excellence. Application of data-informed approaches is not a panacea, but metrics and measures serve to find the “proof in the pudding” regarding inclusive change. Inclusive change management is facilitated by thoughtful creation, elicitation, review, and interpretation of data in quantitative and qualitative forms. Without data, institutions will only check anecdotal boxes around inclusion, leading to no real or lasting change.
We must pay attention to shared meanings and effective leadership when leading inclusive change in STEM. Ecosystems thrive because of successful interaction and interconnection. Unfortunately, many leaders focus only on culture. While key to lasting change, culture is grounded in shared meaning, values, beliefs, etc. But culture change without climate change is ineffective. In organizational research, culture is what we say; climate is what we do. It is high time we are all about the “doing” because full reliance on the “saying” may not move diversity in a positive direction.
Tonya Smith-Jackson
Provost and Executive Vice Chancellor for Academic Affairs
North Carolina Agricultural and Technical State University
Fay Cobb Payton and Ann Quiroz Gates shed light on the critical role of leadership in addressing historical inequities in the STEM fields, particularly in higher education. One of the key takeaways is the importance of visionary and committed leadership in fostering lasting change. Although their article provides valuable insights into the importance of leadership in promoting STEM equity, there are a couple of areas that could use additional examination.
First, their argument would benefit from further exploration of systemic challenges and proven strategies. Payton and Gates focus primarily on leadership within educational institutions but do not address external factors that can influence STEM diversity. For example, they don’t discuss the role of government policies, industry partnerships, K–12 preparation, or societal attitudes in shaping STEM demographics. Understanding the specific obstacles faced by underrepresented groups and how leadership can address them will add value to the discussion. While the article mentions the importance of inclusive excellence, it would be helpful to provide specific strategies that college and university leadership can implement immediately to create lasting change in STEM.
Second, there should be a wider discussion of intersectionality. The article primarily discusses diversity in terms of race and ethnicity but does not adequately address other dimensions of diversity, such as gender, disability, or socioeconomic background. Recognizing the intersectionality of identities and experiences is crucial for creating inclusive STEM environments.
To create lasting change in STEM, college and university leadership can take several additional steps, including collecting and analyzing data on the representation of underrepresented groups in STEM programs and faculty positions. These data can help identify disparities and inform targeted interventions. Leadership also needs to review and revise the curriculum to ensure it reflects diverse perspectives and contributions in STEM fields. Faculty must be encouraged and rewarded for incorporating inclusive teaching and research practices.
Creating lasting change in STEM demographics is a long-term commitment. Institutions must maintain their dedication to diversity and inclusion even when faced with challenges and changes in institutional leadership. Payton and Gates beautifully articulate the case that college and university leadership can create lasting change in STEM by implementing data-driven initiatives, fostering local and national collaborations, and maintaining a long-term institutional commitment to diversity and inclusion.
Talithia Williams
Associate Professor of Mathematics
Mathematics Clinic Program Director
Harvey Mudd College
Agricultural Research for the Public Good
Norman Borlaug succeeded at something that no one had done before—applying wide area adaptation for a specific trait in a specific crop for yield enhancement. This worked beyond all expectations in field trials conducted in environments that favored the expression of the new genetic material, which in this case had been developed from a type of semidwarf wheat native to Japan. Of course, wide area adaptation must be put in perspective with respect to the trait, the farmer, the crop’s genetics, and the environment in which such a package is intended to be used.
The more traditional “local adaptation” typically happens in a farmer’s fields. In these and other microenvironments, wheats such as those Borlaug developed, or other new wheats, can be tested and, if successful, bred locally for such situations. This has been exactly the type of applied “bread and butter” work done by national program scientists and local seed companies. However, this specialized knowledge for each area of the country and a given crop is being lost, as is the ability of national program scientists to conduct multilocation trials.
This talent and support have eroded not because of the work of Borlaug, but because of the consolidation of agricultural research entities into four large agricultural/pharmaceutical companies. No more are there local seed companies; no more is there a robust plant breeding community in the public sector; no longer is there a focused effort on the “public good” of agriculture. Losing this publicly supported pool of expertise is especially a concern when local needs do not align with those of commercial providers.
This is true for India, Mexico, the United States, and Canada—and one can keep on going.
Consequently, the work that Borlaug did, all conducted in the public arena for the public good, is all the more important to replicate today. Science and farming are two ends of the same rope, and when one is privatized, the other cannot benefit. Thus, improving farmers’ education and “infrastructure,” however loosely defined, will not keep a given farming sector free from globalized pressures or from a shortage of public-minded, publicly based extension agents.
The separation of plant breeding—which Marci R. Baranski’s book classifies as a capital-intensive technological approach—from farming speaks of a divorce that simply should not come to pass. Instead, depending on trait, genotypes, environment, and farmers’ needs, the two should be brought closer together to ensure that what is developed serves those in need, not just those who have the currency and farming practices compatible with commercial agriculture.
Joel Cohen
Visiting Scholar, Nicholas School of the Environment
Duke University
Beyond Stereotypes and Caricatures
In “Chinese Academics Are Becoming a Force for Good Governance” (Issues, Summer 2023), Joy Y. Zhang, Sonia Ben Ouagrham-Gormley, and Kathleen M. Vogel provide a thoughtful exploration of how bioethicists, scientists, legal scholars, and others are making important contributions to ethical debates and policy discussions in China. They are addressing such topics as what constitutes research misconduct and how it should be addressed by scientific institutions and oversight bodies, how heritable human genome editing should be regulated, and what societal responses to unethical practices are warranted when they are not proscribed by existing laws. Their essay also addresses several issues with implications that extend beyond China to global conversations about ethical, legal, and social dimensions of emerging technologies in the life sciences and other domains.
Given the growing role that academics in China are playing in shaping oversight of scientific technologies, individuals expressing dissent from official government doctrine in at least some cases risk being subjected to censorship and pressure to withdraw from public engagement. As tempting as it might be to highlight differences between public discourse under China’s Communist Party and public debate in liberal democratic societies, academics in democracies where various forms of right-wing populism have taken root are also at risk of being subjected to political orthodoxies and punishment for expressions of dissent. One important role transnational organizations can play is to promote and protect critical, thoughtful analyses of emerging technologies. They can also offer solidarity, support, and deliberative spaces to individuals subjected to censorship and political pressure.
The authors also note the challenges that scholars in China have had in advocating for more robust ethical review and regulatory oversight of scientific research funded by industry and other private-sector sources. This issue extends to other countries with stringent review of research funded by government agencies and conducted at government-supported institutions, and with comparatively lax oversight of research funded by private sources and conducted at private-sector institutions. This disparity in regulatory models is a recipe for future research scandals involving a variety of powerful technologies. In the biomedical sciences, for example, these discrepancies in governance frameworks are becoming increasingly concerning when longevity research is funded by private industry or even individual billionaires who may have well-defined objectives regarding what they hope to achieve and sometimes a willingness to challenge the authority of national regulatory bodies.
Finally, we need to move beyond the facile use of national stereotypes and caricatures when discussing China and other countries with evolving policies for responsible research. China, as the authors point out, is sometimes depicted as a “Wild East” environment in which “rogue scientists” can flourish. However, research scandals are a global phenomenon. Likewise, inadequate oversight of clinical facilities is an issue in many countries, including nations with what often are assumed to be well-resourced and effective regulatory bodies. For example, academics used to write about “stem cell tourism” to such countries as China, India, and Mexico, but clinics engaged in direct-to-consumer marketing of unlicensed and unproven stem cell interventions are now proliferating in the United States as well. Our old models of the global economy, with well-regulated countries versus out-of-control marketplaces, often have little to do with current realities. Engagement with academics in China needs to occur without the use of self-serving and patronizing narratives about where elite science occurs, where research scandals are likely to take place, and which countries have well-regulated environments for scientific research and clinical practice.
Leigh Turner
Executive Director, UCI Bioethics Program
Professor, Department of Health, Society, & Behavior
In “ARPA-H Could Offer Taxpayers a Fairer Shake” (Issues, Summer 2023), Travis Whitfill and Mariana Mazzucato accurately describe three strategies by which the Advanced Research Projects Agency for Health (ARPA-H) could structure its grant program to ensure that taxpayers receive a fairer return on their high-risk public investments in research and development to solve society’s most pressing health challenges. One of their core ideas is repurposing a successful venture capital model of converting early-stage investments into equity ownership if a product progresses successfully through development.
As patients face challenges in accessing affordable prescription drugs and health technologies, we believe it is imperative for policymakers and ARPA-H leaders to address two fundamental questions: How does the proposed grant program strategy directly help patients, and how will ARPA-H (or any government agency) implement and enforce this specific strategy?
The first question concerns what patients ultimately care about—how will this policy impact them and their loved ones? For example, if the government receives equity ownership in a successful company that generates revenue for the US Treasury, that has limited direct benefit for a family that cannot afford the health technology.
There should be a strong emphasis on ensuring that all patients, regardless of their demographic background or insurance status, can access innovative health technologies developed with public funding at a fair price. For example, in September 2023 the Biden administration announced a $326 million contract with Regeneron to develop a monoclonal antibody for COVID-19 prevention. This contract included a pricing provision that requires the list price in the United States to be equal to or lower than the price in other major countries. Maintaining this focus will lead policymakers to address how we pay for these health technologies and to consider practical steps to achieve equitable access. That may include price negotiation or reinvesting sales revenue directly into public health and the social determinants of health.
The effectiveness of any policy depends strongly on its implementation and enforcement. As Whitfill and Mazzucato mention, the US government has the legal authority to seek lower prescription drug prices through the Bayh-Dole Act for inventions with federally funded patents. However, the National Institutes of Health, which houses ARPA-H as an independent agency, has refused to exercise its license or other statutory powers, most recently with enzalutamide (Xtandi), a prostate cancer drug.
The government also has the existing legal authority under 28 US Code §1498 to make or use a patent-protected product while giving the patent owners “reasonable and entire compensation” when doing so, but it has not implemented this policy in the case of prescription drugs for many decades.
It is no secret that corporations in the US pharmaceutical market are incentivized by various forces to pursue profit maximization. When public funding supports pharmaceutical innovation, we need to ensure that the taxpayers who de-risk research and development also share more directly in the financial benefits of that investment.
Hussain Lalani
Primary Care Physician, Brigham and Women’s Hospital
Health Policy Researcher, Harvard Medical School
Program On Regulation, Therapeutics, and Law
Aaron S. Kesselheim
Professor of Medicine, Department of Medicine, Division of Pharmacoepidemiology and Pharmacoeconomics
Brigham and Women’s Hospital and Harvard Medical School
Director, Program On Regulation, Therapeutics, and Law
Travis Whitfill and Mariana Mazzucato make a case that demands the attention of both leaders of the Advanced Research Projects Agency for Health (ARPA-H) and policymakers: the agency’s innovation must focus not only on technology but also on finance. Breaking from decades of public finance for science and technology with few strings attached, new policy thinking is needed, they argue, if taxpayers are to get a dynamic and fair return on their ARPA-H investments. To pursue this goal, the authors make three promising proposals: capturing returns through public-sector equity, curbing shareholder profiteering by promoting reinvestment in innovation, and setting conditions for access and affordability.
As the technology scholar Bhaven N. Sampat chronicled in Issues in 2020, debates over the structure of public financing for scientific research and development have been around since the dawn of the postwar era. But amid reassessments of long-standing orthodoxy about public and private roles in innovation, Whitfill and Mazzucato’s argument lands in at least two intriguing streams of policy rethinking.
First, debates over ARPA-H’s design could connect biomedical R&D policy to the wider “industrial strategy” paradigm being shaped across spheres of technology, from semiconductors to green energy. Passage of the CHIPS and Science Act, the Inflation Reduction Act, and the Bipartisan Infrastructure Deal has invigorated government efforts to shift from a laissez-faire posture to proactively shape markets in pursuit of specific national security, economic, and social goals. Yet biomedical research has been noticeably absent from these policy discussions, perhaps in part because of the strong grip of vested interests and narratives about the division of labor between government and industry. Seeing ARPA-H through this industrial strategy lens could instead invite a wider set of fresh proposals about its design and implementation.
Second, bringing an industrial strategy view to ARPA-H would take advantage of new momentum to reconfigure government’s relationship with the biopharmaceutical industry, momentum that has recently centered on drug pricing. The introduction in the Inflation Reduction Act of Medicare drug pricing negotiation for a limited set of drugs is a landmark measure for pharmaceutical affordability, yet it directs government policy to ex-post negotiations after an innovation has been developed. If done right, the agency’s investments would “crowd in” the right kind of patient private capital with the ex-ante conditions described by Mazzucato and Whitfill. In the process, ARPA-H could serve as an unprecedented “public option” for biomedical innovation, building public capacity for the later stages of R&D prioritized for achieving public health goals.
Whether the authors’ ideas will find traction, however, remains uncertain. Why might change happen now, decades after the initial postwar debates settled into contemporary orthodoxies? Beyond the nascent rethinking of the prevailing neoliberal economic paradigm in policy circles, a critical factor might well be the evolution of a two-decade-old network of smart and strategic lawyers, organizers, and patient groups that comprise the “access to medicines” movements. These movements are pushing for bold changes across multiple domains, including better patenting and licensing practices, public manufacturing, and globally equitable technology transfer. Ultimately, ARPA-H’s success may well rest on citizen-led action that helps decisionmakers understand the stakes of doing public enterprise differently.
Victor Roy
Postdoctoral Fellow, Veterans Affairs Scholar
National Clinician Scholars Program, Yale School of Medicine
Travis Whitfill and Mariana Mazzucato’s proposal to use the new Advanced Research Projects Agency for Health (ARPA-H) to stimulate innovation in pharmaceuticals while reducing the net costs to taxpayers is very important and welcome.
Innovation can indeed provide major benefits and is greatly stimulated by the opportunity to make profits. Yet pharmaceuticals (including vaccines and medical devices) are unlike most other products, even basics such as food and clothing. Their primary aim is not to increase people’s pleasure, as with better tasting food or more stylish clothing, but to improve their health and longevity. Also, the need for and choice of medications is usually determined not by patients but by their physicians.
Another major difference is that the National Institutes of Health and related agencies—that is, taxpayers—finance much of basic medical research. In addition, when patients use medical products, most costs are usually borne not by them but by members of the public (who support public insurance) or by other insurance holders (whose premiums are raised to cover the costs of expensive products). Furthermore, pharmaceuticals are protected from competition by patents, which are manipulated to extend for many years. Pharmaceutical companies should not, therefore, be treated like other private enterprises and be permitted to make huge profits at the expense of the public and patients.
In Whitfill and Mazzucato’s proposal, public monies provided to private companies and researchers would become equity, just like venture capital funds and other private investments, and taxpayers would thus become shareholders in the pharmaceutical companies. The funding could extend beyond research to clinical trials and even marketing and patient follow-up. This would create ongoing public-private collaboration that could reward the taxpayers as well as the companies and their other shareholders. In addition, the new ARPA-H could “encourage or require” companies to reinvest profits into research and development and look for other ways to restrict profit-taking, and could insist on accessible prices for the drugs it helped to finance.
But the proposal’s feasibility and impact are uncertain. To what extent would ARPA-H have to expand its current funding—$1.5 billion in 2023 and $2.5 billion requested for 2024, in contrast to the $187 billion spent by the NIH to enable new drug approvals between 2010 and 2019—to make a substantial impact on the development of new, high-value pharmaceuticals? What degree of price and profit restriction would companies be willing to accept? Could the benefit of higher prices to taxpayers as shareholders be used to justify the excessive prices that benefit company executives and other shareholders even more? Should not the burden on those who pay for the pharmaceuticals by financing public and private insurance be taken into account? Finally, would politicians be willing, in the face of fierce lobbying by pharma, to provide ARPA-H with the required funds and authority?
Nonetheless, expanding an already-existing (even if newly created) agency is clearly more feasible than more radical restructuring, such as my colleagues and I have proposed. Stimulating beneficial innovations and reducing, if not eliminating, excessive profits are far better than accepting the status quo. I strongly support, therefore, implementing Whitfill and Mazzucato’s proposal.
Paul Sorum
Professor Emeritus
Departments of Internal Medicine and Pediatrics
Albany Medical College
Chaosmosis: Assigning Rhythm to the Turbulent
Chaosmosis: Assigning Rhythm to the Turbulent is an art exhibition inspired by fluid dynamics, a discipline that describes the flow of liquids and gases. The exhibition draws from past submissions to the American Physical Society’s Gallery of Fluid Motion, an annual program that serves as a visual record of the aesthetic and science of contemporary fluid dynamics. For the first time, a selection of these past submissions has been curated into an educational art exhibition to engage viewers’ senses.
The creators of these works, which range from photography and video to sculpture and sound, are scientists and artists. Their work enables us to see the invisible and understand the ever-moving elements surrounding and affecting us. Contributors to the exhibition include artists Rafael Lozano-Hemmer and Roman De Giuli, along with physicists Georgios Matheou, Alessandro Ceci, Philippe Bourrianne, Manouk Abkarian, Howard Stone, Christopher Clifford, Devesh Ranjan, Virgile Thievenaz, Yahya Modarres-Sadeghi, Alvaro Marin, Christophe Almarcha, Bruno Denet, Emmanuel Villermaux, Arpit Mishra, and Paul Branson.
Magnified frozen water droplets resemble shattered glass in a series of photographs. A video simulation depicts the confined friction occurring within a pipe with flowing liquid. In other works, the fluid motions portrayed are produced by human bodies: a video sheds light on the airflow of an opera singer while singing, and a 3D-printed sculpture reveals the flow of human breath using sound from the first dated recording of human speech. Gases and liquids are in constant motion, advancing in seemingly chaotic ways, yet the works offer a closer look, revealing elegant and poetic patterns amid atmospheric turbulence.
The term chaosmosis, coined by the philosopher Félix Guattari in the 1990s, conveys the idea of transforming chaos into complexity. It assigns rhythm to the turbulent, linking breathing with the subjective perception of time, and concluding that respiration is what unites us all.
Stephen R. Johnston, Jessica B. Imgrund, Dan Fries, Rafael Lozano-Hemmer, Stephan Schulz, Kyle C. Johnson, Johnathan T. Bolton, Christopher J. Clifford, Brian S. Thurow, Enrico Fonda, Katepalli R. Sreenivasan, and Devesh Ranjan, Volute 1: Au Clair De La Lune, 2016, 3D-printed filament, sound, 26 x 7 x 8 inches.
Christophe Almarcha, Joel Quinard, Bruno Denet, Jean-Marie Laugier, and Emmanuel Villermaux, Experimental Two-Dimensional Cellular Flames, 2014, laser print on fabric, 84 x 46 inches.
Arpit Mishra, Claire Bourquard, Arnab Roy, Rajaram Lakkaraju, Outi Supponen, and Parthasarathi Ghosh, Flow-Focusing from Interacting Cavitation Bubbles, 2021, laser print on fabric, 84 x 48.5 inches.
Roman De Giuli, Sense of Scale, 2022, video still.
Chaosmosis runs from October 2, 2023, through February 23, 2024, at the National Academy of Sciences building in Washington, DC. The exhibition is curated by Natalia Almonte and Nicole Economides in coordination with Azar Panah and the American Physical Society’s Division of Fluid Dynamics.
Transforming Research Participation
In “From Bedside to Bench and Back” (Issues, Summer 2023), Tania Simoncelli highlights patients moving from being subjects of biomedical research to leading that research. Patients and their families no longer simply participate in research led by others and advocate for resources. Together they design and implement research agendas, taking the practice of science into their own hands. As Simoncelli details, initiatives such as the Chan Zuckerberg Initiative’s Rare as One Project—and the patient-led partnerships it funds—are challenging longstanding power dynamics in biomedical research.
Opportunities to center the public’s questions, priorities, and values throughout the research lifecycle are not limited to research on health outcomes. And certainly, the promise of participatory approaches is not new. Yet demand for these activities is pressing.
Today’s global challenges are urgent, local, and interconnected. They require all-hands-on-deck solutions in such diverse areas as climate resilience and ecosystem protection, pandemic prevention, and the ethical deployment of artificial intelligence. Benefits of engaging the public in these collective undertakings and of centering societal considerations in research are being recognized by those who hold power in US innovation systems, including by Congress in the CHIPS and Science Act.
On August 29, 2023, the President’s Council of Advisors on Science and Technology (PCAST) issued a letter on “Recommendations for Advancing Public Engagement with the Sciences.” PCAST finds, “We must, as a country, create an ecosystem in which scientists collaborate with the public, from the identification of initial questions, to the review and analysis of new findings, to their dissemination and translation into policies.”
To some observers outside the research enterprise, this charge is long overdue. To those already operating at the boundaries of science and communities, it is a welcome door-opener. And to entrenched interests concerned about movement away from a social contract supporting curiosity-driven fundamental research and toward solutions-oriented research that directs scientific processes at public problems and the public good, PCAST may be shaking bedrock.
Increased federal demand can move scientific organizations toward participatory practices. For greatest impact, more on-the-ground capacity is needed, including training of practitioners who can connect communities with research tools and collaborators. Similarly essential is continued equity work within research institutions grappling with their history of exclusionary practices.
Boundary organizations that bridge the scientific enterprise with communities of shared interest or place are connecting the public with researchers and putting data, tools, and open science hardware into the hands of more people. The Association of Science and Technology Centers, which I led from 2018 through 2020, issued a community-science framework and suite of resources to build capacity among science-engagement practitioners. The American Geophysical Union’s Thriving Earth Exchange supports community science by helping communities find resources to address their pressing concerns. Public Lab is pursuing environmental justice through community science and open technology. The Expert and Citizen Assessment of Science and Technology (ECAST) Network developed a participatory technology assessment method to support democratic science policy decisionmaking.
I applaud these patient-led partnerships and community-science collaborations, and I look forward to the solutions they produce.
Cristin Dorgelo
Former Senior Advisor for Management at the Office of Management and Budget
President Emerita of the Association of Science and Technology Centers
Former Chief of Staff of the Office of Science and Technology Policy
Tania Simoncelli paints a powerful picture of the increasingly central role of patients and patient communities in driving medical research. The many success stories she describes of the Chan Zuckerberg Initiative’s Rare as One project provide an assertive counternarrative to the rarely explicated but deeply held presumption that only health professionals with decades of training in science and medicine can and should drive the agenda in health research. These successes confirm that those who continue to treat patient engagement in research as a box-checking exercise do themselves and the patients they claim to serve a grave disservice.
However, these narratives do more than just celebrate accomplishments. They also highlight the limitations of our current systems of funding and prioritizing health research, which require herculean efforts from patients and families already facing their own personal medical challenges. Patient communities have clearly demonstrated that they can achieve the impossible, but they do so because our current systems for funding research provide limited alternatives. How would federal funding for health research need to change such that patients and families would not have to also become scientists, clinicians, drug developers, and fundraisers for their disease to receive attention from the scientific community?
Medical research—and rare disease research in particular—urgently needs substantial investment in shared infrastructure and tools to increase efficiency, reduce costs, and facilitate engagement of diverse patients and families with variable time and resources to contribute. These investments will not only increase efficiency; they will also increase equity inasmuch as they reduce the likelihood that progress in a given disease will depend on the financial resources and social capital of a particular patient community. This concern is not just hypothetical; a 2020 study of US research funding for sickle cell disease, which predominantly affects Black patients, compared with cystic fibrosis, which predominantly affects white patients, found an average of $7,690 in annual foundation spending per patient affected with cystic fibrosis compared with only $102 for sickle cell disease, with predictable differences in the numbers of studies conducted and therapies developed. The investments made by the Chan Zuckerberg Initiative have been critical in leveling the playing field, but developing an efficient, equitable, and sustainable approach to rare disease research in the United States will require a commitment on the part of federal policymakers and funders as well.
Medical research—and rare disease research in particular—urgently needs substantial investment in shared infrastructure and tools to increase efficiency, reduce costs, and facilitate engagement of diverse patients and families with variable time and resources to contribute.
To achieve this seismic shift, I see few stakeholders better situated to advise policymakers and funders than the patient communities themselves. While federal funders may support patient engagement in individual research efforts, there is also the need to move this engagement upstream, allowing patients a voice in setting research funding priorities. Of course, implementing increased patient engagement in federal research funding allocation will require a careful examination of whose voices ultimately represent the many, inherently diverse patient communities. Attention to questions of representation and generalizability within and across patient communities is an ongoing challenge in all patient engagement activities, and the responsibility for addressing this challenge lies with all of us—funders, researchers, industry partners, regulators, and patient communities alike. However, it would be a mistake to treat this challenge as impossible: patient communities will undoubtedly prove otherwise.
Meghan C. Halley
Senior Research Scholar
Center for Biomedical Ethics
Stanford University School of Medicine
Biomedical research has blind spots that can be reduced, as Tania Simoncelli writes, by “centering the largest stakeholders in medicine—the patients.” By focusing on rare diseases, the Chan Zuckerberg Initiative is partnering with the most daring rebels of the patient-led movement. These pioneers are breaking new paths forward in clinical research, health policy, and data rights management.
But it’s not only people living with rare diseases whose needs are not being met by the current approach to health care delivery and innovation. Equally exciting is the broader coalition of people who are trying to improve their lives by optimizing diet or sleep routines based on self-tracking or building their own mobility or disease-management tools. They, too, are driving research forward, often outside the view of mainstream leaders because the investigations are focused on personal health journeys.
For example, 8 in 10 adults in the United States track some aspect of their health, according to a survey by Rock Health and Stanford University’s Center for Digital Health. These personal scientists are solving their own health mysteries, managing chronic conditions, or finding ways to achieve their goals using clinical-grade digital tools that are now available. How might we create a biomedical research intake valve for those insights and findings?
Patients know their bodies better than anyone and, with training and support, are able to accurately report any changes to their care teams, who can then respond and nip issues in the bud. In a study conducted at Memorial Sloan Kettering Cancer Center, patients being treated with routine chemotherapy who tracked their own symptoms during treatment both lived longer and felt better. Why are we not helping everyone learn how to track their symptoms?
Hardware innovation is another front in the patient-led revolution.
People living with disability ingeniously adapt home health equipment to meet their needs. By improving their own mobility, making a home safer, and creatively solving everyday problems, they and their care partners save themselves and the health care system money. We should invest in ways to lift up and publicize the best ideas related to home care, just as we celebrate advances in laboratory research.
We should invest in ways to lift up and publicize the best ideas related to home care, just as we celebrate advances in laboratory research.
Insulin-requiring diabetes requires constant vigilance and, for some people, that work is aided by continuous glucose monitors and insulin pumps. But medical device companies lock down the data generated by people’s own bodies, ignoring the possibility that patients and caregivers could contribute to innovation to improve their own lives. Happily, the diabetes rebel alliance, whose motto is #WeAreNotWaiting, found a way to not only get access to the data, but also build a do-it-yourself open-source artificial pancreas system. This, by the way, is just one example of how the diabetes community has risen up to demand—or invent—better tools.
Finally, since any conversation about biomedical innovation is now not complete without a reference to artificial intelligence, I will point to evidence that patients, survivors, and caregivers are essential partners in reducing bias on that front as well. For example, when creating an algorithm to measure the severity of osteoarthritis in knee X-rays, a team of academic and tech industry researchers fed it both clinical and patient-reported data. The result was a more accurate estimate of pain, particularly among underserved populations, whose testimony had been ignored or dismissed by human clinicians.
The patient-led revolutionaries are at the gate. Let’s let them in.
Tania Simoncelli provides a thoughtful reminder of the reality faced by many families with someone who has a rare disease. The term “rare disease” is often misunderstood. Such diseases affect an estimated 1 in 10 Americans, which means each of us likely knows someone with one of the 7,000 rare diseases that have a diagnosis. As the former executive director of FasterCures, a center of the Milken Institute, and an executive in a rare disease biotech, I have met many of these families. They see scientific advances reported every day in the news. And yet they may be part of a patient community where there are no options. As Simoncelli points out, fewer than 5% of rare diseases have a treatment approved by the US Food and Drug Administration.
The author’s personal journey is a reminder that champions exist who are dedicated to finding models that can change the system. The Chan Zuckerberg Initiative that Simoncelli works for, which has donated $75 million through its Rare as One program, believes that its funded organizations can establish enough scientific evidence and research infrastructure—and leverage the power of their voices—to attract additional investment from government and the life sciences community. Successful organizations such as the Cystic Fibrosis Foundation have leveraged their research leadership to tap into the enormous capital, talent, and sense of urgency of the private sector to transform the lives of families through the development of treatments, and they have advocated for policies that support patient access. Rare as One organizations are a beacon of light for families forging new paths on behalf of their communities.
The role of philanthropy is powerful, but it does not equate to the roles government and the private sector can play.
As Simoncelli also highlights, the role of philanthropy is powerful, but it does not equate to the roles government and the private sector can play. Since 1983, the Orphan Drug Act has been a major driver spurring the development of therapeutic advances in rare disease, and one study estimates that the FDA approved 599 orphan medications between 1983 and 2020. In August 2022, Congress passed the Inflation Reduction Act authorizing the Medicare program to begin negotiating the prices of drugs that have been on the market for several years. Congress believed that tackling drug prices was a key to ensuring patient affordability. However, critics have pointed to the law’s potential impact on innovation, citing specifically how it could disincentivize research into rare disease. The implementation of the law is ongoing, so it is too early to understand the consequences. But the patient community does not need to wait to advance new innovative models to address any disincentives that may surface.
Every Cure is one of these models that may help address the consequences that new Medicare drug negotiation may have on continuing investments in specific types of research programs. Its mission is to unlock the full potential of existing medicines to treat every disease and every patient possible. Every Cure is building an artificial intelligence-enabled, comprehensive, open-source database of drug-repurposing opportunities. The goal is to create an efficient infrastructure that enables research for rare diseases as well as more common conditions. By working in partnership with the patient community, clinical trials organizations, data scientists, and funders, Every Cure hopes to be a catalyst in advancing new treatment options for patients who currently lack options. Innovation can’t wait—because patients won’t.
Tanisha Carino
Vice Chair of the Board
Every Cure
Tania Simoncelli illuminates a powerful transformation in medical research: patients and families have moved to center stage. No longer passive recipients and participants, they are passionate drivers of innovation, teamwork, focus, and results. Science systematically and rigorously approaches truth through cycles of hypothesis and experimentation. Yet science is a human endeavor, and scientists differ in their knowledge, tribal affinities in cultural and scientific backgrounds, bias, creativity, open-mindedness, ambition, and many other critical factors, but they often lack “skin in the cure game.”
Medicine prides itself as a science, but it is a social science, as humans are both the observers and the observed. Less “soft” a science than sociology or psychology, medicine is far closer to physics or chemistry in rigor and reproducibility. Patients were traditionally viewed as biased, while physicians were seen as objective. Double-blind studies revolutionized medicine by explicitly recognizing the bias of physician scientists. Biases run deep, as humans are their reservoirs, vectors, and victims. Paradoxically, patients and families with skin in the game are exceptional collaborators who are immune to academic biases. They have revolutionized medical science.
Patients and families with skin in the game are exceptional collaborators who are immune to academic biases. They have revolutionized medical science.
Academics may myopically measure success by papers published in high-impact journals, prestigious grants, promotions, and honors. The idealistic and iconoclastic views of youth give way, with success, to perpetuating a new status quo that reinforces their theories and tribe, blinded by bias.
True scientists, masters of doubt about their own beliefs, and people with serious medical disorders and their families seek improved outcomes and cures. Teamwork magnifies medical science’s awesome power.
Dichotomies endlessly divide the road of discovery. What is the best path? Fund basic science, untethered from therapy, answering fundamental questions in biology? Or fund translational science, laser-focused on new therapies? How often are biomarkers critical, distractions, or misinformation? What leads to more seminal advances—top-down, planned, A-bomb-building Manhattan Projects, or serendipity, propelling Fleming’s discovery of penicillin? The answer depends on your smarts, team-building skills, and luck. Who can best decide how medical research funds should be allocated? Are those with seniority in politics, science, and medicine best? Should those affected have a say? Why can’t science shine its potent lens on the science of discovery itself, instead of defaulting to what is “established” and “works based” but not evidence-based?
A new paradigm has arrived. Families with skin in the game have a seat at the decision table. Their motivation is pure, and although no one knows the best path before embarking to discover, choices should be guided by the desire to improve health outcomes, not protect the status quo.
Orrin Devinsky
Professor of Neurology and Neuroscience
New York University Grossman School of Medicine
Students seeking a meaningful career in science policy that effects real-world change could do worse than look to the career of Tania Simoncelli. Her account in Issues of how the Chan Zuckerberg Initiative (CZI) is helping build the infrastructure that can speed the development and effectiveness of treatments for rare diseases is just her most recent contribution. It follows her instrumental role in bringing the lawsuit against the drug company Myriad Genetics that ultimately ended in a unanimous US Supreme Court decision invalidating patent claims on genes, as well as her productive stints at several institutions near the centers of power in biomedicine and government.
Rare diseases are rare only in isolation. In aggregate they are not so uncommon. But because they are individually rare, they face a difficult collective action problem. There are few advocates relative to cancer, heart disease, or Alzheimer’s disease, although each of those conditions also languished in neglect at points in their history before research institutions incorporated their conquest into their missions. But rare diseases can fall between the categorical institutes of the National Institutes of Health, or find research on them distributed among multiple institutes, no one of which has sufficient heft to be a champion.
The Chan Zuckerberg team that Simoncelli leads has taken a patient-driven approach. Mary Lasker and Florence Mahoney, who championed cancer and heart disease research by lobbying Congress, giving rise to the modern NIH, might well be proud of this legacy. Various other scientific and policy leaders at the time opposed Lasker and Mahoney’s approach, especially during the run-up to the National Cancer Act of 1971, favoring instead NIH’s scientist-driven research, responding to scientific opportunity. But patient-driven research is a closer proxy to social need. Whether health needs or scientific opportunity should guide research priorities has been the hardy perennial question facing biomedical research policy as it grew ten thousandfold in scale since the end of World War II.
Whether health needs or scientific opportunity should guide research priorities has been the hardy perennial question facing biomedical research policy as it grew ten thousandfold in scale since the end of World War II.
CZI and its Rare as One project are not starting from scratch. They are building on research and advocacy movements that have arisen for chordoma, amyotrophic lateral sclerosis, Castleman disease, and many other conditions. And they are drawing on the strategies of AIDS/HIV activists and breast cancer research advocates who directly influenced national research priorities by systematic attention to science, communication, and disciplined priority-setting from outside government.
Where the Howard Hughes Medical Institute and many other research funders have built on the broad base of NIH research by selecting particularly promising investigators or seizing on emerging scientific opportunities, which is indeed an effective cherry-picking strategy, the CZI is instead building capacity for many organizations to get up to speed on science, think through the challenges and resource needs required to address their particular condition, and develop a research strategy to address it. The scientific elite and grass-roots fertilization strategies are complements, but the resources devoted to the patient-driven side of the scale are far less well established, financed, and institutionalized. That makes the effort all the more intriguing.
The Chan Zuckerberg Initiative is at once helping address the collective action problem of small constituencies, many of which cannot easily harness all the knowledge and tools they need, and also building a network of expertise and experts who mutually reinforce one another. It is a potentially powerful new approach, and a promising frontier of philanthropy.
Robert Cook-Deegan
Professor, School for the Future of Innovation in Society and the Consortium for Science, Policy & Outcomes
Arizona State University
Science and Global Conspiracies
What difference does the internet make to conspiracy theories? The most sobering aspect of “How Science Gets Drawn Into Global Conspiracy Narratives” (Issues, Spring 2023), by Marc Tuters, Tom Willaert, and Trisha Meyer, is its focus on the seismic power of the hashtag in choreographing new conspiratorial alliances between diverse sets of Twitter users. Importantly, this is also happening on other platforms such as Instagram and TikTok. During a recent data sprint project that I cofacilitated at the University of Amsterdam, it became increasingly apparent that TikTok hashtags resemble an apophenic grammar—a sort of language that tends to foster perceptions of meaningful connections between seemingly unrelated things. Specifically, the data sprint findings suggest that co-hashtags were partly responsible for the convergence of conspiracy theory and spirituality content over the course of 2021 (to conjure what is often termed conspirituality).
But notwithstanding the significance of hashtag stuffing, the other major idea that figures prominently in current conspiracy theory research is weaponization. In our present political moment, seemingly innocuous events can become weaponized as conspiratorial dog whistles, as can celebrity gossip and fan fiction. On July 4, 2020, the Florida congressional candidate K. W. Miller claimed in a tweet that the popular music icon Beyoncé is actually a white Italian woman called Ann Marie Lastrassi. Miller seems to have “discovered the truth” about Beyoncé via a speculative Instagram comment and parodic Twitter thread, and his #QAnon clarion call was a deliberate misappropriation of Black speculative discourse with white supremacist overtones. He was building upon both the 2016 denunciation of Beyoncé by the InfoWars conspiracy theorist Alex Jones and longstanding hip-hop rumors regarding Beyoncé and Jay-Z’s involvement with the Illuminati, an imagined organization often portrayed as pulling global strings of power.
In our present political moment, seemingly innocuous events can become weaponized as conspiratorial dog whistles, as can celebrity gossip and fan fiction.
Could derisive disarmament by counter-conspiratorial web users be an effective way of laughing in the face of such weaponization tactics? Through a distinctive kind of Black laughter discussed by Zari Taylor and other social media scholars, some Twitter users attempted to extinguish the potential for hilarity in Miller’s tweet by asserting that the joke is actually on white conspiracy theorists who are willing to believe that Beyoncé is Italian while denying the very real and palpable existence of systemic racism in the United States. Ultimately, Miller’s election campaign was wholly unsuccessful, and the collective disarmament effort seems to have been relatively effective, both within and beyond the notion of Black Twitter introduced by Sanjay Sharma, a scholar of race and racism. Several years on from the event, Black Twitter users memorialize and celebrate Ann Marie Lastrassi as their Italian Queen in a similar way to Beyoncé’s own reclamation of racist #BoycottBeyoncé hashtags in 2016.
The internet’s contribution to the spread of conspiracy theories has less to do with “echo chambers” and algorithmically dug “rabbit holes” and much more to do with the perverse echo effects of connected voices. These voices listen to each other in order to find something that they might seize upon to deliver virtuosic conspiratorial performances. Although researchers, monitoring organizations, and policymakers might learn something from the echo effects of the Lastrassi case, it is also true, as my colleague Annie Kelly regularly reminds me, that disarmament is sometimes nothing more than re-weaponization. It might even serve to fan the flames of conspiracy theory in the age of the so-called culture wars.
Edward Katrak Spencer
Postdoctoral Research Associate, University of Manchester
Lecturer I in Music, Magdalen College, University of Oxford
Marc Tuters, Tom Willaert, and Trisha Meyer explore the emergence of a distinctive feature of conspiracy theories: interconnectedness. The authors focused on a Twitter case study of the public understanding of science, and specifically on the use of the hashtag #mRNA. They found that the hashtag, initially used in science discussions (beginning in 2020), was later hijacked by conspiracy narratives (late 2022) and interconnected with conspiratorial hashtags such as #greatreset, #plandemic, and #covid1984.
In a recent paper, one of us quantified such interconnectedness in the largest corpus of conspiracy theories available today, an 88-million-word collection called LOCO. On average, conspiracy documents (compared with non-conspiracy documents) showed higher interconnectedness spanning multiple thematic domains (e.g., Michael Jackson associated with the moon landing). We also found that conspiracy documents were similar to each other, suggesting the mutually supportive function of denouncing an alleged conspiratorial plan. These results extend Tuters and colleagues’ research and show that interconnectedness, not bound only to scientific understanding, is a cognitive mechanism of sensemaking.
Conspiracy theories simplify real-world complexity into a cause-effect chain that identifies a culprit. In doing so, conspiracy theories are thought to reduce uncertainty and anxiety caused by existential threats. Because people who subscribe to conspiracy theories do not trust official narratives, they search for hidden motives, consider alternative scenarios, and explore their environment in search of (what they expect to be the) truth. In this process, prompted by the need to confirm their beliefs, conspiracy believers tend to quickly jump to conclusions and identify meaningful relationships among randomly co-occurring events, leading to the “everything is connected” bias.
Conspiracy theories simplify real-world complexity into a cause-effect chain that identifies a culprit.
As Tuters and colleagues suggest, social media might offer affordances for such exploration, thus facilitating the spread of conspiracy theories. The authors also suggest that not all social media platforms are equal in this regard: some might ease this process more than others. Work currently in progress has confirmed this suggestion: we have indeed found striking differences between platforms. From a set of about 2,600 websites, we extracted measures of incoming traffic from different social media platforms such as Twitter, YouTube, Reddit, and Facebook. We found that YouTube and Facebook are the main drivers of traffic to conspiracy (e.g., infowars.com) and right-wing (e.g., foxnews.com) websites, whereas Reddit drives traffic mainly toward pro-science (e.g., healthline.com) and left-wing (e.g., msnbc.com) websites. Twitter drives traffic to both left- and right-biased (but not conspiracy) websites.
Do structural differences across social media platforms affect how conspiracy theories are generated? More experimental work is needed to understand the mechanisms by which conspiracy theories are generated by accumulation. Social psychology has furthered our understanding of the cognitive predisposition for such beliefs. Now, building on Tuters and colleagues’ work, it is time for the cognitive, social, and computational sciences to systematically investigate the emergence of conspiracy theories.
Alessandro Miani
Postdoctoral Research Associate
Stephan Lewandowsky
Professor of Cognitive Science
University of Bristol, United Kingdom
Blue Dreams: Rebecca Rutstein and the Ocean Memory Project
Blue Dreams is an immersive video experience inspired by microbial networks in the deep sea and beyond. Using stunning undersea video footage, abstract imagery, and computer modeling, the work offers a glimpse into the complicated relationships among the planet’s tiniest—yet most vital—living systems. The video installation flows between micro and macro worlds to portray geologic processes at play with microbial and planetary webs of interactivity.
Installation photo by Kevin Allen Photo.
Microbes are essential to the functioning of the Earth: they produce breathable air, regulate biogeochemical cycles, and are the origins of life on this planet. Blue Dreams aims to offer a unique and thought-provoking perspective on the interconnectedness and sublimity of the natural world.
Blue Dreams, 2023, digital video stills.
Blue Dreams evolved from a year-long collaboration between its five contributors—Rika Anderson, Samantha (Mandy) Joye, Tom Skalak, Shayn Peirce-Cottler, and Rebecca Rutstein—through a grant from the National Academies Keck Futures Initiative’s Ocean Memory Project. Anderson, an environmental microbiologist at Carleton College, advised on marine microbial adaptation and resilience, microbial gene sharing networks, and the implications for exoplanet science and astrobiology. Joye, a marine biogeochemist at the University of Georgia and explorer of diverse deep-sea environments, provided insight into the biogeochemistry of vent and seep systems, and the interplay of microbial networks with large-scale ecological processes. Skalak, a bioengineer, provided overall conceptual vision and insight into methods for abstracting the data into system models, including agent-based simulations that could enable visualization of swarm and collective behaviors. Peirce-Cottler, professor of biomedical engineering at the University of Virginia, created agent-based models of deep-sea microbial growth patterns generated from patterns of original Rutstein paintings. And multidisciplinary artist Rutstein researched, synthesized, abstracted, and layered imagery, animation, video, and sound to create Blue Dreams.
Blue Dreams, 2023, digital video still.
This exhibition ran through September 15, 2023, at the National Academy of Sciences building in Washington, DC.
REBECCA RUTSTEIN, Artist at Sea Series, 2016–2021, acrylic paintings on canvas, 18 x 18 inches each. Rutstein created these paintings as an artist in residence during several expeditions at sea, including aboard the R/V Falkor sailing from Vietnam to Guam, the R/V Atlantis in the Guaymas Basin, and the R/V Rachel Carson in the Salish Sea. On each voyage, she set up a makeshift art studio and collaborated with scientists, working with satellite, multibeam sonar mapping, or marine microbial data being collected. Separate from the Blue Dreams exhibition, the National Academy of Sciences has acquired these 12 paintings for its permanent art collection.
ABOUT THE OCEAN MEMORY PROJECT
By investigating the interconnectivity of the ocean and its inhabitants at different time scales, the Ocean Memory Project, a transdisciplinary group spanning the sciences, arts, and humanities, aims to understand how this system possesses both agency and memory, and how it records environmental changes through genetic and epigenetic processes in organisms and through dynamic processes in the ocean structure itself. The Ocean Memory Project was born out of the National Academies Keck Futures Initiative’s interdisciplinary conference, “Discovering the Deep Blue Sea,” held in 2016.
Installation photo by Kevin Allen Photo.
Blue Dreams, 2023, digital video still.
Fostering Clean Energy in Africa
In “Generating Meaningful Energy Systems Models for Africa” (Issues, Spring 2023), Michael O. Dioha and Rose M. Mutiso highlight an important but often neglected issue in current energy transition dialogues: the underrepresentation of African expertise and data in the analyses that inform energy policies on the continent. While the inherent inequality that marks the knowledge development process is concerning, it is the implication that current energy transition strategies are likely out of touch with the on-the-ground realities of the African continent that poses the greatest risk to achieving global climate goals.
As the authors note, the energy systems models that currently inform policy actions tend to focus primarily on decarbonization and emissions reductions. In Africa, however, the challenge at hand is far more complex than this. The continent has the lowest rates of access to modern energy in the world, lags behind other regions on several development indicators—health, education, infrastructure, water, and sanitation, among others—and is one of the most vulnerable regions to the impacts of climate change, despite its historically low emissions. Any energy transition strategy in Africa that fails to acknowledge and address this complex set of challenges in an integrated manner is bound to miss the mark.
Africa contributes the least to climate change because it is poor. Unlike in developed economies, agriculture and land use change, rather than the energy sector, account for the lion’s share of Africa’s emissions. This is because the continent is still predominantly agrarian and relies heavily on the inefficient combustion of biomass for cooking. Modernizing Africa’s energy systems and improving agricultural practices can result in dual climate and development benefits. But this will require significant investments, making economic development a critical lever for achieving climate objectives.
Any energy transition strategy in Africa that fails to acknowledge and address this complex set of challenges in an integrated manner is bound to miss the mark.
Most African countries are still actively building out their energy infrastructure. This means countries on the continent have an opportunity to develop energy systems that can provide Africans with affordable, abundant, and reliable energy while tapping into the vast range of innovative technologies available today, to minimize the climate impacts of energy use. Developing modern and sophisticated electric grids, investing in innovative zero-carbon solutions, and developing the human resources we need to manage cutting-edge climate-friendly energy systems are not cheap endeavors.
Today, the impact of climate change is being felt across Africa; extreme weather events, droughts, famines, and increasing disease burdens are straining the capacity of African governments to respond to these vulnerabilities. Persistent poverty on the continent will only force countries to make existential choices between meeting basic development needs and investing in a climate-friendly future. Still, given the scale of investments needed to build a global clean energy economy, Africa should not continue to depend on handouts from richer countries for the continent’s energy transformation. Building African wealth is our best bet.
By embracing the uniqueness of the African context, we can begin to shift the center of gravity of energy transition dialogues from the narrow focus on replacing dirty fuels with cleaner ones to a comprehensive approach that enables access to abundant, affordable, reliable, and modern energy, promotes economic development across sectors, and builds the resilience of Africans to respond to the impacts of climate change.
Lily Odarno
Director, Energy and Climate Innovation–Africa
Clean Air Task Force
Regulating Space Debris
This wicked problem needs US leadership
Marilyn Harbert and Asha Balakrishnan’s call to action should be generalized to a broader international audience. Space debris presents a particularly wicked international problem; the millions of pieces of orbital detritus that circle Earth do not discriminate based on a satellite’s national origin, endangering all satellites and the space-facilitated services that support modern terrestrial society.
The United States faces a daunting task in untangling its regulatory regime to effectively protect the space environment from the creation of more debris, but this is just one step toward a comprehensive solution to a global problem. The gravity of the US position is compounded by the risks of doing too little or too much. Emerging spacefaring nations often look to the United States’ example, seeking tacit guidance from the technical and political leader in space. Failing to capitalize on this leadership position would both reduce US standing and foster institutional inertia around the globe. Conversely, overly stringent US regulations may incentivize other nations to provide a comparatively relaxed regulatory environment to attract industry.
Failing to capitalize on this leadership position would both reduce US standing and foster institutional inertia around the globe.
Space infrequently presents second-mover advantages, but nations with nascent space governance structures may benefit from being fast followers. In this rare situation, those that do not have established space-relevant bureaucracies can adopt best practices without needing to settle regulatory turf battles. The absence of domestic industry should not reduce the urgency of this issue; establishing a thoughtful regulatory practice is a strong signal that a nation is ready to accelerate domestic industrial growth. Furthermore, well-aligned regulation among nations is perhaps the only way to adequately address the growing risks to the space systems that support our everyday life on Earth.
Benjamin Silverstein
Research Analyst
Carnegie Endowment for International Peace
Let the White House authorize new space activities
Marilyn Harbert and Asha Balakrishnan are to be commended for their timely article about a little-understood area of commercial space regulation. While orbital debris is a subject of increasing public attention, establishing an appropriate regulatory regime is vital for all space activities. The US government faces at least two key regulatory challenges: ensuring accountability while allowing for innovation; and ensuring efficiency while allowing for multistakeholder interests (e.g., security, commerce, diplomacy, science).
Placing “mission authorization” within the FCC would be a bad idea for a number of reasons, but primarily because, as an independent regulatory agency, it does not report to the president.
The United States has regulatory regimes for commercial launch, remote sensing, and communications satellites. It does not have clear regimes for innovative activities that lack government precedents related to, for example, on-orbit satellite servicing, active debris removal, and in-space resource utilization. To fill this gap, the Obama, Trump, and Biden administrations have each sought to create “mission authorization” legislation to provide “authorization and on-going supervision” for US commercial space activities as required by international law. Enacting such legislation needs to happen as soon as possible to promote a predictable environment for financing and insuring commercial space ventures.
The Federal Communications Commission (FCC) has sought to fill current regulatory lacunae, proposing regulations not only for orbital debris but also for on-orbit satellite servicing and assembly. Such regulations may be only thinly related to existing FCC authorities and clearly go beyond the powers explicitly authorized by Congress. Placing “mission authorization” within the FCC would be a bad idea for a number of reasons, but primarily because, as an independent regulatory agency, it does not report to the president. The FCC could make decisions that undercut national security, foreign policy, or public safety interests and leave the president without legal recourse. Such instances occurred during the Trump administration with regard to the protection of GPS signals and meteorological aids, and more recently during the Biden administration with regard to air navigation aids.
Multiple agencies can and do have regulatory responsibility for existing commercial space activities. For new activities, the Department of Commerce is the logical home for regulatory oversight. Since Commerce reports to the president, the White House would retain authority for resolving potential conflicts among the diverse national interests affected by the growing commercial space economy.
Scott Pace
Director, Space Policy Institute
Elliott School of International Affairs
George Washington University
Space regulation needs a new home
The issue of space debris is complex and reminds me of the issue of climate change. Is it “space debris denial” by national governments and private companies if they don’t see the risks as pressing right now? Or is it more about taking a precautionary approach, recognizing the range of challenges that need to be dealt with as the United States prepares for a fully commercialized and multistakeholder space future? To help thread that needle, the Office of Space Commerce within the US Department of Commerce has proposed an “institutional bypass” that would address regulatory gaps by acting as a centralized one-stop shop for commercialization issues expected to arise in the New Space era. As Mariana Mota Prado, a scholar of international law and development, has stated, an institutional bypass would not try to modify, change, or reform existing institutions. Instead, it would create a new pathway in which efficiency and functionality would be the norm.
The current situation is not clear cut. Some private-sector researchers assessing US space policy and law are themselves uncertain whether space sustainability and orbital debris generate more specific policy prescriptions than other areas do because the topic is generally popular in the field, or because the issue is of particular concern to companies. Further, research and insights on the disclosure practices of US and foreign corporations that participate in the investment-oriented Procure Space Exchange Traded Fund reveal that there is a lot more talk than action from many space companies with respect to the issues of space sustainability, and it is not apparent that the issue stems from confusion about authorization.
Without minimizing the space debris issue, perhaps equally important may be other environmental concerns stemming from space activities that have immediate impacts here on Earth. Launch contamination, anyone?
Marilyn Harbert and Asha Balakrishnan conclude that interagency efforts are active, which raises the question of what exactly the problems with interagency processes are, particularly as they deal with who should have the authority to regulate and authorize space activities. Current requirements force firms to navigate a complex web of federal agencies, and this “element of the unknown” leads to hesitancy among potential investors. However, interviews with members of industry reveal that most participants could not think of a specific example or incident when their ability to do business was affected by interagency dynamics. Still, it is worth noting that the specific challenges they reported emerged largely from their inability to voice concerns directly with regulatory officials so that their companies could efficiently adjust course, rather than from an actual problem with the overall interagency process.
The bottom line, then, is that if the budget exists and projections materialize, it seems to make sense for the Office of Space Commerce to become the central home for these new regulatory issues. However, further research should be undertaken to investigate the findings of a thoughtful analysis described in “The Private Sector’s Assessment of US Space Policy and Law,” carried out by the Center for Strategic and International Studies. Without minimizing the space debris issue, perhaps equally important may be other environmental concerns stemming from space activities that have immediate impacts here on Earth. Launch contamination, anyone?
Timiebi Aganaba
Assistant Professor, School for the Future of Innovation in Society
Arizona State University
Founder, Space Governance Lab
Competent nonexperts make the best regulators
I was pleasantly surprised when I read the essay by Marilyn Harbert and Asha Balakrishnan. It correctly and coherently identifies the chaos that is the state of orbital debris regulation in the United States and the world.
Some of even the best regulators may not be world-class experts in the domain they regulate, but they just might be great at their job!
My only lament is that the discussion about how the Federal Communications Commission (FCC) aggressively, and I think correctly, reduced the post-mission disposal threshold from 25 years to five years could have been extended, since it holds a potentially unifying lesson that was almost unearthed. That lesson is that regulation is a discipline in and of itself, and possibly that the subject-matter experts in a certain domain should not be the ones regulating that domain. Rather, expert regulators who know just enough about a domain to make cogent decisions, but who are not flummoxed by the cognitive biases that come from studying the topic for an entire career, are able to move aggressively away from the status quo.
In sum, then, this good article was one inference away from being great: some of even the best regulators (i.e., FCC) may not be world-class experts in the domain they regulate, but they just might be great at their job!
Darren McKnight
Senior Technical Fellow
LeoLabs
How to Clean Up Our Information Environments
I was delighted to read “Misunderstanding Misinformation” (Issues, Spring 2023), in which Claire Wardle advocates moving from “atoms” of misinformation to narratives and laments our current, siloed empirical analysis. I couldn’t agree more. But I would also like to take Wardle’s thoughts further in two ways, as a call to action and a warning.
First, focusing on narratives requires understanding how they circulate in certain contexts, which some groups, including the US Surgeon General, refer to as “information environments.” My background is in medicine, where trying to define a toxin or poison is as difficult as defining misinformation: it all depends on context. Even water can be toxic—ask any marathon runner hospitalized after drinking too much of it. A public health approach isn’t to focus on either atoms or narratives, but on the context that imbues danger. Lead and chromium-6 (think Erin Brockovich) are usually of little concern unless ingested by the young. Eliminating either is a fool’s errand, so we mitigate their potential harms: we test for them, bottled water companies remove them, and we regulate them to minimize the riskiest exposures, such as by restricting lead paint and gasoline.
Taking a similar approach to misinformation would mean recognizing that any information can be toxic in the wrong context. Environmental protection requires monitoring, removal of toxins when necessary, and regulation to prevent the most egregious harms. While the English physician John Snow cannot step in to fix things—as he did by removing a public water pump handle in his famous, albeit authoritarian, solution to London’s 1854 cholera epidemic—we must recognize that cleaning up our information environments is not “if” but “how.” Keeping tabs on revenge porn, hate speech, and pedophilia makes sense, but where and how to draw the line in borderline cases, such as the narratives Wardle speaks of, is less clear.
And that leads to my second point, which echoes Wardle’s suggestion that we need more research with fewer silos to better understand in what conditions, contexts, and communities information becomes most toxic. But I’m concerned that her call for more holistic research will go unheeded because environments are challenging to study. For example, over 50 years ago, the social scientist William McGuire developed a “matrix” of communication variables related to persuasion, listing on one side all the ways a message could be constructed and delivered, and on the other the spectrum of potential impacts, from catching someone’s attention to changing their behavior. Research was needed in every cell to fully understand persuasion. Today, most research fits into only a small part of that matrix: assessing the effect of small changes in message characteristics on beliefs.
We must recognize that cleaning up our information environments is not “if” but “how.”
Such studies are quick and cheap to do, test important theories, and are publishable. To be clear, the problem is not the research, but the sheer amount of it relative to the deep, structural research that is sorely needed. Nick Chater and George Loewenstein lament this disparity in a recent article in Behavioral and Brain Sciences, observing how individually framed research deflected attention and support away from systemic policy changes. Writing in The BMJ, published by the British Medical Association, Nason Maani and colleagues go further, arguing that discourse today is disproportionately “polluted” as discussion of individual solutions crowds out discussion of more needed—but more difficult to study—structural solutions.
In short, why have we made so little progress? Because social scientists have been and continue to be incentivized to study certain types of questions, leaving McGuire’s matrix embarrassingly unfilled. A 2011 Knight Foundation report argued that we should be assessing community information needs, a perspective, I would argue, that is consistent with understanding their information environments. A National Academies 2017 consensus study setting a research agenda for the science of science communication also called for a systems-based approach to understand the interrelationship between key elements that make up science communication: communicators, messages, audiences, and channels, to name a few. Yet years—and a global pandemic—later, individually framed research on misinformation dominates the discourse.
These incentives are myriad and deeply entrenched in academia and funding agencies. In his 2016 New Atlantis essay, “Saving Science,” Daniel Sarewitz characterized this as a post–World War II problem with “institutional cultures organized and incentivized around pursuing more knowledge, not solving problems.” The result, he argued, is that “science isn’t self-correcting, it’s self-destructing.” It raises a scary prospect: perhaps solving the problem of misinformation may require fixing, or at least circumventing, a sclerotic system of science that continues to reproduce the same methodological biases decade after decade. Otherwise, I cringe with macabre anticipation about reading the future recommendations on improving science communication after the 2031 H5N1 influenza pandemic. I imagine they will add to the chorus of unheeded calls for more holistic, context-informed studies.
David Scales
Assistant Professor of Medicine, Weill Cornell Medicine
Chief Medical Officer, Critica
My “ah-ha” disinformation moment came in July 2014, in the hours and days following the crash of Malaysia Airlines Flight 17 in eastern Ukraine. I was a senior official in the federal agency that oversees US international broadcasting, including Voice of America and Radio Free Europe/Radio Liberty, and closely monitoring media reports of the tragedy. Credible reporting quickly emerged that the airliner had been shot down by Russian-controlled forces. Yet within hours the global information space was muddled by at least a dozen alternate narratives, including that the Ukrainian military, using either a jet fighter or a surface-to-air missile, had downed the aircraft; that bodies recovered at the crash scene actually had been corpses, loaded on an otherwise empty passenger jet that was remotely piloted and shot down by Western forces who then blamed Russia; and even that the intended Ukrainian target had been the aircraft of Russian President Vladimir Putin, who was returning at that time from a foreign trip. In this messy, multi-language information scrum, Russia’s likely culpability was just one more version of what might have happened.
It quickly became clear that, at a quickening rate, false and misleading information online was seeping into the marketplace of ideas, eroding public discourse around the world and carving fissures throughout societies. The mantra of US international broadcasting throughout the Cold War—that, over time, the truth would win in the competition of ideas—was under unprecedented pressure as the disinformation tsunami gathered momentum. We knew we had a problem. But what was to be done?
Enter Claire Wardle and other clear-thinking academics and analysts who initially helped to clarify the parameters of “information disorder,” in part by dismissing the woefully inadequate term fake news and introducing the more precise terms misinformation, disinformation, and malinformation.
At a quickening rate, false and misleading information online was seeping into the marketplace of ideas, eroding public discourse around the world and carving fissures throughout societies.
In her essay, Wardle again comes to the rescue, with fresh analysis and recommendations that include moving beyond even the improved trifecta of information terminology, which she calls “an overly simple, tidy framework I no longer find useful.” “We researchers,” she chides, “[have] become so obsessed with labeling the dots that we can’t see the larger pattern they show.” Instead, researchers should “focus on narratives and why people share what they do.”
What’s needed, Wardle argues, is a better understanding of the “social contexts of this information, how it fits into narratives and identities, and its short-term impacts and long-term harms.”
This is music to the ears of this media professional. Responsible, professional journalists already have responded to the disinformation challenge through actions such as quick-response fact-checking operations and innovative investigations to expose disinformation sources. I believe that they can do more, including the sort of investigative digging and connecting players and actions across the disinformation space that Wardle calls for. Journalists, for instance, can act on her recommendation to enhance genuine, two-way communication between publics and experts and officials, and they can support community-led resilience and take part in targeted “cradle to grave” educational campaigns to “help people learn to navigate polluted information systems.”
Why pursue this course? As a career journalist and therefore a fervent defender of the First Amendment, I am wary of any moves, no matter how well-intentioned, to restrict speech. Top-down solutions such as legislating speech are but a short step away from the slippery slope toward censorship. The ultimate defense against disinformation, therefore, must come from us, acting as individuals and together—in short, from a well-educated, informed, engaged public. To paraphrase Smokey Bear: Only YOU can prevent disinformation. And the better we understand and implement the prescriptions that Wardle and others (including the authors of the other insightful offerings in Issues under the “Navigating a Polluted Information Ecosystem” rubric) continue to pursue, the better our prospects for limiting—and perhaps even preventing—wildfires of disinformation.
Jeffrey Trimble
International Journalist, Editor, and Media Manager
Former Lecturer in Communication and Political Science at Ohio State University
Training More Biosafety Officers
The United States has long claimed that there is a need to focus on the safety and security of biological research and engineering, but we are only beginning to see that call turn into high-level action on funding and support for biosafety and biosecurity governance. The CHIPS and Science Act, for example, calls for the White House Office of Science and Technology Policy to support “research and other activities related to the safety and security implications of engineering biology,” and for the office’s interagency committee to develop and update every five years a strategic plan for “applied biorisk management.” The committee is further charged with evaluating “existing biosecurity governance policies, guidance, and directives for the purposes of creating an adaptable, evidence-based framework to respond to emerging biosecurity challenges created by advances in engineering biology.”
To carry out this mouthful of assignments, more people need to be trained in biosafety and biosecurity. But what does good training look like? Moreover, what forms of knowledge should be incorporated into an adaptable evidence-based framework?
In “The Making of a Biosafety Officer” (Issues, Spring 2023), David Gillum shows the power and importance of tacit knowledge—“picked up here and there, both situationally and systemically”—in the practice of biosafety governance, while at the same time stressing the importance of the need to formalize biosafety education and training. This is due, in part, to the lack of places where people can receive formal training in biosafety. But it is also a recognition of, as Gillum puts it, the type of knowledge biosafety needs—knowledge “at the junction between rules, human behavior, facilities, and microbes.”
The present lack of formalized biosafety education and training presents an opportunity to re-create what it means to be a biosafety officer as well as to redefine what biosafety and biosecurity are within a broader research infrastructure and societal context. This opening, in turn, should be pursued in tandem with agenda-setting for research on the social aspects of biosafety and biosecurity. It is increasingly unrealistic to base a biosafety system primarily on lists of known concerns and standardized practices for laboratory management. Instead, adaptive frameworks are needed that are responsive to the role that tacit knowledge plays in ensuring biosafety practices and are aligned with current advances in bioengineering and the organizational and social dynamics within which it is done.
The present lack of formalized biosafety education and training presents an opportunity to re-create what it means to be a biosafety officer as well as to redefine what biosafety and biosecurity are within a broader research infrastructure and societal context.
Proficiency in biosafety and biosecurity expertise today means attending to the formal requirements of policies and regulations while also generating new knowledge about the gaps in those requirements and a well-developed sense of the workings of a particular institution. The challenge for both training and agenda-setting is how to endorse, disseminate, and assimilate the tacit knowledge generated by biosafety officers’ real-life experiences. For students and policymakers alike, a textbook introduction to biosafety’s methodological standards, fundamental concepts, and specific items of concern will surely come about as biosafety research becomes more codified. But even as some aspects of tacit knowledge become more explicit, routinized, and standardized, the emergence of new and ever valuable tacit knowledge will always remain a key part of biosafety expertise and experience.
Gillum’s vivid examples of real-life experiences involving anthrax exposures, the organizational peculiarities of information technology infrastructures, and the rollout of regulations of select bioagents demonstrate that, at a basic level, biosafety officers and those with whom they work need to be attuned to adaptability, uncertainty, and contingency in specific situations. Cultivating this required mode of attunement among future biosafety professionals means embracing the fact that biosafety, like science itself, is a constantly evolving social practice, embedded within particular institutional and political frameworks. As such, it means that formal biosafety educational programs must not reduce what counts as “biosafety basics” to technical know-how alone, but ought to prioritize situational awareness and adaptability as part of their pedagogy. Biosafety and biosecurity research such as that envisioned in the CHIPS and Science Act will advance the training and work of the next generation of biosafety professionals only if it recognizes this key facet of biosafety.
Melissa Salm
Biosecurity Postdoctoral Fellow in the Center for International Security & Cooperation
Stanford University
Sam Weiss Evans
Senior Research Fellow in the Program on Science, Technology, and Society
Harvard Kennedy School
David Gillum illustrates the importance of codifying and transferring knowledge that biosafety professionals learn on the job. It is certainly true that not every biosafety incident can be anticipated, and that biosafety professionals must be prepared to draw on their knowledge, experience, and professional judgment to handle situations as they arise. But it is also true that as empirical evidence of laboratory hazards and their appropriate mitigations accumulates, means should be developed by which this evidence is analyzed, aggregated, and shared.
There will always be lessons that can only be learned the hard way—but they shouldn’t be learned the hard way more than once. There is a strong argument for codifying and institutionalizing these biosafety “lessons learned” through means such as formalized training or certification. Not only will that improve the practice of biosafety, but it will also help convince researchers—a population particularly sensitive to the need for empirical evidence and logical reasoning as the basis for action—that the concerns raised by biosafety professionals need to be taken seriously.
This challenge would be significant enough if the only potential hazards from research in the life sciences flowed from accidents—human error or system malfunction—or from incomplete understanding of the consequences of research activities. But the problem is worse than that. Biosecurity, as contrasted with biosafety, deals with threats posed by those who would deliberately apply methods, materials, or knowledge from life science research for harm. Unfortunately, when it comes to those who might pose deliberate biological threats, we cannot exclude researchers or even biosafety professionals themselves. As a result, the case for codifying and sharing potential biosecurity failures and vulnerabilities is much more fraught than it is for biosafety: the audience might include the very individuals who are the source of the problem—people who might utilize the scenarios that are being shared, or who might even modify their plans once they learn how others seek to thwart them. Rather than setting up a registry or database by which lessons learned can be compiled and shared, one confronts the paradox of creating the Journal of Results Too Dangerous to Publish. Dealing with such so-called information hazards is one factor differentiating biosafety from biosecurity. Often, however, we call upon the same experts to deal with both.
Personal relationships do not immunize against such insider threats, as we learn every time the capture of a spy prompts expressions of shock from coworkers or friends who could not imagine that the person they knew was secretly living a vastly different life. However, informal networks of trust and personal relationships are likely a better basis on which to share sensitive biosecurity information than relying on mutual membership in the same profession or professional society. So while there is little downside to learning how to better institutionalize, codify, and share the tacit knowledge and experience with biosafety that Gillum describes so well, it will always be more difficult to do so in a biosecurity context.
Gerald L. Epstein
Contributing Scholar
Johns Hopkins Center for Health Security
Enhancing Trust in Science
In “Enhancing Trust in Science and Democracy in an Age of Misinformation” (Issues, Spring 2023), Marcia McNutt and Michael M. Crow encourage the scientific community to “embrace its vital role in producing and disseminating knowledge in democratic societies.” We fully agree with this recommendation. To maximize success in this endeavor, we believe the public dialogue on trust in science must become less coarse, better identifying which elements of science can be trusted: science as a process, particular studies, particular actors or entities, or finer distinctions still.
At the foundation of trust in science is trust in the scientific method, without which no other trust can be merited, warranted, or justified. The scientific community must strive to ensure that the scientific process is understood and accepted before we can hope to merit trust at more refined levels. Although trust in the premise that following the scientific method will lead to logical and evidence-based conclusions is essential, blanket trust in any component of the scientific method would be counterproductive. Instead, trust in science at all levels should be justified through rigor, reproducibility, robustness, and transparency. Scientific integrity is an essential precursor to trust.
As examples, at the study level, trust might be partially warranted through documentation of careful study execution, valid measurement, and sound experimental design. At the journal level, trust might be partially justified by enforcing preregistration or data and code sharing. In the case of large scientific or regulatory bodies, these institutions must merit trust by defining and communicating both the evidence on which they base their recommendations and the standards of evidence they are using.
Recognizing that trust can be merited at one point of the scientific process (e.g., a study and its findings have been reported accurately) without being merited at another (e.g., the findings represent the issue in question) is essential to understanding how to develop specific recommendations for conveying trustworthiness at each point. Therefore, efforts to improve trust in science should include the development of specific and actionable advice for increasing trust in science as a process of learning; individual scientific experiments; certain individual scientists; large, organized institutions of science; the scientific community as a whole; particular findings and interpretations; and scientific reporting.
However, as McNutt and Crow note, “It may be unrealistic to expect that scientists … probe the mysteries of, say, how nanoparticles behave, as well as communicate what their research means.” Hence, a major challenge facing the scientific community is developing detailed methods to help scientists better communicate with and warrant the trust of the general public. The current dialogue surrounding trust must therefore identify both specific trust points and clear actions that can be taken at each point to indicate, and possibly increase, the extent to which trust is merited.
We believe the scientific community will rise to meet this challenge, offering techniques that signal the degree of credibility merited by key elements and steps in the scientific process and earning the public trust.
David Allison
Dean
Distinguished Professor
Provost Professor
Indiana University School of Public Health, Bloomington
Raul Cruz-Cano
Associate Professor of Biostatistics, Department of Epidemiology and Biostatistics
Indiana University School of Public Health, Bloomington
In times of great crisis, a country needs inspiring leaders and courageous ideas. Marcia McNutt and Michael M. Crow offer examples of both. Recognizing the urgency of our moment, they propose several innovative strategies for increasing access to research-grade knowledge.
Their attention to increasing the effectiveness of science communication is important. While efforts to improve science communication can strengthen trust in science, positive outcomes are not assured. A challenge comes from the fact that many people and organizations see science communication as a way to draw more attention to their people and ideas. While the pursuit of attention can do good, it can also amplify the challenges posed by misinformation and disinformation. These inadvertent outcomes occur when the pursuit of attention comes at the expense of the characteristics that make science credible in the first place.
Consider, for example, what major media companies know: sensationalism draws viewers and readers. For them, sensationalism works best when a presentation builds from a phenomenon that people recognize as true and then exaggerates it to fuel interest in “what happens next” (e.g., the plot of most superhero movies or the framework for many cable news programs).
In science, several communication practices are akin to sensationalism. Science communicators who suppress null results and engage in p-hacking (running many analyses and selectively reporting only those that reach statistical significance, creating the illusion of real relationships) can gain attention by increasing the probability of getting published in a scientific journal. Similarly, science communicators who exaggerate the generalizability of a finding or suppress information about methodological limitations may receive greater media coverage. Practices such as these can generate attention while producing misleading outcomes that reduce the public’s understanding of science.
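To see why p-hacking misleads, consider a minimal simulation sketch (our illustration, not drawn from McNutt and Crow; written in Python with NumPy and SciPy, with all sample sizes invented). Even when there is no real effect at all, testing enough outcomes virtually guarantees a publishable-looking p-value:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two "groups" drawn from the same distribution: there is no true effect.
# A p-hacker measures many outcomes and reports only those with p < 0.05.
n_outcomes, n_per_group = 20, 30
false_positives = 0
for _ in range(n_outcomes):
    a = rng.normal(size=n_per_group)
    b = rng.normal(size=n_per_group)
    _, p_value = stats.ttest_ind(a, b)
    if p_value < 0.05:
        false_positives += 1

# With 20 independent tests at alpha = 0.05, roughly one spurious
# "finding" is expected even though nothing real is being measured.
print(f"{false_positives} of {n_outcomes} null comparisons reached p < 0.05")
```

Reporting only the comparison that happens to cross the threshold, while discarding the other nineteen, is exactly the practice described above.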
A better way forward is to see the main goal of science communication as a form of service that increases accurate understanding. Adopting this orientation means that a communicator’s primary motivation is something other than gaining attention, influence, or prestige. Instead, the communicator’s goal is to treat the audience with so much reverence and respect that she or he will do everything possible to produce the clearest possible understanding of the topic.
Of course, many scientific topics are complex. A service-oriented approach to communication requires taking the time to learn about how people respond to different presentations of a phenomenon—and measuring which presentations produce the most accurate understandings. Fortunately, the emerging field of the science of science communication makes this type of activity increasingly easy to conduct.
Among the many brilliant elements of the McNutt-Crow essay are the ways their respective organizations have embraced service-oriented ideas. Each has broken with long-standing traditions about how science is communicated. Arizona State, through its revolutionary transformation into a highly accessible national university, and the National Academies of Sciences, Engineering, and Medicine through their innovations in responsibly communicating science, offer exemplars of how trust in science can be built. These inspiring leaders and their courageous ideas recognize the urgency of our moment and offer strong frameworks from which to build.
Arthur Lupia
Gerald R. Ford Distinguished University Professor
Associate Vice President for Research, Large Scale Strategies
Executive Director, Bold Challenges
University of Michigan
Some years ago, I conducted a content analysis of five of the leading American history textbooks sold in the United States. The premise of the study was that most young people get more information about the history of science and pathbreaking discoveries in their history courses than in the typical secondary school course in chemistry or physics. I wanted to compare the extent and depth of coverage of great science compared with the coverage of politics, historical events, and the arts, among other topics.
The results were somewhat surprising. First, there was almost no coverage of science at all in these texts. Second, the only topic that received more than cursory attention was the development of the atomic bomb. Third, in comparative terms, the singer Madonna received more coverage in these texts than did Watson and Crick’s discovery of the structure of DNA.
When I asked authors why they did not include more about science, their answers were straightforward. As one put it: “Science doesn’t sell, according to my publisher,” and “Frankly, I don’t know enough about science myself to write with confidence about it.”
This brings me to Marcia McNutt and Michael M. Crow’s important essay on producing greater public trust in science as well as some higher level of scientific and technological literacy. Trust is a hard thing to regain once it is lost. McNutt and Crow suggest significant ways to improve public trust in science. I would expand a bit further on their playbook.
Probably 30% of the American population knows little to nothing about science and has no desire to be educated about it and the discoveries that have changed their lives. They are lost. But a majority are believers in science and technology. As universities become multidisciplinary and, increasingly, institutions without borders, we must harness the abilities and knowledge that exist within these houses of intellect—and expertise beyond academic walls—to make the case for science as the greatest driver of American productivity and improved health care that we have.
When you survey people about science, you are apt to get more negative responses to very general questions than if you ask them to assess specific products and discoveries by scientists. The group Research!America consistently finds that the vast majority of US citizens approve of spending more federal money on scientific research. They applaud the invention of the laser and the computer chip, the development of the gene-editing tool CRISPR, and the vaccines derived from RNA research.
As McNutt and Crow suggest, it is now time to create a truly multidisciplinary effort to transfer knowledge from the laboratory to the public. A few scientists have the gift for translating their work in ways that lead to accurate and deeper public understanding of their scientific research and discoveries. But the vast majority do not. That can’t be their job. Here is where we need the talent and expertise of humanists, historians, ethicists, artists, and leading technology experts outside the academy, as well as the producers of stories, films, and devices that ought to be used for learning. New academic foci on science and technology, part of this movement of knowledge toward interdisciplinarity, ought to be fostered inside our universities.
The congressional hearings centered on the events of January 6 offer an excellent example of collaboration between legislators and Hollywood producers. The product was a coherent story that could be easily understood. We should teach scientists to communicate well with the communicators and to help make complex ideas both accurate and understandable to the public. There are many scientists who can do this—and a few who can tell their own stories. This suggests the importance of training excellent science reporters and interlocutors who can evaluate scientific results and translate those discoveries into interesting reading for the attentive public. These science writers need additional training in judging the quality of research so that they don’t publish stories based on weak science that leads to misinformation—such as the tiny, flawed studies that were presented to the public as fact and that fostered false beliefs about the causes of autism or about dietary cholesterol and heart disease.
We should be looking especially toward educating the young. The future of science and technology lies with their enthusiasms and beliefs. Their enthusiasm for learning about the health of women and minority groups, about global climate change, and about finding cures and preventions for disease rests ultimately on their knowledge and action. The total-immersion learning at Arizona State University is an excellent prototype of what is possible. Now those educational, total-immersion models—so easily understood by the young—should be developed and used in all the nation’s secondary schools. We can bypass the politically influenced textbook industry by working directly with secondary schools and even more directly with young people, who can use new technology better than their elders.
We have witnessed a growth in autobiographies by scientists, especially by women and members of minority groups. More scientists should tell their stories to the public. We also need gifted authors, such as Walter Isaacson or, before him, Walter Sullivan and Stephen Jay Gould, telling the stories of extraordinary scientists and their discoveries. Finally, we should be more willing to advertise ourselves. We have an important story to tell and work to be done. We should unabashedly tell those stories through organized efforts by the National Academies (such as their biographies of women scientists), research universities, and well-trained expositors of science. Through these mechanisms we can build much improved public understanding of science and technology, and the trust that such understanding will bring.
Jonathan R. Cole
John Mitchell Mason Professor of the University
Provost and Dean of Faculties (1989–2003)
Columbia University
CHIPS and Science Opens a Door for Society
In August 2022, President Biden signed the CHIPS and Science Act into law, a bill my colleagues and I passed to ensure US leadership in semiconductor development and innovation across a multitude of sectors. The law secured historic authorizations for American manufacturing, for supporting our workforce in science, technology, engineering, and mathematics (STEM), and for bolstering the nation’s research competitiveness in emerging technologies. A year later, Congress must find the political will to fund the science component of the act, while ensuring these investments are socially and ethically responsible for all Americans.
In recent decades, emerging technologies have been quickly perfected and have rapidly proliferated, transforming our economy and society. Powerful capabilities are now at our fingertips, whether through mass production or the digital superhighway brought on by fiber optics. What we know today about various materials and energy uses differs dramatically from when we were first harnessing the capabilities of plastics, tool and die making, and the combustion engine. Are we capable of learning from a past when we could not see as clearly into the future as we can today? How can we create a structure to adjust, or more ethically adapt, to changing environments and weigh social standards for implementing new technology?
Today, we see that many emerging technologies will continue to have profound impacts on the lives of American citizens. Technologies such as artificial intelligence and synthetic biology hold tremendous promise, but they also carry tremendous risks. AI and quantum cryptography, for example, will drastically influence the privacy of the average internet user. These are known risks against which we can take steps, including developing legislation to mitigate them, such as a bill I authored, the Privacy Enhancing Technology Research Act. There is also a universe of unknown risks. But even in those cases we have tools and expertise to think through what those risks might be and how to assign value to them.
The ethical and societal considerations in CHIPS and Science were designed to empower scientists and engineers to consider the ethical, social, safety, and security implications of their research throughout its lifecycle, potentially mitigating any harms before they happen. And where researchers lack the tools or knowledge to consider these risks on their own, they might turn to professional ethicists or consensus guidelines within their disciplines for help.
Incorporating these considerations into our federal agencies’ research design and review processes is consistent with the American approach of scientific self-governance. The enacted scientific legislation plays to the strengths of our policymaking in that we entrust researchers to use their intellectual autonomy to create technological solutions for the potential ethical and societal challenges of their work and give them the freedom to pursue new research directions altogether.
In prioritizing STEM diversity in our law, the intent was not only to ensure representation in the fields developing and applying the globe-shaping technologies of the future, but also to put value on the notion that American science can be more culturally just and equitable. This occurs when diverse voices are in the research lab and at the commercialization table.
Seeing the CHIPS and Science Act fully funded remains one of my top priorities. New and emerging technologies, such as AI, quantum computing, and engineering biology, have a vast potential to re-shore American manufacturing, create sustainable supply chains, and bring powerful benefits to all Americans everywhere. However, these societal and ethical benefits cannot be realized if we are not also intentional in considering the societal context for these investments. If we do not lead with our values, other countries whose values we may not share will step in to fill the void. It is time for us to revitalize federal support for all kinds of research and development—including social and ethical initiatives—that have long made the United States a beacon of excellence in science and innovation.
Rep. Haley Stevens
Michigan, 11th District
Ranking Member of the Committee on Science, Space, and Technology’s Subcommittee on Research and Technology
As David H. Guston intimates, the CHIPS and Science Act presents a new opportunity for the National Science Foundation to make another important step in fostering the social aspects of science. The act can also champion existing and emerging efforts focused on understanding the way social science can deeply inform and shape the entire scientific enterprise.
By describing the historical arc of the evolving place of social science at NSF, Guston illustrates how the unbalanced, syncopated, and often arrhythmic dance between NSF and social science did not necessarily benefit either partner. I am optimistic about what specific sections of the CHIPS and Science Act directly require, tacitly imply, and conceptually allude to. The history of NSF is replete with examples of scientific research that fundamentally altered the way humans interact, communicate, and live in a shared world. Contemporary issues—in such diverse areas as rising climate variability and the place of artificial intelligence in our everyday interactions—demand a substantive increase in support for social science. This research is critically necessary to understand the social impacts of our changing environment and technological systems, and how to design and develop solutions and pathways that equitably center humanity. As the world keeps showing us, we are on the cusp of a new moment. This new moment needs to be driven by social science and social scientists in concert with natural and physical scientists. I use the term concert, with its reference to artistic and sonic creative collaborations, deliberately to evoke a different framework of collaborative and interdisciplinary effort. Part of the solution is to always remember that science is a human endeavor.
In thinking about the place of social science in the next evolution of interdisciplinary research, I believe the cupcake metaphor is instructive. As a child of the 1970s, I remember the cupcake was a birthday celebration staple. I really liked the cake part but was greatly indifferent to the frosting or sprinkles. If I had to choose, I would always select the cupcake with sprinkles for one reason: they were easy to knock off. In the production of science, social scientists can often feel like sprinkles on a cupcake: not essential. Social science is not the egg, the flour, or the sugar. Sprinkles are neither in the batter, nor do they see the oven. Sprinkles are a late addition. No matter the stylistic or aesthetic impact, they never alter the substance of the “cake” in the cupcake. Certain provisions of the CHIPS and Science Act hold the potential to chart a pathway for scientific research that makes social science a key component of the scientific batter, baking social scientific knowledge, skill, and expertise into twenty-first-century scientific “cupcakes.”
Rayvon Fouché
Professor in Communication Studies and the Medill School of Journalism
Northwestern University
Former Division Director, Social and Economic Sciences
National Science Foundation
David H. Guston expertly describes how provisions written into the ambitious CHIPS and Science Act could make ethical and societal considerations a primary factor in the National Science Foundation’s grantmaking priorities, thereby transforming science and innovation policy for generations to come.
Of particular interest, Guston makes reference to public interest technology (PIT), a growing movement of practitioners in academia, civil society, government, and the private sector to build practices to design, deploy, and govern technology to advance the public interest. Here, I extend his analysis by applying core concepts from PIT that have been articulated and operationalized by the Public Interest Technology University Network (PIT-UN), a 64-member network of universities and colleges that I oversee as director of public interest technology for New America. (Guston is a founding member of PIT-UN and has led several efforts to establish and institutionalize PIT at Arizona State University and in the academic community more broadly.)
As Guston describes, the CHIPS and Science Act “expand[s] congressional expectations of more integrated, upstream attention to ethical and societal considerations” in NSF’s process for awarding funds. This is undoubtedly a step in the right direction. However, operationalizing the concept of “ethical and societal considerations” requires that we get specific about who researchers must include in their process of articulating foreseeable risks and building partnerships to “mitigate risk and amplify societal benefit.”
Public interest technology asserts that the needs and concerns of people most vulnerable to technological harm must be integrated into the process of designing, deploying, and governing technology. While existing methods to assess ethical and societal considerations of technology such as impact evaluations or user-centered design can be beneficial, they often fail to adequately incorporate the needs and concerns of marginalized and underserved communities that have been systematically shut out of stakeholder conversations. Without a clear understanding of how specific communities have been excluded from technology throughout US history—and a shared analysis of how those communities are continually exploited or made vulnerable to the negative impacts of technology—we run the risk of not only repeating the injustices of the past, but also embedding biases and harmful assumptions into emerging technologies. Frameworks and insights from interdisciplinary PIT scholars such as Ruha Benjamin, Cathy O’Neil, Meredith Broussard, and Afua Bruce that map relationships between technology and power structures must inform NSF’s policymaking if the funds made available through the CHIPS and Science Act are to effectively address ethical and societal considerations.
Furthermore, a robust operationalization of these considerations will require a continual push to develop and extend community partnerships in a way that expands our notion of the public. Who should be included in the definition of “the public”? Does it include under-resourced small businesses and nonprofits? People who are vulnerable to tech abuse? People living on the front lines of climate change? In advancing this broader understanding of the public, strategic partnership with international organizations becomes essential, including cooperation with emerging research entities that focus on the ethical issues raised by emerging technologies and artificial intelligence, such as the Distributed Artificial Intelligence Research Institute, the Algorithmic Justice League, the Center for AI and Digital Policy, the Electronic Frontier Foundation, and the OECD AI Policy Observatory, among others.
Guston points to participatory technology assessments undertaken through the NSF-funded Center for Nanotechnology in Society at Arizona State University as an example of how to engage the public in understanding and mitigating technological risks. Universities and other NSF-funded institutions must invest more in these kinds of community partnerships to regularly challenge and update our understanding of “the public,” to ensure that technological outputs are truly reflective of the voices, perspectives, and needs of the public as a whole, not only those of policymakers, academics, philanthropists, and technology executives.
Andreen Soley
Director of Public Interest Technology at New America and the Public Interest Technology University Network
These comments draw in part from recommendations crafted by PIT-UN scholars to NSF’s request for information on “Developing a Roadmap for the Directorate for Technology, Innovation, and Partnerships.”
The most important word in David H. Guston’s article addressing the societal considerations of the CHIPS and Science Act occurs in the first sentence: “promised.” For scholars and practitioners of science and technology policy, the law has created genuine excitement. This is a dynamic moment, where new practices are being envisioned and new institutions are being established to link scientific research more strongly and directly with societal outcomes.
Many factors will need to converge to realize the promise that Guston describes. One set of contributors that is crucial, yet often overlooked, in this changing ecosystem of science and technology policy is science philanthropies. Science philanthropy has played a key role in the formation and evolution of the current research enterprise, and these funders are especially well positioned to actualize the kind of use-inspired, societally oriented scholarship that Guston emphasizes. How can science philanthropy assist in achieving these goals? I see three fruitful areas of investigation.
The first is experimenting with alternative approaches to funding. Increasingly, funders from both philanthropy and government are experimenting with different ways of financing scientific research to respond rapidly to scientific and societal needs. Some foundations have explored randomizing grant awards to address the inherent biases of peer review. New institutional arrangements, called Focused Research Organizations, have been established outside of universities to undertake applied, use-inspired research aimed at solving critical challenges related to health and climate change. There is the capacity for science philanthropies to do even more. For instance, participatory grantmaking is emerging as a complementary approach to allocating funds, in which the expected community beneficiaries of a program have a direct say in which awards are made. While this approach has yet to be directly applied to science funding, such alternative decisionmaking processes offer opportunities to place societal implications front and center.
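One way to picture how a grant lottery of this kind can blunt reviewer bias: peer review sets a quality bar, and chance, rather than fine-grained score differences, decides among the proposals that clear it. The sketch below is a hypothetical illustration only, not any foundation’s actual procedure; the proposal IDs, scores, threshold, and award count are all invented.

```python
import random

# Hypothetical partial grant lottery. Peer review screens for quality;
# random selection then decides among all proposals above the bar, so
# small (and bias-prone) score differences no longer determine funding.
review_scores = {"P01": 4.6, "P02": 3.1, "P03": 4.2, "P04": 4.9, "P05": 3.8}
quality_bar = 4.0   # minimum peer-review score to enter the draw (invented)
n_awards = 2

eligible = sorted(pid for pid, s in review_scores.items() if s >= quality_bar)
random.seed(2024)   # a real lottery would use an audited randomness source
funded = random.sample(eligible, k=min(n_awards, len(eligible)))
print(f"eligible: {eligible}; funded by lottery: {funded}")
```

Under this design, the top-scored proposal is no likelier to be funded than any other proposal that clears the bar, which is precisely the point of randomization.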
The second is making connections and filling knowledge gaps across disciplines and sectors. Interdisciplinary research is notoriously difficult to fund through conventional federal grantmaking programs. Science philanthropies, because of the wide latitude they have in designing and structuring their programs, are uniquely situated to sponsor such scholarship. As an example, the Energy and Environment program that I oversee at the Alfred P. Sloan Foundation is focused on advancing interdisciplinary social science and bringing together different perspectives and methodologies to ask and answer central questions about energy system decarbonization. The program supports interdisciplinary research topics such as examining the societal dimensions of carbon dioxide removal technologies, a project in which Guston is directly involved; highlighting the factors that are vital in accelerating the electrification of the energy system; and concentrating on the local, place-based challenges of realizing a just, equitable energy transition. Additional investments from science philanthropies can expand and extend interdisciplinary scholarship across all domains of inquiry.
The third is learning through iteration and evaluation. Guston traces the historical context of how societal concerns have always been present in federal science funding, even if their role has been obscured or marginalized. Science philanthropies can play a pivotal role in resourcing efforts to better understand the historical origins and subsequent evolution of the field of science and technology policy. For this reason, the Sloan Foundation recently funded a series of historically oriented research projects that will illuminate important developments related to the practices and institutions of the scientific enterprise. Further, science philanthropies should do more to encourage retrospective evaluation and impact assessment to inform how society is served by publicly and privately funded research. To that end, over the past three years I have helped to lead the Measurement, Evaluation, and Learning Special Interest Group of the Science Philanthropy Alliance, a forum for alliance members to come together and learn from one another about different approaches and perspectives on program monitoring and evaluation. As Guston writes, there is much promise in the CHIPS and Science Act. Science philanthropies will be essential partners to achieve its full potential.
Evan S. Michelson
Program Director
Alfred P. Sloan Foundation
We agree with David Guston’s assertion that the CHIPS and Science Act of 2022, which established the National Science Foundation’s Directorate for Technology, Innovation, and Partnerships, presents a significant opportunity to increase the public benefits from—and minimize the adverse effects of—US research investments.
We also agree that the TIP directorate’s focus on public engagement in research is promising for amplifying scientific impact. Our experiences leading the Transforming Evidence Funders Network, a global group of funders interested in increasing the societal impact of research, are consistent with a recent NSF report, which states that engaged research “conducted via meaningful collaboration among scientist and nonscientist actors explicitly recognizes that scientific expertise alone is not always sufficient to pose effective research questions, enable new discoveries, and rapidly translate scientific discoveries to address society’s grand challenges.”
We have also found that engaged research could be an essential strategy for identifying, anticipating, and integrating into science the “ethical and societal considerations” mentioned in the CHIPS and Science Act. The NSF-funded centers for nanotechnology in society provide an illustrative example in developing and normalizing participatory technology assessment. As one center notes on its website, the centers use engagement and other tactics to build capacity for collaboration among researchers and the public, allowing the groups to work together to “guide the path of nanotechnology knowledge and innovation toward more socially desirable outcomes and away from undesirable ones.”
But to ensure that engaged research can deliver on the potential these collaborative methods hold, we argue for an expansion of funding for rigorous studies that address questions about when engagement and other strategies are effective for improving the relevance and use of research for societal needs—and who benefits (and who doesn’t) from these strategies. Such studies will increase our understanding of the conditions that enable engagement and other tactics to deliver their intended impacts. For example, scholarship shows that allowing sufficient time for relationship-building between researchers and decisionmakers is important for unlocking the potential of engaged research. Findings from such studies could, and should, shape future research investments aimed at improving societal outcomes.
Efforts to expand understanding in this area—Guston calls these efforts “socio-technical integration research”—include studies on the use of research evidence, science and technology studies, decision science, and implementation science, among several other areas. But so far, this body of research has been relatively siloed and has inconsistently informed research investments. The CHIPS and Science Act may help spur research investments in this important area with its requirement that NSF “make awards to improve our understanding of the impacts of federally funded research on society, the economy, and the workforce.” And the NSF’s TIP directorate provides a helpful precedent for funding studies that develop an understanding of when, and under what conditions, research drives change in decision-making and when (and for whom) research improves outcomes. But much must still be done to meet the need.
The CHIPS and Science Act and the TIP directorate present an important opportunity to scale research efforts that better reflect societal and ethical considerations. To support progress in this area, we have begun coordinating grantmakers in the Transforming Evidence Funders Network to build evidence about the essential elements of success for work at the intersection of science and society. We invite funders to connect with us to make use of the opportunity presented by these shifts in federal science funding and to join us as we build knowledge about how to maximize the societal benefits—and minimize the adverse effects—of research investments.