“It was the best of times, it was the worst of times,” Charles Dickens famously began in A Tale of Two Cities. So it was for scientific research in early 2020 as a number of forces came together to create a unique set of opportunities and challenges.
First, the COVID-19 pandemic itself. The disease was so contagious and so serious that physical, human-to-human proximity was canceled except for interactions essential to life. Laboratories closed; lecture theaters and libraries lay empty; people barely left their homes.
Second, the emergence of technology-mediated collaboration. Video conferencing became the new meeting space; social media were repurposed for exchanging real-time information and ideas; and digital architects put their skills to building bespoke platforms.
Third, the scientific world united around a common purpose: generating the evidence base that would end the pandemic. Goodwill and reciprocity ruled. We forgot about academic league tables, promotion bottlenecks, h-indices, or longstanding rivalries. We switched gear from competing to collaborating. We pooled our data and our expertise for the good of humanity (and, perhaps, with a view to saving ourselves and our loved ones). And not to be overlooked, the red tape of research governance was cut. Our institutions and funders allowed us—indeed, required us—to divert our funds, equipment, and brainpower to the only work that now mattered. Journal paywalls were torn down. It became possible to build best teams from across the world, to get fast-track ethics approval within hours rather than weeks, to generate and test bold hypotheses, to publish almost instantly, and to replicate studies quickly when the science required it. The downside, of course, was the haystack of preprints that nobody had time to peer-review, but that’s a subject for another day.
In “How to Catalyze a Collaboration” (Issues, Summer 2023), Annamaria Carusi, Laure-Alix Clerbaux, and Clemens Wittwehr describe one initiative that emerged from those strange, wonderful, and terrifying times. The project, dubbed CIAO—short for Modelling COVID-19 Using the Adverse Outcome Pathway Framework—happened because a handful of toxicologists and virologists came together, on a platform designed for exchanging pictures of kittens, to solve an urgent puzzle. Through 280-character tweets and judiciously pitched hashtags, they began to learn each other’s language, reasoned collectively and abductively, and brought in others with different skills as the initiative unfolded.
Online collaborative groups need two things to thrive: a strong sense of common purpose, and a tight central administration (to do the inevitable paperwork, for example). In addition, as the sociologist Mark Granovetter has observed, such groups offer “the strength of weak ties”—people we hardly know are often more useful to us than people we are close to (because we already have too much in common with the latter). An online network tends to operate both through weak ties (the “town square,” where scientists from different backgrounds get to know each other a bit better) and through stronger ties (the “clique,” where scientists who find they have a lot in common peel off to share data and write a paper together).
The result, Carusi and her colleagues say, was 11 peer-reviewed papers and explanations of some scientific mysteries—such as why people with COVID-19 lose their sense of smell. Congratulations to the CIAO team for making the best of the “worst of times.”
Trisha Greenhalgh
Professor of Primary Care Health Sciences
University of Oxford, United Kingdom
Annamaria Carusi, Laure-Alix Clerbaux, and Clemens Wittwehr candidly and openly describe their technical and soft-skill experiences in fostering a global collaboration to address COVID-19 during the pandemic, drawing from an existing Adverse Outcome Pathway approach developed within the Organisation for Economic Co-operation and Development. The collaborative, nicknamed CIAO (by the Italian members who would like to say, “Ciao COVID!”), found much-needed structure in the integrative construct of adverse outcome pathways (AOPs), or structured representations of biological events. In particular, one tool the researchers adopted—the AOP-Wiki—provided an increasingly agile web-based application that offered contributors a place and space to work on project tasks regardless of time zone. In addition, the AOP structure and AOP-Wiki both have predefined (and globally accepted) standards that obviate the need for semantics debates.
Yet the technical challenges were meager compared with the social challenges of people “being human” and the practical challenges of bringing people together when the world was essentially closed and physical interactions were very limited. Carusi, Clerbaux, Wittwehr, and their colleagues stepped up during this time of crisis by exercising not only scientific ingenuity but also social and emotional intelligence. They helped bring about, in essence, a paradigm shift. There was no choice but to abandon traditional in-person approaches that were no longer feasible and to embrace virtual and web-based applications. Collaborative leads leveraged their own social networks in virtual space to rapidly make connections that critically helped the AOP framework become the proverbial (and virtual) “campfire” for bringing the collaborative together.
Importantly, this work was not constrained by geography or language. For instance, the AOP-Wiki allowed for asynchronous project management by people living across 20 countries, breaking down language barriers through incorporation of a globally accepted lingua franca for documenting and reporting COVID-19 biological pathways. Data entered into the AOP-Wiki were controlled using globally accepted standards and data management practices, such as controlled data extraction fields, vocabularies and ontologies, and FAIR (findable, accessible, interoperable, and reusable) data standards. These ingredients provided the collaborative with its perfect campfire for cooking up COVID-19 pathways. All that was needed were the “enzymes” to get it all digested. That’s where the authors stepped in, gently “simmering” the collaborative toward a banquet of digitally documented COVID-19 web-based applications.
The collaborative’s resulting work embodied the adage that where there is a will, there is a way. The group’s way was greatly facilitated by a willingness to accept and leverage new(er) technology and methods (i.e., web applications and digital data) that afford humans—and their computers—flexibility and efficiency across the globe. Novel virtual/digital models enhanced the collaborative’s experience. Notably, the collaborative’s acceptance and use of the AOP framework and the AOP-Wiki’s data management interface means the COVID-19 AOPs are digitally documented, readable by both machines and humans, and globally accessible. The AOP framework has not only catalyzed the collaboration but also prospectively catalyzes the use of generative artificial intelligence to find and refine additional data with similar characteristics. This means the COVID-19 AOPs may evolve with the virus, updating over time as new information is automatically ingested.
Michelle Angrish
Toxicologist
US Environmental Protection Agency
Centering Equity and Inclusion in STEM
As the United States seeks to tap every available resource for talent and innovation to keep pace with global competition, institutional leadership in building research capacity at historically Black colleges and universities (HBCUs) and other minority-serving institutions (MSIs) is essential, as Fay Cobb Payton and Ann Quiroz Gates explain in “The Role of Institutional Leaders in Driving Lasting Change in the STEM Ecosystem” (Issues, Summer 2023). Transformational leadership, such as that displayed by Chancellor Harold Martin as North Carolina Agricultural and Technical State University elevates itself to the Carnegie R1 designation of “very high research activity,” and by former President Diana Natalicio in positioning the University of Texas at El Paso as an R1 institution, provides role models for other institutions.
Payton and Gates argue elegantly for use of the National Science Foundation’s Eddie Bernice Johnson INCLUDES Theory of Change model. For fullest effect, I suggest that this model include two additional elements for institutional leaders to consider: the role of institutional board members and the role of minority technical organizations (MTOs). To achieve improved and lasting research capacity, the boards at HBCUs and MSIs must view research as part of the institutional DNA. Many of these institutions are in the midst of transforming from primarily teaching institutions to both teaching and research universities. For public institutions, the governors or oversight authorities should appoint board members with research experience and members who have significant influence in the business community, as one outcome of university research is technology commercialization. HBCUs and MSIs need board members with “juice”—because, as the saying goes, “if you’re not at the table, you’re on the menu.”
Finally, as the nation witnesses increasing enrollments at HBCUs and MSIs, the role of minority technical organizations cannot be overstated. If we are to achieve the National Science Board’s Vision 2030 of a robust, diverse, domestic workforce in science, technology, engineering, and mathematics—the STEM fields—these organizations are crucial. MTOs such as the National Organization for the Professional Advancement of Black Chemists and Chemical Engineers and the Society for the Advancement of Chicanos/Hispanics and Native Americans in Science provide role models for STEM students, hold annual conferences for students and professionals, and foster retention of Black and brown students in the STEM fields. As part of the NSF INCLUDES ecosystem, let’s also not forget the major events that recognize outstanding individuals at HBCUs and MSIs, such as the Black Engineer of the Year awards and the Great Minds in STEM annual conferences.
Victor McCrary
Vice President for Research, University of the District of Columbia
Vice Chair, National Science Board
As the president of a national foundation focused exclusively on postsecondary education, I was especially intrigued by Fay Cobb Payton and Ann Quiroz Gates’s ambitious recommendations for the philanthropic community. The authors challenge traditional foundations to make bigger and lengthier investments in higher education, especially minority-serving institutions (MSIs). At ECMC Foundation, we do just this. By making large, multi-year investments in projects led by public two- and four-year colleges and universities, intermediaries, and even start-ups through our program-related investments, we aim to advance wholesale change for broad swaths of students, particularly those who come from underserved backgrounds.
One project worth noting is the Transformational Partnerships Fund. Along with support from Ascendium Education Group, the Kresge Foundation, and the Michael and Susan Dell Foundation, we have created a fund that provides support to higher education leaders interested in recalibrating the strategic trajectory of their institutions in service to students. Such recalibrations might be mergers, course sharing, or collaborations that streamline back-end administrative functions. Although this fund does not offer large grants or long-term support, it nonetheless helps higher education leaders understand more deeply how they need to respond to the challenges that lie ahead for their colleges and universities.
Payton and Gates advance a compelling moral argument about the need to better support MSIs and the students they serve, especially in STEM-related majors. What they do not emphasize, however, are specific institutional incentives that will drive lasting improvements in diversity and inclusion. Presidents and chancellors report to trustees, whose primary fiduciary obligation is to keep their institutions in business. Barring incentives that might make significant change possible, college leaders often stick with the status quo, preferring tactical, stop-gap measures to strategic reform.
Arguments for institutional change that appeal to our better angels, although earnest and well-intentioned, have failed thus far to significantly alter the postsecondary education landscape for our most vulnerable students. The consequence is that too many students choose to leave before completing their degree. According to the National Student Clearinghouse Research Center, the population of students with some college and no credential has reached 40.4 million. The loss of talent in STEM-related and other disciplines is staggering, and a reversal of institutional inertia is required to alter course.
Still, the authors offer a theory of change that makes a positive, forward-looking contribution to our thinking about institutional transformation. I eagerly await the authors’ future work as they translate their powerful worldview into a bold set of recommendations that offer up key incentives for higher education leaders to employ as they address the challenges their institutions face in postpandemic America.
Jacob Fraire
President, ECMC Foundation
Fay Cobb Payton and Ann Quiroz Gates summarize the critical challenges and opportunities ahead for science, technology, engineering, and mathematics education. The STEM ecosystem is vast, complex, and stubbornly anchored in inertia. The authors present a compelling vision for the future: institutional excellence will be defined by inclusion, actions will be centered on accountability, and the effectiveness of leadership will be measured by the ability to drive systemic and sustained culture change.
Achieving inclusive excellence begins with a commitment to change the STEM culture. Here is a to-do list requiring skillful leadership:
Redefine the STEM curriculum, especially at the introductory level.
Resist the impulse to require STEM students to go too deep too soon. Instead, encourage them to explore the arts, humanities, and social sciences.
Review admissions criteria and STEM course prerequisites.
Reward instructors and advisers who practice the skills of equitable and inclusive teaching and mentoring.
Increase representation of persons heretofore excluded from STEM by valuing relevant lived experiences more than pedigree.
We yearn for leaders with the vision, strength, and patience to drive lasting culture change. We must nurture the next generation of leaders so that today’s modest changes will be amplified and sustained.
Inclusive excellence, already challenging, is made more difficult because of the pressures exerted by powerful forces. Many institutions succumb to the false promise of external validation based on criteria that are contradictory to the values of equity and inclusion. The current system selects for competition instead of community, exclusion instead of inclusion, a white-centered culture instead of equity. The “very high research activity” (R1) classification for institutions is based on external funding, the number of PhD degrees and postdoctoral researchers, and citations to published work. In the current US News and World Report “Best Colleges” ranking, half of an institution’s score is based on just four (of 24) criteria: six-year graduation rates, reputation, standardized test scores, and faculty salaries.
It is time to disrupt the incentives system, as the medical scholar Simon Grassmann recently argued in Jacobin magazine. It is wrong to believe that quantitative metrics such as the selectivity of admissions and the number of research grants are an accurate measure of the quality of an institution. Instead, let us develop the means to recognize institutions that make a genuine difference for their students and employees—call it an “Institutional Delta.” Students will learn and instructors will thrive when the learning environment is centered on belonging and the campus commits to the success of everyone. Finding reliable ways to measure the Institutional Delta and assess institutional culture will require new qualitative approaches and courageous leadership. An important lever is the accreditation process, in which accrediting organizations can explicitly evaluate how well an institution’s governing board understands and encourages equity and inclusion.
The STEM culture must be disrupted so that it is centered on equity and inclusion. This requires committed leaders with the courage to battle the contradictions of an outdated rewards system. Culture disruptors must be supported by governing boards and accreditation agencies. Let leaders lead!
David J. Asai
Senior Director, Center for the Advancement of Science Leadership and Culture
Howard Hughes Medical Institute
Fay Cobb Payton and Ann Quiroz Gates emphasize that systemic change to raise attainment of scientists from historically underrepresented backgrounds must engage stakeholders at multiple levels and from multiple organizations. These stakeholders include positional and grassroots leaders in postsecondary institutions, industry leaders, and public and private funders. The authors posit that “revisiting theories of change, understanding the way STEM academic ecosystems work, and fully accounting for the role that leadership plays in driving change and accountability are all necessary to transform a system built upon historical inequities.”
The National Academies of Sciences, Engineering, and Medicine report Minority Serving Institutions: America’s Underutilized Resource for Strengthening the STEM Workforce, released in 2019, highlighted that such institutions graduate disproportionately high shares of students from minoritized backgrounds in STEM fields. The report found that minority-serving institutions (MSIs) receive significantly less federal funding than other institutions and recommended increased investment in MSIs for their critical work in educating minoritized STEM students. To reinforce this work, the report also called for expanding “mutually beneficial partnerships” between MSIs and other higher education institutions, industry stakeholders, and public and private funders.
Payton and Gates rightfully recommend that strengthening the ecosystem to diversify science should “build initiatives on MSIs’ successes.” Yet the National Academies report on MSIs noted that research on why and how some MSIs are so successful in educating minoritized STEM students has been scant. Conversely, most research on this topic has been conducted in highly selective, historically white institutions. Paradoxically then, most of this research has neglected the institutional contexts that many racially minoritized STEM students navigate, including the MSI contexts in which they are often more likely to succeed.
The authors also call to revisit organizational theories of change as a step toward transforming STEM ecosystems in more equitable directions. Yet the social science research on higher education organizational change has historically been disconnected from research on improving STEM education. The American Association for the Advancement of Science report Levers for Change, released in 2019, highlighted this very disconnection as a key barrier to reform in undergraduate STEM education.
Even research that has attempted to link higher education organizational studies with STEM education reform has primarily been conducted in highly selective, historically and predominantly white institutions that are predicated on exclusion. Limited organizational knowledge about how MSIs educate minoritized students, and about how that knowledge can be adapted to different institutional contexts, has hindered the development of a STEM ecosystem predicated on inclusion. Enacting Payton and Gates’s recommendation to revisit organizational theories of change to transform STEM ecosystems will require that scholarly communities and funders generate more incentives and opportunities to conduct research that integrates higher education organizational change, STEM reform approaches, and the very MSI institutional contexts that can offer models of inclusive excellence in STEM. Such social science research can yield the most promising leadership tools to transform STEM ecosystems toward inclusive excellence.
Anne-Marie Núñez
Executive Director, Diana Natalicio Institute for Hispanic Student Success
Distinguished Centennial Professor, Educational Leadership and Foundations
The University of Texas at El Paso
Fay Cobb Payton and Ann Quiroz Gates highlight the role of leadership in transforming the academic system built upon the nation’s historical inequities. Women, African Americans, Hispanic Americans, and Native Americans remain inadequately represented in science, technology, engineering, and mathematics relative to their proportions in the larger population.
For the United States to maintain leadership in and keep up with expected growth of STEM-related jobs, academic institutions must envision and embrace strategies to educate the future diverse workforce. At the same time, federal funding agencies need to support strategies to encourage universities to pursue strategic alliances with the private sector to recruit, train, and retain a diverse workforce. We need visionary strategies and intentionality to make changes, with accountability frameworks for assessing progress.
Leadership is one key element in strategies of change. Thus, Payton and Gates perspicuously illustrate the role of leadership in advancing the STEM research enterprise at minority-serving institutions. At the University of Texas at El Paso, the president established new academic programs, offered open admissions to students, recruited faculty members from diverse groups, and built the necessary infrastructure to support research and learning. As a result, in a matter of a few years the university achieved the Carnegie “R1” designation signifying “very high research activity” while transforming the university community to reflect the diverse community it serves. Thanks to such visionary leadership, it now leads R1 universities in STEM graduate degrees awarded to Hispanic students.
At North Carolina Agricultural and Technical State University, the chancellor has likewise transformed the institution, increasing student enrollment by nearly 30% in 12 years and doubling undergraduate graduation. Because of the strategies intentionally implemented by the chancellor, the university during the past decade experienced a more than 60% increase in its research enterprise supported by new graduate programs. Similarly, the president of Southwestern Indian Polytechnic Institute, a community college for Native Americans, has led in forging partnerships with Tribal colleges, universities, and the private sector to ensure that graduates can develop successful careers or pursue advanced studies.
As Payton and Gates note, citing as an exemplar the $1.5 billion Freeman Hrabowski Scholars Program established in 2022 by the Howard Hughes Medical Institute, the private sector has a major role to play in training a diverse STEM workforce. Every other year, the program will appoint 30 early-career scientists from diverse groups, supporting a total of 150 scientists over a decade. This long-term project will likely yield outcomes to transform the diversity of the nation’s biomedical workforce.
It is clear, then, that the nation needs to embrace sustained and multipronged strategies involving academic institutions, government agencies, private enterprises, and even families to achieve an equitable level of diversity in STEM fields. It is also clear that investments in leadership development and academic infrastructure can help foster the growth of a more capable and diverse workforce and advance the nation’s overall innovation capability. The good news is that Payton and Gates provide proof positive that institutions and partnerships can achieve the desired outcomes.
Jose D. Fuentes
Professor of Atmospheric Science
Pennsylvania State University
The author chairs the Committee on Equal Opportunities in Science and Engineering, chartered by Congress to advise the National Science Foundation on achieving diversity across the nation’s STEM enterprise.
Fay Cobb Payton and Ann Quiroz Gates remind us that despite some positive movement, the United States has substantially more to do in broadening participation in science, technology, engineering, and mathematics—the STEM fields. The authors promote two often overlooked contributions to change: the key role of institutional leaders and the importance of minority-serving institutions. Even with their additions, however, I believe there is a significant deficiency in building out an appropriate theory of change to address the overall challenges we face in STEM.
The authors recount that the National Science Foundation’s Eddie Bernice Johnson INCLUDES Initiative was established to leverage the many discrete efforts underway. They note that “episodic efforts, or those that are not coordinated, intentional, or mutually reinforcing, have not proven effective.” They advocate revisiting theories of change, understanding how STEM academic ecosystems work, and fully accounting for the role that leadership plays in driving change and accountability. But while I strongly agree with their case—as far as it goes—I believe there is considerably more that ought to be added to the theory of change embraced by the INCLUDES Initiative to make it more useful and impactful.
I posit that to successfully guide STEM systems change at scale, a theory of change ought to incorporate at least three (simplified) dimensions:
Institution. At its core, change is local. Classroom, department, and institution levels are where policies, practices, and culture have to change.
Institution/national interface. Initiatives must have bidirectional interaction. National initiatives influence institutions, and a change by an institution reflects back to a national initiative, hopefully multiplying its success through adaptation by other network members.
Multiple dimensions of change. Changes in policy and culture must be translated into specific changes in pedagogy, student belonging, and faculty professional development. We also need better ways to track the translation of these changes into the STEM ecosystem, such as graduating a more diverse class of engineers.
The INCLUDES theory of change focuses almost exclusively on the second dimension. It presents an important progression for initiatives from building collaborative infrastructure to fostering networks, then leveraging allied efforts. It captures the institution/national interface with a box on expansion and adaptation of better-aligned policies, practices, and culture, but only alludes to the institutional change on which such advances rest. Payton and Gates add to the theory by focusing mostly on the missing role of leadership in fostering institutional change. They describe examples of key leaders who have been critical drivers of specific changes. They also devote attention to multiple dimensions of change by describing important successes that minority-serving institutions have had in increasing student graduation in STEM and to the policy and program changes by leadership that made such change possible.
Even after Payton and Gates’s critical additions, I’m left with deep discomfort over a major omission: in the theory of change they offer for the STEM ecosystem, there is virtually nothing specific to STEM activity in it. While well-conceived, it appears entirely process-oriented and doesn’t directly translate to metrics enabling an assessment of progress toward broadening participation in STEM. Surely increased collaboration and changes in policy and culture are imperative, but they can apply to virtually any societal policy shift. What makes the INCLUDES theory of change applicable to whether the United States can produce a more diverse engineering graduating class?
Having offered this challenge, I ask readers to stay tuned for more from this quarter.
Howard Gobstein
Senior Vice President for STEM Education and Research Policy
Association of Public and Land-grant Universities
Creating transformative (not transactional), intentional, and lasting change in higher education—specifically in a STEM ecosystem—requires continuity, commitment, and lived experiences from leaders who are not afraid to lead change and disrupt inefficient policies and practices that do not support the success of all students in an equitable context. The long-standing work of higher education presidents or chancellors such as Diana Natalicio at the University of Texas at El Paso, Harold Martin at the North Carolina Agricultural and Technical State University, Freeman Hrabowski at the University of Maryland Baltimore County, and Tamarah Pfeiffer at the Southwestern Indian Polytechnic Institute would not have materialized had these leaders been conflict-averse.
What do these dynamic leaders have in common? They were responsible for leading minority-serving institutions (MSIs) of higher education, which they transformed through deliberate actions. More importantly, their deliberate actions were intentionally grounded in understanding the mission of the institution, understanding the historically minoritized populations that the institution served (among others), and understanding that a long-term commitment to doing the work would be required, even if that meant disrupting “business as usual” for the institution and setting a trajectory towards accelerating systemic change.
Fay Cobb Payton and Ann Quiroz Gates make a compelling case for what is required of institutional leaders to harness and mobilize systemic change in the STEM ecosystem by using the National Science Foundation’s INCLUDES model as a case study. Payton and Gates argue that “higher education leaders (e.g., presidents, provosts, and deans) set the tone for inclusion through their behaviors and expectations.” This argument is borne out by the individual leaders’ strengths, strategies, and successes at the types of institutions highlighted in the article. Moreover, Payton and Gates point out that “leaders can hold themselves and the organization accountable by identifying measures of excellence to determine whether improvements in access, equity, and excellence are being achieved.”
As a former dean of a college of liberal arts and a college of arts and sciences at two MSIs (Jackson State University, an urban, historically Black college and university—HBCU—and the University of La Verne, a Hispanic-serving institution, respectively) and now serving as the chief academic officer and provost at the only public HBCU and exclusively urban land-grant university in the United States—the University of the District of Columbia—I know firsthand the role that institutional leaders must play in moving the needle to “broaden participation” and the need for urgent inclusion of historically minoritized participants in the STEM ecosystem. As leaders operating within the MSI spaces, we recognize that meeting students where they are is crucial to developing the skilled technical workforce that our country so desperately needs.
We must do more to address the barriers that prevent individuals from embarking on or completing a STEM education that prepares them for the workforce. According to a 2017 National Academies of Sciences, Engineering, and Medicine report, by 2022 “the percentage of skilled technical job openings is likely to exceed the percentage of skilled technical workers in the labor force by 1.3 percentage points or about 3.4 million technical jobs.” The report finds that the number of skilled technical workers will likely fall short of demand, even when accounting for how technological innovation may change workforce needs (e.g., shortages of electricians, welders, and programmers).
At the same time, economic shifts toward jobs that put a premium, across many lines of work, on science and engineering knowledge and skills are leaving behind too many Americans. Therefore, as institutional leaders, we must harness the power of partnerships with industry, nonprofits, and community and technical colleges to increase awareness and understanding of skilled technical workforce careers and employment opportunities. Balancing traditional and emerging research will be an enduring challenge. In the long term, we can demonstrate what Payton and Gates argue is necessary for lasting change—a change that affects multiple courses, departments, programs, and/or divisions and alters policies, procedures, norms, cultures, and/or structures.
Suggestions for next steps:
The challenge for MSIs in the twenty-first century is to figure out how to collaborate among institutions to renew, reform, and expand programs to ensure students have the opportunity for educational and career success.
As we think about MSI collaborations, there needs to be a broader discussion that includes efforts to achieve high levels of public-private collaboration in STEM education and that advocates policies and budgets focused on maximizing investments to increase student access and engagement in active, rigorous STEM learning experiences.
If we are to reimagine a twenty-first century where we have fewer HBCU mergers and closures, we must recognize that leadership at the top of our organizations must also come together to learn best practices for leading change. The old mindsets, habits, and practices of running our colleges and universities must be reset.
Through collaboration, HBCUs can pool resources and extend their reach. Collaboration also opens communication channels and fosters knowledge-sharing and community-building between HBCUs and MSIs.
Lawrence T. Potter
Chief Academic Officer
University of the District of Columbia
Fay Cobb Payton and Ann Quiroz Gates effectively conceptualize how inclusive STEM ecosystems are developed and sustained over time. The Eddie Bernice Johnson INCLUDES initiative at the National Science Foundation (NSF), which the authors write about, is a significant investment in moving the needle of underrepresentation in STEM. After thirty years as a STEM scholar, practitioner, and administrator in academia, industry, and government, I believe we are finally at an inflection point, although inflection can go both ways: negative or positive—and possibly only incrementally positive. For me, Payton and Gates’s framework triggered thoughts on the meaning of inclusion and why leadership is instrumental in building STEM ecosystems.
Inclusion has many different meanings, and those meanings have shifted over the years depending on context and purpose. Without consistent linguistic framing, inclusion can be decontextualized—rendering it into a passive concept rather than an action to be taken or an engine to be used to drive culture and climate. NSF’s INCLUDES program emphasizes collaboration, alliances, and connectors. The program is designed to inspire investigators to actively engage in inclusive change, a mechanism that requires us to use both inclusively informed and inclusively responsive approaches.
Diversifying STEM is challenged by the lack of a shared concept. Although the concept of “inclusion” does not have to be identical among institutions, it should be semantically aligned. As an example, Kaja A. Brix, Olivia A. Lee, and Sorina G. Stalla used a crowd-sourced approach to capture the meaning of “inclusion within diversity.” Their grounded theory methodology yielded four shared concepts: (1) access and participation; (2) embracing diverse perspectives; (3) welcoming participation; and (4) team belonging. For those of us who have advised doctoral students, inclusion is sort of like a good dissertation: there is no real formula for a high-quality dissertation, but you know it when you see it.
Another point made by Payton and Gates relates to sustained leadership and accountability. When she was president of the University of Texas at El Paso (UTEP), Diana Natalicio was highly effective in framing diversity, equity, inclusion, and accessibility (DEIA) to support action. Given the historical disadvantages experienced by UTEP, President Natalicio never seemed to waver on UTEP’s right to become an R1 university in a collaborative DEIA context. Her degree in linguistics may have facilitated her skill in framing ideas that move people to real action.
Effective leadership in support of inclusion must be boldly voiced in multiple ways for multiple audiences. This form of institutional voice matters to all stakeholders, both within the institution and externally, because failure to voice DEIA says something as well: it means a leader is not really committed to change. Giving voice means leaders must consult with groups on their own campus and in their own communities to understand how to elevate DEIA using multipronged, systems-wide actions.
Payton and Gates also highlight Harold Martin, an electrical engineer who is recognized for his effective leadership of the North Carolina Agricultural and Technical State University. Among other accomplishments, Chancellor Martin’s leadership and practice have established an institution that strategically applies data-informed methods to advance excellence. Application of data-informed approaches is not a panacea, but metrics and measures serve to find the “proof in the pudding” regarding inclusive change. Inclusive change management is facilitated by thoughtful creation, elicitation, review, and interpretation of data in quantitative and qualitative forms. Without data, institutions will only check anecdotal boxes around inclusion, leading to no real or lasting change.
We must pay attention to shared meanings and effective leadership when leading inclusive change in STEM. Ecosystems thrive because of successful interaction and interconnection. Unfortunately, many leaders focus only on culture. While key to lasting change, culture is grounded in shared meaning, values, beliefs, etc. But culture change without climate change is ineffective. In organizational research, culture is what we say; climate is what we do. It is high time we are all about the “doing” because full reliance on the “saying” may not move diversity in a positive direction.
Tonya Smith-Jackson
Provost and Executive Vice Chancellor for Academic Affairs
North Carolina Agricultural and Technical State University
Fay Cobb Payton and Ann Quiroz Gates shed light on the critical role of leadership in addressing historical inequities in the STEM fields, particularly in higher education. One of the key takeaways is the importance of visionary and committed leadership in fostering lasting change. Although their article provides valuable insights into the importance of leadership in promoting STEM equity, there are a couple of areas that could use additional examination.
First, their argument would benefit from further exploration of systemic challenges and proven strategies. Payton and Gates focus primarily on leadership within educational institutions but do not address external factors that can influence STEM diversity. For example, they don’t discuss the role of government policies, industry partnerships, K–12 preparation, or societal attitudes in shaping STEM demographics. Understanding the specific obstacles faced by underrepresented groups and how leadership can address them will add value to the discussion. While the article mentions the importance of inclusive excellence, it would be helpful to provide specific strategies that college and university leadership can implement immediately to create lasting change in STEM.
Second, there should be a wider discussion of intersectionality. The article primarily discusses diversity in terms of race and ethnicity but does not adequately address other dimensions of diversity, such as gender, disability, or socioeconomic background. Recognizing the intersectionality of identities and experiences is crucial for creating inclusive STEM environments.
To create lasting change in STEM, college and university leadership can take several additional steps, including collecting and analyzing data on the representation of underrepresented groups in STEM programs and faculty positions. These data can help identify disparities and inform targeted interventions. Leadership also needs to review and revise the curriculum to ensure it reflects diverse perspectives and contributions in STEM fields. Faculty must be encouraged and rewarded for incorporating inclusive teaching and research practices.
Creating lasting change in STEM demographics is a long-term commitment. Institutions must maintain their dedication to diversity and inclusion even when faced with challenges and changes in institutional leadership. Payton and Gates beautifully articulate the case that college and university leadership can create lasting change in STEM by implementing data-driven initiatives, fostering local and national collaborations, and maintaining a long-term institutional commitment to diversity and inclusion.
Talithia Williams
Associate Professor of Mathematics
Mathematics Clinic Program Director
Harvey Mudd College
Agricultural Research for the Public Good
Norman Borlaug succeeded at something that no one had done before—applying wide area adaptation for a specific trait in a specific crop for yield enhancement. This worked beyond all expectations in field trials conducted in environments that favored the expression of the new genetic material, which in this case had been developed from a type of semidwarf wheat native to Japan. Of course, wide area adaptation must be put in perspective with respect to the trait, farmer, crop genetics, and environment in which such a package is intended to be used.
The more traditional “local adaptation” typically happens in a farmer’s fields. In these and other microenvironments, wheat such as Borlaug developed, or other new wheats, can be tested and, if successful, bred locally for such situations. This has been exactly the type of applied “bread and butter” work done by national program scientists and local seed companies. However, this specialized knowledge for each area of the country and a given crop is being lost, as is the ability of national program scientists to conduct multilocation trials.
This talent and support have eroded not because of the work of Borlaug, but because of the consolidation of agricultural research entities into four large agricultural/pharmaceutical companies. No more are there local seed companies; no more is there a robust plant breeding community in the public sector; no longer is there a focused effort on the “public good” of agriculture. Losing this publicly supported pool of expertise is especially a concern when local needs do not align with those of commercial providers.
This is true for India, Mexico, the United States, and Canada—and one can keep on going.
Consequently, the work that Borlaug did, all conducted in the public arena for the public good, is all the more important to replicate today. Science and farming are two ends of the same rope, and while one continues to be privatized, the other cannot benefit. Thus, improving farmers’ education and “infrastructure,” however loosely defined, will not keep a given farming sector free from globalized pressures or from a shortage of public-minded and public-based extension agents.
The separation of plant breeding—which Marci R. Baranski’s book classifies as a capital-intensive technological approach—from farming speaks of a divorce that simply should not come to pass. Instead, depending on trait, genotypes, environment, and farmers’ needs, they should be brought closer together to ensure that what is developed serves those in need, not just those who have the currency and farming practices that are compatible with commercial agriculture.
Joel Cohen
Visiting Scholar, Nicholas School of the Environment
Duke University
Beyond Stereotypes and Caricatures
In “Chinese Academics Are Becoming a Force for Good Governance” (Issues, Summer 2023), Joy Y. Zhang, Sonia Ben Ouagrham-Gormley, and Kathleen M. Vogel provide a thoughtful exploration of how bioethicists, scientists, legal scholars, and others are making important contributions to ethical debates and policy discussions in China. They are addressing such topics as what constitutes research misconduct and how it should be addressed by scientific institutions and oversight bodies, how heritable human genome editing should be regulated, and what societal responses to unethical practices are warranted when they are not proscribed by existing laws. Their essay also addresses several issues with implications that extend beyond China to global conversations about ethical, legal, and social dimensions of emerging technologies in the life sciences and other domains.
Given the growing role that academics in China are playing in shaping oversight of scientific technologies, individuals expressing dissent from official government doctrine in at least some cases risk being subjected to censorship and pressure to withdraw from public engagement. As tempting as it might be to highlight differences between public discourse under China’s Communist Party and public debate in liberal democratic societies, academics in democracies where various forms of right-wing populism have taken root are also at risk of being subjected to political orthodoxies and punishment for expressions of dissent. One important role transnational organizations can play is to promote and protect critical, thoughtful analyses of emerging technologies. They can also offer solidarity, support, and deliberative spaces to individuals subjected to censorship and political pressure.
The authors also note the challenges that scholars in China have had in advocating for more robust ethical review and regulatory oversight of scientific research funded by industry and other private-sector sources. This issue extends to other countries with stringent review of research funded by government agencies and conducted at government-supported institutions, and with comparatively lax oversight of research funded by private sources and conducted at private-sector institutions. This disparity in regulatory models is a recipe for future research scandals involving a variety of powerful technologies. In the biomedical sciences, for example, these discrepancies in governance frameworks are becoming increasingly concerning when longevity research is funded by private industry or even individual billionaires who may have well-defined objectives regarding what they hope to achieve and sometimes a willingness to challenge the authority of national regulatory bodies.
Finally, we need to move beyond the facile use of national stereotypes and caricatures when discussing China and other countries with evolving policies for responsible research. China, as the authors point out, is sometimes depicted as a “Wild East” environment in which “rogue scientists” can flourish. However, research scandals are a global phenomenon. Likewise, inadequate oversight of clinical facilities is an issue in many countries, including nations with what often are assumed to be well-resourced and effective regulatory bodies. For example, academics used to write about “stem cell tourism” to such countries as China, India, and Mexico, but clinics engaged in direct-to-consumer marketing of unlicensed and unproven stem cell interventions are now proliferating in the United States as well. Our old models of the global economy, with well-regulated countries versus out-of-control marketplaces, often have little to do with current realities. Engagement with academics in China needs to occur without the use of self-serving and patronizing narratives about where elite science occurs, where research scandals are likely to take place, and which countries have well-regulated environments for scientific research and clinical practice.
Leigh Turner
Executive Director, UCI Bioethics Program
Professor, Department of Health, Society, & Behavior
In “ARPA-H Could Offer Taxpayers a Fairer Shake” (Issues, Summer 2023), Travis Whitfill and Mariana Mazzucato accurately describe three strategies for how the Advanced Research Projects Agency for Health (ARPA-H) could structure its grant program to ensure that taxpayers receive a fairer return for their high-risk public investments in research and development to solve society’s most pressing health challenges. One of their core ideas is repurposing a successful venture capital model of converting early-stage investments into equity ownership if a product progresses successfully in the development process.
As patients face challenges in accessing affordable prescription drugs and health technologies, we believe it is imperative for policymakers and ARPA-H leaders to address two fundamental questions: How does the proposed grant program strategy directly help patients, and how will ARPA-H (or any government agency) implement and enforce this specific strategy?
The first question concerns what patients ultimately care about—how will this policy impact them and their loved ones? For example, if the government receives equity ownership in a successful company that generates revenue for the US Treasury, that has limited direct benefit for a family that cannot afford the health technology.
There should be a strong emphasis on ensuring that all patients, regardless of their demographic background or insurance status, can access innovative health technologies developed with public funding at a fair price. For example, in September 2023 the Biden administration announced a $326 million contract with Regeneron to develop a monoclonal antibody for COVID-19 prevention. This contract included a pricing provision that requires the list price in the United States to be equal to or lower than the price in other major countries. Maintaining this focus will lead policymakers to address how we pay for these health technologies and consider practical steps to achieve equitable access. That may include price negotiation or reinvesting sales revenue directly into public health and the social determinants of health.
The effectiveness of any policy depends strongly on its implementation and enforcement. As Whitfill and Mazzucato mention, the US government has the legal authority to seek lower prescription drug prices through the Bayh-Dole Act for inventions with federally funded patents. However, the National Institutes of Health, which houses ARPA-H as an independent agency, has refused to exercise its license or other statutory powers, most recently with enzalutamide (Xtandi), a prostate cancer drug.
The government also has the existing legal authority under 28 US Code §1498 to make or use a patent-protected product while giving the patent owners “reasonable and entire compensation” when doing so, but it has not implemented this policy in the case of prescription drugs for many decades.
It is no secret that corporations in the US pharmaceutical market are incentivized by various forces to pursue profit maximization. In the case of public funding to support pharmaceutical innovation, we need to ensure that when taxpayers de-risk research and development, they also share more directly in the financial benefits of that investment.
Hussain Lalani
Primary Care Physician, Brigham and Women’s Hospital
Health Policy Researcher, Harvard Medical School
Program On Regulation, Therapeutics, and Law
Aaron S. Kesselheim
Professor of Medicine, Department of Medicine, Division of Pharmacoepidemiology and Pharmacoeconomics
Brigham and Women’s Hospital and Harvard Medical School
Director, Program On Regulation, Therapeutics, and Law
Travis Whitfill and Mariana Mazzucato make a case that demands the attention of both leaders of the Advanced Research Projects Agency for Health (ARPA-H) and policymakers: the agency’s innovation must focus not only on technology but also on finance. New policy thinking that breaks from decades of few-strings-attached public finance for science and technology is needed, they argue, if taxpayers are to get a dynamic and fair return on their ARPA-H investments. To pursue this goal, the authors make three promising proposals: capturing returns through public sector equity, curbing shareholder profiteering by promoting reinvestment in innovation, and setting conditions for access and affordability.
As the technology scholar Bhaven N. Sampat chronicled in Issues in 2020, however, debates over the structure of public financing for scientific research and development have been around since the dawn of the post-war era. But amid reassessments of long-standing orthodoxy about public and private roles in innovation, Whitfill and Mazzucato’s argument lands in at least two intriguing streams of policy rethinking.
First, debates over ARPA-H’s design could connect biomedical R&D policy to the wider “industrial strategy” paradigm being shaped across spheres of technology, from semiconductors to green energy. Passage of the CHIPS and Science Act, the Inflation Reduction Act, and the Bipartisan Infrastructure Deal has invigorated government efforts to shift from a laissez-faire posture to proactively shape markets in pursuit of specific national security, economic, and social goals. Yet biomedical research has been noticeably absent from these policy discussions, perhaps in part because of the strong grip of vested interests and narratives about the division of labor between government and industry. Seeing ARPA-H through this industrial strategy lens could instead invite a wider set of fresh proposals about its design and implementation.
Second, bringing an industrial strategy view to ARPA-H would take advantage of new momentum to reconfigure government’s relationship with the biopharmaceutical industry, which has recently focused on drug pricing. The introduction of Medicare drug pricing negotiation in the Inflation Reduction Act for a limited set of drugs is a landmark measure for pharmaceutical affordability, yet it directs government policy to ex-post negotiations after an innovation has been developed. If done right, the agency’s investments would “crowd in” the right kind of patient private capital with the ex-ante conditions described by Mazzucato and Whitfill. In the process, ARPA-H could serve as an unprecedented “public option” for biomedical innovation, building public capacity for later stages of R&D prioritized for achieving public health goals.
Whether the authors’ ideas will find traction, however, remains uncertain. Why might change happen now, decades after the initial postwar debates settled into contemporary orthodoxies? Beyond the nascent rethinking of the prevailing neoliberal economic paradigm in policy circles, a critical factor might well be the evolution of a two-decade-old network of smart and strategic lawyers, organizers, and patient groups that comprise the “access to medicines” movements. These movements are pushing for bold changes across multiple domains, including better patenting and licensing practices, public manufacturing, and globally equitable technology transfer. Ultimately, ARPA-H’s success may well rest on citizen-led action that helps decisionmakers understand the stakes of doing public enterprise differently.
Victor Roy
Postdoctoral Fellow, Veterans Affairs Scholar
National Clinician Scholars Program, Yale School of Medicine
Travis Whitfill and Mariana Mazzucato’s proposal to utilize the new Advanced Research Projects Agency for Health (ARPA-H) to stimulate innovation in pharmaceuticals while reducing the net costs to taxpayers is very important and welcome.
Innovation can indeed provide major benefits and is greatly stimulated by the opportunity to make profits. Yet pharmaceuticals (including vaccines and medical devices) are unlike most other products, even basics such as food and clothing. Their primary aim is not to increase people’s pleasure, as with better tasting food or more stylish clothing, but to improve their health and longevity. Also, the need for and choice of medications is usually determined not by patients but by their physicians.
Another major difference is that the National Institutes of Health and related agencies—that is, taxpayers—finance much of basic medical research. In addition, when patients use medical products, most costs are usually borne not by them but by members of the public (who support public insurance) or by other insurance holders (whose premiums are raised to cover the costs of expensive products). Furthermore, pharmaceuticals are protected from competition by patents, which are manipulated to extend for many years. Pharmaceutical companies should not, therefore, be treated like other private enterprises and be permitted to make huge profits at the expense of the public and patients.
In Whitfill and Mazzucato’s proposal, public monies provided to private companies and researchers would become equity, just like venture capital funds and other private investments, and taxpayers would thus become shareholders in the pharmaceutical companies. The funding could extend beyond research to clinical trials and even marketing and patient follow-up. This would create ongoing public-private collaboration that could reward the taxpayers as well as the companies and their other shareholders. In addition, the new ARPA-H could “encourage or require” companies to reinvest profits into research and development and look for other ways to restrict profit-taking, and could insist on accessible prices for the drugs it helped to finance.
But their proposal’s feasibility and impact are uncertain. To what extent would ARPA-H have to expand its current funding—$1.5 billion in 2023 and $2.5 billion requested for 2024, in contrast to the $187 billion spent by the NIH to enable new drug approvals between 2010 and 2019—to make a substantial impact on the development of new, high-value pharmaceuticals? What degree of price and profit restriction would companies be willing to accept? Could the benefit of higher prices to taxpayers as shareholders be used to justify the excessive prices that benefit company executives and other shareholders even more? Should not the burden on those who pay for the pharmaceuticals by financing public and private insurance be taken into account? Finally, would politicians be willing, in the face of fierce lobbying by pharma, to provide ARPA-H with the required funds and authority?
Nonetheless, expanding an already-existing (even if newly created) agency is clearly more feasible than more radical restructuring, such as my colleagues and I have proposed. Stimulating beneficial innovations and reducing, if not eliminating, excessive profits are far better than accepting the status quo. I strongly support, therefore, implementing Whitfill and Mazzucato’s proposal.
Paul Sorum
Professor Emeritus
Departments of Internal Medicine and Pediatrics
Albany Medical College
Chaosmosis: Assigning Rhythm to the Turbulent
Chaosmosis: Assigning Rhythm to the Turbulent is an art exhibition inspired by fluid dynamics, a discipline that describes the flow of liquids and gases. The exhibition draws from past submissions to the American Physical Society’s Gallery of Fluid Motion, an annual program that serves as a visual record of the aesthetic and science of contemporary fluid dynamics. For the first time, a selection of these past submissions has been curated into an educational art exhibition to engage viewers’ senses.
The creators of these works, which range from photography and video to sculpture and sound, are scientists and artists. Their work enables us to see the invisible and understand the ever-moving elements surrounding and affecting us. Contributors to the exhibition include artists Rafael Lozano-Hemmer and Roman De Giuli, along with physicists Georgios Matheou, Alessandro Ceci, Philippe Bourrianne, Manouk Abkarian, Howard Stone, Christopher Clifford, Devesh Ranjan, Virgile Thievenaz, Yahya Modarres-Sadeghi, Alvaro Marin, Christophe Almarcha, Bruno Denet, Emmanuel Villermaux, Arpit Mishra, and Paul Branson.
Magnified frozen water droplets resemble shattered glass in a series of photographs. A video simulation depicts the friction generated by liquid flowing through a confined pipe. In other works, the fluid motions portrayed are produced by human bodies: a video sheds light on the airflow of an opera singer while singing, and a 3D-printed sculpture reveals the flow of human breath using sound from the first dated recording of human speech. Gases and liquids are in constant motion, advancing in seemingly chaotic ways, yet the works offer a closer look, revealing elegant and poetic patterns amid atmospheric turbulence.
The term chaosmosis, coined by the philosopher Félix Guattari in the 1990s, conveys the idea of transforming chaos into complexity. It assigns rhythm to the turbulent, linking breathing with the subjective perception of time, and concluding that respiration is what unites us all.
Stephen R. Johnston, Jessica B. Imgrund, Dan Fries, Rafael Lozano-Hemmer, Stephan Schulz, Kyle C. Johnson, Johnathan T. Bolton, Christopher J. Clifford, Brian S. Thurow, Enrico Fonda, Katepalli R. Sreenivasan, and Devesh Ranjan, Volute 1: Au Clair De La Lune, 2016, 3D-printed filament, sound, 26 x 7 x 8 inches.
Christophe Almarcha, Joel Quinard, Bruno Denet, Jean-Marie Laugier, and Emmanuel Villermaux, Experimental Two-Dimensional Cellular Flames, 2014, laser print on fabric, 84 x 46 inches.
Arpit Mishra, Claire Bourquard, Arnab Roy, Rajaram Lakkaraju, Outi Supponen, and Parthasarathi Ghosh, Flow-Focusing from Interacting Cavitation Bubbles, 2021, laser print on fabric, 84 x 48.5 inches.
Roman De Giuli, Sense of Scale, 2022, video still.
Chaosmosis runs from October 2, 2023, through February 23, 2024, at the National Academy of Sciences building in Washington, DC. The exhibition is curated by Natalia Almonte and Nicole Economides in coordination with Azar Panah and the American Physical Society’s Division of Fluid Dynamics.
Transforming Research Participation
In “From Bedside to Bench and Back” (Issues, Summer 2023), Tania Simoncelli highlights patients moving from being subjects of biomedical research to leading that research. Patients and their families no longer simply participate in research led by others and advocate for resources. Together they design and implement research agendas, taking the practice of science into their own hands. As Simoncelli details, initiatives such as the Chan Zuckerberg Initiative’s Rare as One Project—and the patient-led partnerships it funds—are challenging longstanding power dynamics in biomedical research.
Opportunities to center the public’s questions, priorities, and values throughout the research lifecycle are not limited to research on health outcomes. And certainly, the promise of participatory approaches is not new. Yet demand for these activities is pressing.
Today’s global challenges are urgent, local, and interconnected. They require all-hands-on-deck solutions in such diverse areas as climate resilience and ecosystem protection, pandemic prevention, and the ethical deployment of artificial intelligence. Benefits of engaging the public in these collective undertakings and of centering societal considerations in research are being recognized by those who hold power in US innovation systems, including by Congress in the CHIPS and Science Act.
On August 29, 2023, the President’s Council of Advisors on Science and Technology (PCAST) issued a letter on “Recommendations for Advancing Public Engagement with the Sciences.” PCAST finds, “We must, as a country, create an ecosystem in which scientists collaborate with the public, from the identification of initial questions, to the review and analysis of new findings, to their dissemination and translation into policies.”
To some observers outside the research enterprise, this charge is long overdue. To those already operating at the boundaries of science and communities, it is a welcome door-opener. And to entrenched interests concerned about movement away from a social contract supporting curiosity-driven fundamental research and toward solutions-oriented research that focuses scientific processes on public problems and the public good, PCAST may be shaking bedrock.
Increased federal demand can move scientific organizations toward participatory practices. For greatest impact, more on-the-ground capacity is needed, including training of practitioners who can connect communities with research tools and collaborators. Similarly essential is continued equity work within research institutions grappling with their history of exclusionary practices.
Boundary organizations that bridge the scientific enterprise with communities of shared interest or place are connecting the public with researchers and putting data, tools, and open science hardware into the hands of more people. The Association of Science and Technology Centers, which I led from 2018 through 2020, issued a community-science framework and suite of resources to build capacity among science-engagement practitioners. The American Geophysical Union’s Thriving Earth Exchange supports community science by helping communities find resources to address their pressing concerns. Public Lab is pursuing environmental justice through community science and open technology. The Expert and Citizen Assessment of Science and Technology (ECAST) Network developed a participatory technology assessment method to support democratic science policy decisionmaking.
I applaud these patient-led partnerships and community-science collaborations, and I look forward to the solutions they produce.
Cristin Dorgelo
Former Senior Advisor for Management at the Office of Management and Budget
President Emerita of the Association of Science and Technology Centers
Former Chief of Staff of the Office of Science and Technology Policy
Tania Simoncelli paints a powerful picture of the increasingly central role of patients and patient communities in driving medical research. The many success stories she describes of the Chan Zuckerberg Initiative’s Rare as One project provide an assertive counternarrative to the rarely explicated but deeply held presumption that only health professionals with decades of training in science and medicine can and should drive the agenda in health research. These successes confirm that those who continue to treat patient engagement in research as a box-checking exercise do themselves and the patients they claim to serve a grave disservice.
However, these narratives do more than just celebrate accomplishments. They also highlight the limitations of our current systems of funding and prioritizing health research, which require herculean efforts from patients and families already facing their own personal medical challenges. Patient communities have clearly demonstrated that they can achieve the impossible, but they do so because our current systems for funding research provide limited alternatives. How would federal funding for health research need to change such that patients and families would not have to also become scientists, clinicians, drug developers, and fundraisers for their disease to receive attention from the scientific community?
Medical research—and rare disease research in particular—urgently needs substantial investment in shared infrastructure and tools to increase efficiency, reduce costs, and facilitate engagement of diverse patients and families with variable time and resources to contribute. These investments will not only increase efficiency; they will also increase equity insomuch as they reduce the likelihood that progress in a given disease will depend on the financial resources and social capital of a particular patient community. This concern is not just hypothetical; a 2020 study of US research funding for sickle cell disease, which predominantly affects Black patients, compared with cystic fibrosis, which predominantly affects white patients, found an average of $7,690 in annual foundation spending per patient with cystic fibrosis compared with only $102 per patient with sickle cell disease, with predictable differences in the numbers of studies conducted and therapies developed. The investments made by the Chan Zuckerberg Initiative have been critical in leveling the playing field, but developing an efficient, equitable, and sustainable approach to rare disease research in the United States will require a commitment on the part of federal policymakers and funders as well.
To achieve this seismic shift, I see few stakeholders better situated to advise policymakers and funders than the patient communities themselves. While federal funders may support patient engagement in individual research efforts, there is also the need to move this engagement upstream, allowing patients a voice in setting research funding priorities. Of course, implementing increased patient engagement in federal research funding allocation will require a careful examination of whose voices ultimately represent the many, inherently diverse patient communities. Attention to questions of representation and generalizability within and across patient communities is an ongoing challenge in all patient engagement activities, and the responsibility for addressing this challenge lies with all of us—funders, researchers, industry partners, regulators, and patient communities alike. However, it would be a mistake to treat this challenge as impossible: patient communities will undoubtedly prove otherwise.
Meghan C. Halley
Senior Research Scholar
Center for Biomedical Ethics
Stanford University School of Medicine
Biomedical research has blind spots that can be reduced, as Tania Simoncelli writes, by “centering the largest stakeholders in medicine—the patients.” By focusing on rare diseases, the Chan Zuckerberg Initiative is partnering with the most daring rebels of the patient-led movement. These pioneers are breaking new paths forward in clinical research, health policy, and data rights management.
But it’s not only people living with rare diseases whose needs are not being met by the current approach to health care delivery and innovation. Equally exciting is the broader coalition of people who are trying to improve their lives by optimizing diet or sleep routines based on self-tracking or building their own mobility or disease-management tools. They, too, are driving research forward, often outside the view of mainstream leaders because the investigations are focused on personal health journeys.
For example, 8 in 10 adults in the United States track some aspect of their health, according to a survey by Rock Health and Stanford University’s Center for Digital Health. These personal scientists are solving their own health mysteries, managing chronic conditions, or finding ways to achieve their goals using clinical-grade digital tools that are now available. How might we create a biomedical research intake valve for those insights and findings?
Patients know their bodies better than anyone and, with training and support, are able to accurately report any changes to their care teams, who can then respond and nip issues in the bud. In a study conducted at Memorial Sloan Kettering Cancer Center, patients being treated with routine chemotherapy who tracked their own symptoms during treatment both lived longer and felt better. Why are we not helping everyone learn how to track their symptoms?
Hardware innovation is another front in the patient-led revolution.
People living with disability ingeniously adapt home health equipment to meet their needs. By improving their own mobility, making a home safer, and creatively solving everyday problems, they and their care partners save themselves and the health care system money. We should invest in ways to lift up and publicize the best ideas related to home care, just as we celebrate advances in laboratory research.
Insulin-requiring diabetes requires constant vigilance and, for some people, that work is aided by continuous glucose monitors and insulin pumps. But medical device companies lock down the data generated by people’s own bodies, ignoring the possibility that patients and caregivers could contribute to innovation to improve their own lives. Happily, the diabetes rebel alliance, whose motto is #WeAreNotWaiting, found a way to not only get access to the data, but also build a do-it-yourself open-source artificial pancreas system. This, by the way, is just one example of how the diabetes community has risen up to demand—or invent—better tools.
Finally, since any conversation about biomedical innovation is now not complete without a reference to artificial intelligence, I will point to evidence that patients, survivors, and caregivers are essential partners in reducing bias on that front as well. For example, when creating an algorithm to measure the severity of osteoarthritis in knee X-rays, a team of academic and tech industry researchers fed it both clinical and patient-reported data. The result was a more accurate estimate of pain, particularly among underserved populations, whose testimony had been ignored or dismissed by human clinicians.
The patient-led revolutionaries are at the gate. Let’s let them in.
Tania Simoncelli provides a thoughtful reminder of the reality faced by many families with someone who has a rare disease. The term “rare disease” is often misunderstood. Such diseases affect an estimated 1 in 10 Americans, which means each of us likely knows someone with one of the 7,000 rare diseases that have a diagnosis. As the former executive director of FasterCures, a center of the Milken Institute, and an executive in a rare disease biotech, I have met many of these families. They see scientific advances reported every day in the news. And yet they may be part of a patient community where there are no options. As Simoncelli points out, fewer than 5% of rare diseases have a treatment approved by the US Food and Drug Administration.
The author’s personal journey is a reminder that champions exist who are dedicated to finding models that can change the system. The Chan Zuckerberg Initiative that Simoncelli works for, which has donated $75 million through its Rare as One program, believes that its funded organizations can establish enough scientific evidence and research infrastructure—and leverage the power of their voices—to attract additional investment from government and the life sciences community. Successful organizations such as the Cystic Fibrosis Foundation have leveraged their research leadership to tap into the enormous capital, talent, and sense of urgency of the private sector to transform the lives of families through the development of treatments, and they have advocated for policies that support patient access. Rare as One organizations are a beacon of light for families forging new paths on behalf of their communities.
As Simoncelli also highlights, the role of philanthropy is powerful, but it does not equate to the roles government and the private sector can play. Since 1983, the Orphan Drug Act has been a major driver of therapeutic advances in rare disease, and one study estimates that the FDA approved 599 orphan medications between 1983 and 2020. In August 2022, Congress passed the Inflation Reduction Act, authorizing the Medicare program to begin negotiating the prices of drugs that have been on the market for several years. Congress believed that tackling drug prices was key to ensuring patient affordability. However, critics have pointed to the law’s potential impact on innovation, citing specifically how it could disincentivize research into rare diseases. The implementation of the law is ongoing, so it is too early to understand the consequences. But the patient community does not need to wait to advance new innovative models to address any disincentives that may surface.
Every Cure is one of these models that may help address the consequences that new Medicare drug negotiation may have on continuing investments in specific types of research programs. Its mission is to unlock the full potential of existing medicines to treat every disease and every patient possible. Every Cure is building an artificial intelligence-enabled, comprehensive, open-source database of drug-repurposing opportunities. The goal is to create an efficient infrastructure that enables research for rare diseases as well as more common conditions. By working in partnership with the patient community, clinical trials organizations, data scientists, and funders, Every Cure hopes to be a catalyst in advancing new treatment options for patients who currently lack options. Innovation can’t wait—because patients won’t.
Tanisha Carino
Vice Chair of the Board
Every Cure
Tania Simoncelli illuminates a powerful transformation in medical research: patients and families have moved to center stage. No longer passive recipients and participants, they are passionate drivers of innovation, teamwork, focus, and results. Science systematically and rigorously approaches truth through cycles of hypothesis and experimentation. Yet science is a human endeavor, and scientists differ in their knowledge, tribal affinities rooted in cultural and scientific backgrounds, biases, creativity, open-mindedness, ambition, and many other critical factors, but often lack “skin in the cure game.”
Medicine prides itself as a science, but it is a social science, as humans are both the observed and the observers. Less “soft” a science than sociology or psychology, medicine is far closer to physics or chemistry in rigor and reproducibility. Patients were traditionally viewed as biased, physicians as objective. Double-blind studies revolutionized medicine by explicitly recognizing the bias of physician scientists. Biases run deep, as humans are their reservoirs, vectors, and victims. Paradoxically, patients and families with skin in the game are exceptional collaborators who are immune to academic biases. They have revolutionized medical science.
Academics may myopically measure success by papers published in high-impact journals, prestigious grants, promotions, and honors. With success, the idealistic and iconoclastic views of youth give way to perpetuating a new status quo, one that reinforces their theories and their tribe and blinds them with bias.
True scientists (masters of doubt about their own beliefs) and people with serious medical disorders, along with their families, all seek improved outcomes and cures. Teamwork magnifies medical science’s awesome power.
Dichotomies endlessly divide the road of discovery. What is the best path? Fund basic science, untethered from therapy, answering fundamental questions in biology? Or fund translational science, laser focused on new therapies? How often are biomarkers critical, distractions, or misinformation? What leads to more seminal advances—top-down, planned, A-bomb-building Manhattan Projects, or serendipity, like that which propelled Fleming’s discovery of penicillin? The answer depends on your smarts, team-building skills, and luck. Who can best decide how medical research funds should be allocated? Are those with seniority in politics, science, and medicine best? Should those affected have a say? Why can’t science shine its potent lens on the science of discovery itself, instead of defaulting to what is “established” and “works-based” but not evidence-based?
A new paradigm has arrived. Families with skin in the game have a seat at the decision table. Their motivation is pure, and although no one knows the best path before embarking to discover, choices should be guided by the desire to improve health outcomes, not protect the status quo.
Orrin Devinsky
Professor of Neurology and Neuroscience
New York University Grossman School of Medicine
Students seeking a meaningful career in science policy that effects real-world change could do worse than look to the career of Tania Simoncelli. Her account in Issues of how the Chan Zuckerberg Initiative (CZI) is helping build the infrastructure that can speed the development and effectiveness of treatments for rare diseases is just her most recent contribution. It follows her instrumental role in bringing the lawsuit against the drug company Myriad Genetics that ultimately ended in a unanimous US Supreme Court decision invalidating patent claims on genes, as well as her productive stints at several institutions near the centers of power in biomedicine and government.
Rare diseases are rare only in isolation. In aggregate they are not so uncommon. But because they are individually rare, they face a difficult collective action problem. There are few advocates relative to cancer, heart disease, or Alzheimer’s disease, although each of those conditions also languished in neglect at points in their history before research institutions incorporated their conquest into their missions. But rare diseases can fall between the categorical institutes of the National Institutes of Health, or find research on them distributed among multiple institutes, no one of which has sufficient heft to be a champion.
The Chan Zuckerberg team that Simoncelli leads has taken a patient-driven approach. Mary Lasker and Florence Mahoney, who championed cancer and heart disease research by lobbying Congress, giving rise to the modern NIH, might well be proud of this legacy. Various other scientific and policy leaders at the time opposed Lasker and Mahoney’s approach, especially during the run-up to the National Cancer Act of 1971, favoring instead NIH’s scientist-driven research, responding to scientific opportunity. But patient-driven research is a closer proxy to social need. Whether health needs or scientific opportunity should guide research priorities has been the hardy perennial question facing biomedical research policy as it grew ten thousandfold in scale since the end of World War II.
CZI and its Rare As One project are not starting from scratch. They are building on research and advocacy movements that have arisen for chordoma, amyotrophic lateral sclerosis, Castleman disease, and many other conditions. And they are drawing on the strategies of AIDS/HIV activists and breast cancer research advocates who directly influenced national research priorities by systematic attention to science, communication, and disciplined priority-setting from outside government.
Where the Howard Hughes Medical Institute and many other research funders have built on the broad base of NIH research by selecting particularly promising investigators or seizing on emerging scientific opportunities, which is indeed an effective cherry-picking strategy, the CZI is instead building capacity for many organizations to get up to speed on science, think through the challenges and resource needs required to address their particular condition, and develop a research strategy to address it. The scientific elite and grass-roots fertilization strategies are complements, but the resources devoted to the patient-driven side of the scale are far less well established, financed, and institutionalized. That makes the effort all the more intriguing.
The Chan Zuckerberg Initiative is at once helping address the collective action problem of small constituencies, many of which cannot easily harness all the knowledge and tools they need, and also building a network of expertise and experts who mutually reinforce one another. It is a potentially powerful new approach, and a promising frontier of philanthropy.
Robert Cook-Deegan
Professor, School for the Future of Innovation in Society and the Consortium for Science, Policy & Outcomes
Arizona State University
Science and Global Conspiracies
What difference does the internet make to conspiracy theories? The most sobering aspect of “How Science Gets Drawn Into Global Conspiracy Narratives” (Issues, Spring 2023), by Marc Tuters, Tom Willaert, and Trisha Meyer, is its focus on the seismic power of the hashtag in choreographing new conspiratorial alliances between diverse sets of Twitter users. Importantly, this is also happening on other platforms such as Instagram and TikTok. During a recent data sprint project that I cofacilitated at the University of Amsterdam, it became increasingly apparent that TikTok hashtags resemble an apophenic grammar—a sort of language that tends to foster perceptions of meaningful connections between seemingly unrelated things. Specifically, the data sprint findings suggest that co-hashtags were partly responsible for the convergence of conspiracy theory and spirituality content over the course of 2021 (to conjure what is often termed conspirituality).
But notwithstanding the significance of hashtag stuffing, the other major idea that figures prominently in current conspiracy theory research is weaponization. In our present political moment, seemingly innocuous events can become weaponized as conspiratorial dog whistles, as can celebrity gossip and fan fiction. On July 4, 2020, the Florida congressional candidate K. W. Miller claimed in a tweet that the popular music icon Beyoncé is actually a white Italian woman called Ann Marie Lastrassi. Miller seems to have “discovered the truth” about Beyoncé via a speculative Instagram comment and parodic Twitter thread, and his #QAnon clarion call was a deliberate misappropriation of Black speculative discourse with white supremacist overtones. He was building upon both the 2016 denunciation of Beyoncé by the InfoWars conspiracy theorist Alex Jones and longstanding hip-hop rumors regarding Beyoncé and Jay-Z’s involvement with the Illuminati, an imagined organization often portrayed as pulling global strings of power.
Could derisive disarmament by counter-conspiratorial web users be an effective way of laughing in the face of such weaponization tactics? Through a distinctive kind of Black laughter discussed by Zari Taylor and other social media scholars, some Twitter users attempted to extinguish the potential for hilarity in Miller’s tweet by asserting that the joke is actually on white conspiracy theorists who are willing to believe that Beyoncé is Italian while denying the very real and palpable existence of systemic racism in the United States. Ultimately, Miller’s election campaign was wholly unsuccessful, and the collective disarmament effort seems to have been relatively effective, both within and beyond the notion of Black Twitter introduced by Sanjay Sharma, a scholar of race and racism. Several years on from the event, Black Twitter users memorialize and celebrate Ann Marie Lastrassi as their Italian Queen in a similar way to Beyoncé’s own reclamation of racist #BoycottBeyoncé hashtags in 2016.
The internet’s contribution to the spread of conspiracy theories has less to do with “echo chambers” and algorithmically dug “rabbit holes” and much more to do with the perverse echo effects of connected voices. These voices listen to each other in order to find something that they might seize upon to deliver virtuosic conspiratorial performances. Although researchers, monitoring organizations, and policymakers might learn something from the echo effects of the Lastrassi case, it is also true, as my colleague Annie Kelly regularly reminds me, that disarmament is sometimes nothing more than re-weaponization. It might even serve to fan the flames of conspiracy theory in the age of the so-called culture wars.
Edward Katrak Spencer
Postdoctoral Research Associate, University of Manchester
Lecturer I in Music, Magdalen College, University of Oxford
Marc Tuters, Tom Willaert, and Trisha Meyer explore the emergence of a distinctive feature of conspiracy theories: interconnectedness. The authors focused on a Twitter case study of the public understanding of science, specifically the use of the hashtag #mRNA. They found that the hashtag, initially used in science discussions (beginning in 2020), was later hijacked by conspiracy narratives (in late 2022) and interconnected with conspiratorial hashtags such as #greatreset, #plandemic, and #covid1984.
In a recent paper, one of us quantified such interconnectedness in the largest corpus of conspiracy theories available today, an 88-million-word collection called LOCO. On average, conspiracy documents (compared with non-conspiracy documents) showed higher interconnectedness spanning multiple thematic domains (e.g., Michael Jackson associated with moon landing). We also found that conspiracy documents were similar to each other, suggesting the mutually supportive function of denouncing an alleged conspiratorial plan. These results extend Tuters and colleagues’ research and show that interconnectedness, not bound only to scientific understanding, is a cognitive mechanism of sensemaking.
Conspiracy theories simplify real-world complexity into a cause-effect chain that identifies a culprit. In doing so, conspiracy theories are thought to reduce the uncertainty and anxiety caused by existential threats. Because people who subscribe to conspiracy theories do not trust official narratives, they search for hidden motives, consider alternative scenarios, and explore their environment in search of (what they expect to be the) truth. In this process, prompted by the need to confirm their beliefs, conspiracy believers tend to jump quickly to conclusions and identify meaningful relationships among randomly co-occurring events, leading to the “everything is connected” bias.
As Tuters and colleagues suggest, social media might offer affordances for such exploration, thus facilitating the spread of conspiracy theories. The authors also suggest that not all social media platforms are equal in this regard: some might ease this process more than others. Work currently in progress has confirmed this suggestion: we have indeed found striking differences between platforms. From a set of about 2,600 websites, we extracted measures of incoming traffic from different social media platforms such as Twitter, YouTube, Reddit, and Facebook. We found that YouTube and Facebook are the main drivers of traffic to conspiracy (e.g., infowars.com) and right-wing (e.g., foxnews.com) websites, whereas Reddit drives traffic mainly toward pro-science (e.g., healthline.com) and left-wing (e.g., msnbc.com) websites. Twitter drives traffic to both left- and right-leaning politically biased (but not conspiracy) websites.
Do structural differences across social media platforms affect how conspiracy theories are generated? More experimental work is needed to understand the mechanisms by which conspiracy theories are generated by accumulation. Social psychology has furthered our understanding of the cognitive predisposition for such beliefs. Now, building on Tuters and colleagues’ work, it is time for the cognitive, social, and computational sciences to systematically investigate the emergence of conspiracy theories.
Alessandro Miani
Postdoctoral Research Associate
Stephan Lewandowsky
Professor of Cognitive Science
University of Bristol, United Kingdom
Blue Dreams: Rebecca Rutstein and the Ocean Memory Project
Blue Dreams is an immersive video experience inspired by microbial networks in the deep sea and beyond. Using stunning undersea video footage, abstract imagery, and computer modeling, the work offers a glimpse into the complicated relationships among the planet’s tiniest—yet most vital—living systems. The video installation flows between micro and macro worlds to portray geologic processes at play with microbial and planetary webs of interactivity.
Installation photo by Kevin Allen Photo.
Microbes are essential to the functioning of the Earth: they produce breathable air, regulate biogeochemical cycles, and are the origins of life on this planet. Blue Dreams aims to offer a unique and thought-provoking perspective on the interconnectedness and sublimity of the natural world.
Blue Dreams, 2023, digital video stills.
Blue Dreams evolved from a year-long collaboration between its five contributors—Rika Anderson, Samantha (Mandy) Joye, Tom Skalak, Shayn Peirce-Cottler, and Rebecca Rutstein—through a grant from the National Academies Keck Futures Initiative’s Ocean Memory Project. Anderson, an environmental microbiologist at Carleton College, advised on marine microbial adaptation and resilience, microbial gene-sharing networks, and the implications for exoplanet science and astrobiology. Joye, a marine biogeochemist at the University of Georgia and explorer of diverse deep-sea environments, provided insight into the biogeochemistry of vent and seep systems, and the interplay of microbial networks with large-scale ecological processes. Skalak, a bioengineer, provided overall conceptual vision and insight into methods for abstracting the data into system models, including agent-based simulations that could provoke visualization of swarm and collective behaviors. Peirce-Cottler, professor of biomedical engineering at the University of Virginia, created agent-based models of deep-sea microbial growth patterns generated from patterns of original Rutstein paintings. And multidisciplinary artist Rutstein researched, synthesized, abstracted, and layered imagery, animation, video, and sound to create Blue Dreams.
This exhibition ran through September 15, 2023, at the National Academy of Sciences building in Washington, DC.
REBECCA RUTSTEIN, Artist at Sea Series, 2016–2021, acrylic paintings on canvas, 18 x 18 inches each. Rutstein created these paintings as an artist in residence during several expeditions at sea, including aboard the R/V Falkor sailing from Vietnam to Guam, the R/V Atlantis in the Guaymas Basin, and the R/V Rachel Carson in the Salish Sea. On each voyage, she set up a makeshift art studio and collaborated with scientists, working with satellite, multibeam sonar mapping, or marine microbial data being collected. Separate from the Blue Dreams exhibition, the National Academy of Sciences has acquired these 12 paintings for its permanent art collection.
ABOUT THE OCEAN MEMORY PROJECT
By investigating the interconnectivity of the ocean and its inhabitants at different time scales, the Ocean Memory Project, a transdisciplinary group spanning the sciences, arts, and humanities, aims to understand how this system possesses both agency and memory, and how it records environmental changes through genetic and epigenetic processes in organisms and through dynamic processes in the ocean structure itself. The Ocean Memory Project was born out of the National Academies Keck Futures Initiative’s interdisciplinary conference, “Discovering the Deep Blue Sea,” held in 2016.
Fostering Clean Energy in Africa
In “Generating Meaningful Energy Systems Models for Africa” (Issues, Spring 2023), Michael O. Dioha and Rose M. Mutiso highlight an important but often neglected issue in current energy transition dialogues: the underrepresentation of African expertise and data in the analyses that inform energy policies on the continent. While the inherent inequality that marks the knowledge development process is concerning, it is the implication that current energy transition strategies are likely out of touch with the on-the-ground realities of the African continent that poses the greatest risk to achieving global climate goals.
As the authors note, the energy systems models that currently inform policy actions tend to focus primarily on decarbonization and emissions reductions. In Africa, however, the challenge at hand is far more complex than this. The continent has the lowest rates of access to modern energy in the world, lags behind other regions on several development indicators—health, education, infrastructure, water, and sanitation, among others—and is one of the most vulnerable regions to the impacts of climate change, despite its historically low emissions. Any energy transition strategy in Africa that fails to acknowledge and address this complex set of challenges in an integrated manner is bound to miss the mark.
Africa contributes the least to climate change because it is poor. Unlike in developed economies, agriculture and land use change, rather than the energy sector, account for the lion’s share of Africa’s emissions. This is because the continent is still predominantly agrarian and relies heavily on the inefficient combustion of biomass for cooking. Modernizing Africa’s energy systems and improving agricultural practices can result in dual climate and development benefits. But this will require significant investments, making economic development a critical lever for achieving climate objectives.
Most African countries are still actively building out their energy infrastructure. This means countries on the continent have an opportunity to develop energy systems that can provide Africans with affordable, abundant, and reliable energy while tapping into the vast range of innovative technologies available today, to minimize the climate impacts of energy use. Developing modern and sophisticated electric grids, investing in innovative zero-carbon solutions, and building the human resources needed to manage cutting-edge, climate-friendly energy systems are not cheap endeavors.
Today, the impact of climate change is being felt across Africa; extreme weather events, droughts, famines, and increasing disease burdens are straining the capacity of African governments to respond to these vulnerabilities. Persistent poverty on the continent will only force countries to make existential choices between meeting basic development needs and investing in a climate-friendly future. Still, given the scale of investments needed to build a global clean energy economy, Africa should not continue to depend on handouts from richer countries for the continent’s energy transformation. Building African wealth is our best bet.
By embracing the uniqueness of the African context, we can begin to shift the center of gravity of energy transition dialogues from the narrow focus on replacing dirty fuels with cleaner ones to a comprehensive approach that enables access to abundant, affordable, reliable, and modern energy, promotes economic development across sectors, and builds the resilience of Africans to respond to the impacts of climate change.
Lily Odarno
Director, Energy and Climate Innovation–Africa
Clean Air Task Force
Regulating Space Debris
This wicked problem needs US leadership
Marilyn Harbert and Asha Balakrishnan’s call to action should be generalized to a broader international audience. Space debris presents a particularly wicked international problem; the millions of pieces of orbital detritus that circle Earth do not discriminate based on a satellite’s national origin, endangering all satellites and the space-facilitated services that support modern terrestrial society.
The United States faces a daunting task in untangling its regulatory regime to effectively protect the space environment from the creation of more debris, but this is just one step toward a comprehensive solution to a global problem. The gravity of the US position is compounded by the risks of doing too little or too much. Emerging spacefaring nations often look to the United States’ example, seeking tacit guidance from the technical and political leader in space. Failing to capitalize on this leadership position would both reduce US standing and foster institutional inertia around the globe. Overly stringent regulations may incentivize nations to provide a comparatively relaxed regulatory environment to attract industry.
Space infrequently presents second-mover advantages, but nations with nascent space governance structures may benefit from being fast followers. In this rare situation, those that do not have established space-relevant bureaucracies can adopt best practices without needing to settle regulatory turf battles. The absence of domestic industry should not reduce the urgency of this issue; establishing a thoughtful regulatory practice is a strong signal that a nation is ready to accelerate domestic industrial growth. Furthermore, well-aligned regulation among nations is perhaps the only way to adequately address the growing risks to the space systems that support our everyday life on Earth.
Benjamin Silverstein
Research Analyst
Carnegie Endowment for International Peace
Let the White House authorize new space activities
Marilyn Harbert and Asha Balakrishnan are to be commended for their timely article about a little-understood area of commercial space regulation. While orbital debris is a subject of increasing public attention, getting an appropriate regulatory regime in place is vital for all space activities. The US government faces at least two key regulatory challenges: ensuring accountability while allowing for innovation, and ensuring efficiency while accommodating multistakeholder interests (e.g., security, commerce, diplomacy, science).
The United States has regulatory regimes for commercial launch, remote sensing, and communications satellites. It does not have clear regimes for innovative activities that lack government precedents related to, for example, on-orbit satellite servicing, active debris removal, and in-space resource utilization. To fill this gap, the Obama, Trump, and Biden administrations have each sought to create “mission authorization” legislation to provide “authorization and on-going supervision” for US commercial space activities as required by international law. Enacting such legislation needs to happen as soon as possible to promote a predictable environment for financing and insuring commercial space ventures.
The Federal Communications Commission (FCC) has sought to fill current regulatory lacunae, proposing regulations not only for orbital debris but also for on-orbit satellite servicing and assembly. Such regulations may be only thinly related to existing FCC authorities and clearly go beyond the powers explicitly authorized by Congress. Placing “mission authorization” within the FCC would be a bad idea for a number of reasons, but primarily because, as an independent regulatory agency, it does not report to the president. The FCC could make decisions that undercut national security, foreign policy, or public safety interests and leave the president without legal recourse. Such instances occurred during the Trump administration with regard to the protection of GPS signals and meteorological aids, and more recently during the Biden administration with regard to air navigation aids.
Multiple agencies can and do have regulatory responsibility for existing commercial space activities. For new activities, the Department of Commerce is the logical home for regulatory oversight. Since Commerce reports to the president, the White House would retain authority for resolving potential conflicts among the diverse national interests affected by the growing commercial space economy.
Scott Pace
Director, Space Policy Institute
Elliott School of International Affairs
George Washington University
Space regulation needs a new home
The issue of space debris is complex and reminds me of the issue of climate change. Is it “space debris denial” by national governments and private companies if they don’t see the risks as pressing right now? Or is it more about taking a precautionary approach, recognizing the range of challenges that need to be dealt with as the United States prepares for a fully commercialized and multistakeholder space future? To help thread that needle, the Office of Space Commerce within the US Department of Commerce has proposed an “institutional bypass” that would address regulatory gaps by acting as a centralized one-stop shop for commercialization issues expected to arise in the New Space era. As Mariana Mota Prado, a scholar of international law and development, has stated, an institutional bypass would not try to modify, change, or reform existing institutions. Instead, it would create a new pathway in which efficiency and functionality would be the norm.
The current situation is not clear cut. Some researchers assessing US space policy and law from the private sector’s perspective are themselves uncertain whether space sustainability and orbital debris generate more specific policy prescriptions than other areas because the topic is generally popular in the field or because the issue is of particular concern to companies. Further, research on the disclosure practices of US and foreign corporations that participate in the investment-oriented Procure Space Exchange Traded Fund reveals that there is far more talk than action from many space companies with respect to space sustainability, and it is not apparent that the problem stems from confusion about authorization.
Marilyn Harbert and Asha Balakrishnan conclude that interagency efforts are active, which raises the question of the nature of any problems with interagency processes, particularly as they deal with who should have the authority to regulate and authorize space activities. Apparently, the current rules require firms to navigate a complex web of federal agencies, and this “element of the unknown” leads to hesitancy among potential investors. However, interviews with members of industry reveal that most participants could not think of a specific example or incident when their ability to do business was affected by interagency dynamics. Still, it is worth noting that the specific challenges they reported emerged largely from their inability to voice concerns directly to regulatory officials so that their companies could efficiently adjust course, rather than from actual problems with the overall interagency process.
The bottom line, then, is that if the budget exists and projections materialize, it seems to make sense for the Office of Space Commerce to become the central home for these new regulatory issues. However, further research should be undertaken to investigate the findings of a thoughtful analysis described in “The Private Sector’s Assessment of US Space Policy and Law,” carried out by the Center for Strategic and International Studies. Without minimizing the space debris issue, perhaps equally important may be other environmental concerns stemming from space activities that have immediate impacts here on Earth. Launch contamination anyone?
Timiebi Aganaba
Assistant Professor, School for the Future of Innovation in Society
Arizona State University
Founder, Space Governance Lab
Competent nonexperts make the best regulators
I was pleasantly surprised when I read the essay by Marilyn Harbert and Asha Balakrishnan. It correctly and coherently identifies the chaos that is the state of orbital debris regulation in the United States and the world.
My only lament is that the discussion of how the Federal Communications Commission (FCC) aggressively, and I think correctly, reduced the post-mission disposal threshold from 25 years to five years could have been extended, since it holds a potentially unifying lesson that was almost unearthed. That lesson is that regulation is a discipline in and of itself, and possibly that the subject-matter experts in a certain domain should not be the ones regulating that domain. Rather, expert regulators who know just enough about a domain to make cogent decisions, but who are not flummoxed by the cognitive biases that come from studying the topic for their entire careers, are able to move aggressively away from the status quo.
In sum, then, this good article was one inference away from being great: some of even the best regulators (i.e., FCC) may not be world-class experts in the domain they regulate, but they just might be great at their job!
Darren McKnight
Senior Technical Fellow
LeoLabs
How to Clean Up Our Information Environments
I was delighted to read “Misunderstanding Misinformation” (Issues, Spring 2023), in which Claire Wardle advocates moving from “atoms” of misinformation to narratives and laments our current, siloed empirical analysis. I couldn’t agree more. But I would also like to take Wardle’s thoughts further in two ways, as a call to action and a warning.
First, focusing on narratives requires understanding how they circulate in certain contexts, which some groups, including the US Surgeon General, refer to as “information environments.” My background is in medicine, where trying to define a toxin or poison is as difficult as defining misinformation: it all depends on context. Even water can be toxic—ask any marathon runner hospitalized after drinking too much of it. A public health approach isn’t to focus on either atoms or narratives, but on the context that imbues danger. Lead and chromium-6 (think Erin Brockovich) are usually of little concern unless ingested by the young. Eliminating either is a fool’s errand, so we mitigate their potential harms: we test for them, bottled water companies remove them, and we regulate them to minimize the riskiest exposures, such as by restricting lead paint and gasoline.
Taking a similar approach to misinformation would be recognizing that any information can be toxic in the wrong context. Environmental protection requires monitoring, removal of toxins when necessary, and regulation to prevent the most egregious harms. While the English physician John Snow cannot step in to fix things—as he did by removing a public water pump handle in his famous, albeit authoritarian, solution to London’s 1854 cholera epidemic—we must recognize that cleaning up our information environments is not “if” but “how.” Because keeping tabs on revenge porn, hate speech, and pedophilia makes sense, but where and how to draw the line in borderline cases, such as the narratives Wardle speaks of, is less clear.
And that leads to the second point echoing Wardle’s suggestion that we need more research with fewer silos to better understand in what conditions, contexts, and communities information becomes most toxic. But I’m concerned that her call for more holistic research will go unheeded because environments are challenging to study. For example, over 50 years ago, the social scientist William McGuire developed a “matrix” of communication variables related to persuasion, listing on one side all the ways a message could be constructed and delivered, and on the other the spectrum of potential impacts, from catching someone’s attention to changing their behavior. Research was needed in every cell to fully understand persuasion. Today, most research fits into only a small part of that matrix: assessing the effect of small changes in message characteristics on beliefs.
Such studies are quick and cheap to do, test important theories, and are publishable. To be clear, the problem is not the research, but the sheer amount of it relative to the deep, structural research that is sorely needed. Nick Chater and George Loewenstein lament this disparity in a recent article in Behavioral and Brain Sciences, observing how individually framed research deflected attention and support away from systemic policy changes. Writing in The BMJ, published by the British Medical Association, Nason Maani and colleagues go further, arguing that discourse today is disproportionately “polluted” as discussion of individual solutions crowds out discussion of more needed—but more difficult to study—structural solutions.
In short, why have we made so little progress? Because social scientists have been and continue to be incentivized to study certain types of questions, leaving McGuire’s matrix embarrassingly unfilled. A 2011 Knight Foundation report argued that we should be assessing communities’ information needs, a perspective that is, I would argue, consistent with understanding their information environments. A 2017 National Academies consensus study setting a research agenda for the science of science communication also called for a systems-based approach to understand the interrelationships among the key elements that make up science communication: communicators, messages, audiences, and channels, to name a few. Yet years—and a global pandemic—later, individually framed research on misinformation dominates the discourse.
These incentives are myriad and deeply entrenched in academia and funding agencies. In his 2016 New Atlantis essay, “Saving Science,” Daniel Sarewitz characterized this as a post-World War II problem with “institutional cultures organized and incentivized around pursuing more knowledge, not solving problems.” The result, he argued, is that “science isn’t self-correcting, it’s self-destructing.” This raises a scary prospect: solving the problem of misinformation may require fixing, or at least circumventing, a sclerotic system of science that continues to reproduce the same methodological biases decade after decade. Otherwise, I cringe with macabre anticipation at reading the future recommendations on improving science communication after the 2031 H5N1 influenza pandemic. I imagine they will add to the chorus of unheeded calls for more holistic, context-informed studies.
David Scales
Assistant Professor of Medicine, Weill Cornell Medicine
Chief Medical Officer, Critica
My “ah-ha” disinformation moment came in July 2014, in the hours and days following the crash of Malaysia Airlines Flight 17 in eastern Ukraine. I was a senior official in the federal agency that oversees US international broadcasting, including Voice of America and Radio Free Europe/Radio Liberty, and was closely monitoring media reports of the tragedy. Credible reporting quickly emerged that the airliner had been shot down by Russian-controlled forces. Yet within hours the global information space was muddled by at least a dozen alternate narratives, including that the Ukrainian military, using either a jet fighter or a surface-to-air missile, had downed the aircraft; that the bodies recovered at the crash scene had actually been corpses loaded onto an otherwise empty passenger jet that was remotely piloted and shot down by Western forces who then blamed Russia; and even that the intended Ukrainian target had been the aircraft of Russian President Vladimir Putin, who was returning at that time from a foreign trip. In this messy, multi-language information scrum, Russia’s likely culpability was just one more version of what might have happened.
It soon became clear that, at a quickening rate, false and misleading information online was seeping into the marketplace of ideas, eroding public discourse around the world and carving fissures throughout societies. The mantra of US international broadcasting throughout the Cold War—that, over time, the truth would win in the competition of ideas—was under unprecedented pressure as the disinformation tsunami gathered momentum. We knew we had a problem. But what was to be done?
Enter Claire Wardle and other clear-thinking academics and analysts who initially helped to clarify the parameters of “information disorder,” in part by dismissing the woefully inadequate term fake news and introducing the more precise terms misinformation, disinformation, and malinformation.
At a quickening rate, false and misleading information online was seeping into the marketplace of ideas, eroding public discourse around the world and carving fissures throughout societies.
In her essay, Wardle again comes to the rescue, with fresh analysis and recommendations that include moving beyond even the improved trifecta of information terminology, which she calls “an overly simple, tidy framework I no longer find useful.” “We researchers,” she chides, “[have] become so obsessed with labeling the dots that we can’t see the larger pattern they show.” Instead, researchers should “focus on narratives and why people share what they do.”
What’s needed, Wardle argues, is a better understanding of the “social contexts of this information, how it fits into narratives and identities, and its short-term impacts and long-term harms.”
This is music to the ears of this media professional. Responsible, professional journalists already have responded to the disinformation challenge through actions such as quick-response fact-checking operations and innovative investigations to expose disinformation sources. I believe that they can do more, including the sort of investigative digging and connecting of players and actions across the disinformation space that Wardle calls for. Journalists, for instance, can act on her recommendation to enhance genuine, two-way communication between publics and experts and officials, and they can support community-led resilience and take part in targeted “cradle to grave” educational campaigns to “help people learn to navigate polluted information systems.”
Why pursue this course? As a career journalist and therefore a fervent defender of the First Amendment, I am wary of any moves, no matter how well-intentioned, to restrict speech. Top-down solutions such as legislating speech are but a short step down the slippery slope toward censorship. The ultimate defense against disinformation, therefore, must come from us, acting as individuals and together—in short, from a well-educated, informed, engaged public. To paraphrase Smokey Bear: Only YOU can prevent disinformation. And the better we understand and implement the prescriptions that Wardle and others (including the authors of the other insightful offerings in Issues under the “Navigating a Polluted Information Ecosystem” rubric) continue to pursue, the better our prospects for limiting—and perhaps even preventing—wildfires of disinformation.
Jeffrey Trimble
International Journalist, Editor, and Media Manager
Former Lecturer in Communication and Political Science at Ohio State University
Training More Biosafety Officers
The United States has long claimed that there is a need to focus on the safety and security of biological research and engineering, but we are only beginning to see that call turn into high-level action on funding and support for biosafety and biosecurity governance. The CHIPS and Science Act, for example, calls for the White House Office of Science and Technology Policy to support “research and other activities related to the safety and security implications of engineering biology,” and for the office’s interagency committee to develop and update every five years a strategic plan for “applied biorisk management.” The committee is further charged with evaluating “existing biosecurity governance policies, guidance, and directives for the purposes of creating an adaptable, evidence-based framework to respond to emerging biosecurity challenges created by advances in engineering biology.”
To carry out this mouthful of assignments, more people need to be trained in biosafety and biosecurity. But what does good training look like? Moreover, what forms of knowledge should be incorporated into an adaptable evidence-based framework?
In “The Making of a Biosafety Officer” (Issues, Spring 2023), David Gillum shows the power and importance of tacit knowledge—“picked up here and there, both situationally and systemically”—in the practice of biosafety governance, while at the same time stressing the need to formalize biosafety education and training. This is due, in part, to the lack of places where people can receive formal training in biosafety. But it is also a recognition of, as Gillum puts it, the type of knowledge biosafety needs—knowledge “at the junction between rules, human behavior, facilities, and microbes.”
The present lack of formalized biosafety education and training presents an opportunity to re-create what it means to be a biosafety officer as well as to redefine what biosafety and biosecurity are within a broader research infrastructure and societal context. This opening, in turn, should be pursued in tandem with agenda-setting for research on the social aspects of biosafety and biosecurity. It is increasingly unrealistic to base a biosafety system primarily on lists of known concerns and standardized practices for laboratory management. Instead, adaptive frameworks are needed that are responsive to the role that tacit knowledge plays in ensuring biosafety practices and are aligned with current advances in bioengineering and the organizational and social dynamics within which it is done.
The present lack of formalized biosafety education and training presents an opportunity to re-create what it means to be a biosafety officer as well as to redefine what biosafety and biosecurity are within a broader research infrastructure and societal context.
Proficiency in biosafety and biosecurity today means attending to the formal requirements of policies and regulations while also generating new knowledge about the gaps in those requirements and developing a keen sense of the workings of a particular institution. The challenge for both training and agenda-setting is how to endorse, disseminate, and assimilate the tacit knowledge generated by biosafety officers’ real-life experiences. For students and policymakers alike, a textbook introduction to biosafety’s methodological standards, fundamental concepts, and specific items of concern will surely come about as biosafety research becomes more codified. But even as some aspects of tacit knowledge become more explicit, routinized, and standardized, the emergence of new and ever-valuable tacit knowledge will always remain a key part of biosafety expertise and experience.
Gillum’s vivid examples of real-life experiences involving anthrax exposures, the organizational peculiarities of information technology infrastructures, and the rollout of select agent regulations demonstrate that, at a basic level, biosafety officers and those with whom they work need to be attuned to adaptability, uncertainty, and contingency in specific situations. Cultivating this required mode of attunement among future biosafety professionals means embracing the fact that biosafety, like science itself, is a constantly evolving social practice, embedded within particular institutional and political frameworks. As such, formal biosafety educational programs must not reduce what counts as “biosafety basics” to technical know-how alone, but ought to prioritize situational awareness and adaptability as part of their pedagogy. Biosafety and biosecurity research such as that envisioned in the CHIPS and Science Act will advance the training and work of the next generation of biosafety professionals only if it recognizes this key facet of biosafety.
Melissa Salm
Biosecurity Postdoctoral Fellow in the Center for International Security & Cooperation
Stanford University
Sam Weiss Evans
Senior Research Fellow in the Program on Science, Technology, and Society
Harvard Kennedy School
David Gillum illustrates the importance of codifying and transferring knowledge that biosafety professionals learn on the job. It is certainly true that not every biosafety incident can be anticipated, and that biosafety professionals must be prepared to draw on their knowledge, experience, and professional judgment to handle situations as they arise. But it is also true that as empirical evidence of laboratory hazards and their appropriate mitigations accumulates, means should be developed by which this evidence is analyzed, aggregated, and shared.
There will always be lessons that can only be learned the hard way—but they shouldn’t be learned the hard way more than once. There is a strong argument for codifying and institutionalizing these biosafety “lessons learned” through means such as formalized training or certification. Not only will that improve the practice of biosafety, but it will also help convince researchers—a population particularly sensitive to the need for empirical evidence and logical reasoning as the basis for action—that the concerns raised by biosafety professionals need to be taken seriously.
There is a strong argument for codifying and institutionalizing these biosafety “lessons learned” through means such as formalized training or certification.
This challenge would be significant enough if the only potential hazards from research in the life sciences flowed from accidents—human error or system malfunction—or from incomplete understanding of the consequences of research activities. But the problem is worse than that. Biosecurity, as contrasted with biosafety, deals with threats posed by those who would deliberately apply methods, materials, or knowledge from life science research for harm. Unfortunately, when it comes to those who might pose deliberate biological threats, we cannot exclude researchers or even biosafety professionals themselves. As a result, the case for codifying and sharing potential biosecurity failures and vulnerabilities is much more fraught than it is for biosafety: the audience might include the very individuals who are the source of the problem—people who might utilize the scenarios that are being shared, or who might even modify their plans once they learn how others seek to thwart them. Rather than setting up a registry or database by which lessons learned can be compiled and shared, one confronts the paradox of creating the Journal of Results Too Dangerous to Publish. Dealing with such so-called information hazards is one factor differentiating biosafety from biosecurity. Often, however, we call upon the same experts to deal with both.
Personal relationships do not immunize against such insider threats, as we learn every time the capture of a spy prompts expressions of shock from coworkers or friends who could not imagine that the person they knew was secretly living a vastly different life. However, informal networks of trust and personal relationships are likely a better basis on which to share sensitive biosecurity information than relying on mutual membership in the same profession or professional society. So while there is little downside to learning how to better institutionalize, codify, and share the tacit knowledge and experience with biosafety that Gillum describes so well, it will always be more difficult to do so in a biosecurity context.
Gerald L. Epstein
Contributing Scholar
Johns Hopkins Center for Health Security
Enhancing Trust in Science
In “Enhancing Trust in Science and Democracy in an Age of Misinformation” (Issues, Spring 2023), Marcia McNutt and Michael M. Crow encourage the scientific community to “embrace its vital role in producing and disseminating knowledge in democratic societies.” We fully agree with this recommendation. To maximize success in this endeavor, we believe that the public dialogue on trust in science must become less coarse and better identify which elements of science can be trusted, whether that means science as a process, particular studies, specific actors or entities, or finer distinctions still.
At the foundation of trust in science is trust in the scientific method, without which no other trust can be merited, warranted, or justified. The scientific community must strive to ensure that the scientific process is understood and accepted before we can hope to merit trust at more refined levels. Although trust in the premise that following the scientific method will lead to logical and evidence-based conclusions is essential, blanket trust in any component of the scientific method would be counterproductive. Instead, trust in science at all levels should be justified through rigor, reproducibility, robustness, and transparency. Scientific integrity is an essential precursor to trust.
As examples, at the study level, trust might be partially warranted through documentation of careful study execution, valid measurement, and sound experimental design. At the journal level, trust might be partially justified by enforcing preregistration or data and code sharing. In the case of large scientific or regulatory bodies, these institutions must merit trust by defining and communicating both the evidence on which they base their recommendations and the standards of evidence they are using.
Trust in science at all levels should be justified through rigor, reproducibility, robustness, and transparency. Scientific integrity is an essential precursor to trust.
Recognizing that trust can be merited at one point of the scientific process (e.g., a study and its findings have been reported accurately) without being merited at another (e.g., the findings represent the issue in question) is essential to understanding how to develop specific recommendations for conveying trustworthiness at each point. Therefore, efforts to improve trust in science should include the development of specific and actionable advice for increasing trust in science as a process of learning; individual scientific experiments; certain individual scientists; large, organized institutions of science; the scientific community as a whole; particular findings and interpretations; and scientific reporting.
However, as McNutt and Crow note, “It may be unrealistic to expect that scientists … probe the mysteries of, say, how nano particles behave, as well as communicate what their research means.” Hence, a major challenge facing the scientific community is developing detailed methods to help scientists better communicate with and warrant the trust of the general public. The current dialogue surrounding trust must therefore identify both specific trust points and clear actions that can be taken at each point to indicate and possibly increase the extent to which trust is merited.
We believe the scientific community will rise to meet this challenge, offering techniques that signal the degree of credibility merited by key elements and steps in the scientific process and earning the public trust.
David Allison
Dean
Distinguished Professor
Provost Professor
Indiana University School of Public Health, Bloomington
Raul Cruz-Cano
Associate Professor of Biostatistics, Department of Epidemiology and Biostatistics
Indiana University School of Public Health, Bloomington
In times of great crisis, a country needs inspiring leaders and courageous ideas. Marcia McNutt and Michael M. Crow offer examples of both. Recognizing the urgency of our moment, they propose several innovative strategies for increasing access to research-grade knowledge.
Their attention to increasing the effectiveness of science communication is important. While efforts to improve science communication can strengthen trust in science, positive outcomes are not assured. A challenge comes from the fact that many people and organizations see science communication as a way to draw more attention to their people and ideas. While good can come from the pursuit of attention, it can also amplify the challenges posed by misinformation and disinformation. These inadvertent outcomes occur when the pursuit of attention comes at the expense of the characteristics that make science credible in the first place.
Consider, for example, what major media companies know: sensationalism draws viewers and readers. For them, sensationalism works best when a presentation builds from a phenomenon that people recognize as true and then exaggerates it to fuel interest in “what happens next” (e.g., the plot of most superhero movies or the framework for many cable news programs).
A better way forward is to see the main goal of science communication as a form of service that increases accurate understanding. Adopting this orientation means that a communicator’s primary motivation is something other than gaining attention, influence, or prestige.
In science, several communication practices are akin to sensationalism. Science communicators who suppress null results and engage in p-hacking (running or reworking statistical analyses until an apparently significant relationship emerges) can gain attention by increasing the probability of getting published in a scientific journal. Similarly, science communicators who exaggerate the generalizability of a finding or suppress information about methodological limitations may receive greater media coverage. Practices such as these can generate attention while producing misleading outcomes that reduce the public’s understanding of science.
A better way forward is to see the main goal of science communication as a form of service that increases accurate understanding. Adopting this orientation means that a communicator’s primary motivation is something other than gaining attention, influence, or prestige. Instead, the communicator’s goal is to treat the audience with so much reverence and respect that she or he will do everything possible to produce the clearest possible understanding of the topic.
Of course, many scientific topics are complex. A service-oriented approach to communication requires taking the time to learn about how people respond to different presentations of a phenomenon—and measuring which presentations produce the most accurate understandings. Fortunately, the emerging field of the science of science communication makes this type of activity increasingly easy to conduct.
Among the many brilliant elements of the McNutt-Crow essay are the ways their respective organizations have embraced service-oriented ideas. Each has broken with long-standing traditions about how science is communicated. Arizona State, through its revolutionary transformation into a highly accessible national university, and the National Academies of Sciences, Engineering, and Medicine, through their innovations in responsibly communicating science, offer exemplars of how trust in science can be built. These inspiring leaders and their courageous ideas recognize the urgency of our moment and offer strong frameworks from which to build.
Arthur Lupia
Gerald R. Ford Distinguished University Professor
Associate Vice President for Research, Large Scale Strategies
Executive Director, Bold Challenges
University of Michigan
Some years ago, I conducted a content analysis of five of the leading American history textbooks sold in the United States. The premise of the study was that most young people get more information about the history of science and pathbreaking discoveries in their history courses than in the typical secondary school course in chemistry or physics. I wanted to compare the extent and depth of coverage of great science compared with the coverage of politics, historical events, and the arts, among other topics.
The results were somewhat surprising. First, there was almost no coverage of science at all in these texts. Second, the only scientific topic that received more than cursory attention was the development of the atomic bomb. Third, in comparative terms, the singer Madonna received more coverage in these texts than did Watson and Crick’s discovery of the structure of DNA. In short, science was all but invisible.
When I asked authors why they did not include more about science, their answers were straightforward. As one put it: “Science doesn’t sell, according to my publisher,” and “Frankly, I don’t know enough about science myself to write with confidence about it.”
This brings me to Marcia McNutt and Michael M. Crow’s important essay on producing greater public trust in science as well as some higher level of scientific and technological literacy. Trust is a hard thing to regain once it is lost. McNutt and Crow suggest significant ways to improve public trust in science. I would expand a bit further on their playbook.
Probably 30% of the American population know little to nothing about science and have no desire to be educated about it and the discoveries that have changed their lives. They are lost. But a majority are believers in science and technology. As universities become multidisciplinary institutions, increasingly without borders, we must harness the abilities and knowledge that exist within these houses of intellect—and expertise beyond academic walls—to make the case for science as the greatest driver of American productivity and improved health care that we have.
When you survey people about science, you are apt to get more negative responses to very general questions than if you ask them to assess specific products and discoveries by scientists. The group Research!America consistently finds that the vast majority of US citizens approve of spending more federal money on scientific research. They applaud the development of the laser, of the gene-editing tool CRISPR, of computer chips, and of the vaccines derived from RNA research.
A few scientists have the gift for translating their work in ways that lead to accurate and deeper public understanding of their scientific research and discoveries. But the vast majority do not. That can’t be their job.
As McNutt and Crow suggest, it is now time to create a truly multidisciplinary effort to transfer knowledge from the laboratory to the public. A few scientists have the gift for translating their work in ways that lead to accurate and deeper public understanding of their scientific research and discoveries. But the vast majority do not. That can’t be their job. Here is where we need the talent and expertise of humanists, historians, ethicists, artists, and leading technology experts outside of the academy, as well as the producers of stories, films, and devices that ought to be used for learning. New academic foci on science and technology, as part of this movement of knowledge toward interdisciplinarity, ought to be fostered inside our universities.
The congressional hearings centered on the events of January 6 offer an excellent example of collaboration between legislators and Hollywood producers. The product was a coherent story that could be easily understood. We should teach scientists to communicate well with the communicators. They must also help to make complex ideas both accurate and understandable to the public. There are many scientists who can do this—and a few who can tell their own stories. This suggests the importance of training excellent science reporters and interlocutors who can evaluate scientific results and translate those discoveries into interesting reading for the attentive public. These science writers need additional training in assessing the quality of research so that they don’t publish stories based on weak science that leads to misinformation—such as the tiny, flawed studies, presented to the public as fact, that led to false beliefs about the causes of autism or about the link between dietary cholesterol and heart disease.
We should be looking especially toward educating the young. The future of science and technology lies with their enthusiasms and beliefs. That enthusiasm for learning about the health of women and minority groups, about global climate change, and about finding cures and preventions for disease rests ultimately on their knowledge and action. The total-immersion learning at Arizona State University is an excellent prototype of what is possible. Now those educational, total-immersion models—so easily understood by the young—should be developed and used in all the nation’s secondary schools. We can bypass the politically influenced textbook industry by working directly with secondary schools and even more directly with young people, who can use new technology better than their elders.
We have witnessed a growth in autobiographies by scientists, especially by women and members of minority groups. More scientists should tell their stories to the public. We also need gifted authors, such as Walter Isaacson, or before him Walter Sullivan or Stephen Jay Gould, telling the stories of extraordinary scientists and their discoveries. Finally, we should be more willing to advertise ourselves. We have an important story to tell and work to be done. We should unabashedly tell those stories through organized efforts by the National Academies (such as their biographies of women scientists), research universities, and very well-trained expositors of science. Through these mechanisms we can build much improved public understanding of science and technology and the trust that such understanding will bring.
Jonathan R. Cole
John Mitchell Mason Professor of the University
Provost and Dean of Faculties (1989–2003)
Columbia University
CHIPS and Science Opens a Door for Society
In August 2022, President Biden signed the CHIPS and Science Act into law, a bill my colleagues and I passed to ensure US leadership in semiconductor development and innovation across a multitude of sectors. The law secured historic authorizations for American manufacturing, for support of our workforce in science, technology, engineering, and mathematics (STEM), and for bolstering the nation’s research competitiveness in emerging technologies. A year later, Congress must find the political will to fund the science component of the act, while ensuring these investments are socially and ethically responsible for all Americans.
In recent decades, emerging technologies have been quickly perfected and have rapidly proliferated, transforming our economy and society. Powerful forces are now overwhelmingly at our fingertips, either through mass production or the digital superhighway brought on by fiber optics. What we know today about various materials and energy uses differs dramatically from when we were first harnessing the capabilities of plastics, tool and die making, and the combustion engine. Are we capable of learning from a past when we could not see as clearly into the future as we can today? How can we create a structure to adjust or more ethically adapt to changing environments and weigh social standards for implementing new technology?
Today, we see that many emerging technologies will continue to have profound impacts on the lives of American citizens. Technologies such as artificial intelligence and synthetic biology hold tremendous promise, but they also carry tremendous risks. AI and quantum cryptography, for example, will drastically influence the privacy of the average internet user. These are known risks that we can take steps to mitigate, including by developing legislation such as a bill I authored, the Privacy Enhancing Technology Research Act. There is also a universe of unknown risks. But even in those cases we have tools and expertise to think through what those risks might be and how to assign value to them.
The ethical and societal considerations in CHIPS and Science were designed to empower scientists and engineers to consider the ethical, social, safety, and security implications of their research throughout its lifecycle, potentially mitigating any harms before they happen. And where researchers lack the tools or knowledge to consider these risks on their own, they might turn to professional ethicists or consensus guidelines within their disciplines for help.
The intent was not only to ensure representation in fields developing and applying the global shaping technologies of the future, but also to put value on the notion that American science can be more culturally just and equitable.
Incorporating these considerations into our federal agencies’ research design and review processes is consistent with the American approach of scientific self-governance. The enacted scientific legislation plays to the strengths of our policymaking in that we entrust researchers to use their intellectual autonomy to create technological solutions for the potential ethical and societal challenges of their work and give them the freedom to pursue new research directions altogether.
In prioritizing STEM diversity in the law, our intent was not only to ensure representation in fields developing and applying the global shaping technologies of the future, but also to put value on the notion that American science can be more culturally just and equitable. This occurs when diverse voices are in the research lab and at the commercialization table.
Seeing the CHIPS and Science Act fully funded remains one of my top priorities. New and emerging technologies, such as AI, quantum computing, and engineering biology, have a vast potential to re-shore American manufacturing, create sustainable supply chains, and bring powerful benefits to all Americans everywhere. However, these societal and ethical benefits cannot be realized if we are not also intentional in considering the societal context for these investments. If we do not lead with our values, other countries whose values we may not share will step in to fill the void. It is time for us to revitalize federal support for all kinds of research and development—including social and ethical initiatives—that have long made the United States a beacon of excellence in science and innovation.
Rep. Haley Stevens
Michigan, 11th District
Ranking Member of the Committee on Science, Space, and Technology’s Subcommittee on Research and Technology
As David H. Guston intimates, the CHIPS and Science Act presents a new opportunity for the National Science Foundation to take another important step in fostering the social aspects of science. The act can also champion existing and emerging efforts focused on understanding the way social science can deeply inform and shape the entire scientific enterprise.
Contemporary issues demand a substantive increase in the support for social science. This research is critically necessary to understand the social impacts of our changing environment and technological systems, and how to design and develop solutions and pathways that equitably center humanity.
By describing the historical arc of the evolving place of social science at NSF, Guston illustrates how the unbalanced, syncopated, and often arrhythmic dance between NSF and social science did not necessarily benefit either. I am optimistic about what specific sections of the CHIPS and Science Act directly require, tacitly imply, and conceptually allude to. The history of NSF is replete with examples of scientific research that fundamentally altered the way humans interact, communicate, and live in a shared world. Contemporary issues—in such diverse areas as rising climate variability and the place of artificial intelligence in our everyday interactions—demand a substantive increase in the support for social science. This research is critically necessary to understand the social impacts of our changing environment and technological systems, and how to design and develop solutions and pathways that equitably center humanity. As the world always shows, we are on the cusp of a new moment. This new moment needs to be driven by social science and social scientists in concert with natural and physical scientists. I use the term concert, and its reference to artistic and sonic creative collaborations, deliberately to evoke a different framework of collaborative and interdisciplinary effort. Part of the solution is to always remember that science is a human endeavor.
In the production of science, social scientists can often feel like sprinkles on a cupcake: not essential.
In thinking about the place of social science in the next evolution of interdisciplinary research, I believe the cupcake metaphor is instructive. As a child of the 1970s, I remember the cupcake was a birthday celebration staple. I really liked the cake part but was greatly indifferent to the frosting or sprinkles. If I had to choose, I would always select the cupcake with sprinkles for one reason: they were easy to knock off. In the production of science, social scientists can often feel like sprinkles on a cupcake: not essential. Social science is not the egg, the flour, or the sugar. Sprinkles are neither in the batter, nor do they see the oven. Sprinkles are a late addition. No matter the stylistic or aesthetic impact, they never alter the substance of the “cake” in the cupcake. Certain provisions of the CHIPS and Science Act hold the potential to chart a pathway for scientific research that makes social science a key component of the scientific batter, baking social scientific knowledge, skill, and expertise into twenty-first-century scientific “cupcakes.”
Rayvon Fouché
Professor in Communication Studies and the Medill School of Journalism
Northwestern University
Former Division Director, Social and Economic Sciences
National Science Foundation
David H. Guston expertly describes how provisions written into the ambitious CHIPS and Science Act could make ethical and societal considerations a primary factor in the National Science Foundation’s grantmaking priorities, thereby transforming science and innovation policy for generations to come.
Of particular interest, Guston makes reference to public interest technology (PIT), a growing movement of practitioners in academia, civil society, government, and the private sector to build practices to design, deploy, and govern technology to advance the public interest. Here, I extend his analysis by applying core concepts from PIT that have been articulated and operationalized by the Public Interest Technology University Network (PIT-UN), a 64-member network of universities and colleges that I oversee as director of public interest technology for New America. (Guston is a founding member of PIT-UN and has led several efforts to establish and institutionalize PIT at Arizona State University and in the academic community more broadly.)
As Guston describes, the CHIPS and Science Act “expand[s] congressional expectations of more integrated, upstream attention to ethical and societal considerations” in NSF’s process for awarding funds. This is undoubtedly a step in the right direction. However, operationalizing the concept of “ethical and societal considerations” requires that we get specific about who researchers must include in their process of articulating foreseeable risks and building partnerships to “mitigate risk and amplify societal benefit.”
Universities and other NSF-funded institutions must invest more in these kinds of community partnerships to regularly challenge and update our understanding of “the public.”
Public interest technology asserts that the needs and concerns of people most vulnerable to technological harm must be integrated into the process of designing, deploying, and governing technology. While existing methods to assess ethical and societal considerations of technology such as impact evaluations or user-centered design can be beneficial, they often fail to adequately incorporate the needs and concerns of marginalized and underserved communities that have been systematically shut out of stakeholder conversations. Without a clear understanding of how specific communities have been excluded from technology throughout US history—and a shared analysis of how those communities are continually exploited or made vulnerable to the negative impacts of technology—we run the risk of not only repeating the injustices of the past, but also embedding biases and harmful assumptions into emerging technologies. Frameworks and insights from interdisciplinary PIT scholars such as Ruha Benjamin, Cathy O’Neil, Meredith Broussard, and Afua Bruce that map relationships between technology and power structures must inform NSF’s policymaking if the funds made available through the CHIPS and Science Act are to effectively address ethical and societal considerations.
Furthermore, a robust operationalization of these considerations will require a continual push to develop and extend community partnerships in a way that expands our notion of the public. Who should be included in the definition of “the public”? Does it include under-resourced small businesses and nonprofits? People who are vulnerable to tech abuse? People living on the front lines of climate change? In advancing this broader understanding of the public, a strategic partnership with international organizations becomes essential, including cooperation with emerging research entities that focus on the ethical issues within emerging technologies and artificial intelligence, such as the Distributed Artificial Intelligence Research Institute, the Algorithmic Justice League, the Center for AI and Digital Policy, the Electronic Frontier Foundation, and the OECD AI Policy Observatory, among others.
Guston points to participatory technology assessments undertaken through the NSF-funded Center for Nanotechnology in Society at Arizona State University as an example of how to engage the public in understanding and mitigating technological risks. Universities and other NSF-funded institutions must invest more in these kinds of community partnerships to regularly challenge and update our understanding of “the public,” to ensure that technological outputs are truly reflective of the voices, perspectives, and needs of the public as a whole, not only those of policymakers, academics, philanthropists, and technology executives.
Andreen Soley
Director of Public Interest Technology at New America and the Public Interest Technology University Network
These comments draw in part from recommendations crafted by PIT-UN scholars in response to NSF’s request for information on “Developing a Roadmap for the Directorate for Technology, Innovation, and Partnerships.”
The most important word in David H. Guston’s article addressing the societal considerations of the CHIPS and Science Act occurs in the first sentence: “promised.” For scholars and practitioners of science and technology policy, the law has created genuine excitement. This is a dynamic moment, where new practices are being envisioned and new institutions are being established to link scientific research more strongly and directly with societal outcomes.
Many factors will need to converge to realize the promise that Guston describes. One set of contributors that is crucial, yet often overlooked, in this changing ecosystem of science and technology policy is science philanthropies. Science philanthropy has played a key role in the formation and evolution of the current research enterprise, and these funders are especially well-positioned to actualize the kind of use-inspired, societally oriented scholarship that Guston emphasizes. How can science philanthropy assist in achieving these goals? I see three fruitful areas of investigation.
The first is experimenting with alternative approaches to funding. Increasingly, funders from both philanthropy and government are experimenting with different ways of financing scientific research to respond rapidly to scientific and societal needs. Some foundations have explored randomizing grant awards to address the inherent biases of peer review. New institutional arrangements, called Focused Research Organizations, have been established outside of universities to undertake applied, use-inspired research aimed at solving critical challenges related to health and climate change. There is the capacity for science philanthropies to do even more. For instance, participatory grantmaking is emerging as a complementary approach to allocating funds, in which the expected community beneficiaries of a program have a direct say in which awards are made. While this approach has yet to be directly applied to science funding, such alternative decisionmaking processes offer opportunities to place societal implications front and center.
Science philanthropies, because of the wide latitude they have in designing and structuring their programs, are uniquely situated to sponsor interdisciplinary research.
The second is making connections and filling knowledge gaps across disciplines and sectors. Interdisciplinary research is notoriously difficult to fund through conventional federal grantmaking programs. Science philanthropies, because of the wide latitude they have in designing and structuring their programs, are uniquely situated to sponsor such scholarship. As an example, the Energy and Environment program that I oversee at the Alfred P. Sloan Foundation is focused on advancing interdisciplinary social science and bringing together different perspectives and methodologies to ask and answer central questions about energy system decarbonization. The program supports interdisciplinary research topics such as examining the societal dimensions of carbon dioxide removal technologies, a project in which Guston is directly involved; highlighting the factors that are vital in accelerating the electrification of the energy system; and concentrating on the local, place-based challenges of realizing a just, equitable energy transition. Additional investments from science philanthropies can expand and extend interdisciplinary scholarship across all domains of inquiry.
The third is learning through iteration and evaluation. Guston traces the historical context of how societal concerns have always been present in federal science funding, even if their role has been obscured or marginalized. Science philanthropies can play a pivotal role in resourcing efforts to better understand the historical origins and subsequent evolution of the field of science and technology policy. For this reason, the Sloan Foundation recently funded a series of historically oriented research projects that will illuminate important developments related to the practices and institutions of the scientific enterprise. Further, science philanthropies should do more to encourage retrospective evaluation and impact assessment to inform how society is served by publicly and privately funded research. To that end, over the past three years I have helped to lead the Measurement, Evaluation, and Learning Special Interest Group of the Science Philanthropy Alliance, a forum for alliance members to come together and learn from one another about different approaches and perspectives on program monitoring and evaluation. As Guston writes, there is much promise in the CHIPS and Science Act. Science philanthropies will be essential partners to achieve its full potential.
Evan S. Michelson
Program Director
Alfred P. Sloan Foundation
We agree with David Guston’s assertion that the CHIPS and Science Act of 2022, which established the National Science Foundation’s Directorate for Technology, Innovation, and Partnerships, presents a significant opportunity to increase the public benefits from—and minimize the adverse effects of—US research investments.
We also agree that the TIP directorate’s focus on public engagement in research is promising for amplifying scientific impact. Our experiences leading the Transforming Evidence Funders Network, a global group of funders interested in increasing the societal impact of research, are consistent with a recent NSF report, which states that engaged research “conducted via meaningful collaboration among scientist and nonscientist actors explicitly recognizes that scientific expertise alone is not always sufficient to pose effective research questions, enable new discoveries, and rapidly translate scientific discoveries to address society’s grand challenges.”
We have also found that engaged research could be an essential strategy for identifying, anticipating, and integrating into science the “ethical and societal considerations” mentioned in the CHIPS and Science Act. The NSF-funded centers for nanotechnology in society provide an illustrative example in developing and normalizing participatory technology assessment. As one center notes on its website, the centers use engagement and other tactics to build capacity for collaboration among researchers and the public, allowing the groups to work together to “guide the path of nanotechnology knowledge and innovation toward more socially desirable outcomes and away from undesirable ones.”
Engaged research could be an essential strategy for identifying, anticipating, and integrating into science the “ethical and societal considerations” mentioned in the CHIPS and Science Act.
But to ensure that engaged research can deliver on the potential these collaborative methods hold, we argue for an expansion of funding for rigorous studies that address questions about when engagement and other strategies are effective for improving the relevance and use of research for societal needs—and who benefits (and who doesn’t) from these strategies. Such studies will increase our understanding of the conditions that enable engagement and other tactics to deliver their intended impacts. For example, scholarship shows that allowing sufficient time for relationship-building between researchers and decisionmakers is important for unlocking the potential of engaged research. Findings from such studies could, and should, shape future research investments aimed at improving societal outcomes.
Efforts to expand understanding in this area—Guston calls these efforts “socio-technical integration research”—include studies on the use of research evidence, science and technology studies, decision science, and implementation science, among several other areas. But so far, this body of research has been relatively siloed and has inconsistently informed research investments. The CHIPS and Science Act may help spur research investments in this important area with its requirement that NSF “make awards to improve our understanding of the impacts of federally funded research on society, the economy, and the workforce.” And the NSF’s TIP directorate provides a helpful precedent for funding studies that develop an understanding of when, and under what conditions, research drives change in decision-making and when (and for whom) research improves outcomes. But much must still be done to meet the need.
The CHIPS and Science Act and the TIP directorate present an important opportunity to scale research efforts that better reflect societal and ethical considerations. To support progress in this area, we have begun coordinating grantmakers in the Transforming Evidence Funders Network to build evidence about the essential elements of success for work at the intersection of science and society. We invite funders to connect with us to make use of the opportunity presented by these shifts in federal science funding and to join us as we build knowledge about how to maximize the societal benefits—and minimize the adverse effects—of research investments.
Angela Bednarek
Project Director
The Pew Charitable Trusts Evidence Project
Ben Miyamoto
Principal Associate
The Pew Charitable Trusts Evidence Project
Adding Humanity to Anatomy Lessons
In “When Our Medical Students Learn Anatomy, They See a Person, Not a Specimen” (Issues, Spring 2023), Guo-Fang Tseng provides a wake-up call to treat anatomy as a humanistic as well as a scientific discipline. This is not new, as a move in a humanistic direction has been evident for some years and across a variety of countries and cultures. However, the Silent Mentor Program that Tseng describes goes considerably further than is generally found elsewhere, with far more involvement of family members at every stage.
The Silent Mentor Program is conducted within a Buddhist culture. Should this be normalized and viewed as the ideal practice for those in different societies with varying religious or cultural perspectives? As arguments in favor, the practices have led to major increases in body donations within these communities, and they have enhanced the humanity and empathy of clinicians.
To gain further insight, my colleague Mike R. King and I conducted a study to explore why in most academic settings in the Western world cadavers in the dissecting rooms of anatomy departments are routinely stripped of their identity. This has meant that medical and other health science students have been provided with limited, if any, information on the identities or medical histories of those they are dissecting. The study, published in Anatomical Sciences Education in 2017, identified four ways that the cadavers were treated: total anonymization; nonidentification, low information; nonidentification, moderate information; identification, full information. We concluded that at the heart of the debate is the altruism of the donors and the integrity of those responsible for the donors’ bodies.
The Silent Mentor Program is conducted within a Buddhist culture. Should this be normalized and viewed as the ideal practice for those in different societies with varying religious or cultural perspectives?
We further concluded that if potentially identifying information adds value to anatomical education, it should be provided. But other values also enter the picture, namely, the views of the donors and their families. What if the families do not wish to go down this road? This demonstrates that the direction outlined for the Silent Mentor Program depends upon full acceptance by all parties involved, with the families’ views being uppermost.
Then there are the students. It is unlikely that in a pluralist society all will want as much personal information about the body as possible. Thus, there must be a balance achieved between the students’ emotional or psychological reactions and the pedagogical value of the information.
The situation is more complicated in some societies where certain ethnic or cultural groups oppose the donation of bodies on cultural grounds, so that students belonging to these groups must overcome an antipathy to the process of dissection. For them, identification of the bodies would likely be a step too far.
While the Silent Mentor Program is situated in a Buddhist society, it does not represent all Buddhist perspectives. For instance, donation programs in Sri Lanka have been the norm for many years, with Buddhist monks giving blessings for the afterlife of the deceased person in the deceased’s home prior to the cadaver being transferred to a local university anatomy department. After receipt of the cadaver, all identification marks are removed, thereby maintaining the anonymity of the deceased. The relatives have no further contact with the remains. Following dissection, Buddhist ceremonies are conducted by monks, thereby placing the whole process of donation and dissection within a Buddhist context, with participation by students and family members. This represents a variation on the Silent Mentor Program, encouraging altruism and involving the family in some aspects of the process of teaching anatomy within their own Buddhist context. This demonstrates that more than one model may serve to achieve humanistic ends.
D. Gareth Jones
Department of Anatomy
University of Otago
Dunedin, New Zealand
The first systematic dissection of the human body has been attributed to the ancient Greek anatomist Herophilus, who lived from about 335 BC to 280 BC. Unfortunately, Herophilus was ultimately accused of performing vivisections on living human beings. Dissection then ceased after his death and recommenced only in the mid-sixteenth century. As dissection began to play a prominent role in learning about the human body, the growing shortage of cadavers resulted in body snatching from graves and even the commission of murders, leading to the enactment in the United Kingdom of the Anatomy Act of 1832, which also served to regulate human body donation.
Cadaver-based dissection of human bodies to learn human anatomy has now become a cornerstone of the curriculum of many medical schools. As students actively explore the body, they are able to perceive the spatial relationships of the different structures and organs, as well as appreciate anatomical variations of various body structures. More recently, alternative methods—including the use of 3D visualization technologies such as augmented reality, virtual reality, and mixed reality—have been increasingly utilized, especially during the COVID-19 pandemic, in the light of limited access to anatomy laboratories and the implementation of safe distancing measures.
In his essay, Guo-Fang Tseng elegantly highlights the humanistic approach to the teaching and learning of human anatomy through bodies willed to Tzu Chi University’s School of Medicine, in what is called the Silent Mentor Program. What are usually termed cadavers are here accorded the status of Silent Mentors. Indeed, while these altruistic individuals may no longer be able to speak, their donated bodies are still used to impart the intricacies of human anatomy. Students are constantly reminded to treat their Silent Mentors with the utmost dignity, respect, and gratitude.
The Tzu Chi program is unique among human body donation programs: students and residents not only learn how their Silent Mentors lived while they were still in this world, but also interact closely with their Silent Mentors’ families. At the end of the gross anatomy dissection course, students place all the organs and tissues back into their Silent Mentors’ bodies, suture the skin together, and dress their Silent Mentors in formal clothes. The students then join the family members in sending the bodies to the crematorium, followed by a gratitude ceremony where there is sharing and reflection by both the students and the family.
Thus far, the Silent Mentor Program has served as a salient example to the anatomy and medical community of how the approach taken to understand the individual donor could enhance the humanity of doctors in training. Having had the privilege of attending a Tzu Chi Surgical Silent Mentor Simulation Workshop, I was able to witness firsthand the indescribably touching ceremony; it is certainly no exaggeration to say there was not a dry eye in the house. This vivid experience has remained firmly etched in my mind.
A critical reason why the Tzu Chi Silent Mentor Program is highly successful and is being emulated by other medical schools is that it has a hardworking team that truly believes in the humanistic approach to the learning of human anatomy, undergirded by unwavering support from the university administration, including Dharma Master Cheng Yen, the founder of the Tzu Chi Foundation. Guo-Fang Tseng himself also leads by example, and his lifetime dedication to the program is aptly reflected in his intention to deliver his last anatomy lessons as a Silent Mentor.
Boon Huat Bay
Professor, Department of Anatomy
Yong Loo Lin School of Medicine
National University of Singapore
In his article in the summer issue of Issues, Guo-Fang Tseng describes the Silent Mentor Program at Tzu Chi University’s School of Medicine, where medical students get to know the body they will dissect by meeting the deceased’s family. Tseng writes that there are notable, concrete outcomes to this approach, but he thinks the program’s effects “are much more profound: it enhances the humanity of clinicians and those they serve.”
I would like to highlight the high cost of not cultivating empathy and humanity in medical professionals. Consider pregnancy and birth, for example. As many as 33% of women report negative or traumatic birth experiences. Estimates of postpartum depression and post-traumatic stress disorder range from 5% to 25%. Even medical spending in the year following birth is substantially higher among women experiencing postpartum depression.
The causes of postpartum PTSD and depression are complex. However, dissatisfaction with social support, lack of control, and mistreatment by medical staff are reasons that rise to the top in studies on the issue—reasons that directly relate to lack of empathy.
In 2007, I visited a Viennese medical museum and saw an eighteenth-century wax anatomical model—a woman with her abdomen dissected and a fetus inside. At the time, I identified with the Enlightenment anatomists who made the model because as a former biology student, I had happily enjoyed dissecting animals. But later that same year, I gave birth for the first time and became one of the many women who left the hospital with a healthy baby and a troubled mind. I suddenly saw myself reflected in the wax model herself, and I felt heartbreakingly linked to the centuries-long anatomical tradition of disconnecting body and mind. In the operating room, I was reduced to an anonymous body on a table and my mind suffered for it.
I have spent years trying to understand what happened to me, and it took me a long time to realize that empathy was a critical missing component. I can’t help but wonder, what if a medical professional had looked me in the eye during or immediately following my ordeal and truly acknowledged all that had happened? I feel certain that regardless of the physiological complications I experienced, being treated as a whole human would have greatly lessened my struggle.
Tseng’s article about the Silent Mentor Program brought tears to my eyes. Empathy can and should be taught, even with a dead body. Authentic human connection during health care interactions is not a nice-to-have—it is a critical requirement that helps us make meaning from our medical experiences.
Alison Fromme
Writer
Ithaca, New York
Navigating Interdisciplinary Careers
In “Finding the ‘I’ in Interdisciplinarity” (Issues, Spring 2023), Annie Y. Patrick raises important challenges for both interdisciplinary research—an oft-cited, rarely achieved aim in contemporary scholarship—and qualitative research more broadly. Many norms of traditional inquiry implicitly encourage the separation of the researcher from the research, a condition that Patrick compellingly argues against. The received wisdom is that researchers should leave their backgrounds, traditional or otherwise, “at the door.” Her essay offers a necessary critique of bracketing—the practice in which researchers consider what assumptions they bring to a research endeavor and then set them aside for the purposes of conducting the research and analyzing the phenomenon—and of its implications.
As an interdisciplinary researcher myself, I know from experience that explicitly sharing points of commonality and difference within diverse teams is essential for the conduct of fulfilling research. After all, researchers are people first. What I find especially powerful in Patrick’s essay is the insistence on the human element of social science research for both the researcher and the researched. As she writes, “they were not simply informants or categories of data, but actual humans.” Why might Patrick be intimidated by the engineering faculty at Virginia Tech? She has seen patients and their families at their absolute lowest and quickly earned their trust and care. The faculty are only human, too.
Similarly, I see her work explaining the real-life challenges of the student experience to faculty as reminding them that students are human, too, and have a whole host of embodied needs and experiences beyond classroom performance. Implicitly, Patrick calls the academy to task for treating humans with impersonal language such as “informants” and encourages researchers to claim the backgrounds that inform our research and, hopefully, our groundwork as well.
I find Patrick’s call to action through groundwork to be a useful corrective. “When I saw something going wrong,” she writes, “my every professional instinct was to intervene.” As researchers, if we see something truly wrong and harmful taking place, shouldn’t we intervene? Her essay also reminded me of the gendered nature of both engineering and nursing. Although the two professions are historically associated with men and women, respectively, the emphasis on weed-out culture in both fields, and how that culture interacts with gender, would be worth further consideration. For these and other reasons, I appreciate this powerful and thought-provoking essay and its lessons very much.
Saralyn McKinnon-Crowley
Incoming Assistant Professor, Higher Education Studies & Leadership
Baylor University
Annie Y. Patrick makes astute observations about the challenges of interdisciplinary research. She describes “feeling out of place” and having to come to grips with unfamiliar jargon and disciplinary assumptions. These are experiences that will no doubt resonate with many researchers who work in interdisciplinary contexts. She also takes the brave step of sharing a mindset shift she went through during her PhD—one that went from viewing expertise as being about depth in a single domain and eschewing “non-scholarly” experience, to drawing on the full complement of her work and life experiences.
Patrick encourages researchers “to embrace their whole selves” in pursuit of interdisciplinarity. This kind of mindset is not commonly discussed as a critical ingredient for interdisciplinarity, but it should be. Embracing one’s whole self involves recognizing the importance of experiences beyond academia, as well as the multiple hats many of us wear—as colleagues, family members, and members of our broader communities. For interdisciplinary collaborations to really work, members of the team need to be valued for what each brings to the table. Taking time to appreciate the richness of our own experiences hopefully opens us up to appreciate those of others too, and, indeed, to find shared ground beyond our academic silos. Caring for the individuals in an interdisciplinary collaboration—not just the subject matter—is an important ingredient for interdisciplinary success, as are good doses of curiosity and humility.
Patrick’s account offers us tangible and positive examples of what interdisciplinarity can bring to a project. But from her description it’s clear that challenges still remain to achieving the interdisciplinary integration called for by the National Academies. For instance, it seems as if the interventions she developed at Virginia Tech were “extras” to the project she was part of, rather than central to the work of revolutionizing engineering education. Patrick emphasized the support and good will she received from more senior scholars, having demonstrated her work ethic and commitment to the project over four years. This kind of support is not always forthcoming. Junior scholars are often the ones who end up doing the risky work of interdisciplinarity. And they must typically do this in addition to achieving the milestones and depth seen as necessary to be experts in their own disciplinary domain.
Interdisciplinary interventions of the kinds Patrick describes—a podcast, career panels, and white papers—aren’t necessarily valued as highly as a peer-reviewed publication in a high-profile journal. Yet in practice these are likely more effective ways of bringing diverse communities together around shared concerns. For interdisciplinary research to become more embedded in academia, we need stronger support and reward systems for junior scholars embarking on this important but time-intensive and risky work.
Emma Frow
Associate Professor
School for the Future of Innovation in Society and School of Biological and Health Systems Engineering
Arizona State University
“I was surprised to discover that becoming an effective interdisciplinary researcher also required that I embrace the value of what I call inner interdisciplinarity—my own unconventional background—and what it could bring to the team,” Annie Y. Patrick writes. Her personal reflection on the labor of academia—the conversations, the comprehensibility-making—foregrounds infrastructures for engagement by academics in our professional practice that may exceed our disciplinary training.
Those of us who study knowledge in general—and the convergence of science, technology, and society (STS) in particular—often write about people who hold a single disciplinary identity, be they electrical engineers, geophysicists, or something else entirely. Through the kinds of training scholars such as Patrick and myself have experienced, we also become “disciplined” and develop certain shared ways to be in the world. These ways of being often diverge significantly from those cultivated by the engineers and scientists we study, making contrasts particularly evident when we examine their technoscientific work or seek to enter collaborations with them. But as Patrick reminds us, we may not be trained in only a single discipline. The ways of being we’ve been trained into are not simply lost when we undertake thinking and acting in new ways. Or if that happens to some people, it certainly doesn’t happen to all.
I’ve never switched disciplines—at least not to the extent that Patrick has. I have pursued training in cultural anthropology ever since I discovered it existed during my second year of college many decades back, later while concentrating on STS, and beyond. For me, this experience was one of alignment, though my expertise and practice have developed through slow, iterative, contradictory personal and professional experiences. I lay them out in my 2023 book, ¡Alerta!, which examines a controversial technology developed in Mexico City to mitigate earthquake risk and, through that, considers how engineers and other experts are theorizing life with threatening environments. There, I make the case that these experiences have accumulated to make my life and scholarship possible in ways that are methodologically important to grapple with.
Patrick reflects on her efforts to apply insights, using her frustration to illuminate her own disciplinarities. In doing so, she highlights an important puzzle: What is an application? What counts as a viable answer to the perennial question “So what?” What can we conceive as meaningful implementation, and whom might we see as fellow travelers in these efforts? We must understand that this, too, might be trained into us by our disciplines and schools of thought.
Patrick documents a pathway that led, eventually, to what she terms her “groundwork.” There are so very many others, though, with radically different ways of understanding that puzzle and figuring out what kinds of activities could flourish in its resolution. I think the “making and doing” movement in STS is most exciting when it opens a space for many conceptions of action and gives us the tools to understand their different logics, from radical to institutional. As such, it can also be a space for exploring how we might choose to articulate our disciplinary backgrounds and commitments—for imagining and reimagining STS and scholarly life too.
Elizabeth Reddy
Assistant Professor, Department of Engineering, Design, and Society
Associate Director, Humanitarian Engineering and Science
Colorado School of Mines
Nursing and the Power of Change
In “The Transformation of American Nursing” (Issues, Spring 2023), Dominique A. Tobbell presents a fascinating, complicated, and multidetermined case for the post-World War II development of PhD programs in nursing. These programs were built around the faith that there was a “nursing science”—akin to but foundationally different from the dominant “biomedical science”—and the nurses who built them (almost exclusively white women) used financial support from the federal government’s health scientist programs first to earn PhDs in related disciplines such as sociology, education, and psychology and then to translate borrowed concepts into the ideological stance and practice of nursing.
Some initiatives were spectacular successes: the changes that coalesced around nurse Hildegard Peplau’s intellectual translation of Harry Stack Sullivan’s interpersonal theory of human relationships forever changed nursing practice into one that focused intensely on what we now call (and teach and research as) patient-centered care. Others were equally spectacular failures: the edict from nursing’s national accreditation association that all schools had to teach nursing content and practice specifically organized around one of the models Tobbell describes was a mercifully short-lived disaster after it became apparent that classroom content had no relation to clinical experiences.
Such unevenness, of course, is hardly unique to any knowledge-building enterprise. My question is why, after more than 80 years of this enterprise, people outside the narrow confines of my discipline are still puzzled when they learn of my PhD and hear the term “nursing science.” I honestly do not blame them. And I think this points to yet another source of tension that the history of PhD education in nursing elucidates: Should knowledge-building in nursing, or in any other discipline, be a “top-down” or “bottom-up” experience?
In nursing, we have evidence of the power of change driven by collaborations among clinicians at the point of intersection with patients in need of care. The nurse practitioner movement, for example, came about in the same political, social, and technological contexts, amid the added pressure of shortages of primary care practitioners. In response, physicians and nurses seeking expanded opportunities came together in collaborative, entrepreneurial dyads across the country to experiment with shared responsibilities for medical thinking, medical diagnosis, and prescribed treatments. Similarly, in coronary care units, dedicated to ensuring the survival of “hearts too young to die,” the new technology of electrocardiology brought physicians and nurses together to learn how to read rhythm strips. Both groups quickly learned, again together, that it was not necessary to wait for a physician to intervene in life-threatening emergencies, as nurses could interpret arrhythmias and respond immediately with life-saving protocols. Our current health care system now organizes itself around these two innovations.
The PhD in nursing, by contrast, came about as a solution to a problem that only a relatively small group of nursing educators identified. It would be a new form of knowledge generation, albeit one distanced from the bedside and imbricated with the knowledge-generating tools most valued by the biomedical establishment. It was, I would suggest, a process driven essentially by politics and prestige. And really interesting questions remain to be asked. Did the status position of nursing in clinical care and knowledge development necessitate surrendering to the stronger and more privileged epistemological position of medicine for its own validity? Will nursing’s claims that it “asks different questions” survive the collapsing of boundaries between the acute and chronic care needs of patients? And, to me most important, does the inherently interdisciplinary knowledge that we know nurses need to practice fail to translate into a knowledge agenda when it exists within an academy and a culture that knows only firm disciplinary boundaries?
Patricia D’Antonio
Carol Ware Professor of Mental Health Nursing
Director, Barbara Bates Center for the Study of the History of Nursing
University of Pennsylvania
Stop Patching!
In “How to Keep Emerging Research Institutions from Slipping Through the Cracks” (Issues, Spring 2023), Anna M. Quider and Gerald C. Blazey raise interesting questions about how to address the misalignment between where federal research dollars go and where students from diverse communities are being educated in science, technology, engineering, mathematics, and medicine—the STEMM fields—across the full range of higher education institutions. If we wish to produce a diverse STEMM workforce for the twenty-first century, the authors explain, we need to recognize and consider how to address this mismatch.
Historically, institutions have been targeted for attention when agencies have been directed, largely by congressional action, to develop strategies and “carveouts” to affect the distribution of funding across the full range of institutions. Quider and Blazey rightly point out the limits of such carveouts and special designations for achieving the goal of increasing the diversity of the STEMM community. Research support at these institutions can provide research opportunities to next-generation scholars and researchers from diverse communities. Research participation has also been demonstrated to support retention of these students in STEMM as well as to promote their pursuit of graduate education, thus addressing the critical need for faculty diversity.
The difficulty of directing research support to a wider range of institutions cannot be overstated. Institutions that have received even small advantages in research investment over the decades will present proposals not only where the ideas are excellent but also where the research infrastructure is more than likely to be superior, those advantages having accumulated. Institutions that have not enjoyed such investment may have excellent researchers with excellent proposals, but, lacking research infrastructure, they may not be as competitive as the research behemoths. Carveouts allow a section of the playing field to be leveled, where similarly situated institutions can compete. The authors note that although a number of carveouts have been created, not all funding “cracks” have been filled. Missing from the litany of special programs are so-called emerging research institutions, which are also taking on the critical role of contributing to the diversity of the STEMM community.
While the carveouts have been important to developing and maintaining research capacity across a larger range of institutions, they only delay more systemic reforms: they direct how only a small share of total research and development funding is deployed while leaving the overwhelming majority of funding to the same set of institutions that have always topped the list of those receiving federal R&D support.
It is easy to have conversations about spreading the wealth when budgets are expanding. But even expanding budgets, such as the doubling of the National Institutes of Health’s budget, do not necessarily lead to a different distribution of supported institutions. In a flat funding environment, what would a reordering of the strategic priorities that guide investment look like? Actions would include:
Ensuring widely distributed research capacity across a range of criteria.
Re-examining the research agenda and the process of setting it—who establishes, who benefits, and who is disadvantaged.
Specifically addressing the environment in which research is being done—that it be free of bias and allow all to thrive.
Harking back to the “Luke principle” I articulated previously in Issues, all research investments, in whatever institutions they are made, should include attention to equity and inclusion in developing the scholars and workforce of the future as a central element of supporting excellence and addressing the diversity-innovation paradox.
While we could stand up another targeted effort to address the cracks pointed out by the authors as a stop-gap measure, it is time to re-examine the overall research support structure in light of today’s needs and realities. Stop patching!
Shirley M. Malcom
Senior Advisor and Director of SEA Change
Former Director of Education and Human Resources Programs
American Association for the Advancement of Science
I applaud Anna M. Quider and Gerald C. Blazey for drawing attention to the critical importance of emerging research institutions (ERIs) in the nation’s research ecosystem. ERIs are often dominated by students of color from low-income families, who may not have been admitted to a major research university or could not afford such a school’s tuition and cost of living. Or they may simply have preferred to enroll in a smaller university, perhaps closer to home.
We have dozens of ERIs in California, and most are dominated by underrepresented minorities. The California State Universities are excellent examples of institutions that are in the same category as the authors’ home institution, Northern Illinois University, in that they do not benefit from additional federal funding simply because they are geographically located in a state that has a number of major R1 universities.
I worry that if the nation does not embrace all ERIs, the disparities between the haves and have nots will become even greater and the nation will not fully achieve its research and diversity goals. I have firsthand knowledge of these disparities since I graduated from an emerging research institution. However, I am also an example of the potential of these students to contribute to the national research priorities.
Roger M. Wakimoto
Vice Chancellor for Research & Creative Activities
University of California, Los Angeles
Regulations for the Bioeconomy
In “Racing to Be First to Be Second” (Issues, Spring 2023), Mary E. Maxon ably describes the regulatory challenges to the emerging bioeconomy in the United States. The Biden administration has recognized explicitly the transition from a “chemical” economy to one in which inputs, processes, and products are largely the result of “biology,” and has chosen to help facilitate that transition.
The United States regulates products, not technologies. The regulatory paths these products take are defined by their intended use or “regulatory trigger” (i.e., the legal concept determining whether and how a product is regulated) regardless of manufacturing method. Intended use has generally been a good guide in determining which agency has primacy of regulatory oversight, as envisioned in the federal government’s Coordinated Framework for the Regulation of Biotechnology, first issued in 1986.
Almost 40 years on, one questions whether that is still the case. To paraphrase the Irish playwright and political activist George Bernard Shaw, regulators and the regulated communities are divided by a common purpose—the safe, effective, and yet efficient introduction of products into commerce. Some of these products have traversed the regulatory system slowly but under the aegis of one agency; others have been shuttled among agencies asking approximately the same risk questions. Duplicative regulation rarely provides additional protection; instead, it can make a mash of policy that undermines public confidence. It also imposes enormous costs on manufacturers and on the chronically under-resourced and over-burdened regulatory agencies. And we have yet to find a way to estimate the direct costs and externalities of not developing the bioeconomy.
The examples that Maxon and others cite are products first developed over 20 years ago. What fate will befall products still “on the bench” or yet to take shape in their inventors’ minds? Many participants in the field, myself included, have advocated for the creation of a “single door,” possibly placed in a proposed bioeconomy Initiative Coordination Office, through which all (or almost all) products of the bioeconomy would be directed to the appropriate lead agency. Additionally, proposals have been floated to cross-train regulators, developers, funders, and legislators, possibly via mid-career sabbaticals or fellowships, about the various facets of the bioeconomy so that all are better prepared for regulatory oversight. These two steps could provide a mechanism for charting an efficient and transparent regulatory path. They will, of course, require nontrivial effort and coordination among and within agencies known more for their siloed behaviors than their cooperative interactions.
But a larger question lingers: Should we continue to regulate the products of the bioeconomy the same way we regulate the products of the chemical economy? Emerging technologies and their products can often require reframing risk pathways: it’s not that the endpoints (risks) are all that different; rather, the nature and kind of questions that characterize those risks can be more nuanced. Fortunately, we have also developed powerful, more appropriate tools to supplant the often irrelevant assays traditionally used to evaluate risks. We have also begun to understand that products posing minimal risks may not require the same regulatory scrutiny as products not yet seen by regulatory systems; these may require different and more complex hazard characterizations. Perhaps in addition to improving administrative paths, we should put some of the nation’s best minds toward the continued development of risk and safety assessment paradigms to be used simultaneously with product development so that regulation becomes—and is seen as—part of efficient, relevant, and responsible innovation and not just an unnecessary burden or box-checking exercise.
Larisa Rudenko
Research Affiliate, Program on Emerging Technologies, Massachusetts Institute of Technology
Cofounder, BioPolicy Solutions LLC
Former Senior Adviser for Biotechnology, Center for Veterinary Medicine, US Food and Drug Administration
Mary E. Maxon identifies a coordinated regulatory system as a critical need for building the biotechnology ecosystem of the future. She’s exactly right, but coordination is just one piece of the regulatory puzzle and could be taken a step further still.
The products that will drive the next century of paradigm-shifting economic growth defy easy definition or jurisdiction. Having witnessed the discussions that take place on products that cross boundaries of agency jurisdiction, I have heard each entity’s lawyers and regulatory experts make a clear and cogent case about why their agency has jurisdiction and why the risks of the technology are relevant to their mission to protect the public.
The problem is, they are all right in their arguments, which makes reaching consensus a challenge. Navigating their disagreements is particularly difficult when it comes to emerging biotechnologies, where the risk space is uncertain and agencies vary in their comfort level with different types of risk, whether to human health or innovation. In the federal context, this can be paralyzing; lack of consensus creates endless wheel-spinning or logjams, particularly when the parties involved do not share a common executive decisionmaker below the level of the president.
In an ideal world, this wouldn’t matter. Each regulatory agency has a rigorous regulatory process and the ability to bring in additional subject matter expertise when needed. That suggests a flexible process would be best, with a common regulatory port of entry and a fixed amount of time, as Maxon recommends, to determine a cognizant agency. Unfortunately, one person’s flexibility is another’s ambiguity, and this does not solve the problem facing the regulated community of developers, who understandably want to shape their data collection around the culture and requirements of the agency with which they’ll be dealing so they can most easily navigate the regulatory process. Moreover, this will lead to inconsistency, as Maxon notes in the case of the genetically modified mosquitoes, in which agencies, based on their own cultural norms around risk assessment, will operate under very different timelines and come to different conclusions.
How do you overcome this quandary? What’s needed is a third-party arbiter who has the authority to cut through disagreement to establish clear precedents and an evidence base for future decisionmaking that gives industry more certainty about regulatory pathways. The arbiter could also serve as a pre-submission advisory group for developers and agencies. This arbiter could be a White House-based Initiative Coordination Office (ICO), as Maxon suggests, but I would argue that more heft is needed to ensure resolution. One possibility would be a small council, administered by the ICO, with representation at a senior level from the agencies and appropriate White House offices, such as the Office of Science and Technology Policy, the Domestic Policy Council, and the Office of Information and Regulatory Affairs, with clearly delegated authority from the president. When decisions are made, the resulting deliberations could be made public, to give a set of “case law” to the developer and regulatory community and assure the public of the integrity of safety assessments. This would be a very different model than the current and ineffective voluntary approach emphasizing the soft diplomacy of coordination between agencies. Congress could also consider establishing a clear arbiter in future legislation that has power to determine which agency has final decisionmaking responsibility on any individual product. As the various parties work through options, however, one thing remains certain. New paradigm-shifting biological products will continue to emerge from the US innovation ecosystem, and Maxon is correct that it is time for a parallel shift in thinking about regulation and governance.
Carrie D. Wolinetz
Lewis-Burke Associates LLC
For nearly 40 years academic and industrial laboratories have been working on “industrializing” biology, usually referred to as biotechnology. As Mary E. Maxon points out, the process has been extremely successful, but it has been halting and selective. The future potential is enormous and has implications for many sectors of the US economy. To date the vision of a wide bioindustry has been hampered, in part by what can be politely called regulatory confusion. Maxon proposes an ambitious regulatory reform that would clarify and accelerate the regulatory process under the oversight of a new entity, an Initiative Coordination Office that would work with the various agencies identified in President Biden’s Executive Order launching a National Biomanufacturing and Biotechnology Initiative. Based on past experience with biotechnology regulation, this suggestion is what is often described as necessary but not sufficient.
It is amazing that the core structure for the nation’s current regulatory process is still the 1986 Coordinated Framework for the Regulation of Biotechnology. Maxon describes the weakness of that structure, but misses two important elements that must be considered in the development of any new structure. First, the Coordinated Framework places a major emphasis not on the product under review but on how the product was produced. She cites an excellent example of that problem in the case of laboratory-grown mosquitoes, where Oxitec failed and MosquitoMate succeeded based on how essentially the same product was produced.
The second weakness of the Coordinated Framework is that its promise of cooperation among the various agencies had no strong commitment behind it at the top management level. Each agency official responsible for coordination had very little incentive to “share their turf” with another regulator, often citing the constraints of the enabling legislation. The Coordinated Framework was endorsed unanimously at the Cabinet level, but the message was never heard in the ranks. If the proposed new Initiative Coordination Office is to have any impact, more than new rules are needed. Strong leadership and the articulation of the value and urgency of the bioeconomy to the country are essential. Regulators must realize that their job is not to block new products but to work with their customers to quickly identify any problems and move things through the pipeline smoothly. Regulating based on how a product is produced is an anachronism.
The distressing element related to the continued development of the bioeconomy is not just the absence of a functional and meaningful regulatory framework. Without public confidence in the results, even approved products will not be successful in the marketplace. Over the past few years, we have seen an alarming degradation of public confidence in government guidance and in scientific information, even that produced by highly qualified experts. Reversing this trend is going to be an enormous challenge, but may be far more important than the development of a robust regulatory framework. Initiatives such as BioFutures, created and administered by the philanthropy Schmidt Futures, can play a significant role in this process, but they need to stand back and look at the whole pipeline of the biofuture transformation.
David T. Kingsbury
Former Chief Program Officer for Science (2004–2008)
Gordon and Betty Moore Foundation
Former Chair (1985–1988), White House Biotechnology Science Coordinating Committee
Mary E. Maxon packages nearly 30 years of biotechnology governance into a call for action that cannot be ignored, centered on aligning regulations with the times. Indeed, of all the issues that plague the future of the US bioeconomy, a regulatory structure that no longer suits its regulatory context is worthy of special consideration.
Maxon presents examples of biotechnologies that have been delayed or even lost, ultimately due to deficits in “biocoordination.” While I second Maxon’s suggestion that the Initiative Coordination Office, if established in the White House Office of Science and Technology Policy, should support agency collaboration on horizon-scanning, transparency, and guided processing for future biotechnologies, coordination needs to be central to the framework, not an accessory to it. As long as its individual regulatory elements (the Environmental Protection Agency, Department of Agriculture, and Food and Drug Administration, among others) lack the infrastructure to “share regulatory space,” the current federally established Coordinated Framework for the Regulation of Biotechnology will continue to present gaps in coordination that threaten the bioeconomy.
Moreover, in considering ways to establish a regulatory framework that scales with future biotechnology, it will be essential to incorporate more public input and community reflection into the regulatory process. Maxon recommends the use of enforcement discretion as a strategy to fast-track new products that agencies consider low risk. This raises broader questions, however, of who determines safety and who determines risk. People and communities perceive risk differently, based on their lived experiences and their perceptions of what they have to lose. The same is true for safety, which also needs a collective definition grounded in social considerations. Creating a transparent decisionmaking process for biotechnology that integrates public input starts with redefining risk and safety collectively.
To put it plainly, if the nation maintains a collaboration that is built upon poor communication, then we ought not expect coordination. While collaborative governance is found throughout the US regulatory system, advancement will require acknowledgement of the regulatory problems that result from such governance strategies. In 2012, the Administrative Conference of the United States released a report titled Improving Coordination of Related Agency Responsibilities. When addressing the concept of shared regulatory space, the report states: “Such delegations may produce redundancy, inefficiency, and gaps, but they also create underappreciated coordination challenges.” As Maxon cleverly points out, this coordination challenge calls for the creation of a regulatory framework for the bioeconomy—not just biotechnology.
To build on the author’s observations, concerted and deliberate policy action is crucial for fostering a regulatory ecosystem that advances the bioeconomy—subject, of course, to public trust—and increases national competitiveness, both now and in the future.
Christopher J. Gillespie
PhD Candidate, Department of Entomology and Plant Pathology
North Carolina State University
Mary E. Maxon argues for the establishment of a bioeconomy Initiative Coordination Office that could facilitate interagency collaboration, cross-train regulators, conduct horizon-scanning, and establish a single point of contact for guiding developers of biotechnology products through the regulatory process. This may seem like an impossibly long list of activities, especially in the area of biotechnology regulation, but I believe they are achievable, given the right support from the White House and Congress.
I say this because, as part of a team of Obama administration officials, I worked with dozens of experts from across the federal government, and I saw firsthand that it is possible to address the complexity and confusion in the biotechnology regulatory system. We delivered two public-facing policy documents: the 2017 Update to the Coordinated Framework for the Regulation of Biotechnology (2017 Coordinated Framework), which represented the first time in 30 years that the government had produced a comprehensive summary of the roles and responsibilities of the Food and Drug Administration (FDA), the Environmental Protection Agency (EPA), and the Department of Agriculture with respect to regulating biotechnology products; and the National Strategy for Modernizing the Regulatory System for Biotechnology Products (2016 Strategy), which described a set of steps those agencies were planning to take to prepare for future products of biotechnology.
These documents were not cure-alls, but they represented progress. And don’t take just my word for it. The Trump administration, hardly known for cheerleading Obama-era policies, issued an Executive Order in 2019 stating that the 2017 Coordinated Framework and the 2016 Strategy “were important steps in clarifying Federal regulatory roles and responsibilities.”
To further support my contention that progress is possible (and to clarify one detail that Maxon discusses), I point to one of the policy changes that came out of the Obama-Trump Biotechnology Regulatory Modernization effort. This change specifically addresses the case studies of laboratory-grown mosquitoes that Maxon describes. As she stated, the Oxitec mosquito, which was developed with genetic engineering, and the MosquitoMate mosquito, which was infected with a bacterium called Wolbachia, were both products with very similar mosquito population control (i.e., pesticidal) claims. However, one mosquito (Oxitec) was regulated by the FDA and the other (MosquitoMate) by the EPA.
The interagency team that developed the 2016 Strategy and the 2017 Coordinated Framework recognized this inconsistency and addressed it. In the 2016 Strategy, the EPA and the FDA committed to “better align their responsibilities over genetically engineered insects with their traditional oversight roles.” In October 2017, the FDA issued a final policy clarifying that the EPA will regulate mosquito-related products intended to function as pesticides and the FDA will continue to have jurisdiction over mosquito-related products intended to prevent, treat, mitigate, or cure a disease. Since this clarification, Oxitec has received the green light from the EPA to conduct field trials of its genetically engineered mosquitoes in Florida and California.
A quarter of a century passed between the 1992 and 2017 updates to the Coordinated Framework for the Regulation of Biotechnology, during which time advances in biotechnology altered the product landscape. This mismatch between the regulatory system and technological progress made it difficult for the public to understand how the safety of some biotechnology products was evaluated, and also made it challenging for biotechnology companies to navigate the regulatory process. In the past eight years progress has been made, and there is clearly momentum in Congress and the White House to build on that progress.
Robbie Barbero
Chief Business Officer, Ceres Nanosciences
Advisory Board Member, National Science Policy Network
Mary E. Maxon argues convincingly that the White House Office of Science and Technology Policy (OSTP) should establish a bioeconomy Initiative Coordination Office (ICO) as mandated in the CHIPS and Science Act of 2022. Although Maxon focuses on its key role in the biotechnology regulatory system, it is important to think broadly and strategically about the many activities that a bioeconomy ICO should lead and coordinate governmentwide. The office should work not only to support the biotechnology regulatory system but also to coordinate strategic planning of federal investments in the bioeconomy; to facilitate interagency processes to safeguard biotechnology infrastructure, tools, and capabilities; and to serve as a focal point for government engagement with industry, academia, and other stakeholders across the bioeconomy.
This broad purview is supported by the language in the CHIPS and Science Act and would also encompass many of the activities included in President Biden’s Executive Order 14081 on “Establishing a National Biomanufacturing and Biotechnology Initiative.” Indeed, a recent bipartisan letter confirms Congress’s intent that the ICO described in the legislation incorporate this broader initiative. A bioeconomy ICO would be analogous to other congressionally mandated Coordination Offices at OSTP that drive effective interagency coordination and outreach, including those for the US Global Change Research Program, the National Nanotechnology Initiative, and the Networking and Information Technology Research and Development Program.
A public-facing bioeconomy ICO will make an ideal home for the biotechnology regulatory system’s “single point of entry” for product developers, and Maxon rightly places this issue as a top priority. To establish this approach, the ICO should work closely with the principal regulatory agencies to define ground rules for this process that will support efficient decisionmaking while also reflecting and protecting each agency’s autonomy in interpreting its own statutes and responsibilities. As experience is gained, the ICO should work to address bottlenecks to decisionmaking and help distill generalizable principles and useful guidance.
Another critical role for the ICO should be to support and coordinate a project-based fellowship program that brings together individuals with a wide range of perspectives from government agencies, industry, academia, the legal profession, and other sectors to focus on issues of relevance to the biotechnology regulatory system. In addition to providing fresh eyes and new perspectives, a program of this type would present opportunities for training and cross-sectoral engagement for regulators and would improve understanding of the biotechnology regulatory system across the bioeconomy.
Executive Order 14081 and the CHIPS and Science Act have kicked off a flurry of activity within the federal government related to the bioeconomy, and the regulatory system will need additional tools to keep up. Now is the time for OSTP to establish a bioeconomy ICO as a foundation for robust and durable interagency coordination that can lead in transforming the range of possible beneficial outcomes into reality.
Sarah R. Carter
Principal
Science Policy Consulting LLC
How Open Should American Science Be?
In “The Precarious Balance Between Research Openness and Security” (Issues, Spring 2023), E. William Colglazier makes an important contribution to the ongoing dialogue about science security, particularly regarding the United States’ basic science relationship with China. As a former director of the Department of Energy’s Office of Science, I agree with his assessment that rushing to engineer and implement even more restrictive top-down controls on basic science collaboration could be counterproductive, especially without a thoughtful analysis of the impact of the actions that have already been taken to thwart nefarious Chinese behavior.
In our personal lives, we instinctively understand when a relationship is not mutually beneficial and when we are being taken advantage of, even when the rules are vague. It is true that the government of China, previously operating from a position of weakness, has pursued a coordinated and comprehensive strategy to harvest US scientific and technological progress and talent through a variety of overt and obscured means. This is frustrating and not sustainable, not least because China is no longer the same techno-economic junior partner it once was. In response, the United States has taken some substantial administrative and policy actions designed primarily to shed light on relationships and conflicts of commitment in sponsored work and in government laboratories, but also to signal a meaningful change in our willingness to be taken advantage of. These are recent developments, and their effects are not yet understood.
Looking again to our personal, human experience, cutting off contact and refusing to talk even in a difficult relationship is a defensive posture not consistent with competitive strength or confidence. Moreover, a reactive strategy of shutting doors and closing windows in an attempt to maintain science and technology leadership betrays a lack of understanding of the fungibility of talent in an increasingly educated world, the almost instantaneous and global flow of science and technology knowledge, and the vastly improved intrinsic science capabilities of China.
I believe that instead of defensive measures, the only effective long-term strategy in this race for global science and technology primacy is to out-invest and out-compete. Given transparent scientific relationships not motivated by easy access to resources, we also should not be afraid to work with anyone, particularly in basic research. We benefit from collaboration in part because we generally learn as much as we teach in a meaningful scientific exchange, and in part because our open and confident engagement is a fantastic advertisement for the attractiveness and effectiveness—and, in my opinion, the superiority—of our system and culture of science and technology.
The cost of a regime of distrust or punitive control, measured in US science and technology competitiveness and in the flow of indispensable new talent, may well be greater than any theft of ideas or emigration of expertise, and disengaging, thereby blinding ourselves to a nuanced understanding of where our increasingly capable competitor stands in this global science race, may likewise hurt rather than help. Perhaps we should evaluate the effects of the legal and policy adjustments we have already made, reconsider our end goals, and better understand the costs versus the benefits before making further adjustments to the openness of the United States’ amazing engine of science and innovation.
Chris Fall
Former Director (2019–2021)
US Department of Energy’s Office of Science
I am sympathetic to the familiar and well-reasoned arguments that E. William Colglazier makes, but I can’t shake the feeling that reading his essay is like watching a parade of antique cars on the 4th of July.
The US scientific research community, overwhelmingly funded by the federal government and mostly resident in universities, is reeling from increased government scrutiny of its international engagements. Colglazier’s arguments and recommendations are thoughtful, responsible pushback against that scrutiny eroding the value—to the United States—of science diplomacy and international scientific engagement. This is all to the good, but hitting the right balance of openness and protections in international scientific collaboration is a sideshow to the center stage events affecting US commercial and defense technological leadership.
These main events are the struggles, both within and among nations, over the role of advanced technologies and innovation—driven in the democracies primarily by private companies—in a new world order of economic and military competition, confrontation, and collaboration (among allies). For the United States, the events center around the pluses and minuses of export controls of advanced commercial products used as sanctions; the impact of technologically advanced multinational companies on US technological sovereignty; government reviews of inbound and outbound foreign direct (private sector) investments; and legislation such as the Inflation Reduction Act, which through its buy American provisions punishes innovative companies operating from nations that are long-standing national security allies.
In the closing sections of his essay, Colglazier argues for leadership from the National Academies and professional societies for more personal cross-border engagement among researchers and government security and research officials. This is a good idea and may help protect the cross-border scientific research enterprise from the worst excesses of government scrutiny and oversight. But the voices that most need to be heard to navigate the current challenges are from the private sector, published more often in the Financial Times and the Wall Street Journal than in more narrowly targeted journals such as Science or even Issues in Science and Technology.
Take, for example, the recent interview with the CEO of Nvidia published in the Financial Times. Commenting on the recent US prohibition against domestic companies selling artificial intelligence computer chips to China, he pointed out that “If [China] can’t buy from … the United States, they’ll just build it themselves.” This reveals a fundamental underlying characteristic of the new world order, in which commercial and defense R&D and innovation capability is already widely distributed around the world. A simple, seemingly reasonable action to protect US “technological leadership”—drawn from the antique car/Cold War era of US technological dominance—could easily have the exact opposite of the intended effect. I’d argue that we need a new playbook for commercial and defense international R&D engagement that can live alongside the traditional playbook of science diplomacy. The Biden administration is moving in that direction by relying heavily on the National Security Council to coordinate the activities of groups such as the National Science Foundation and the National Institutes of Health with the Departments of Commerce and Defense. In responding to current technological challenges in international economics and geopolitics, balancing openness and protection in government-supported international scientific research (and the cross-border activities of universities) is part of the show, but it is not the main event. That role falls to the cross-border activities and collaborations of companies, albeit enabled or impeded by a wide variety of regulation by governments.
Bruce Guile
The Applied Research Consortia (ARC) Project
E. William Colglazier offers a critical assessment at a very important time. Almost a decade of scientific exchange between the United States and Russia has been curtailed following Russia’s invasion of Ukraine. Over the past half decade or so, the same has been happening with China and several countries in the Middle East. Even US collaborations with friendly allies have become increasingly difficult when risks are perceived differently. Data that US research organizations might normally share freely or develop jointly with collaborators might now be blocked if the parties don’t share the same point of view. In this context, I would like to add a few thoughts to the author’s excellent description of international collaborations.
First, it is imperative to understand and accept the arguments from the proponents of more research security as well as the defenders of unquestioned openness. They are both valid and need to be listened to. But a word I would add to the conversation on how to move forward is “trust.” There must be trust that the research enterprise and principal investigators want to protect what is important to the United States, especially when we see a potential collaborator doing the opposite. Today, the consensus that international collaborations provide benefits is questioned. At the same time, the science community has lost at least some of this trust—otherwise we would not be having these conversations.
The dialogue around protection and trust must engage those at the forefront, in addition to occurring within expert panels and small group discussions. Principal investigators must be given opportunities to gain enough information to understand any potential risks going forward and to be trained in how to deal with them; alternatively, they or their institutions may decide not to pursue a project further. On this front, ideas are being explored at the National Science Foundation and elsewhere to provide such platforms for information exchange—and we should all wholeheartedly support those efforts. If home organizations prescribe how to manage the risk, they should take responsibility for the outcome as well—good or bad. As always, authority and responsibility have to line up, independent of what system of control is chosen. Since the research enterprise, the government, and companies and groups in the private sector all benefit from international collaborations, they should also share the risk.
Lawmakers, science funders, and managers of the US research enterprise must understand the opportunity cost of not collaborating, or the nation will be overwhelmed by surprises, underwhelmed by progress, and forced to scramble. Every time I attend a conference in Europe, I learn about progress in emerging technologies happening in countries we have curtailed scientific exchange with. After a few years of learning only secondhand, even in the small slice of science and technology I’m engaged in, the picture is increasingly scary. There are more and more things we don’t know. Not seeing means not knowing. I share this experience with many colleagues, and it underlines the urgency of restarting international collaborations in both directions, albeit with controls applied.
Colglazier concludes that there is “no need to fundamentally change a strategy that has benefited our country so greatly.” Almost 80 years of success supports this statement, as do I. But in every collaboration it takes two to tango. If one side changes the rules of engagement, the answer shouldn’t be to not collaborate, but to establish a security culture that allows a measured approach.
Norbert Holtkamp
Science Fellow, Hoover Institution
Stanford University
E. William Colglazier rightly points out that scientific cooperation was viewed, 40 years ago, as a low-risk path to strengthen the US-China relationship. The shift in risk assessment from low to high over the decades resulted from China’s successful commitment to building a world-leading science and technology sector. However, the solution to the challenge that China now poses for the United States is vastly more complicated than one crafted for dealing with the Soviet Union in the early 1980s.
US views on China have shifted rapidly. Imputing nefarious motivations to China, casting its researchers and students as part of a “whole nation” enterprise set on taking advantage of naïve American benefaction, differs markedly from the position espoused just a few years before. In 2010, President Obama described US cooperation with China as beneficial to the United States. By 2018, cooperation was viewed with suspicion, and China’s policy initiatives were met with accusations of fomenting everything from intellectual property theft to industrial espionage. The swift change in rhetoric, from China as a partner to an adversary, suggests political purposes rather than any change in the benefits of scientific cooperation. Chinese nationals and those working with them began to be prosecuted. As the underlying political atmospherics changed, cooperation between the two nations began to drop even as US cooperation with Europe was sustained.
As with the US relationship with the former Soviet Union, the current views on China, reminiscent of the “Red Scare” and xenophobia, were and are internal to the United States. These views are depriving US research and development of the potential benefits of cooperation. Unlike at the time of the 1982 Corson report, which Colglazier cites, when the United States dominated world science, China is now fully capable of finding alternatives to working with us. Perhaps it was possible during the Cold War to “contain” the knowledge sector, but in the globalized world of the 2020s, where as much as one-third of all published research is multinational in origin, cutting off China serves mainly to redirect it to working with other scientifically advanced nations.
There is an unstated sense of betrayal in Western nations that scientific cooperation has not resulted in China’s political liberalization. The Enlightenment view posits an inextricable link between science and democracy. “Freedom is the first-born daughter of science,” said Thomas Jefferson, declaring that an enlightened citizenry participates in ordered governance. In 1978–79, many US scientists and policymakers thought that if we opened our country to Chinese students and scholars, as President Jimmy Carter offered to China’s then leader Deng Xiaoping, they would return home with new values more aligned with ours. Behind the science and technology agreements and the welcoming of more than 5.2 million students was the unspoken assumption that the United States would gift China with science, that science would enhance prosperity, and that from this would spring a more open, more market-led, and more liberal China. That this did not occur may cause some observers to reevaluate the relationship between science and government. However, to respond by betraying a core US value of openness does more damage to US science and technology than it does to China. It also does tangible damage to the bilateral relationship, making it much more costly than any sense of security that may ensue.
Caroline S. Wagner
Professor, John Glenn College of Public Affairs
The Ohio State University
Denis F. Simon
Professor, Kenan-Flagler Business School
University of North Carolina at Chapel Hill
Science and innovation have always flourished in times and places where openness prevails. This was true for the ancient Greeks and during the Renaissance, and it remains true today. The United States’ leadership in science and technology is linked to its joint status as a global economic hub and the home of a free and open society.
Science is the “seed corn” for many useful and valuable technologies. These technologies are commercially valuable and support national security, defense, and the nation’s economic prosperity. To protect these vital interests, governments and companies often control key technologies through security restrictions that limit who can access critical knowledge or participate in the design, production, trade, sale, or use of these technologies.
While these restrictions protect against the misuse of technologies, they also adversely impact the open environment that fostered their development in the first place. To maximize the benefits, this tension must be dynamically balanced, responding to the actions and behaviors of adversaries and marketplace competitors.
In his essay, E. William Colglazier takes a fresh look at the interplay between international scientific collaboration and research security restrictions. At a time when the balance is rapidly tilting away from openness and toward more restrictions, the author leverages his deep experience and expertise to remind us that the maximum benefit to the country is in the optimal balance, not in the maximum amount of protection. Colglazier effectively uses the history of US science diplomacy in past periods of heightened geopolitical tension, including the Cold War and the opening with China in the 1970s, to clearly illustrate the benefits of open scientific exchange and engagement, even at times of great tension.
Through examples, Colglazier reminds us of the benefits of robust global scientific engagement, from the Montreal Protocol implemented in 1989 to protect Earth’s ozone layer to contemporary efforts in the global response to greenhouse gas emissions. Most dramatically, he tells how US and Soviet scientists made significant contributions to the nuclear arms control efforts in the 1990s through their informal, nongovernmental “Track 2” engagement in the 1980s.
Today, shifting economic and geopolitical tensions are again upsetting the balance that has served the United States so well since the end of World War II. The forces causing this new imbalance were explored in the recent National Academies study Protecting US Technological Advantage, which I co-chaired. Our report agreed that responding to these pressures with restrictions alone will only diminish our country’s “openness” advantage. In his essay, Colglazier concludes as we did: The United States does not need to throw away its principles of openness but rather to wisely reoptimize its controls to find that balance point of maximum advantage.
Patrick Gallagher
Chancellor
University of Pittsburgh
E. William Colglazier provides a clear analysis of the growing tensions between open basic research and America’s concerns about its national security and competitiveness. He correctly notes that open research is critical (not optional) to innovation, and that America’s attractiveness to foreign-born talent has been our competitive advantage. He also notes that “politics remains, however, a more powerful force than science.” Thus finding a “balance” for America’s research institutions is precarious indeed.
I participated in one of the National Academies’ Roundtable discussions in November 2022. I was impressed with the skill and effort that both the leaders of academic research institutions and the national security personnel demonstrated when engaging in deep conversations consistent with the recommendations that Colglazier offers. To bridge the wide gap between these communities’ “open” and “security” cultures, people on both sides will need to strive to learn about all the players and understand their concerns. As in research, institutions don’t collaborate—people do. But only after they have built a basis of mutual understanding and respect.
As Colglazier points out, these concerns are not new. They have existed since the Cold War and are revived anew in each generation. Yet America has continued to lead the world in innovation. Our research institutions know how to protect in secure facilities that which is clearly identified as needing protection. They also know how to conduct critical basic research with colleagues across the planet.
What Colglazier correctly fears is asking an assortment of federal agencies to independently define what needs to be protected. This risks a drive to the broadest—but varying and vague—“areas” needing protection. It also risks chilling needed basic research for fear that what was proper before may later become problematic. The solution here can already be found in National Security Decision Directive 189, laid out in 1985: to the greatest extent possible, fundamental research should be unrestricted, and where restrictions are needed, they should be imposed by “classification.” The directive adds that each agency is responsible for determining whether classification is appropriate prior to the award of a research grant, contract, or cooperative agreement, and, if so, controlling the research results through standard classification procedures. This clarity eliminates the “gray zone” risk to both interests.
If I were to add a single additional thought, it would be: don’t ignore the need to prepare our students to understand, compete with, and collaborate with those from places where America will be competing for talent and innovation. We need to find ways in which common global issues (such as climate change) can be the subject of joint student collaborations. We might use technology to create “walled gardens”—bounded areas for collaborative research that does not touch the areas of military sensitivity—in which bright students at global institutions can work and learn together to analyze shared global problems. I might also note, by way of example and challenge, that Tsinghua University, a national public research institution in Beijing, China, now hosts a competition for analysis—in English—of issues focused on the United Nations’ Sustainable Development Goals.
Joseph Bankoff
Former Chair, Sam Nunn School of International Affairs
Georgia Institute of Technology
Asking the Hard Questions
As an executive at the most innovative university in the United States and a graduate of what I call “a liberal arts college masquerading as an engineering school,” I find it refreshing when scholar-leaders in science, technology, engineering, and mathematics—the STEM fields—speak both passionately and eloquently about the arts and humanities. Thus, I found the interview with Freeman A. Hrabowski III (Issues, Spring 2023) particularly rewarding.
Although West Point launched the United States’ first school of engineering in 1802, my alma mater, the US Air Force Academy, is perhaps the most technologically forward-thinking of all the military service academies. But as Hrabowski reminds us, “If we are simply creating techies who can only work with the technology, we’re in big trouble.” The same can be said of our future turbocharged, technologically enhanced officer corps. They too must be deeply rooted in what makes us human, especially when generative artificial intelligence is beginning to distort our collective conceptualization of “knowledge.”
Raised in the Deep South during the throes of the Civil Rights movement, Hrabowski draws a direct line from the sense of agency he gained while participating in Dr. Martin Luther King Jr.’s Children’s March in Alabama (an act that landed him in jail) to not only advocating for more Black PhDs in STEM but actually producing more of them. Hrabowski accomplished this heady task by completing what he identifies as among the most difficult tasks one can attempt: changing an institution’s culture—in this case, at the University of Maryland, Baltimore County. “To change the culture, we must be empowered to look in the mirror and to be honest with ourselves,” he reflects, if you’ll pardon the pun. Looking in the mirror, Hrabowski and his colleagues changed expectations, proclaiming and proving that underrepresented minority students can and will do math as well as their counterparts. But even after a successful 30-year run as a university president (when the average tenure is closer to six), Hrabowski’s efforts to promote improved outcomes for students, pre-K to PhD, haven’t slowed.
With a $1.5 billion scholars program funded and named in his honor by the Howard Hughes Medical Institute, Hrabowski has taken his crusade to even higher levels. Acknowledging that despite his team’s Herculean efforts, the share of Black students earning PhDs in STEM fields has moved from just 2.2% of all PhD STEM graduates to 2.3% in the recent past, Hrabowski realizes his work is far from done. Just as important, he is quick to note that less than 50% of students starting college graduate within six years, regardless of race.
Reflecting on his lifelong work, Hrabowski asks, perhaps rhetorically, but perhaps not: “What is it going to take to create a professoriate that will make exceptional achievement in STEM by people of color the rule rather than the exception?” One certainty: Freeman Hrabowski won’t stop asking that and even more difficult questions, just as he has been doing for the past four decades.
Chris Howard
Executive Vice President and Chief Operating Officer
Arizona State University
The Limits of Science Communication?
There is little doubt that scientists struggle with effectively communicating the results of their research, particularly when it conflicts with strongly held mental models, as in the case of climate change. Significant effort and resources have been put into programs that support science communication and community engagement, with most universities now offering courses on these topics.
However, in “Mental Models for Scientists Communicating With the Public” (Issues, Winter 2023), Kara Morgan and Baruch Fischhoff draw a distinction between simple, unilateral science communication and bilateral risk communication. The authors argue that risk communication is a prerequisite for effective science communication, and outline an engagement process for eliciting goals and mental models from target audiences. This iterative process centers on developing a series of influence diagrams to document stakeholder concerns relative to research outcomes. The process involves convening focus groups and a seemingly intensive schedule of interactive meetings between scientists and target audiences. Ultimately, Morgan and Fischhoff argue that, in general, scientists do not possess the relevant skill set to accomplish any of these activities.
I suspect this is not actually true, given the emphasis on research translation and science communication now prevalent across many graduate programs. It also appears that following the well-established process documented by the authors is likely to be effective when scientists and researchers have the opportunity to engage directly with target audiences and stakeholders. The concern is that the ability to have this kind of interactive engagement represents an atypical situation in the context of most science communication, and is not scalable to the target audience of greatest concern, the general public, or to the global risks of most concern.
Many independent analyses conclude that we now face significant existential and systemic environmental risks as a consequence of human economic activity. We risk exceeding planetary boundaries, that is, the biophysical capacity of the planet to support life as we have come to know it. The kinds of large-scale issues that have emerged—biodiversity loss, climate change, varying pressures on the global south versus north, and environmental and social inequities—would seem to require science and risk communication at a global scale, and across diverse but biased audiences, without the luxury of establishing personal relationships among researchers and stakeholders.
Understanding the mental models that shape audience perceptions is clearly important. Survey-based research reveals, for example, that in the United States there are distinct archetypal biases that inform how scientific information will be received. One biased mental model relating to climate change is based on the belief that it is not real, or that it is not caused by human activity. This stems from a number of biases, including confirmation bias, in which people seek information that confirms their preexisting beliefs, and the availability heuristic, in which people overestimate the likelihood of events based on how easily they come to mind.
The mental model that denies climate change is not supported by scientific evidence, which overwhelmingly shows that climate change not only is occurring, but is caused by human activities such as burning fossil fuels and deforestation. It is also harmful because it can lead people to resist efforts to address climate change, which will have serious consequences for the environment and ecological and human health.
I am curious what the authors would recommend for risk and science communication around these kinds of issues, which increasingly dominate public discourse, which will require integrated, systems-level solutions to address, and for which conventional and traditional models of risk communication are unlikely to suffice.
Katherine von Stackelberg
Department of Environmental Health
Harvard T. H. Chan School of Public Health
NEK Associates
Managing the Risks of International Collaboration
Over a short time span, international academic cooperation has gone from being regarded as unambiguously positive and widely promoted by governments and research funders to something that is complicated, controversial, and even contested. Rising geopolitical frictions combined with competition over dominance of key technologies lie at the heart of this shift. As a result, universities and researchers who had come to take the global enterprise of science for granted are now navigating issues such as dual use regulations, export controls, screening of foreign students and scholars, and whether researchers should be required to disclose their international sources of funding. Governments are devising or considering measures to restrict or control international academic exchanges deemed a threat to national interests.
Researchers and university administrators are increasingly calling for clearer and more concrete guidance and frameworks for international collaboration. One challenge is how to balance rules for international scientific engagement with the preservation of academic freedom to choose research topics and partners. In “Navigating the Gray Zones of International Research” (Issues, Winter 2023), Tommy Shih offers some important insight here. In particular, he suggests that research funders can play an important role in developing global norms for collaboration. I agree. Such norms could, among other things, constitute a valuable step toward a framework of global governance for research that would safeguard international scientific cooperation while acknowledging national interests and protecting ethical principles. This is particularly important at a time when growing caution in academia and government risks preventing or ending valuable cross-border academic exchange and cooperation.
Having worked in university administration, government, and research funding organizations, I have seen cases of research cooperation that clearly should not have happened because they violated ethical standards or undermined national security. Preventing such collaborations should be a priority for researchers, universities, and funders. At the same time, there is currently a growing tendency for researchers and universities to shy away from collaborative efforts that could significantly benefit science, society, and the planet because of some perceived potential risks. This concern is well captured in a report titled University Engagement with China: An MIT Approach, published in November 2022, which states: “An important aspect of this review process is to consider the risks of not undertaking proposed engagements, as well as the risks of doing so.”
As Shih correctly points out, international research cooperation is not as binary—unequivocally good or bad—as it is sometimes made out to be. Some cooperation has significant potential benefits while at the same time incurring risks. Binary guidelines are not suitable for handling such cases; rather they require instruments for managing risks.
Rising tensions between the two largest funders and producers of scientific knowledge—the United States and China—risk turning international academic cooperation into a zero-sum game that can hurt both science and humanity’s prospects of addressing pressing challenges. Preventing unsuitable research cooperation without scaring off collaborations that are beneficial and noncontroversial is one concern for institutions and countries committed to good science and prospering societies. Another is managing collaborations that could bring significant benefits but also incur certain risks. Addressing these issues requires a combination of norms, support, and rules—and should be a priority for research performers and funders alike.
Sylvia Schwaag Serger
Professor, School of Economics and Management
Lund University, Sweden
Chair, Scientific Council of Formas (Swedish Research Council for Sustainable Development)
Chair, Austrian Council for Research and Technology Development
Boundary-Pushing Citizen Engagement
In “How Would You Defend the Planet from Asteroids?” (Issues, Winter 2023), Mahmud Farooque and Jason L. Kessler reflect on the Asteroid Grand Challenge (AGC), a series of public deliberation exercises organized by members of the Expert & Citizen Assessment of Science and Technology (ECAST) network and NASA in 2014. Center stage were the positive impacts that citizen deliberations had on NASA representatives and NASA decisionmaking. However, the authors lament that citizen engagement similar to the AGC has not happened again at the agency. As Kessler points out, while the value of citizen engagement is acknowledged within NASA to this day, the “interstitial tissue that enables it to happen” is lacking.
In response to this replication challenge, Farooque poses an “existential question” specifically to the ECAST network, but one that resonates more broadly for engagement scholar-practitioners: Should we continue to pursue experimental engagement from the outside or work to concentrate capacity for engagement within federal agencies? While this “outside” vs. “inside” debate remains perennial for pursuing political change, we suggest that the two strategies must work hand-in-hand. From our perspective, the AGC case study provides a road map for how to embrace the nexus of agency process (inside) and boundary-pushing engagement (outside).
First, crucial partnerships between the inside and outside enable success for citizen deliberations. Professionals such as Kessler search and advocate for opportunities and resources for citizen engagement from the inside of agencies such as NASA. Practitioners such as Farooque transport and translate questions, ideas, and perspectives from the outside that expand the immediate priorities of the agency. For example, although NASA presented only two options to focus citizen debate, Farooque explains that citizen discussions produced additional governance questions and options that broadened the impact of deliberation.
Second, centering citizen deliberations around agency priorities yields important impacts for agency decisionmaking. In the AGC, a planetary defense officer confirmed in Farooque and Kessler’s account that an important outcome of the exercise was learning from public perspectives on planetary defense and hearing “how important it was for NASA to be doing it.” This social learning was valuable to agency decisionmaking, as experiencing this public support somewhat alleviated NASA’s decisionmaking gridlock and “pushed it over the threshold.” Citizen deliberations organized from the outside might not gain the internal audience needed to have such impacts on decisionmaking.
Lastly, interaction between agency representatives and citizens energizes both parties. As one participant reported, the opportunity to interact with NASA representatives “made this session special” for citizen participants. Moreover, interactions could be extended to the outside by inviting agency representatives to participate in external events. Continuous agency exposure to public perspectives could in turn build more support for engagement from the inside. The AGC’s success as institutionalized citizen engagement came from linking the spheres of agency process and boundary-pushing engagement. This inside/outside strategy offers more of a model than a dilemma, as such exercises accumulate to build the “interstitial tissue” that could support a more dynamic, continuous, boundary-crossing engagement ecosystem.
Dalton R. George
Postdoctoral Research Scholar, School for the Future of Innovation in Society
Arizona State University
Jason A. Delborne
Professor of Science, Policy, and Society, Genetic Engineering and Society Center
North Carolina State University
Mahmud Farooque and Jason L. Kessler’s first-person account of how scholars and policymakers worked to integrate public views into NASA’s Asteroid Grand Challenge initiative describes the twists and turns involved in deploying a relatively new social science research approach, called participatory technology assessment (pTA), to provide policy-relevant input from members of the public on how NASA should prioritize and implement its approach in designing a planetary defense system.
The article provides many helpful takeaways. One of the most important is that even though there is much talk about the importance of involving the public in discussions about how new technological innovations could impact society, figuring out how to do this in practice remains challenging. The pTA approach—daylong events that combine informational sessions about a cutting-edge area of technology with interactive, facilitated discussions on how these technologies might be best managed—advances a new way of strengthening the link between public engagement and decisionmaking. Over the past decade, the pTA approach has been applied to numerous topic areas, and new efforts are underway as well. This includes a project funded by the Sloan Foundation, led by Farooque at Arizona State University, that will apply the pTA methodology to the issue of how to best manage the societal implications of carbon dioxide removal options—which seek to remove greenhouse gases from the atmosphere—that are in the process of being researched and deployed. This pTA effort heeds the call of two landmark consensus studies from the National Academies that highlight the need for more social science research on the rollout of negative emissions technologies and the ocean’s role in carbon dioxide sequestration.
More funders from philanthropy and government need to be willing to support this innovative social science approach and help to scale its application across a wider range of technological domains. As Farooque and Kessler so tellingly describe, it can be difficult for funders to make this leap. Due to unfamiliarity with the process, there is inevitable uncertainty upfront about the value of these pTA sessions. Because funders may not know what to expect from pTA processes, they may be cautious about deciding to finance these efforts. Additionally, it can be difficult for funders familiar with supporting expert-driven science to adapt their mindsets and recognize that such public deliberation activities generate invaluable insight into the strengths and drawbacks of different technology governance options.
There are ways of overcoming these barriers. First, experiencing pTA sessions firsthand is key to understanding their value. Kessler helpfully reflects on this point, noting that going into the pTA sessions, NASA “didn’t really know what would come out of it,” but that as the sessions progressed “it was clear the results could exceed even our most optimistic expectations.” Second, funders can view pTA as a methodological tool that can complement more typical social science approaches, such as one-on-one interviews, focus groups, and surveys. Unlike individual interviews, the pTA approach benefits from group conversation and interaction. Unlike focus groups, pTA is structured to engage hundreds of participants over multiple dialogue sessions. Unlike surveys, time is taken to inform public participants about a technology’s development and lay out available governance options.
This is a period of experimentation for funders of science, with philanthropies and governments trying wholly new forms of allocating resources, from lotteries to grant randomization to entirely new institutional arrangements. Along with experimenting with how scientific research is supported, funders need to be similarly bold and willing to advance new approaches to social science research, which is critical to ensuring that public views are effectively brought into science policy debates.
Evan S. Michelson
Program Director
Alfred P. Sloan Foundation
Materially Different
In “Computers on Wheels?” (Issues, Winter 2023), Matthew Eisler makes a significant contribution to understanding the roots of the modern electric vehicle (EV) revolution. He provides many missing details of the thinking behind Tesla’s beginnings, especially the ideas for framing automobiles as consumer commodities. More importantly, he highlights the incompleteness of the “computer on wheels” analogy, which has bedeviled legacy automakers and policymakers alike.
As Eisler notes, while electric vehicles and computers are similar in some respects, they “are significantly different in terms of scale, complexity, and, importantly, lifecycle.” One such difference is the intense demand EVs place on developing and sustaining extremely complex software, not only for safety-critical battery management but for the rest of the vehicle’s systems. Tesla’s organic software-development capability is a critical reason it has been able to forge ahead of legacy automakers in terms of both features and manufacturing costs. While EV batteries account for some 35–40% of an EV’s manufacturing cost, vehicle development costs attributable to software are rapidly approaching 50%.
Although the amount of software reinforces the analogy of an EV being a computer on wheels, the analogy fails to account for how EVs materially differ from their internal combustion engine counterparts. EVs represent a new class of cyber-physical system, one that dynamically interacts with and affects its environment in novel ways. For instance, EVs with their software-controlled electric motors no longer need physical linkages to steer or apply power—a joystick or another computer will do. With additional devices to sense the outside world along with requisite computing capability, EVs can more easily drive themselves sans human interaction than can combustion-powered vehicles. Tesla realized this early and made creating autonomous driving capability a priority. In developing self-driving, the company further increased its software prowess over legacy automakers.
As Eisler notes, policymakers wholeheartedly embraced EVs, first to fight pollution and later to combat climate change. However, policymakers have also embraced the potential of autonomous-driving EVs and are counting on them to limit individual vehicle ownership, thus reducing traffic congestion and ultimately reducing greenhouse gas emissions by up to 80% by 2050. Even for Tesla, creating fully self-driving vehicles has been much more difficult than it imagined, illustrating the dangers of policymakers adopting nascent technologies as a future given.
This highlights another critical problem that Eisler pinpoints as resulting from policymakers’ embracing EVs as computers on wheels—that of scale. Transitioning to EVs at scale not only demands radical transformations in automakers’ global logistical supply chains but also establishes new dependencies on systems and capabilities outside automakers’ control, from lithium mines to the electrical grid. The grid, for example, will need increased energy generation capacity as well as significantly improved software capability to keep local utilities from experiencing blackouts as millions of EVs charge concurrently. Policymakers are only now coming to terms with the plethora of network effects that EVs, and their related policies, create.
Eisler clearly underscores the myriad challenges EVs present. How well they will be met is an open question.
In 2019, I faced the chore of replacing the family car. In visiting various dealerships, I found myself listening to lengthy descriptions of vehicle control panels, self-parking features, integration of contact lists with built-in phone systems, and navigation and mapping options. I heard nothing about safety features, engine integrity and maintenance, handling on the road, passenger comfort, or even gas mileage (I was looking at all types of engine options). I was barely even encouraged to take a test drive. It was as if, indeed, I was shopping for a computing machine on wheels.
By opening with the “computer on wheels” metaphor, Matthew Eisler provides an opportunity to think about a major technology shift—from the internal combustion engine to the electric motor—from multiple perspectives. How is a car like a computer? How does a computer operate a car? How did electric-vehicle visionaries adapt design and production techniques from the unlike business of producing computers? How are the specific requirements of the single-owner mode of transportation different from other engineering challenges? How are geopolitical crises and the loci of manufacturing of component parts implicated in car production? What might answers to these questions tell us about complex systems and technological change over time?
As Eisler deftly argues, the modern push for electric cars represented the confluence of multiple social, economic, technological, and political scenarios. EV enthusiasts looked outside Detroit for new approaches to car building. The methodology of the information technology industries offered the notion of assembling off-the-shelf component parts, but the specific safety requirements—and the related engineering complexity—of automobiles put the process on a longer trajectory. On the one hand, the near simultaneous cancellation of General Motors’ early electric car EV1 and bursting of the dot-com bubble discouraged investment in a new EV industry. On the other, public sentiment and resulting public policy created a regulatory environment in which a technically successful EV could flourish.
Eisler highlights additional tensions. A fundamental mismatch between battery life and engine life undermined the interest of the traditional auto industry in these newly designed vehicles and belied difficulties in production and maintenance. And the trend of globalization, with its attendant divide between design in the West and manufacturing in the East, persisted beyond policy initiatives to onshore all elements of car production in the United States. Most profoundly, taking the longer view toward the future, Eisler indicates that thinking about cars as computers on wheels fails to consider the larger sociotechnical system in which EVs are inevitably embedded: electric power networks.
Today, Americans plug mobile computing devices without wheels into outlets everywhere, expecting only to withdraw energy. Sure, some of us recharge our phones with laptops as well. But we don’t really see laptops as storage batteries for the power grid. Nor do we generally consider when to recharge phones to avoid periods of peak power demand. But the energy exchange potential of an all-EV fleet that may replace current hydrocarbon-burning cars, buses, trucks, and trains suggests a much more complex electrical future. Eisler gives just one of multiple examples, noting that EV recharging shortens the service life of local power transformers. EVs are complicating construction and maintenance of power distribution networks and are already much more than computers on wheels. A wide range of industries have difficult and interesting questions ahead about whether and how these hybrid IT/mobility devices will fit into our highly electrified future.
Julie A. Cohn
Non-Resident Scholar, Center for Energy Studies, Baker Institute
Matthew Eisler makes a welcome contribution to the emerging conversation about the history of the rebirth of the electric vehicle. In particular, Eisler rightly highlights two factors that are often left out of breathier, more presentist accounts.
First, Tesla—the company synonymous with the faster, better EV that competes head-to-head with internal combustion—was not founded by Elon Musk. Although Musk subsequently negotiated a deal by which he was officially recognized as a Tesla founder, Eisler rightly focuses on the efforts of Martin Eberhard and Marc Tarpenning (and later JB Straubel) who recognized the opportunity arising from the confluence of advances in lithium-ion (Li-ion) storage batteries and drivetrain technology. Although many liken Musk to an “electric Henry Ford,” it is a poor analogy, as Eisler makes clear.
Second, Eisler rightly focuses on the social contexts of innovation in both the consumer electronics and automotive industries. The story of the Tesla founders trying to convince a reluctant Asian manufacturer to sell them high-performance Li-ion batteries for their pilot vehicle (eventually the Roadster) stands in sharp contrast with current efforts to “blitzscale” battery production for the growing electric vehicle market. The early battery makers focused on consumer electronics and therefore underestimated demand for Li-ion cells from electric vehicle manufacturers. Conversely, today’s rush to mass produce Li-ion cells everywhere may lead to overinvestment and rapid commodification rather than future riches. The Li-ion-powered electric vehicle is an industrial accident, not a carefully orchestrated transitional technology. Its history is definitely not one characterized by the seamless adjustments of efficient markets, a point further underscored by Eisler’s recognition of the role of state and federal policymakers, both in the initial rebirth of the EV and in support of Tesla.
There are three areas where I think Eisler might have missed the mark:
First, the idea of the car as a computer on wheels goes back to the dawn of the solid-state era. In my own work on the history of EVs (The Electric Vehicle and the Burden of History, Rutgers University Press, 2000), I found engineers in Santa Clara County in the mid-1960s talking about the electrification of the automobile as a result of advances in solid-state electronics. But it turned out that the incumbent auto industry responded by electrifying everything except the drivetrain. The car was an “electrical cabinet on wheels” before it became a computer. In this respect, thinking about electrification predates the birth of Silicon Valley, not to mention the dot-com era and everything that followed.
Second, many historians of technology may not wish to hear it in such stark terms, but it is very hard to imagine the EV transformation Eisler describes occurring in the absence of the important technological advances in energy storage and drivetrains. The “computer on wheels” was simply not plausible in the late 1980s. Innovation matters. Technological change creates affordances that shape downstream social and economic outcomes. Before those affordances were available, much of Eisler’s story would not have been possible.
Third, the success of the standalone electric vehicle may have blinded Eisler (and others) to some of the paths not taken. For many years, EV supporters focused less on the electric vehicle as a replacement for internal combustion and more on adjacent market opportunities. For this group, electrification might have looked like electric scooters or electric-assist bicycles, or like micro cars such as city- or neighborhood-electric vehicles, or even like electric light delivery vans and small buses. Recent events have pared away the many other ways that the electrification of the auto might have developed.
David A. Kirsch
Associate Professor, Robert H. Smith School of Business
University of Maryland
Caring for People With Brain Injuries
In “The Complicated Legacy of Terry Wallis and His Brain Injury” (Issues, Winter 2023), Joseph J. Fins employs the story of one man to underscore a severe shortcoming in the US health care system. Disorders of consciousness (DoC) are conditions in which the brain is not dead but there is also no consistent responsiveness to external stimuli. People who experience DoC may progress through coma, unresponsive wakefulness, and minimally conscious states before sleep/wake cycles are re-established and reliable responsiveness to external cues returns.
People who experience prolonged DoC following traumatic brain injury encounter multiple faults in the existing service delivery system. Despite recent evidence that three of four persons with DoC due to traumatic brain injury will become responsive by one year after injury, many families are being asked to make decisions regarding withdrawal of life support just 72 hours after injury. These families cannot be expected to know about prognosis; it is the responsibility of the health care system to provide unbiased and evidence-based data upon which these critical decisions can be made.
It is also incumbent upon the health care system to provide competent care tailored to the needs of persons with prolonged DoC. At discharge from trauma services, care by professionals who are competent to assess and treat unresponsive patients is obligatory. With the promulgation of guidelines by a joint committee of the American Academy of Neurology, the American Congress of Rehabilitation Medicine, and the National Institute on Disability, Independent Living, and Rehabilitation Research, we can no longer claim ignorance regarding the competencies needed to treat this population.
Tailoring care to the needs of people with DoC also includes placement in health care settings that can optimize rehabilitative treatments while protecting against complications that limit recovery. Movement to and between a long-term care facility and specialized inpatient rehabilitation programs should not be based on criteria developed for other populations of patients. For instance, there is no medical basis for the requirement that a person with a DoC actively participate in rehabilitation therapies when passive motor and cognitive interventions are the appropriate care. Effective and humanitarian treatment requires monitoring and coordination across a number of health care settings including, in some cases, the patient’s own home. A person with a prolonged DoC deserves periodic reassessment to assure that complications are not developing and, more important, to detect when an improvement in arousal or responsiveness necessitates a change in therapeutic approach. This type of coordinated approach across settings is not a strength of the US health care system.
Other Western countries—most notably Great Britain and the Netherlands—have recognized the unique needs of persons with prolonged DoC and have designed health care pathways that optimize the opportunity for a person to attain their full potential. It should not be a matter of luck, personal means, or a relentless family that determines whether a person has the opportunity to regain responsiveness after prolonged DoC. We have the knowledge to provide appropriate care to this population; it is now a matter of will.
John D. Corrigan
Professor, Department of Physical Medicine & Rehabilitation
The Ohio State University
As I read Joseph J. Fins’ essay, I was trying to envision the article’s optimal target audience. As a neurologist who cares for critically ill patients with acute brain injuries from trauma, stroke, and hypoxic-ischemic brain injury after cardiac arrest, the clinical details of cases such as described are familiar. But this story is of a person, not a patient, and it reinforced the view that my particular domain, the neurological intensive care unit, has afforded me. People such as Terry Wallis and their families have a complex journey that involves direct and indirect intersections with intensive care units, skilled nursing facilities, rehabilitation facilities, hospitals, outpatient clinics, and insurance payors (governmental and private), as well as with the doctors, nurses, therapists, administrators, social workers, ethicists, and interest groups that inhabit these organizations, and with legislators who craft policies that overarch all. Mr. Wallis’s poignant story seems to be one of disconnection. I would like to think that each of these groups that needs to hear this story has good intentions, but there is a clear lack of ownership, follow-through, and “big picture” that may even incentivize leaving those with severe neurologic impairments (or more often their families) to find their own way.
Several potentially disparate aspects of cases such as Mr. Wallis’s bear discussion and emphasize the need for a holistic patient-centered view of his experience. These include prognostic assessment by medical personnel, values-based judgment of living a disabled life, and the civil rights that consciousness necessitates. It is increasingly recognized that inaccurate early negative prognostication can lead to a self-fulfilling prophecy of death or poor outcome if care is limited as a result of this assessment. Medically, prognostic uncertainty can be considered as the difference between “phenotype” (what a patient looks like on clinical examination) and “endotype” (the underlying biological mechanism for why the patient looks like this). The author’s discussion of cognitive-motor dissociation is part of this consideration, as is clinical humility in prognostication (as described by the neurologist-bioethicist James Bernat). The comment that Mr. Wallis’s treating doctors “couldn’t imagine that the life he led was worth living” also reflects a common paternalistic view that pushes clinicians away from patients and diminishes the value of patients’ and their families’ goals of care. And perhaps most novel for treating physicians, the idea that the civil rights of patients with impaired consciousness might be compromised if desired care is not accessible and provided is compelling and difficult to rebut. It is too easy for patients with disorders of consciousness to become disconnected.
A recent study by Daniel Kondziella and colleagues, reported in the journal Brain Communications, estimated that 103,000 overall cases of coma occur in the United States annually. Efforts such as the Neurocritical Care Society’s Curing Coma Campaign seek to push the science toward recovery and bring a more holistic view to the care of patients across their experience. The story of Terry Wallis is not a one-off. I hope his journey can get to the audiences who need to hear it.
J. Claude Hemphill III
Professor of Neurology
University of California, San Francisco
Cochair, Curing Coma Campaign
Neurocritical Care Society
Joseph J. Fins narrates Terry Wallis’s fascinating and important case and explains the scientific and social lessons it taught. Here, I offer an additional scientific insight from the story and discuss its implications.
Neurologists and neuroscientists who studied Mr. Wallis’s case investigated the mechanism to explain the unusually long delay in his improvement following a serious traumatic brain injury. They performed brain MRI studies using diffusion tensor imaging, a technology that assesses the integrity of white matter tracts, which contain nerve fibers that serve to connect the cerebral cortex with different areas of the brain and spinal cord. These studies showed that the mechanism of his brain damage was diffuse axonal injury. This type of brain injury is produced by blunt rotational head trauma causing widespread severing of the axons of brain neurons. Additionally, observers noticed that the white matter changes evolved over time. They interpreted these findings as gradual regrowth of the severed axons, which likely accounted for Mr. Wallis’s long-delayed improvement. Presumably, it took nearly two decades for the slow axonal regrowth to adequately reconnect disconnected brain regions and restore his ability to talk.
Most types of serious global brain injury, such as those caused by trauma or lack of blood flow during cardiac arrest, primarily damage brain neurons. By contrast, diffuse axonal injury generally spares the cell bodies of neurons and damages only their axons. Diffuse axonal damage disconnects brain neurons from each other and produces severe brain dysfunction. The resulting widespread neuronal disconnection is sufficient to induce a disorder of consciousness such as the vegetative state or, as in Mr. Wallis’s case, the minimally conscious state.
Although often severe, diffuse axonal injury may have a better prognosis than a brain injury of similar magnitude that primarily damages neurons, such as that produced by absent brain circulation during cardiac arrest, as in the widely publicized case of Terri Schiavo 20 years ago. Sheared axons with intact neuronal cell bodies retain the capacity to regrow, whereas severely damaged neurons usually do not. As the article explains, the Terry Wallis case illustrates that improvement in neurological function after severe brain injury remains possible, even after many years, particularly when the mechanism is diffuse axonal injury.
James L. Bernat
Professor of Neurology, Active Emeritus
Dartmouth Geisel School of Medicine
The Social Side of Evidence-Based Policy
“To Support Evidence-Based Policymaking, Bring Researchers and Policymakers Together,” by D. Max Crowley and J. Taylor Scott (Issues, Winter 2023), captures a simple truth: getting scientific evidence used in policy is about building relationships of trust between researchers and policymakers—the social side of evidence use. While the idea may seem obvious, it challenges prevailing notions of evidence-based policymaking, which typically rest on a logic akin to “if we build it, they will come.” In fact, the idea that producing high-quality evidence ensures its use is demonstrably false. Even when evidence is timely, relevant, and accessible, and even after researchers have filed their rigorous findings in a clearinghouse, the gap between evidence production and evidence use remains wide.
But how to build such relationships of trust? More than a decade of findings from research supported by the William T. Grant Foundation demonstrates the need for an infrastructure that supports evidence use. Such an infrastructure may involve new roles for staff within policy organizations to engage with research and researchers, as well as provision of resources that build their capacity to do so. For researchers, this infrastructure may involve committing to ongoing, mutual engagement with policymakers, in contrast with the traditional role of conveying written results or presenting findings without necessarily prioritizing policymakers’ concerns. Intermediary organizations such as funders and advocacy groups can play a key role in advancing the two-way streets through which researchers and policymakers can forge closer, more productive relationships.
Research-practice partnerships, which consist of sustained, formalized relationships between researchers and practitioners or policymakers, are one way to create and reinforce the infrastructure for supporting relationships that advance evidence use. Such partnerships are especially common in education, where they often bring together universities and school districts or state education agencies to collaborate on developing research agendas, communicating findings, and interpreting evidence.
Crowley and Scott have demonstrated an innovative approach to creating relationships between researchers and policymakers, one that is well suited to deliberative bodies such as the US Congress but that could also apply to administrative offices. In the Research-to-Policy Collaboration model the authors describe, the Evidence-to-Impact Collaborative operates as an intermediary, or broker, that brings researchers and congressional staff together in structured relationships to create opportunities for the development of trust. These relationships are mutually beneficial: they build policymakers’ capacity to access and interpret evidence and allow researchers to learn how to interact effectively with policymakers. Thanks to their unique, doubly randomized research design (i.e., both policymakers and researchers were randomized to treatment and control groups), Crowley and Scott are able to demonstrate that the Research-to-Policy Collaboration model has benefits on both sides.
It is past time to move beyond the idea that the key to research use is producing high-quality, timely, relevant, and accessible evidence. These qualities are important, but as Crowley and Scott have shown, the chances of use are greatly enhanced when research findings are examined in the context of a trusting relationship between researchers and policymakers, fortified by the intermediaries who bring them together.
Adam Gamoran
President
William T. Grant Foundation
Export Control as National Security Policy
In 1909, as part of the Declaration of London on the Laws of Naval War, a group of nations produced a list of items that we would today consider “dual use” but that were then called “conditional contraband.” The list marked the first time a large set of states had agreed to a common understanding of which goods and technologies represented a security concern.
Interestingly, the list included an item that is not on current export control lists, but is very much on the minds of people engaged in security governance today: balloons. Like general aviation airplanes, box cutters, or novel genetic sequences, balloons, such as the ones floating over the United States recently, represent a type of security concern that is not really visible to, and therefore not governable by, today’s conventional export controls. But they still represent security concerns to the state.
In “Change and Continuity in US Export Control Policy” (Issues, Winter 2023), John Krige and Mario Daniels discuss how a historical gaze allows us to better understand “the context, effects, prospects, and challenges of the Biden administration’s current policy changes” on export controls. But there is a bigger conversation about export controls that we seem unable to have: When is this system of governance not the right tool for the job?
Many aspects of the modern export control system took shape in the 1940s. What was once primarily a concern with the movement of goods from seaports is now about the movement of those goods, and the knowledge around them, from computer ports and laboratory doors. Krige and Daniels amply critique the central idea in much current export control policy: that security comes from preventing foreign supply. And the assumption that we can know what we need to be concerned about with enough time to put export controls in place—at least two years if you want international harmonization—does not withstand much inspection; there are many areas where it no longer fits.
Just five years after nations produced that first international list of goods and technologies that represented a security concern, the concept of conditional contraband essentially fell apart in World War I and the era of total war. While export controls may not be on a similar precipice at the moment, their limitations are becoming only more apparent. In recognizing these limitations, we open the window to thinking differently about whose security matters, what counts as a security concern, and who has responsibility for doing something about it. Krige and Daniels note the obstacles the current export control policies will likely encounter, but it is also worth recognizing that we can capitalize on these obstacles to have a bigger conversation about when export controls are not the right tool for the job—and what the right tool might look like.
Sam Weiss Evans
Senior Research Fellow, Program on Science, Technology, and Society
John F. Kennedy School of Government, Harvard University
“National security” is a beguiling concept. Who does not wish to be secure among one’s people? Yet the very idea of a secured nation, as well as the instruments to achieve it, is not so much about the safety and well-being of the people in a country as about the maintenance and expansion of state power, often at the cost of that safety and well-being. The normalization of national security obscures its contested origin and the violence it invokes.
As John Krige and Mario Daniels elucidate in their essay, national security as a whole-of-society response to perpetual danger grew out of institutional legacies of World War II and quickly took hold at the onset of the Cold War. Export controls have been central to this mission: to keep US adversaries technologically inferior and economically poorer, hence militarily weaker.
Since the beginning, export control regulations have faced pushback from proponents of free trade. Yet the dual objectives of a secured nation and a free market are in tension only if one believes in the fairness, or at least the neutrality, of the capitalist market, and mistakes the purported ideals of America for reality. The so-called liberal international order, including its financial systems, intellectual property regimes, and trade rules, overwhelmingly favors US corporate interests and aids the country’s geopolitical agenda. Export controls are another set of tools in service of US hegemony.
A country’s foreign policy cannot be detached from its domestic politics. During the Cold War, US policymakers wielded the threat of communism as justification to wage wars and stage coups abroad, and to suppress speech, crush unions, and obstruct racial justice at home. Export controls should be understood within this broader context: more than just directing what can or cannot move across borders, these exclusionary policies also help define the borders they enforce. Both within and beyond the territorial bounds of the United States, the interests of capital and the stratification of labor follow a racialized and gendered hierarchy. Export control policies reflect and reinforce these disparities; they are exercises of necropolitics on a global scale, dictating who may live and who must die.
By the parochial logic of techno-nationalism, safety from dual-use technology is achieved not by restricting its harmful use but by restricting its users. Guns are good as long as they are pointed at the other side. The implications of this mindset are dangerous not just for existing technology but also for the future of science, as the anticipation of war shapes the contours of inquiry. When the Biden administration issued sweeping bans on the export of high-end semiconductor technology to China, citing the potential of “AI-powered” weaponry, the military application of artificial intelligence was no longer treated as a path that can be refused with collective agency but as destiny. The lust for a robot army further distracts from the many harms automated algorithms already cause, as they perpetuate systemic bias and aggravate social inequality. The securitization of a national border around knowledge depletes the global commons and closes off the moral imagination. The public is left poorer and less safe.
Yangyang Cheng
Research Scholar in Law and Fellow
Yale Law School’s Paul Tsai China Center
Lessons From the Ukraine-Russia War
Ukraine’s response to Russia’s invasion is reshaping our understanding of modern warfare along with defense research and development. At the same time, it presents an opportunity for already strong allies to forge new pathways of collaboration across the public and private sectors to bring commercial technology to the future battlefield. With help from public and private organizations, the Ukrainian armed forces have quickly embraced both military and civilian technologies as a means to confront fast-changing battlefield realities.
In “What the Ukraine-Russia War Means for South Korea’s Defense R&D” (Issues, Winter 2023), Keonyeong Jeong, Yongseok Seo, and Kyungmoo Heo argue that the “siloed,” “centralized” South Korean defense R&D sector should take a page from Ukraine’s playbook and better integrate itself with the broader commercial technology sector. The authors recommend prioritizing longer-term R&D challenges over the immediate needs of the South Korean armed forces, focusing innovation in critical technologies on new conflict scenarios and on dynamic, long-range planning.
In recent years, South Korean policymakers have increasingly recognized the defense sector as a key area for advancing the country’s security and economic interests. Propelled in part by many of the same government-led policy support mechanisms that have made the country a global leader in telecommunications, semiconductors, and robotics, South Korea has become the fastest-growing arms supplier in the world, with arms exports reaching more than $17 billion in 2022. Yet as Jeong, Seo, and Heo note, South Korea’s defense community still faces obstacles to the effective adoption of nondefense technologies that have played an important role in Ukraine, such as 3D printing, artificial intelligence-based voice recognition and translation software, and commercial space remote sensing. What’s more, South Korea’s failure to develop an inclusive R&D environment has hindered innovation in the nation’s defense ecosystem. Large companies account for almost 90% of sales among defense firms, leaving little room for smaller, innovative enterprises to find success.
The United States faces many of the same challenges. A 2021 report from Georgetown University’s Center for Security and Emerging Technology argued that under the US Department of Defense’s current organizational structure, “defense innovation is disconnected from defense procurement,” which is hampering efforts to adopt novel technologies at scale. Like its South Korean counterpart, the US defense industrial base is also characterized by high levels of market concentration among top defense contractors.
Jeong, Seo, and Heo offer recommendations that closely align with recent Defense Department efforts to foster innovation and accelerate adoption of the technologies that are fast transforming the US national security landscape. In light of lessons learned in Ukraine, the South Korean and US militaries should work together to develop and adopt disruptive technologies, ultimately enabling a joint fighting force in the Asia-Pacific region capable of deterring and defeating future adversaries.