Training More Biosafety Officers
The United States has long claimed that there is a need to focus on the safety and security of biological research and engineering, but we are only beginning to see that call turn into high-level action on funding and support for biosafety and biosecurity governance. The CHIPS and Science Act, for example, calls for the White House Office of Science and Technology Policy to support “research and other activities related to the safety and security implications of engineering biology,” and for the office’s interagency committee to develop and update every five years a strategic plan for “applied biorisk management.” The committee is further charged with evaluating “existing biosecurity governance policies, guidance, and directives for the purposes of creating an adaptable, evidence-based framework to respond to emerging biosecurity challenges created by advances in engineering biology.”
To carry out this mouthful of assignments, more people need to be trained in biosafety and biosecurity. But what does good training look like? And what forms of knowledge should be incorporated into an adaptable, evidence-based framework?
In “The Making of a Biosafety Officer” (Issues, Spring 2023), David Gillum shows the power and importance of tacit knowledge—“picked up here and there, both situationally and systemically”—in the practice of biosafety governance, while at the same time stressing the need to formalize biosafety education and training. This is due, in part, to the lack of places where people can receive formal training in biosafety. But it is also a recognition of, as Gillum puts it, the type of knowledge biosafety needs—knowledge “at the junction between rules, human behavior, facilities, and microbes.”
The current lack of formalized biosafety education and training presents an opportunity to re-create what it means to be a biosafety officer as well as to redefine what biosafety and biosecurity are within a broader research infrastructure and societal context. This opening, in turn, should be pursued in tandem with agenda-setting for research on the social aspects of biosafety and biosecurity. It is increasingly unrealistic to base a biosafety system primarily on lists of known concerns and standardized practices for laboratory management. Instead, adaptive frameworks are needed that are responsive to the role tacit knowledge plays in ensuring biosafety practices and are aligned with current advances in bioengineering and the organizational and social dynamics within which that work is done.
Biosafety and biosecurity expertise today means attending to the formal requirements of policies and regulations while also generating new knowledge about the gaps in those requirements and cultivating a well-developed sense of the workings of a particular institution. The challenge for both training and agenda-setting is how to endorse, disseminate, and assimilate the tacit knowledge generated by biosafety officers’ real-life experiences. For students and policymakers alike, a textbook introduction to biosafety’s methodological standards, fundamental concepts, and specific items of concern will surely come about as biosafety research becomes more codified. But even as some aspects of tacit knowledge become more explicit, routinized, and standardized, the emergence of new and ever-valuable tacit knowledge will always remain a key part of biosafety expertise and experience.
Gillum’s vivid examples of real-life experiences involving anthrax exposures, the organizational peculiarities of information technology infrastructures, and the rollout of regulations governing select bioagents demonstrate that, at a basic level, biosafety officers and those with whom they work need to be attuned to adaptability, uncertainty, and contingency in specific situations. Cultivating this mode of attunement among future biosafety professionals means embracing the fact that biosafety, like science itself, is a constantly evolving social practice, embedded within particular institutional and political frameworks. Formal biosafety educational programs must therefore not reduce what counts as “biosafety basics” to technical know-how alone, but ought to prioritize situational awareness and adaptability as part of their pedagogy. Biosafety and biosecurity research such as that envisioned in the CHIPS and Science Act will advance the training and work of the next generation of biosafety professionals only if it recognizes this key facet of biosafety.
Melissa Salm
Biosecurity Postdoctoral Fellow in the Center for International Security & Cooperation
Stanford University
Sam Weiss Evans
Senior Research Fellow in the Program on Science, Technology, and Society
Harvard Kennedy School
David Gillum illustrates the importance of codifying and transferring the knowledge that biosafety professionals learn on the job. It is certainly true that not every biosafety incident can be anticipated, and that biosafety professionals must be prepared to draw on their knowledge, experience, and professional judgment to handle situations as they arise. But it is also true that as empirical evidence of laboratory hazards and their appropriate mitigations accumulates, means should be developed by which this evidence is analyzed, aggregated, and shared.
There will always be lessons that can only be learned the hard way—but they shouldn’t be learned the hard way more than once. There is a strong argument for codifying and institutionalizing these biosafety “lessons learned” through means such as formalized training or certification. Not only will that improve the practice of biosafety, but it will also help convince researchers—a population particularly sensitive to the need for empirical evidence and logical reasoning as the basis for action—that the concerns raised by biosafety professionals need to be taken seriously.
This challenge would be significant enough if the only potential hazards from research in the life sciences flowed from accidents—human error or system malfunction—or from incomplete understanding of the consequences of research activities. But the problem is worse than that. Biosecurity, as contrasted with biosafety, deals with threats posed by those who would deliberately apply methods, materials, or knowledge from life science research for harm. Unfortunately, when it comes to those who might pose deliberate biological threats, we cannot exclude researchers or even biosafety professionals themselves. As a result, the case for codifying and sharing potential biosecurity failures and vulnerabilities is much more fraught than it is for biosafety: the audience might include the very individuals who are the source of the problem—people who might exploit the scenarios being shared, or who might even modify their plans once they learn how others seek to thwart them. Rather than setting up a registry or database by which lessons learned can be compiled and shared, one confronts the paradox of creating the Journal of Results Too Dangerous to Publish. Dealing with such so-called information hazards is one factor differentiating biosafety from biosecurity. Often, however, we call upon the same experts to deal with both.
Personal relationships do not immunize against such insider threats, as we learn every time the capture of a spy prompts expressions of shock from coworkers or friends who could not imagine that the person they knew was secretly living a vastly different life. However, informal networks of trust and personal relationships are likely a better basis on which to share sensitive biosecurity information than relying on mutual membership in the same profession or professional society. So while there is little downside to learning how to better institutionalize, codify, and share the tacit knowledge and experience with biosafety that Gillum describes so well, it will always be more difficult to do so in a biosecurity context.
Gerald L. Epstein
Contributing Scholar
Johns Hopkins Center for Health Security