Better Biosecurity for the Bioeconomy
Before 2020, the public wasn’t especially interested in biosafety levels, personal protective equipment, or disinfectant protocols. The COVID-19 pandemic changed that. When living-room conversations turned to assessing viral variants and spike proteins, then to comparing vaccine side effects, biological risk was no longer something for scientists alone to manage; it became a daily, personal, lived experience.
Soon, biosafety—the work of protecting people, communities, and the environment from accidental exposure to biological risks—became a topic of everyday conversation. As someone who has been working in biosafety for more than 30 years, I watched what had once been the backstage work of my fellow compliance personnel, biosafety committees, and researchers suddenly become front-page news. With few clear answers about COVID-19’s origins, speculation filled the gap. The lab-leak theory emerged, not just as a scientific question, but as a political one. At the same time, documented lab incidents resurfaced and pathogen research labs caught the public’s attention.
Public interest in biosafety set off a policy reaction: Agencies scrambled, and in May 2025 the president issued an executive order (EO 14292) promising to “stop dangerous gain-of-function research.” New legislation proposes sweeping oversight of “high-risk life sciences.” These efforts are cast from a familiar mold: top-down, rigid, and reactive, focused on controlling federally funded research at the proposal stage. But risk doesn’t stand still and wait for enforcement; it is constantly evolving alongside the advances of science. To effectively prevent the future’s still-unknown threats, biosafety must be flexible, with engaged oversight over the entire research life cycle. And rather than spreading responsibility across multiple federal agencies, biosafety needs leadership and coordination from the top of the federal government.
There is no question that oversight of high-consequence biological research is needed. But what kind, and how much? For the first time in decades, political leaders, scientists, and the public are looking seriously at biosecurity. This offers a rare policy window to design oversight that’s adaptive, collaborative, and capable of earning public trust.
Most of the policies now under consideration treat oversight as a one-time hurdle, stopping at the beginning of the life cycle without looking at the significant risks that arise as research is executed and disseminated. EO 14292, titled “Improving the Safety and Security of Biological Research,” is a perfect example. It bans so-called dangerous gain-of-function research that could cause “significant societal consequences,” a goal most of us would endorse. However, the order does not define those terms or offer infrastructure or resources for implementation. Already some researchers whose work could fall under this massive umbrella are walking away from promising lines of inquiry or taking their research to other countries, while others are rewriting grant proposals in vague language to avoid review.
In a similar vein, the proposed Risky Research Review Act, introduced this March, focuses on the inception of research rather than its conduct. The legislation would create a central board to review “high-risk” research before it takes place, but only one of the nine reviewers would have a biosafety background. Like the executive order and other initiatives stretching back to the 2001 Patriot Act, this proposed legislation uses a static model of risk, one focused on physical materials and access, such as pathogens, samples, and entry to the lab, and assumes that risk is easy to spot. But biology isn’t just physical anymore: DNA can be digitized, and some experiments happen on laptops. Data can easily move across borders. Oversight that looks only at containment misses most of what’s happening in today’s research.
Let me walk you through the problem with trying to enforce biosafety through checklists of impermissible organisms and procedures. Let’s say a researcher submits a proposal to do research on a potentially deadly organism like anthrax, or to perform procedures likely to alter a contagious virus. If the Risky Research Review Act becomes law, that kind of proposal would be flagged immediately and almost certainly denied. That generally makes sense. But now let’s say a mid-career scientist doing research on a commonplace organism like Escherichia coli submits a pretty mundane proposal. Having passed the checklist, the research would get a green light. But a focus on the proposal stage can miss how risks arise over time as research is conducted.
Risk doesn’t follow a predictable schedule: It shifts as research moves forward, sometimes in small ways but sometimes in unimaginably big ways. This is particularly true when a basic experiment inadvertently changes how an organism behaves. For example, when researchers knock out a gene in a bacterium like E. coli to study its function, they often confirm its role by reintroducing the gene on a small ring of DNA called a plasmid. However, sometimes this routine test results in a “shaggy” microbe—an overgrowth of the external filaments, called hyperpiliation—which can be associated with increased virulence. Anticipating risk at the proposal stage, without paying attention to how it evolves, would not catch a danger like this one, which originates at the bench. Oversight is required across the research life cycle, in a process that checks in as the research is planned, while it is underway, and downstream, when the products go out into the world.
Further complications may arise when teams grow and laboratory procedures and methods evolve. At that point, engaged oversight is required to help ensure everyone understands the risks and is working to manage them. In the scenario described above, imagine a junior scientist observing the unexpectedly shaggy microbes. If the researcher lacks education, training, or mentorship, they may not even realize there is a problem. Midstream oversight helps surface those moments, fostering critical thinking and a team-based approach to risk awareness and mitigation. This type of oversight is less about policing the lab and more about partnering to reduce risks of all kinds.
Oversight is also needed downstream, when findings move out into the world as published results and shared data. Once results leave the lab, other researchers build on the work, sometimes in ways the original team didn’t expect, potentially bringing new risks and dangers. For instance, a published study showing that adding a gene to E. coli causes hyperpiliation might seem utterly benign. However, another lab or researcher, perhaps with bad intent, might be inspired to insert that gene into a cousin of E. coli, such as Shigella or Salmonella, conceivably creating a more virulent strain. So before the research is disseminated, another stage of oversight should ask whether it could have dual uses and explore whether the information needs to be flagged, framed, or protected to avoid danger.
Oversight that spans the entire research life cycle can stay ahead of risks without getting in the way of discovery. Other high-consequence fields already use phased oversight. In aviation, for example, safety protocols are implemented at multiple stages—design, testing, operation, and incident review—to ensure continuous monitoring without stalling innovation. The nuclear energy sector employs a life cycle regulatory approach that includes initial site assessment, ongoing operational audits, and decommissioning planning, all under federal oversight. Similarly, the pharmaceutical industry is guided by stage-gated clinical trials, with regulatory checks at preclinical, Phase I–III, and post-market surveillance phases. These models show that effective, phased oversight is not only possible but essential for complex, risk-intensive domains. Rather than slowing research down or tangling it in red tape, an adaptive risk management life cycle will offer support when projects shift direction, bringing in biosafety officers, funders, peer reviewers, and security experts as they’re needed.
The case for a National Biosafety and Biosecurity Agency
Moving to a flexible system is only part of the task. Research moves across systems, platforms, and borders easily—and so do risks. Today, oversight responsibilities are scattered across the National Institutes of Health (NIH), Centers for Disease Control and Prevention (CDC), US Department of Agriculture (USDA), and other entities, each with different rules, mandates, and priorities. That patchwork leaves gaps and creates confusion, especially when research spans disciplines, sectors, or borders. A National Biosafety and Biosecurity Agency (NBBA) could change that. By connecting the dots across agencies, institutions, and researchers, the NBBA could build a more coherent and responsive oversight system that shifts current practices from reactive enforcement to proactive engagement, offering tools and guidance before problems arise. This approach aligns with the plan advanced by the recent report from the National Security Commission on Emerging Biotechnology.
An NBBA would be in a position to build a coherent structure for biosecurity, replacing today’s fragmented protocols and ad hoc rules. Without a common language or agreed-upon standards, institutions are left guessing who gets to decide what terms such as “highly transmissible” or “dangerous gain of function” mean, undermining safety and accountability. An NBBA could create shared definitions and clear review criteria for complex risk categories, including dual use research of concern (research that could be misused to cause serious harm) and pathogens with enhanced pandemic potential. Shared criteria would enable faster, more consistent risk assessments and allow resources to be directed toward the projects that truly warrant heightened scrutiny.
Structural reform could shift oversight of high-containment laboratories to the federal level and apply consistent standards across academia, government, and industry. Currently, institutions manage their own biosafety programs, typically through institutional biosafety committees, with guidance from multiple agencies. CDC and USDA regulate select agents and toxins under the Federal Select Agent Program, but they don’t cover all pathogens or lab risks. NIH sets biosafety guidelines for federally funded research involving recombinant or synthetic nucleic acid molecules, but those guidelines aren’t binding on all institutions. Industry labs often operate under Occupational Safety and Health Administration standards, specific funding requirements, and internal policies. As a result, standards, inspection rigor, and enforcement vary widely depending on whether a lab is in academia, government, or the private sector. A centralized framework would promote uniform design requirements, reliable staffing protocols, and continuous operational review. To support these standards, the NBBA could provide a network of on-demand resources so researchers and biosafety professionals could act quickly and responsibly as risks evolve. This system would be capable of addressing emerging concerns such as synthetic cells, AI-designed organisms, mirror life, and digital biosecurity threats that require anticipatory governance mechanisms.
Federal leadership should include a national, anonymous incident reporting system modeled on proven safety programs in other high-risk industries (e.g., shipping, aviation, nuclear, pharmaceutical). Such a system could help the biosafety field learn from close calls while encouraging a culture of transparency and improvement.
Similarly, the NBBA would be able to lead cultural change, so that oversight is no longer a punitive mechanism or a bureaucratic obstacle but a collaborative process rooted in partnership. That means defining expectations, communicating openly, and supporting research teams when science moves in unexpected directions. The NBBA could embody this shift, serving not as an enforcer but as an enabler of safer, more innovative research.
Just as importantly, the people doing biosafety work today need better support. Biosafety professionals serve as the connective tissue of the life sciences, offering technical knowledge, lived experience, and localized insight. However, too often they work behind the scenes with limited visibility, authority, or influence, which has led to an aging workforce that is not being replaced. The NBBA could elevate this field by working with funders, institutions, and biosafety committees to strengthen the workforce through training, certification, and continuing education. It could formalize biosafety as a recognized profession, define standards for practice, and ensure that biosafety experts are at the table when national policies are shaped. This approach can make science safer and the entire research ecosystem more resilient, accountable, and ready for future challenges.
In the aftermath of the global pandemic, policymakers want clarity about how to handle high-risk science. The public wants accountability. Scientists want to keep doing good work. Falling back on brittle rules or blunt instruments won’t create a system that provides the clarity policymakers demand, the accountability the public expects, and the operational freedom scientists need, all while offering oversight that adapts to and anticipates what’s coming. A National Biosafety and Biosecurity Agency could provide a structure for adaptive oversight by offering clarity and flexibility in a world short on both.
With national aspirations to build a bioeconomy that will transform both agriculture and industry, society will require increasingly effective approaches to biosafety. This cannot be accomplished with the checklists of the last century—not only because they are not up to today’s increasingly complex tasks, but also because the public needs to be able to trust the agents of the bioeconomy to act in its interest. An entity like the NBBA could go beyond managing risk, building trust by being transparent and showing the public how it makes decisions. As the bioeconomy evolves, the country needs to invest in a safety infrastructure that gives researchers, investors, policymakers, and the public confidence that risk is being managed in a trustworthy way. This is a rare window of policy opportunity to build such assurance as we enter a new phase of industry.
