Limiting the Tools of War
Controlling Dangerous Pathogens
More systemic protection is needed to guard against the deliberate or inadvertent creation of advanced disease agents.
Remarkable advances are underway in the biological sciences. One can credibly imagine the eradication of a number of known diseases, but also the deliberate or inadvertent creation of new disease agents that are dramatically more dangerous than those that currently exist. Depending on how the same basic knowledge is applied, millions of lives might be enhanced, saved, degraded, or lost.
Unfortunately, this ability to alter basic life processes is not matched by a corresponding ability to understand or manage the potentially negative consequences of such research. At the moment, there is very little organized protection against the deliberate diversion of science to malicious purposes. There is even less protection against the problem of inadvertence, of legitimate scientists initiating chains of consequence they cannot visualize and did not intend.
Current regulation of advanced biology in the United States is concerned primarily with controlling access to dangerous pathogens. Only very limited efforts have been made thus far to consider the potential implications of proposed research projects before they are undertaken. Instead, attention is increasingly being directed toward security classification and expanded biodefense efforts to deal with concerns about the misuse of science for hostile purposes. Few U.S. officials appear to recognize the global scope of the microbiological research community, and thus the global nature of the threat. We believe that more systematic protection, based on internationally agreed rules, is necessary to prevent destructive applications of the biological sciences, and we have worked with colleagues to develop one possible approach.
The emerging threat
Shortly after the September 11, 2001, terrorist attacks, envelopes containing relatively pure, highly concentrated Bacillus anthracis powder were mailed to several prominent U.S. media outlets and politicians. After years of warnings, anthrax had been unleashed in a bioterrorist attack on U.S. soil. In the end, 5 people died and 17 others were sickened. An estimated 32,000 people were given antibiotics prophylactically, with some 10,300 of those being urged to continue treatment for 60 days. Although adherence to the full treatment regimen was poor, the prompt initiation of antibiotics may have prevented hundreds if not thousands of others from dying or becoming ill. What would have happened if a more sophisticated delivery system or an antibiotic-resistant strain of anthrax had been used instead?
Biological weapons experts have debated for years whether the biotechnology revolution would lead to the development of new types of biological agents that were more lethal, more difficult to detect, or harder to treat. Some believed that there was little advantage in trying to improve the wide range of highly dangerous pathogens already available in nature. Beginning in the late 1980s, however, reports from defectors and other former Soviet biological weapons scientists proved this notion to be false. According to these sources, under the Soviet offensive program, Legionella bacteria were genetically engineered to produce myelin protein, triggering an autoimmune disease with a mortality rate in animals of nearly 100 percent. In another project, Venezuelan equine encephalomyelitis genes were inserted into vaccinia (the vaccine strain of smallpox), reportedly as part of an effort to create new combination agents known as “chimeras.” In yet another project, genes from a bacterium that causes food poisoning, Bacillus cereus, were introduced into Bacillus anthracis, producing a more virulent strain of anthrax that even killed hamsters that had been vaccinated against the disease.
One need not look only to the former Soviet program for examples of how advances in the biological sciences could be deliberately or inadvertently misused for destructive applications. Research with possible destructive consequences is also being carried out in the civilian biomedical and agricultural community, both in universities and private-sector laboratories. Perhaps the most famous example is the mousepox experiment, in which Australian researchers trying to develop a means of controlling the mouse population inserted an interleukin-4 (IL-4) gene into the mousepox virus and in so doing created a pathogen that was lethal even to mice vaccinated against the disease. This work immediately raised the question of whether the introduction of IL-4 into other orthopox viruses, such as smallpox, would have similarly lethal effects. It also drew attention to the absence of internationally agreed rules on how to handle research results that could be misused. After publication of the research in February 2001, Ian Ramshaw, one of the principal investigators, called for the creation of an international committee to provide advice to scientists whose research produces unexpectedly dangerous results.
Other research projects since that time have been equally controversial. In one Department of Defense (DOD)-funded study, published in Science in July 2002, researchers from the State University of New York at Stony Brook created an infectious poliovirus from scratch by using genomic information available on the Internet and custom-made DNA material purchased through the mail. Members of Congress responded with a resolution criticizing the journal for publishing what was described as a blueprint for terrorists to create pathogens for use against Americans and calling on the executive branch to review existing policies regarding the classification and publication of federally funded research. Craig Venter of the private human genome project described the poliovirus work as “irresponsible” and, with University of Pennsylvania ethicist Arthur Caplan, called for new mechanisms to review and approve similar projects before they are carried out. A few months later, Venter and Nobel laureate Hamilton O. Smith announced their own rather provocative research goal: the creation of a novel organism with the minimum number of genes necessary to sustain life. Although the researchers emphasized that the organism would be deliberately engineered to prevent it from causing disease in humans or surviving outside of a laboratory dish, they acknowledged that others could use the same techniques to create new types of biological warfare agents.
In another project, University of Pennsylvania researchers, using previously published data on smallpox DNA, reverse-engineered a smallpox protein from vaccinia and then showed how smallpox evades the human immune system. The research, published in June 2002, raised the question of whether the same protein could be used to make other orthopox viruses such as vaccinia more lethal. In an unusual move, the article was accompanied by a commentary defending publication and arguing that it was more likely to stimulate advances in vaccines or viral therapy than to threaten security.
Researchers have also begun to discuss the implications of the progress made in recent years in sequencing the genome of the virus responsible for the 1918 influenza pandemic. In 1997, researchers at the Armed Forces Institute of Pathology succeeded in recovering fragments of the virus from preserved tissue samples. Already, several of the eight segments of the virus genome have been sequenced and published. Once the complete sequence is obtained, it may be possible to use reverse genetics to recreate the deadly virus, which is estimated to have killed as many as 40 million people in a single year.
Other, more future-oriented research is also of concern. Steven Block, who led a 1997 study for the U.S. government on next-generation biological weapons, has called attention to the possibility of gene therapy being subverted to introduce pathogenic sequences into humans, or of new zoonotic agents being developed that move from animals to humans. Both Block and George Poste, who chairs a DOD panel on biological weapons threats, have also noted the possibility of stealth viruses that could be introduced into a victim but not activated until later and of designer diseases that could disrupt critical body functions.
Thus far, the U.S. response to these developments has had a distinctly national focus. Less than a month after the first anthrax death, Congress enacted legislation aimed at tightening access to pathogens and other dual-use biological materials within the United States. Under the USA Patriot Act, signed into law on October 26, 2001, it is now a crime for anyone to knowingly possess any biological agent, toxin, or delivery system that is not reasonably justified by a prophylactic, protective, bona fide research, or other peaceful purpose. The bill also makes it a crime for certain restricted persons, including illegal aliens and individuals from terrorist-list countries, to possess, transport, or receive any of the threat agents on the Centers for Disease Control and Prevention’s (CDC’s) “select agent” list. The American Society for Microbiology (ASM) and others have criticized the restricted-persons provision, arguing that the absence of waiver authority could preclude legitimate researchers from restricted countries from undertaking work that could benefit the United States.
Other bioterrorism legislation passed in May 2002 requires any person who possesses, uses, or transfers a select agent to register with the secretary of Health and Human Services (HHS) and to adhere to safety and security requirements commensurate with the degree of risk that each agent poses to public health. The law requires a government background check for anyone who is to be given access to select agents. In addition, HHS is required to develop a national database of registered persons and the select agents they possess, including strain and other characterizing information if available, and to carry out inspections of relevant facilities. The Department of Agriculture (USDA) is required to develop parallel registration, security, record-keeping, and inspection measures for facilities that transfer or possess certain plant and animal pathogens. These new controls build on legislation adopted in 1996, after the Oklahoma City bombing and the acquisition of plague cultures by a member of the Aryan Nations, requiring any person involved in the transfer of a select agent to register with HHS and notify it of all proposed transfers.
In another move, seemingly at odds with the greatly expanded effort to control access to dangerous pathogens, the government has dramatically increased research funding related to biological warfare agents. In March 2002, the National Institutes of Health (NIH) announced a $1.7 billion fiscal year 2003 bioterrorism research program, a 2,000 percent increase over pre-September 11 budget levels. Under the program, some $440 million is to be spent on basic research, including genomic sequencing and proteomic analysis of up to 25 pathogens, and $520 million is to be used for new high-containment and maximum-containment laboratories and regional centers for bioterrorism training and research. In his 2003 State of the Union message, President Bush proposed to spend an additional $6 billion over 10 years to develop and quickly make available biological warfare agent vaccines and treatments under a new HHS-Department of Homeland Security program called Project Bioshield. The Department of Energy (DOE) has also been increasing its bioterrorism research program, which began in 1997. As part of this effort, DOE is funding research aimed at determining the complete genetic sequence of anthrax and other potential biological warfare agents and comparing agent strains and species using DNA information. Other DOE studies are using genetic sequencing to identify genes that influence virulence and antibiotic resistance in anthrax and plague and to determine the structure of the lethal toxins produced by botulinum and other biological agents that can be used against humans.
Against this backdrop of increased research, the United States is also exploring possible restrictions on the dissemination of scientific findings that could have national security implications: so-called “sensitive but unclassified” information. Since the Reagan administration, U.S. policy on this issue has been enshrined in National Security Decision Directive (NSDD) 189, which states: “…to the maximum extent possible, the products of fundamental research [should] remain unrestricted…where the national security requires control, the mechanism for control of information generated during federally funded fundamental research in science, technology and engineering…is classification.” National Security Advisor Condoleezza Rice affirmed the administration’s commitment to NSDD 189 in a November 2001 letter.
But in a memorandum to federal agencies in March 2002, White House Chief of Staff Andrew Card raised the need to protect sensitive but unclassified information. At the same time, the Pentagon circulated a draft directive containing proposals for new categories of controlled information and for prepublication review of certain DOD-funded research. Because of strong criticism from the scientific community, the draft was withdrawn. Last fall, however, the White House Office of Management and Budget began developing rules for the “discussion and publication” of information that could have national security implications. These rules, which were reportedly requested by Homeland Security chief Tom Ridge, are expected to apply to research conducted by government scientists and contractors but not, at least initially, to federally funded research grants. These developments have not assuaged the concerns of the 42,000-member ASM, which in July 2002 had sent a letter to the National Academies asking it to convene a meeting with journal publishers to explore measures the journals could implement voluntarily as an alternative to government regulation. That meeting, held in January 2003, laid the groundwork for a subsequent decision by 30 journal editors and scientists to support the development of new processes for considering the national security implications of proposed manuscripts and, where necessary, to modify or refrain from publishing papers whose potential harm outweighs their potential societal benefits.
In a surprising move, the government has also taken a very modest step toward strengthening the oversight process for biotechnology research in the United States. Under the new HHS regulations to implement the May 2002 controls on the possession, transfer, and use of select agents, the HHS secretary must approve genetic engineering experiments that could make a select agent resistant to known drugs or otherwise more lethal. The new USDA regulations appear to be even broader, in that they seem to apply to any microorganism or toxin, not just to those on the USDA control list. The latter provision mirrors the current requirements of the NIH Guidelines, under which biotechnology research has been regulated for more than a quarter century.
Under the original NIH Guidelines, published by the NIH Recombinant DNA Advisory Committee (RAC) in 1976, six types of experiments were prohibited. However, once it became clear that recombinant DNA research could be conducted safely, without an adverse impact on public health or the environment, these prohibitions were replaced by a system of tiered oversight and review, in which Institutional Biosafety Committees (IBCs) and Institutional Review Boards (IRBs) at individual facilities replaced the RAC as the primary oversight authority for most categories of regulated research.
Today, only two categories of laboratory research involving recombinant DNA technology are subject to NIH oversight. The first category, “major actions,” requires submission of relevant information on the proposed experiment to the NIH Office of Biotechnology Activities (OBA), as well as IBC approval, RAC review, and NIH director approval before initiation. This covers experiments that involve the “deliberate transfer of a drug resistance trait to microorganisms that are not known to acquire the trait naturally if such acquisition could compromise the use of the drug to control disease agents in humans, veterinary medicine, or agriculture.” The second category of experiments, requiring IBC approval and NIH/OBA review before initiation, involves the cloning of toxin molecules with a median lethal dose (the dose found to be lethal to 50 percent of those to which it is administered) of less than 100 nanograms per kilogram of body weight. Unlike the requirements in the new select agent rules, the NIH Guidelines apply only to research conducted at institutions in the United States and abroad that receive NIH funding for recombinant DNA research. Many private companies are believed to follow the guidelines voluntarily.
In addition to requiring prior approval for these two types of experiments, HHS and USDA asked for input from the scientific community on other types of experiments that might require enhanced oversight because of safety concerns, as well as on the form that such additional oversight should take. In particular, they sought comments on experiments with biological agents that could increase their virulence or pathogenicity; change their natural mode of transmission, route of exposure, or host range in ways adverse to humans, animals, or plants; result in the deliberate transfer of a drug-resistant trait or a toxin-producing capability to a microorganism in a manner that does not involve recombinant DNA techniques; or involve the smallpox virus.
Interestingly, the ASM did not rule out the possible need for additional oversight of certain types of microbiological research. However, in its comments on the draft HHS regulations, the ASM recommended that any additional oversight requirements be implemented through the NIH Guidelines rather than regulations, in order to provide a less cumbersome means of incorporating changes as technology evolves. The ASM also proposed the creation of a Select Agent Research Advisory Committee to provide advice to U.S. government agencies, including reviewing specific research projects or categories of research for which additional oversight is required.
A number of the domestic measures described above were also incorporated in the U.S. proposal to the Biological Weapons Convention (BWC) review conference in October 2001. Three months earlier, the United States had rejected the legally binding protocol that had been under negotiation to strengthen the 1972 treaty’s prohibition on the development, production, and possession of biological agents. In its place, the United States suggested a variety of largely voluntary measures to be pursued on a national basis by individual countries. This included a proposal that other countries adopt legislation requiring entities that possess dangerous pathogens to register with the government, as is being done in the United States. The United States also proposed that countries implement strict biosafety procedures based on World Health Organization (WHO) or equivalent national guidelines, tightly regulate access to dangerous pathogens, explore options for national oversight of high-risk biological experiments, develop a code of conduct for scientists working with pathogens, and report internationally any biological releases that could adversely affect other countries. The conference was acrimonious: after the United States called for terminating both the protocol negotiations and the negotiating body itself, the meeting was suspended for a year. When it resumed, the parties agreed that experts would meet for two weeks each year to discuss five specific issues. Most of the issues related to strengthening controls over pathogens will be considered at the first experts’ meeting, to be held in August 2003.
U.S. approach falls short
The past several years have thus witnessed a range of U.S. initiatives aimed at reducing the likelihood that advances in the biological sciences will be used for destructive purposes. But whether viewed as a whole or as a series of discrete steps, the current approach falls short in a number of important respects:
The new controls on human, plant, and animal pathogens are too narrowly focused on a static list of threat agents. These controls can be circumvented entirely by research such as the poliovirus experiment, which demonstrated a means of acquiring a controlled agent covertly, without the use of pathogenic material, or the mousepox experiment, which showed how to turn a relatively benign pathogen into something much more lethal.
The expanded bioterrorism research effort is rapidly increasing the number of researchers and facilities working with the very pathogens that U.S. policy is seeking to control, before appropriate oversight procedures for such research have been put into place. Little thought appears to have been given to the fact that the same techniques that provide insights into enhancing our defenses against biological agents can also be misused to develop even more lethal agents.
The proposed restrictions on sensitive but unclassified research will not prevent similar research from being undertaken and published in other countries. Depending on the form such restrictions take, they could also increase suspicions abroad about U.S. activities, impede oversight of research, and interfere with the normal scientific process through which researchers review, replicate, and refine each other’s work and build on each other’s discoveries.
The new oversight requirements for certain categories of biotechnology research, like the NIH Guidelines on which they are based, subject only a very narrow subset of relevant research to national-level review. And if the ASM proposal to implement these and other additional oversight requirements through the NIH Guidelines is accepted, these requirements will no longer have the force of law, unlike requirements contained in regulations.
Finally, because of the current U.S. antipathy toward legally binding multilateral agreements, the BWC experts’ discussions on pathogen controls are unlikely to result in the adoption of a common set of standards for research that could have truly global implications.
As the mousepox experiment showed, advanced microbiological research is occurring in countries other than the United States. According to the chairman of the ASM Publications Board, of the nearly 14,000 manuscripts submitted to ASM’s 11 peer-reviewed journals during 2002, about 60 percent included non-U.S. authors, from at least 100 different countries. A total of 224 of these manuscripts involved select agents, of which 115, or slightly more than half, had non-U.S. authors. Research regulations that apply only in the United States therefore will not only be ineffective but will put U.S. scientists at a competitive disadvantage. The need for uniform standards, embodied in internationally agreed rules, is abundantly clear.
In order to be effective and to be accepted by those most directly affected, a new oversight arrangement must, in addition to being global in scope, also achieve a number of other objectives. First, it must be bottom-up. Rather than being the result of a political process, like the select agent regulations or the proposed U.S. government publication restrictions, any oversight system must be designed and operated primarily by scientists: those who have the technical expertise to make the necessary judgments about the potential implications of a given experiment.
Second, the system must be focused. It must define the obligations of individual scientists precisely in order to avoid uncertainty as to what is required to comply with agreed rules. This means relying on objective criteria rather than assessments of intent. This is especially important if the oversight system is legally binding, with possible penalties for violators. It also must be as limited as possible in terms of the range of activities that are covered. Not all microbiological research can or should be subject to oversight. Only the very small fraction of research that could have destructive applications is relevant.
Third, it must be flexible. Like the NIH Guidelines, any new oversight arrangement must include a mechanism for adapting to technological change. Most current concerns revolve around pathogens–either the modification of existing pathogens or the creation of new pathogens that are more deadly than those that presently exist. But as Steven Block has noted, “black biology” will in the not-too-distant future lead to the development of compounds that can affect the immune system and other basic life systems, or of microorganisms that can invade a host and unleash their deadly poison before being detected.
Finally, any new oversight arrangement must be secure. Both the genetic modification work undertaken as part of the Soviet offensive program and the more recent U.S. biodefense efforts underscore the importance of including all three relevant research communities–government, industry, and academia–in any future oversight system. This will require the development of provisions that allow the necessary degree of independent review without, at the same time, jeopardizing government national security information or industry or academic proprietary interests.
What, then, might an internationally agreed oversight system aimed at achieving these objectives look like? To help explore this question, the Center for International and Security Studies at Maryland (CISSM) has, as part of a project launched even before September 11 and the anthrax attacks, consulted extensively with a diverse group of scientists, public policy experts, information technology specialists, and lawyers. Out of these deliberations has emerged a prototype system for protective oversight of certain categories of high-consequence biotechnology research. To the maximum extent possible, we have drawn on key elements of the oversight arrangements already in place. Like the NIH Guidelines, our system is based on the concept of tiered peer review, in which the level of risk of a particular research activity determines the nature and extent of oversight requirements. Like the select agent regulations, our system also includes provisions for registration (or licensing), reporting, and inspections.
We call our prototype the Biological Research Security System. At its foundation is a local review mechanism, or what we term a Local Pathogens Research Committee. This body is analogous to the IBCs and IRBs at universities and elsewhere in the United States that currently oversee recombinant DNA research (under the NIH Guidelines) and human clinical trials (under Food and Drug Administration regulations). In our system, this local committee would be responsible for overseeing potentially dangerous activities: research that increases the potential for otherwise benign pathogens to be used as weapons or that demonstrates techniques that could have destructive applications. This could include research that increases the virulence of a pathogen or that involves the de novo synthesis of a pathogen, as was done in the poliovirus experiment. Oversight at this level would be exercised through a combination of personnel and facility licensing, project review, and where appropriate, project approval. Under our approach, the vast majority of microbiological research would either fall into this category or not be covered at all.
At the next level, there would be a national review body, which we call a National Pathogens Research Authority. This body is analogous to the RAC. It would be responsible for overseeing moderately dangerous activities: research involving controlled agents or related agents, especially experiments that increase the weaponization potential of such agents. This could include research that increases the transmissibility or environmental stability of a controlled agent, or that involves the production of such an agent in powder or aerosol form, which are the most common means of disseminating biological warfare agents. All projects that fall into this category would have to be approved at the national level and could be carried out only by licensed researchers at licensed facilities. The national body would also be responsible for overseeing the work of the local review committees, including licensing qualified researchers and facilities, and for facilitating communications between the local and international levels.
At the top of the system would be a global standard-setting and review body, which we term the International Pathogens Research Agency. The closest analogy to this is the WHO Advisory Committee on Variola Virus Research, which oversees research with the smallpox virus at the two WHO-approved depositories: the CDC in Atlanta and Vector in Russia. This new body would be responsible for overseeing and approving extremely dangerous activities: research largely involving the most dangerous controlled agents, including research that could make such agents even more dangerous. This could include work with an eradicated agent such as smallpox or the construction of an antibiotic- or vaccine-resistant controlled agent, as was done during the Soviet offensive program. All projects in this category would have to be approved internationally, as would the researchers and facilities involved.
In addition to overseeing extremely dangerous research, the global body would also be responsible for defining the research activities that would be subject to oversight under the different categories and overseeing implementation by national governments of internationally agreed rules, including administering a secure database of information on research covered by the system. It would also help national governments in meeting their international obligations by, for example, providing assistance related to good laboratory practices. No existing organization currently fulfills all of these functions.
A more robust system
In today’s climate of heightened concern about bioterrorism, the idea of building on existing oversight processes to put in place a more robust system of independent peer review of high-consequence research seems less radical than when CISSM began this project in 2001. In the United States, there is a growing awareness that current domestic regulations do not provide adequate protection against the use of biotechnology research for destructive purposes. In May 2002, a senior White House Office of Homeland Security official urged the scientific community to “define appropriate criteria and procedures” for regulating scientific research related to weapons of mass destruction. In the coming months, a special committee appointed by the National Academies will decide whether to recommend enhanced oversight of recombinant DNA research in the United States, above and beyond that currently regulated by the RAC.
Others are ahead of the United States in recognizing the global dimensions of the problem. In September 2002, the International Committee of the Red Cross called on governments, the scientific and medical communities, and industry to work together to ensure that there are “effective controls” over potentially dangerous biotechnology, biological research, and biological agents. And in the run-up to the continuation of the BWC review conference last fall, the British counterpart to the National Academies, the Royal Society, called for agreement on a “universal set of standards for research” for incorporation into internationally supported treaties.
Thoughtful individuals will disagree about the research activities that should be covered by a new oversight arrangement, as well as the appropriate level of oversight that should be applied. They will also debate whether such a system should be legally binding, as envisioned in the prototype being developed by CISSM, or of a more voluntary nature, as has been suggested by researchers at Johns Hopkins University. But with each report of yet another high-consequence research project, fewer and fewer will doubt the nature of the emerging threat. Enhanced oversight of U.S. research is necessary but not sufficient. Common standards, reflected in internationally agreed rules, are essential if the full promise of the biotechnology revolution is to be realized and potentially dangerous consequences minimized. Our approach is one possible way of achieving that important goal.
“ASM Testimony on Conducting Research During the War on Terrorism: Balancing Openness and Security,” October 10, 2002.
Steven M. Block, “The Growing Threat of Biological Weapons,” American Scientist 89, no. 1 (January-February 2001): 28-37.
Eileen Choffnes, “Bioweapons: New Labs, More Terror,” Bulletin of the Atomic Scientists 58, no. 5 (September-October 2002).
Gerald Epstein, “Controlling Biological Warfare Threats: Resolving Potential Tensions Among the Research Community, Industry and the National Security Community,” Critical Reviews in Microbiology 27, no. 4 (2001): 321-354.
Gigi Kwik, Joe Fitzgerald, Thomas V. Inglesby, and Tara O’Toole, “Biosecurity: Responsible Stewardship of Bioscience in an Age of Catastrophic Terrorism,” Biosecurity and Bioterrorism: Biodefense Strategy, Science, and Practice 1, no. 1 (2003): 1-9.
George Poste, “Biotechnology and Terrorism,” Prospect Magazine, no. 74 (May 2002): 48-52.
John Steinbruner, Elisa D. Harris, Nancy Gallagher, and Stacy Gunther, “Controlling Dangerous Pathogens: A Prototype Protective Oversight System,” February 5, 2003, available at www.puaf.umd.edu/cissm/projects/amcs/pathogens.html.
Raymond Zilinskas and Jonathan B. Tucker, “Limiting the Contribution of the Open Scientific Literature to the Biological Weapons Threat,” Journal of Homeland Security, December 2002, available at http://www.homelandsecurity.org/journal.
John D. Steinbruner is professor of public policy at the University of Maryland and director of the Center for International and Security Studies at Maryland (CISSM). Elisa D. Harris is a senior research scholar at CISSM and former director for nonproliferation and export controls on the National Security Council staff.