Give Genetic Engineering Some Breathing Room
Government regulations are suffocating applications that promise much public benefit. Fixes are available, if society and policymakers would only pay heed to science.
New genetic engineering techniques that are more precise and versatile than ever offer promise for bringing improved crops, animals, and microorganisms to the public. But these technologies also raise critical questions about public policy. How will the various regulatory agencies approach them as a matter of law and regulation? Will they repeat the costly excesses of the oversight of recombinant DNA technology? What will be the regulatory costs, time, and energy required to capture the public benefits of the new technologies? And further out, how will regulatory agencies approach the emerging field of synthetic biology, which involves the design and construction of new biological components, devices, and systems, so that standardized biological parts can be mixed and assembled?
Based on current experience, answers to such questions are not comforting. The regulation of recombinant DNA technology has been less than a stunning success. Most of the federal agencies involved have ignored the consensus of the scientific community that the new molecular techniques for genetic modification are extensions, or refinements, of earlier, more primitive ones, and policymakers and agencies have crafted sui generis, or one-of-a-kind, regulatory mechanisms that have prevented the field from reaching anything approaching its potential.
The regulatory burden on the use of recombinant DNA technology is disproportionate to its risk, and the opportunity costs of regulatory delays and expenses are formidable. The public and private sectors have squandered billions of dollars on complying with superfluous, redundant regulatory requirements that have priced public sector and small company research and development (R&D) out of the marketplace.
These inflated development costs are the primary reason that more than 99% of genetically engineered crops that are being cultivated are large-scale commodity crops—corn, cotton, canola, soy, alfalfa, and sugar beets. Hawaiian papaya is one of the few examples of genetically engineered “specialty crops” such as fruits, nuts, or vegetables. The once-promising sector of “biopharming,” which uses genetic engineering techniques to induce crops such as corn, tomatoes, and tobacco to produce high concentrations of high-value pharmaceuticals, is moribund. The once high hopes for genetically engineered “biorational” microbial pesticides and microorganisms to clean up toxic wastes are dead and gone. Not surprisingly, few companies or other funding groups are willing to invest in the development of badly needed genetically improved varieties of the subsistence crops grown in the developing world.
The seminal question about the basis for regulation of genetic engineering in the 1970s was whether there were unique risks associated with the use of recombinant DNA techniques. Numerous national and international scientific organizations have repeatedly addressed this question, and their conclusions have been congruent: There are no unique risks from the use of molecular techniques of genetic engineering.
As long ago as 1982, an analysis performed by the World Health Organization’s Regional Office for Europe reminded regulators that “genetic modification is not new” and that “risks can be assessed and managed with current risk assessment strategies and control methods.” Similarly, the U.S. National Academy of Sciences issued a white paper in 1987 that found no evidence of the existence of unique hazards, either in the use of genetic engineering techniques or in the movement of genes between unrelated organisms.
In perhaps the most comprehensive and unequivocal analysis, the 1989 National Research Council report, “Field Testing of Genetically Modified Organisms,” on the risks of genetically engineered plants and microorganisms, concluded that “the same physical and biological laws govern the response of organisms modified by modern molecular and cellular methods and those produced by classical methods.” But this analysis went further, emphasizing that the more modern molecular techniques “are more precise, circumscribed, and predictable than other methods. They make it possible to introduce pieces of DNA, consisting of either single or multiple genes that can be defined in function and even in nucleotide sequence. With classical techniques of gene transfer, a variable number of genes can be transferred, the number depending on the mechanism of transfer; but predicting the precise number or the traits that have been transferred is difficult, and we cannot always predict the phenotype that will result. With organisms modified by molecular methods, we are in a better, if not perfect, position to predict the phenotypic expression.”
In 2000, the National Research Council released another report weighing in on the scientific basis of federal regulation of genetically engineered plants. It concurred with earlier assessments by other groups that “the properties of a genetically modified organism should be the focus of risk assessments, not the process by which it was produced.”
Various distinguished panels have continued to make the same points about genetic engineering and “genetically modified organisms” (GMOs). In September 2013, the United Kingdom’s Advisory Committee on Releases to the Environment published “Report 2: Why a modern understanding of genomes demonstrates the need for a new regulatory system for GMOs.” The report addressed the European Union’s (EU) regulatory system as applied to new techniques of molecular breeding. This excerpt from the Executive Summary is especially salient: “Our understanding of genomes does not support a process-based approach to regulation. The continuing adoption of this approach has led to, and will increasingly lead to, problems. This includes problems of consistency, i.e. regulating organisms produced by some techniques and not others irrespective of their capacity to cause environmental harm. Our conclusion, that the EU’s regulatory approach is not fit for purpose for organisms generated by new technologies, also applies to transgenic organisms produced by ‘traditional’ GM [genetic modification] technology. . . . [T]he potential for inconsistency is inherent because they may be phenotypically identical to organisms that are not regulated.”
There is, then, a broad consensus that process-based regulatory approaches are not “fit for purpose.” Inevitably, they are unscientific and anti-innovative; they fail to take actual risks into consideration and contravene the basic principle that similar things should be regulated similarly. It follows that the U.S. and EU systems must be reformed to become scientifically defensible and risk-based.
In theory, the U.S. government accepted the fundamental logic of these analyses as the basis for regulation. In 1986, the White House Office of Science and Technology Policy published a policy statement on the regulation of biotechnology that focused oversight and regulatory triggers on the risk-related characteristics of products, such as plants’ weediness or toxicity. That approach specifically and unequivocally rejected regulation based on the particular process, or technique, used for genetic modification. In 1992, the federal government issued a second pivotal policy statement (sometimes known as the “scope document”) that reaffirmed the overarching principle for biotechnology regulation—that is, the degree and intrusiveness of oversight “should be based on the risk posed by the introduction and should not turn on the fact that an organism has been modified by a particular process or technique.”
Thus, there has been a broad consensus in the scientific community, reflected in statements of federal government policy going back more than 20 years, that the newest techniques of genetic modification are essentially an extension, or refinement, of older, less precise and less predictable ones, and that oversight should focus on the characteristics of products, not on the processes or technologies that produced them.
In spite of such guidance, however, regulatory agencies have generally chosen to exercise their discretion to identify and capture molecular genetic engineering—specifically, recombinant DNA technology—as the focus of regulations. Because the impacts of their decisions have drastically affected the progress of agricultural R&D, this cautionary tale is worth describing agency by agency.
A cautionary tale, repeated
The Department of Agriculture (USDA), through its Animal and Plant Health Inspection Service (APHIS), is responsible for the regulation of genetically engineered plants. APHIS had long regulated the importation and interstate movement of organisms (plants, bacteria, fungi, viruses, etc.) that are plant pests, which were defined by means of an inclusive list—essentially a binary “thumbs up or down” approach. A plant that an investigator might wish to introduce into the field is either on the prohibited list of plant pests, and therefore requires a permit, or it is exempt.
This straightforward approach is risk-based, in that the organisms required to undergo case-by-case governmental review are an enhanced-risk group (organisms that can injure or damage plants), unlike organisms not considered to be plant pests. But for more than a quarter-century, APHIS has applied a parallel regime (in addition to its basic risk-based regulation) that focuses exclusively on plants altered or produced with the most precise genetic engineering techniques. APHIS reworked the original concept of a plant pest (something known to be harmful) and crafted a new category—a “regulated article”—defined in a way that captures virtually every recombinant DNA-modified plant for case-by-case review, regardless of its potential risk, because it might be a plant pest.
In order to perform a field trial with a regulated article, a researcher must apply to APHIS and submit extensive paperwork before, during, and after the field trial. After conducting field trials for a number of years at many sites, the researcher must then submit a vast amount of data to APHIS and request “deregulation,” which is equivalent to approval for unconditional release and sale. These requirements make genetically engineered plants extraordinarily expensive to develop and test. The cost of discovery, development, and regulatory authorization of a new trait introduced between 2008 and 2012 averaged $136 million, according to Wendelyn Jones of DuPont Pioneer, a major corporation involved in crop genetics.
APHIS’s approach to recombinant DNA-modified plants is difficult to justify. Plants have long been selected by nature, as well as bred or otherwise manipulated by humans, for enhanced resistance or tolerance to external threats to their survival and productivity, such as insects, disease organisms, weeds, herbicides, and environmental stresses. Plants have also been modified for qualities attractive to consumers, such as seedless watermelons and grapes and the tangerine-grapefruit hybrid called a tangelo.
Along the way, plant breeders have learned from experience about the need for risk analysis, assessment, and management. New varieties of plants (whichever techniques are used to craft them) that normally harbor relatively high levels of various toxins are analyzed carefully to make sure that levels of those substances remain in the safe range. Celery, squash, and potatoes are among the crops in need of such attention.
The basic tenets of government regulation are that similar things should be regulated similarly, and the degree of oversight should be proportionate to the risk of the product or activity. For new varieties of plants, risk is a function of certain characteristics of the parental plant (such as weediness, toxicity, or ability to “outcross” with other plants) and of the introduced gene or genes. In other words, it is not the source or the method used to introduce a gene but its function that determines how it contributes to risk. Under USDA and APHIS, however, only plants made with the newest, most precise techniques have been subjected to more extensive and burdensome regulation, independent of the risk of the product.
Under its discriminatory and unscientific regulatory regime, APHIS has approved more than 90 genetically engineered traits, and farmers have widely and quickly adopted the crops incorporating them. After the cultivation worldwide of more than 3 billion acres of genetically engineered crops (by more than 17 million farmers in 30 countries) and the consumption of more than 3 trillion servings of food containing genetically engineered ingredients in North America alone, there has not been a single documented ecosystem disruption or a single confirmed tummy ache.
With this record of successful adoption and use, one might have thought that APHIS would reduce its regulatory burdens on genetically engineered crops, but there has been no hint of such a move. APHIS continues to push the costs for regulatory compliance into the stratosphere while its reviews of benign new crops become ever more dilatory: Evaluations that took an average of six months in the 1990s now take three-plus years. APHIS’s performance compares unfavorably with its counterparts abroad. Based on data gathered by the U.S. government and confirmed by industry groups, from January 2010 through June 2013, the average time from submission to decision was 372 days for Brazil and 771 days for Canada, versus 1,210 days for the United States.
APHIS has not shown any willingness to rationalize its regulatory approach—for example, by creating categorical exemptions for what are now known scientifically, and proven agronomically, to be negligible-risk genetically engineered crops. By creating such categorical exemptions, APHIS would simultaneously reduce its workload, lower R&D costs, spur innovation, and avoid the pitfalls of the requirements of the National Environmental Policy Act (NEPA). NEPA requires that agencies performing “major federal actions,” such as APHIS’s approvals, proceed through a succession of procedural hoops. Allegations from activists that regulators have failed to do so have tied up approvals in the federal courts, creating a litigation burden for regulators, scientists, and technology developers. (Regardless of their risk, the vast majority of plants “engineered” through more conventional genetic manipulation, such as crop breeding, do not require APHIS approval and, consequently, are not subject to NEPA or to the derivative lawsuits.)
The regulatory obstacles that discriminate against genetic engineering impede the development of crops with both commercial and humanitarian potential. Genetically engineered crops foreseen in the early days of the technology have withered on the vine as regulatory costs have made testing and commercial development economically unfeasible. In a 2010 letter to Nature Biotechnology, Jamie Miller and Kent Bradford of the University of California, Davis, described the impact of regulations on genetically engineered specialty crops (fruits, vegetables, nuts, turf, and ornamentals). They provided citations to 313 publications relating to 46 species and numerous traits beneficial to consumers, farmers, and the environment. However, they pointed out that only four of these crops had entered commercial cultivation in the United States, and none of them had reached the public outside of the United States (though the status of two in China was unclear). Of greater concern, they found that no genetically engineered specialty crop had been granted regulatory marketing approval anywhere since the year 2000. In supplementary data cited in their letter, Miller and Bradford provided information on 724 genetically engineered specialty plant lines that have been created but never commercialized.
Since the advent of recombinant DNA techniques in the 1970s, other newer, even more precise technologies for genetic engineering have been introduced to create organisms with new or enhanced traits. These approaches include, among others, RNA interference technology (RNAi) and the alteration of genes using so-called transcription activator-like effector nucleases (TALENs). Initially, APHIS had issued letters indicating that many crops developed through these newer techniques fall outside of the definition of a “regulated article” under the Plant Protection Act. But under pressure from anti-biotechnology groups, APHIS has also floated the idea that these crops could be captured for oversight as “noxious weeds” if they are invasive (e.g., turf grass), or cross-pollinate readily (alfalfa). Although the impact of invoking “noxious weed” regulatory authority is not yet clear, designating plants crafted with modern molecular techniques as falling in this category appears to be another example of unscientific, opportunistic regulation that will inhibit innovation.
Tortured statutes
The Environmental Protection Agency (EPA), like the USDA, has tortured its enabling statutes to undesirable effect. The EPA has long regulated field tests and the commercial use of pesticides under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA). In 2001, the agency issued final rules for the regulation of genetically engineered plants and created a new concept, “plant-incorporated protectants” (PIPs), defined as “pesticidal substances produced and used by living plants.” EPA regulation captures pest-resistant plants only if the “protectant” has been introduced or enhanced by the most precise and predictable techniques of genetic engineering.
The testing required for registration of these new “pesticides” is excessive. It includes gathering copious data on the parental plant, the genetic construction, and the behavior of the test plant and its interaction with various species, among other factors. (No pest-resistant plant modified with older, cruder techniques could meet these requirements, yet such plants are exempt from the FIFRA rules.) It should be noted that FIFRA provides a 10-acre research exemption for pesticides, even for extremely toxic chemicals, which does not apply to PIPs.
The EPA then conducts repeated, redundant case-by-case reviews: before the initial trial, when trials are scaled up or tested on additional sites, and again if even minor changes have been made in the plant’s genetic construct. The agency repeats those reviews at commercial scale. The agency’s classification of living plants as pesticides, even though the regulatory term is “plant-incorporated protectants,” has been vigorously condemned by the scientific community. And for good reason, since EPA’s approach has discouraged the development of new pest-resistant crops, encouraged greater use of synthetic chemical pesticides, and limited the use of the newest genetic engineering technology mainly to larger, private-sector developers that can absorb the substantial regulatory costs.
The vast majority of the acreage of plants made with recombinant DNA technology has been limited to huge-scale commodity crops. Even so, and in spite of discriminatory, burdensome regulation, their success has been impressive. Worldwide, these new varieties have provided “very significant net economic benefits at the farm level amounting to $18.8 billion in 2012 and $116.6 billion for the 17-year period” from 1996 to 2012, according to a report by PG Economics, Ltd., titled “GM Crops: Global Socio-economic and Environmental Impacts 1996-2012,” released in May 2014.

Under the Toxic Substances Control Act (TSCA), the EPA regulates chemicals other than pesticides. Characteristically, in devising an approach to genetically engineered organisms, EPA chose to exercise its statutory discretion in a way that ignores scientific consensus but expands its regulatory scope. The agency focused on capturing for review any “new” organism, defined as one that contains combinations of DNA from sources that are not closely related phylogenetically. For the EPA, “newness” is synonymous with risk. Because genetic engineering techniques can easily create new gene combinations with DNA from disparate sources, EPA concluded that these techniques “have the greatest potential to pose risks to people or the environment,” according to the agency press release that accompanied the rule. Using TSCA, EPA decided that genetically modified microorganisms are “new chemicals” subject to pre-market approval for testing and commercial release.
But the EPA’s statement is a non sequitur. The particular genetic technique employed to construct new strains is irrelevant to risk, as is the origin of a snippet of DNA that may be moved from one organism to another. What matters is its function. Scientific principles and common sense dictate the questions that are central to risk analysis for any new organism. How hazardous is the original organism from which DNA was taken? Is it a harmless, ubiquitous organism found in garden soil, or one that causes illness in humans or animals? Does the added genetic material code for a potent toxin? Does the genetic change merely make the organism able to degrade oil more efficiently, or does it have other effects, such as making it more resistant to being killed by antibiotics or sunlight?
Like APHIS, the EPA ignored the scientific consensus holding that modern genetic engineering technology is essentially an extension, or refinement, of earlier, cruder techniques of genetic modification. In fact, the National Research Council’s 1989 report observed that, on average, the use of the newest genetic engineering techniques actually lowers the already minimal risk associated with field testing. The reason is that the new technology makes it possible to introduce pieces of DNA that contain one or a few well-characterized genes, while older genetic techniques transfer or modify a variable number of genes haphazardly. All of this means that users of the new techniques can be more certain about the traits they introduce into the organisms. The newer genetic engineering techniques allow even greater certainty about the traits being introduced and the precise location of those introduced traits in the genome of the recipient.
The bottom line is that organisms crafted with the newest, most sophisticated and precise genetic techniques are subject to discriminatory, excessive, burdensome, and costly regulation. Research proposals for field trials must be reviewed case by case, and companies face uncertainty about final commercial approvals of products down the road even if the products prove to be safe and effective.
The newest molecular breeding techniques have created anxiety at EPA, where there are internal pressures to declare that all forms of molecular modification create “new chemicals,” which would expand the agency’s regulatory reach still further under TSCA. If EPA were to adopt this “new chemicals” approach, there is legitimate concern that products from these new techniques could face the same fate as recombinant DNA-modified microorganisms: EPA has approved only one such microorganism since it declared them to be new chemicals in 1997.
Concurrently, EPA is considering an expansion of its FIFRA power, perhaps through the concept of “plant regulators,” to capture crops and products from the newest molecular modification techniques. According to an EPA document published in May 2014, the agency received advice favoring the treatment of many uses of RNA interference technology as a pesticide, in spite of the testimony of Craig Mello, who shared the 2006 Nobel Prize in Physiology or Medicine for the discovery of RNA interference, that the use of RNAi technology per se is inherently of very low risk and should elicit no incremental regulatory oversight. Similarly, James Carrington, president of the Donald Danforth Plant Science Center, testified to the “intrinsic non-hazardous properties of diverse RNA types,” stating that “there is no validated scientific evidence that [RNAi] causes or is even associated with ill effects. . . in humans, mammals, or any animals other than certain arthropods, nematodes, and certain microbes that consume or invade plants.”
Science suggests rational alternatives
There are far more rational—and proven—alternatives to the current unscientific regulation of genetic engineering. Indeed, science shows the way. For more than two decades, the Food and Drug Administration (FDA) has had a scientific, risk-based approach toward “novel foods” made with any technology. Published in 1992, the statement of policy emphasized that the agency’s Center for Food Safety and Applied Nutrition does not impose discriminatory regulation based on the use of one technique or another. The FDA concluded that greater scrutiny is needed only when certain safety issues arise. Those safety issues include the presence of a completely new substance in the food supply, changes in a macronutrient, an increase in a natural toxicant, or the presence of an allergen where a consumer would not expect it. In addition, FDA has properly resisted calls for mandatory labeling of genetically engineered foods as not materially relevant information under the federal Food, Drug, and Cosmetic Act, and as not consistent with the statutory requirement that food labeling must be accurate and not misleading. (As discussed above, another scientific and risk-based approach to regulation is the USDA’s long-standing treatment of potential plant pests.)
However, FDA has been less successful with its oversight of genetically engineered animals. In 1993, developers of a faster-maturing genetically engineered salmon—an Atlantic salmon containing a particular Pacific Chinook salmon growth hormone gene—first approached FDA. After 15 years of indecision, in 2008 the FDA’s Center for Veterinary Medicine decided that every genetically engineered animal intended for food would be evaluated as a veterinary drug and subjected to the same premarket approval procedures and regulations as drugs (such as pain relievers and anti-flea medicines) used to treat animals. The rationale offered was that a genetically engineered construct “that is in a [genetically engineered] animal and is intended to affect the animal’s structure or function meets the definition of an animal drug.” But this explanation conveniently ignores the science, the FDA’s own precedents, and the availability of other, more appropriate regulatory options.
Adoption of the FDA’s existing approach to foods (which is far less protracted and intensive than that for veterinary drugs) would have sufficed and should have been applied to genetically engineered animals intended for consumption. Instead, FDA interpreted its authority in a way that invokes a highly risk-averse, burdensome, and costly approach. The impact has been devastating: The FDA has not approved a single genetically engineered animal for food consumption. An entire, once-promising sector of genetic engineering has virtually disappeared.
Genetically engineered animals were first developed 30 years ago in land-grant university laboratories. Those animal science innovators have grown old without gaining a single approval for their work. Many academic researchers who have introduced promising traits into animals have moved their research to other nations, particularly Brazil. Many younger animal scientists have simply abandoned the field of genetically engineered animals. As for the faster-growing salmon, the FDA (and also, recently, the Obama White House) has kept it in regulatory limbo while imposing costs of more than $75 million on its developers. And there appears to be no regulatory resolution in sight for this safe, nutritious, environmentally beneficial alternative to the depletion of dwindling wild stocks of ocean fish.
The types of newer genetic engineering techniques emerging since the days of the recombinant DNA technology that yielded the faster-growing salmon seem unlikely to fare any better at FDA. For example, a University of Minnesota animal scientist has used the TALENs technique to edit a gene in Holstein dairy cattle so that its sequence matches that of the naturally hornless (polled) allele found in the Angus beef breed, yielding Holsteins that exhibit the polled trait. This genetic modification improves animal welfare for dairy cattle (no dehorning is needed) and safety for dairy farmers (who risk being gored). But FDA has refused to consider the genetically engineered Holsteins under the same approach it uses for genetically engineered foods. Rather, FDA has asserted that the genetically engineered Holstein cattle contain a “new animal drug” and that, therefore, the animals cannot be released or marketed until a new animal drug approval is granted.
The federal Fish and Wildlife Service (FWS) offers another example of anti-genetic engineering policies. Beginning in 2006, a nongovernmental health and environmental advocacy organization called the Center for Food Safety initiated a litigation campaign to force FWS to ban genetically engineered organisms from national wildlife refuges. The center argued that permitting the cultivation of genetically engineered crops constituted a “major federal action” that required environmental studies under the National Environmental Policy Act and compatibility studies under the National Wildlife Refuge Systems Act and the National Wildlife Refuge Improvement Act. FWS barely contested these allegations, and its own biologist testified inaccurately that genetically engineered agricultural crops posed significant environmental risks of biological contamination, weed resistance, and damage to soils. Not surprisingly, the courts ruled in the plaintiff’s favor.
Given FWS’s obvious lack of familiarity with genetic engineering and its officials’ apparent unwillingness to do the necessary homework, it is understandable that FWS did not respond appropriately to these court rulings. Instead of using its statutory authority to create categorical exemptions, which would have allowed modern farming practices on refuge lands, FWS banned genetically engineered crops for two years and convened a Leadership Team to determine whether such plants were “essential to accomplishing refuge purpose(s).” On July 17, 2014, FWS answered in the negative. Consequently, beginning January 1, 2016, FWS will ban genetically engineered plants from its refuges. Thus, not only did FWS reject science, but it ignored the enhanced resilience and environmental benefits that genetic engineering can foster.
Epilogue
Is there any reason for optimism about the future? Will reasonableness emerge suddenly in agencies’ oversight of recombinant DNA technology? How will the various regulatory agencies approach the newest refinements of genetic engineering? How will they respond to synthetic biology?
The opportunity costs of unnecessary regulatory delays and inflated development expenses are formidable. As David Zilberman, an agricultural economist at the University of California, Berkeley, and his colleagues have observed, “The foregone benefits from these otherwise feasible production technologies are irreversible, both in the sense that past harvests have been lower than they would have been if the technology had been introduced and in the sense that yield growth is a cumulative process of which the onset has been delayed.”
The nation has already foregone significant benefits because of the over-regulation and discriminatory treatment of recombinant DNA technology. If we are to avoid repeating those mistakes for newer genetic modification technologies and synthetic biology, we must have more scientifically defensible and risk-based approaches to oversight. We need and deserve better from governmental regulatory agencies and from their congressional overseers.
Henry I. Miller ([email protected]), a physician, is the Robert Wesson Fellow in Scientific Philosophy and Public Policy at Stanford University’s Hoover Institution. He was the founding director of the Office of Biotechnology at the FDA. Drew L. Kershen is the Earl Sneed Centennial Professor of Law (Emeritus), University of Oklahoma College of Law, in Norman, OK.