A Second Act for Risk-Based Chemicals Regulation

The Toxic Substances Control Act amendments put risk assessment at center stage. How will it perform?

At a time when partisan squabbles and election-year politics dominate the headlines, Congress did something remarkable in June 2016: it passed the most significant piece of environmental legislation in a quarter century. The Frank R. Lautenberg Chemical Safety for the 21st Century Act amends and comprehensively overhauls the Toxic Substances Control Act (TSCA), a 40-year-old statute with a disapproval rating potentially higher than that of Congress itself.

Unlike the Clean Air Act or Clean Water Act, which focus on preventing or remedying pollution, TSCA looked upstream. Congress intended the US Environmental Protection Agency (EPA) to survey the universe of chemicals in commerce and regulate those that posed an “unreasonable risk” to human health or the environment. TSCA may also have represented the high-water mark in congressional optimism (or hubris): it turned out that about 60,000 chemicals were in commerce in significant quantities when TSCA was enacted. More than 20,000 have been added since.

TSCA proved to be a disappointment. In large part this was due to EPA’s grossly inadequate level of productivity. The agency required very few chemicals to be tested and restricted even fewer—none in the past 25 years. (Compare this with the several hundred active ingredients in pesticides that EPA has successfully reviewed since the early 1970s under the Federal Insecticide, Fungicide, and Rodenticide Act.) The US Government Accountability Office lists TSCA among federal programs that are high risk due to their vulnerabilities to fraud, waste, abuse, and mismanagement, or are most in need of transformation.

Frustrated with the lack of federal action, environmental and public health activists turned their energies to state legislatures and agencies, which began banning or restricting chemicals and even establishing their own chemical review processes, as California did after voters approved Proposition 65 in 1986. Consumers, meanwhile, began pressuring major retailers and consumer products companies to “deselect” certain chemicals.

These developments were welcomed neither by chemical manufacturers nor by firms that use chemicals in industrial processes. Instead of dealing with one national program to control chemicals, they now had to deal with a patchwork of state laws and growing pressures from consumers, all overlaid on the federal program.

Congress ultimately responded by giving each group of stakeholders something: The newly amended TSCA (which we’ll call “amended TSCA” through the rest of this article) addresses environmentalist concerns by granting EPA increased powers (and placing fewer hurdles in its way) to identify and regulate chemicals in commerce, and it addresses industry concerns by granting the federal government greater power to preempt state action. Under amended TSCA, once EPA makes a final decision on a chemical, states are limited in their ability to control it. And while EPA evaluates a chemical, states generally must refrain from taking action.

TSCA’s original failings were of two types: those arising from its legal framework and those arising from the science policy issues inherent in risk assessment and the so-called risk paradigm. Although the new amendments largely fix the antiquated legal framework, problems with the risk paradigm are more challenging and the solutions less amenable to legislative action. How EPA addresses these latter problems during the implementation phase will determine the success of the new law.

TSCA’s former legal problems

When it was enacted in 1976, TSCA channeled EPA’s attention toward chemicals that had not yet been introduced into commerce. EPA was required to review, within 90 days, every such new chemical before it was manufactured to determine if it might pose an “unreasonable risk.” EPA had the power, albeit limited, to seek relevant information from the manufacturer. Over time, the new chemicals program was broadly seen as having worked as intended, ensuring safety while not hindering innovation, a key consideration of TSCA. Amended TSCA effectively eliminates any limits on EPA’s power to seek additional information about new chemicals from manufacturers, and it requires EPA to make affirmative findings regarding “unreasonable risk” (a term the legislation leaves undefined and that EPA will effectively have to define through its choices under the amended statute).

By contrast, TSCA did not require EPA to evaluate chemicals already in commerce. Other statutes obliged the agency to direct its resources toward specific tasks, such as setting land disposal standards for hazardous waste; it was much harder for EPA to justify devoting open-ended resources to a subset of chemicals that the agency had to choose essentially by judgment call.

But even if EPA chose to regulate such chemicals, it could do so only with great difficulty. The agency’s information-gathering powers were subject to a catch-22: EPA needed information to compel testing of a chemical, but testing was required to obtain the necessary information. Even when the agency had enough data to make a finding of unreasonable risk, the statute demanded a “least burdensome” approach to regulation. A 1991 court decision interpreted this to mean calculating and comparing the costs and benefits of “each regulatory option,” and struck down much of EPA’s ban on asbestos-containing products for not doing so adequately. For all these reasons, the existing chemicals program has barely worked at all, and has created a regulatory bias toward continued use of existing chemicals over the creation of new, typically “greener” chemicals. Amended TSCA requires EPA to prioritize and evaluate all existing chemicals currently in commerce and sets deadlines for action. It also empowers EPA to order testing whenever necessary for an evaluation. And it eliminates the “least burdensome” standard, requiring EPA, when setting restrictions, merely to “factor in, to the extent practicable,” considerations such as economic consequences and alternative approaches.

Finally, the “unreasonable risk” standard under prior TSCA required EPA to weigh considerations related to the environment and health against a variety of economic and social factors. The seminal text in risk assessment, the National Academy of Sciences’s (NAS) 1983 “Red Book,” declared that the process of assessing risks should remain separate from—though informed by—the process of managing them. TSCA, which predated the Red Book by seven years, ran afoul of this recommendation by conflating the science-based risk assessment step with the policy-laden risk management step. The daunting challenge of trying to solve both problems at once was another reason EPA was so reluctant to act against existing chemicals. Amended TSCA slices through that Gordian knot by requiring EPA to assess risk without consideration of non-risk factors, with the latter relegated to being a consideration in choosing restrictions.

Chronic problems with chemical regulation

With TSCA’s major legal flaws eliminated, EPA will be forced to confront, prominently, a range of problems that have bedeviled risk assessment for years, albeit in other regulatory contexts. These problems, which can involve dozens of science policy decisions, are compounded by the sheer number of chemicals (and chemical uses) in commerce. EPA’s efforts will also highlight the questionable public health significance of regulating relatively small risks. Each of these issues is worth examining more closely.

For starters, epidemiology is rarely a helpful tool for chemical risk assessment. Even occupational studies, where exposures can be high and prolonged, are plagued by confounding factors such as smoking history, diet, and the tendency of healthy workers to work for more years than sick ones. In population-wide studies, exposures are far lower, harder to estimate, and even more subject to confounding factors. Researchers are forced to make adjustments that critics can challenge as manipulation. Distinguishing the “signal” from the “noise” is thus almost always debatable. Indisputable epidemiological evidence probably means you have a public health disaster.

Regulators have thus relied principally on animal studies to identify the hazards of chemicals and assess their degree of toxicity. But regulatory toxicology is hobbled by its own set of challenges. First, agencies, such as EPA, need to extrapolate from the high doses required to produce effects in small numbers of animals to the much lower doses observed in the environment. These “maximum tolerated doses” (the highest amount that doesn’t kill the animals outright) can often cause cancer through processes that ambient exposures would not produce (e.g., tissue damage that triggers cell proliferation, which magnifies the chances of mutations). Second, regulators need to extrapolate the findings from animals to people. People may be more sensitive than animals—or they may be less sensitive. The underlying mechanism of disease may involve metabolic processes that do not even occur in humans. Human relevance is thus a constant subject of dispute. People also vary among themselves in their susceptibilities to various effects—particularly as children or if already challenged by another disease or debility.
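
To make the extrapolation step concrete, consider the default approach regulators often use for carcinogens: scale the animal dose to a human-equivalent dose using body weight, then assume risk declines linearly with dose all the way down to ambient levels. The formulas below sketch that common default; the numerical values that follow are purely hypothetical.

\[
\text{Human-equivalent dose} = D_{\text{animal}} \times \left( \frac{BW_{\text{animal}}}{BW_{\text{human}}} \right)^{1/4}
\qquad
\text{Excess lifetime cancer risk} \approx \text{CSF} \times \text{CDI}
\]

where CSF is the cancer slope factor derived from the animal study and CDI is the chronic daily intake in mg/kg-day. Under these assumptions, a hypothetical slope factor of 0.005 per mg/kg-day combined with an ambient intake of 0.0002 mg/kg-day implies an excess lifetime risk of about one in a million; the controversy lies in whether the linearity and scaling assumptions hold at doses so far below those actually tested.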

Fortunately, animal studies are gradually being displaced by new laboratory techniques for evaluating the toxicity of chemicals using cellular materials (ideally, from humans). For example, toxicogenomics enables experimenters to assess the tendency of a chemical to produce changes in gene expression. Related “-omics” techniques evaluate the effects of chemicals on the structure and function of proteins, on the creation of metabolites, and on the formation of RNA transcripts. High-throughput screening, a technique developed by pharmaceutical companies for testing potential new drugs, involves the use of “gene chips” or “microarrays” containing DNA sequences that can be exposed to large numbers of chemicals rapidly. These techniques offer the potential not just to evaluate the toxicity of chemicals in the abstract, but to interpret information gathered from the biomonitoring of populations. The power of all of these techniques is enhanced by the ability of computers and other information technology to find patterns in vast quantities of biological information and to model biological processes.

Yet these new technologies are far from battle-tested. Reliably tying an effect at the cellular or subcellular level to the production of disease in individuals remains problematic. Hot debates can emerge over whether a particular change is adverse—the beginning of a progression toward illness—or just an adaptive effect that is reversible and of no lasting consequence. Also, the relationships between particular chemical agents, the “toxicity pathways” associated with those chemicals, and specific illnesses (or normal processes, for that matter) are typically highly complex and difficult to elucidate. These issues become especially pressing in the case of chemicals with well-established uses: although it makes sense for a drug company to reject one of thousands of molecules as a potential pharmaceutical based on a possibly toxic effect detected in the lab, such a result seems insufficient for EPA to justify a ban on a chemical that is already produced in large volumes and used in a wide variety of products and processes.

The old and new approaches to assessing chemical toxicity thus share the problems of uncertainty and contestability. Although the uncertainties may be reducible in the future, agencies are required to make policy-based assumptions in the meantime, typically conservative ones. For example, the Occupational Safety and Health Administration clings to a policy under which a single study finding that a chemical causes cancer in animals requires the chemical to be labeled a carcinogen, even in the face of contradictory findings from more sophisticated mechanistic analyses. Practices such as this make agencies lagging indicators of scientific progress. And risk evaluations, like most other scientific issues, can be manipulated by advocates (and agencies) to present a “fractal” problem, in which more studies can be characterized as generating more questions.

Making progress on the foregoing challenges has historically been hampered by the unwillingness of agencies, such as EPA, to follow rigorous analytical frameworks and to “show their work.” In perhaps the most relevant example, EPA issued a TSCA “Work Plan” in 2012 describing how it planned to revive its long-dormant program for regulating existing chemicals. The Work Plan outlined a complex decision logic for winnowing down over 1,200 chemicals to a list of 83 on which EPA would focus. But EPA’s explanation of how it chose the first seven of those chemicals was inscrutable: EPA said it “consider[ed] a number of factors.” As a result, not all of the chemicals with the highest potential risk scores on EPA’s screening approach were among that group. These sorts of “black box” approaches obviously raise the possibility that political considerations affected the final results of what was supposed to be an objective exercise. They also prevent others from replicating the agency’s analysis, which is unfair to outside stakeholders and opens the agency to criticism.

A handful of other issues also pose a threat to EPA’s ability to successfully execute a risk-based TSCA program that can achieve the level of productivity required by Congress (and public expectations) and that will not be debilitated by lawsuits alleging arbitrary and capricious action.

The first is the sheer number of chemicals in commerce—about 80,000—each one of which may have multiple uses, with each use potentially presenting a unique risk subject to analysis.

Second, many of the chemicals that EPA is likely to include on its statutorily mandated list of “high priority” chemicals are those that have been targeted by activist groups and the media (such as brominated flame retardants). Any decision EPA makes involving them will be controversial: activists want nothing short of a ban; manufacturers still believe they are safe. And although controversial chemicals are usually the most-studied, and hence the most data-rich, no amount of data can satisfy the most determined opponents of chemicals. This is especially true of industry-generated data—and most of the data EPA will be evaluating has to be produced by industry. The legacy of industry misinformation campaigns, especially around tobacco, has cast a permanent shadow over all industry-supported science in many people’s minds.

Third, outside of its pesticide-regulatory program, EPA is not used to making the kinds of tradeoffs between risks that are the norm at agencies such as the Food and Drug Administration (which has to balance the reduction in illness or death that a new drug might produce against its potential to create other such risks). Nor does EPA typically consider the availability of substitutes and whether they are more or less beneficial than the regulated chemical—a problem exacerbated by the reality that there are typically fewer data on the health and environmental effects of substitutes.

A final challenge relates to the public health significance of regulating relatively small risks. Comparisons of risk-reduction efforts across federal agencies show that chemical control regulations are not cost-effective relative to other types of interventions. For example, a 1995 study by Tengs and others showed that regulations aimed at injury reduction were more than 100 times more cost-effective than chemical regulations. Other more recent studies are consistent with this finding. The principal reason is that the threshold that EPA has conventionally established to determine unacceptable risk is relatively low. For example, other EPA programs typically target cancer risks as low as one case of cancer beyond the amount ordinarily expected among a million persons exposed over a lifetime. It is extremely difficult for regulators to control such small risks cost-effectively. And, as just noted, a regulation that results in a substitution of one chemical for another could increase net risk if the regulator is not careful.
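
A back-of-the-envelope calculation, with purely hypothetical figures, shows why such small risks are so expensive to chase:

\[
\text{Expected cases avoided} = N \times \Delta R = 1{,}000{,}000 \text{ people} \times 10^{-6} = 1 \text{ case}
\]

\[
\text{Cost per case avoided} = \frac{\$50 \text{ million in compliance costs}}{1 \text{ case avoided}} = \$50 \text{ million}
\]

An injury-prevention rule with the same price tag that averts dozens of deaths or serious injuries would look orders of magnitude better on the same metric.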

These challenges with applying the risk paradigm have inspired a movement toward alternative policy approaches. Most prominently at the federal level, Congress in 1990 abandoned risk as the basis for the Clean Air Act’s hazardous air pollution program, instead requiring maximum achievable control technology for major sources of such pollutants. In recent years, states increasingly have enacted similar hazard-based policies (e.g., toxics use reduction, alternatives assessment) in which formal risk analysis (and its emphasis on exposure) is foregone.

Critics of the risk paradigm decry its technical complexity and slow pace, and they rightly point to the speed with which hazard-based approaches can be implemented. But the fundamental drawback of hazard-based policies is the absence of knowledge about baseline risk or the resulting change in risk that comes from reducing hazards without consideration of exposure—a serious deficiency indeed. Without an ability to measure risk reduction, it is impossible to compare the efficacy of two different approaches to reducing risks. One is simply guessing, or worse, just devoting resources to hazards in proportion to the degree they are feared or disliked by the public or to the degree of imbalance in political power between the proponents of regulation and its opponents. And since hazard-based policies can’t be compared for efficacy, they cannot be compared for cost-effectiveness.

Finally, hazard-based policies can’t be evaluated via cost-benefit analysis (CBA). The shortcomings of CBA are widely known. Most jarring is the requirement that “invaluable” things such as life or health be assigned some monetary value. Other inherent limitations include the prospect that benefits and costs will be assessed from the perspective of those doing the valuation, who may discount or outright omit consequences that matter to others. Although these challenges are indisputable, in our view they can be mitigated; for example, objectivity can be increased through the development of best practices and the application of external peer review. Every regulatory program involves the expenditure of resources for the production of benefits. CBA simply makes explicit what otherwise happens implicitly. And in a regulatory system fundamentally grounded on the notion of rationality—as the US system is—it is incumbent on us to seek to understand how much we are devoting to risk reduction and how much it is accomplishing. CBA is the standard tool for doing so.

Needed solutions

Applying the risk paradigm to thousands of chemicals in commerce was a much more daunting task in 1976 than it is today. Indeed, it may be that Congress chose to focus EPA on the relatively small universe of new chemicals as a logical reaction to the nascent discipline of risk assessment. Since TSCA was enacted, however, the practice of risk analysis has evolved, and the science supporting that practice has grown dramatically. Although that science will remain uncertain to some degree, we believe that such uncertainty has been reduced overall. And tools for characterizing risk have also advanced, so that identification and characterization of major uncertainties is a well-accepted principle. Thus, challenges that once seemed Herculean are more tractable today.

A successful approach to risk-based chemical regulation—one that addresses the major challenges discussed previously—starts with a risk-based prioritization exercise across the universe of existing chemicals to separate the bigger potential threats to public health from the smaller threats. Many different sources of information, including high-throughput screening, will be used to determine whether a substance is a priority for an in-depth risk evaluation. By screening substances based on relative hazard and exposure, EPA can greatly narrow the universe of substances to those that should be subject to closer scrutiny.
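
As a rough illustration of what such screening could look like (this is a hypothetical scoring scheme, not EPA’s actual methodology), each substance might receive ordinal hazard, exposure, and persistence scores that are combined into a single ranking:

```python
# Illustrative screening-level prioritization: rank chemicals by a combined
# hazard-and-exposure score. The scores, weights, and chemicals are hypothetical.

def priority_score(hazard, exposure, persistence, weights=(0.4, 0.4, 0.2)):
    """Combine ordinal scores (1 = low, 3 = high) into a weighted screening score."""
    return weights[0] * hazard + weights[1] * exposure + weights[2] * persistence

# Hypothetical screening inputs: (hazard, exposure, persistence)
candidates = {
    "Chemical A": (3, 3, 2),
    "Chemical B": (3, 1, 1),
    "Chemical C": (1, 3, 1),
    "Chemical D": (1, 1, 1),
}

ranked = sorted(candidates.items(),
                key=lambda item: priority_score(*item[1]),
                reverse=True)

for name, scores in ranked:
    print(f"{name}: screening score = {priority_score(*scores):.2f}")
# Only the top-scoring substances would move on to in-depth risk evaluation.
```

The point of such a scheme is triage, not precision: in-depth risk evaluations are expensive, so they should be reserved for the substances that score highest on this first pass.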

After identifying the highest priority substances, the risk assessment process should follow recommendations made in a series of canonical publications by the NAS and other blue-ribbon advisory bodies (such as the Presidential Commission on Risk Assessment and Risk Management established under the 1990 Clean Air Act Amendments) over the past few decades. These important recommendations include:

    • Bringing risk managers into the assessment process early to help with problem formulation to ensure that risk assessment resources are directed toward known decision points;
    • Consistently and rigorously following transparent and replicable frameworks for literature review;
    • Drawing scientific conclusions across multiple studies based on the weight of evidence; and
    • Advancing mechanistic research, toxicogenomics, and other new and emerging toxicological techniques.

During prioritization and the subsequent in-depth risk assessment, the agency should not ignore substances lacking basic information on hazard and exposure. In other words, “data-rich” chemicals should not be penalized relative to “data-poor” chemicals. Some chemical substances, such as bisphenol A, have been the subject of hundreds of studies, whereas others have been the subject of very few. To generate new data, which can be an expensive undertaking, the agency should employ a value-of-information approach to assess whether additional data generation is likely to be worthwhile given limited resources, including laboratory capacity. For example, a substance that comes up positive on a low-cost screening test can then be subject to a more specific (and more expensive) test to minimize the possibility of a false positive.
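
A toy calculation illustrates the tiered-testing logic behind a value-of-information approach: a cheap but imperfect screen is run first, and only substances that test positive proceed to a more specific (and costlier) confirmatory assay. Bayes’ rule shows how much each tier changes the probability that a substance is truly toxic; all probabilities below are hypothetical.

```python
# Toy illustration of tiered toxicity testing. All probabilities are hypothetical.

def posterior(prior, sensitivity, specificity):
    """Probability a substance is truly toxic given a positive test result (Bayes' rule)."""
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

prior = 0.05  # assumed base rate of true toxicity among screened substances

# Cheap high-throughput screen, then a more specific confirmatory assay.
after_screen = posterior(prior, sensitivity=0.90, specificity=0.80)
after_confirm = posterior(after_screen, sensitivity=0.95, specificity=0.95)

print(f"After a positive screen:      P(toxic) = {after_screen:.2f}")   # ~0.19
print(f"After positive confirmation:  P(toxic) = {after_confirm:.2f}")  # ~0.82

# The cheap screen alone leaves most positives unresolved, so the expensive
# confirmatory test is worth buying only for substances that screened positive.
```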

After identifying substances that pose an unacceptable risk under a health-based standard, the agency should identify a manageable number of regulatory alternatives (including the alternative of not regulating at all), informed by economic principles. For example, the presence of a market failure, such as an information asymmetry, might best be addressed through a mandate to provide information to consumers. When comparing the alternative regulatory options, EPA should consider both costs and benefits. Particular attention should be paid to potential countervailing risks, consistent with the “no net harm” regulatory maxim.
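
A minimal sketch of that comparison (the options and dollar figures are entirely hypothetical) treats countervailing substitution risk as a cost to be netted against each option’s benefits:

```python
# Hypothetical comparison of regulatory options, counting countervailing
# (substitution) risk against each option's benefits. Figures in $ millions/year.

options = {
    # option: (health benefits, compliance costs, countervailing risk)
    "No action":         (0,  0,  0),
    "Consumer labeling": (20, 5,  0),
    "Use restrictions":  (60, 30, 10),
    "Outright ban":      (80, 70, 40),  # substitutes carry risks of their own
}

def net_benefit(benefits, costs, countervailing_risk):
    return benefits - costs - countervailing_risk

for name, values in options.items():
    print(f"{name}: net benefit = {net_benefit(*values)} $M/yr")

best = max(options, key=lambda name: net_benefit(*options[name]))
print(f"Preferred option under these hypothetical figures: {best}")
```

Under these made-up numbers, an outright ban would do net harm because the substitutes carry risks of their own, while a partial restriction comes out ahead; the general point is that the ranking of options can flip once countervailing risks are counted.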

Perhaps most important, EPA must be given realistic but action-forcing deadlines for conducting risk assessment and risk management. Experience under TSCA and other statutes, such as the Clean Air Act hazardous air pollutant program discussed earlier, has proven that, absent such mandates, the agency’s incentives are to avoid making controversial choices and to postpone decisions by playing up the significance of potential new, unanswered questions. Achievable, yet ambitious, deadlines will also better ensure sufficient appropriations through the congressional budget process.

How does amended TSCA compare?

Amended TSCA largely enacts these recommendations. The new law sets up a three-step process (prioritization, risk evaluation, and regulation) for existing chemicals. EPA must look across the universe of chemicals and compare them based on hazard and exposure. Chemicals that are identified as high priority will then go through a risk evaluation exercise to determine if the substance meets the health-based standard. Substances determined to pose an unreasonable risk must be regulated after consideration (but not detailed analysis) of the costs and benefits of at least two regulatory alternatives.

Although Congress justifiably refrained from being prescriptive about risk assessment in the new act, it did require EPA to follow general principles consistent with past NAS recommendations, including use of the best available science and making decisions based on the weight of the evidence—concepts for which EPA must now provide fleshed-out regulatory guidance that will likely direct, but not dictate, the agency’s action. It also requires EPA to consider the relevance of information to a decision, how clearly and completely information is presented, and the extent to which it has been peer-reviewed. Amended TSCA further requires EPA to reduce and replace vertebrate animals as toxicology models “to the extent practicable, scientifically justified, and consistent with the policies of [TSCA],” and requires EPA to develop a strategic plan, with notice and comment, to accelerate the development and validation of nonanimal techniques. Key rule making on how the agency will conduct risk prioritization and risk evaluation must be completed in a year and all other policies, procedures, and guidance must be finalized the following year. Amended TSCA also requires EPA to establish a Science Advisory Committee on Chemicals.

The new law creates a regulatory regime that erases artificial distinctions among chemicals based on non-risk factors, such as economic considerations. The agency must make an affirmative determination on whether a substance is likely or not to present an unreasonable risk. This requirement for an affirmative declaration (previously absent from TSCA), coupled with new legal tools to compel data generation by industry, will help to level the playing field between data-rich and data-poor chemicals. The application of the new health-based standard to all chemicals will level the playing field between new and existing substances.

Amended TSCA maintains prior deadlines (a 90-day review period, plus one additional 90-day extension) for EPA to assess new chemicals. More importantly, it specifies deadlines for each step of the prioritization-evaluation-regulation process for existing chemicals. It also establishes minimum throughput requirements (to start with, EPA must have begun risk evaluations of 10 chemicals by December 2016 and three years later is to have started evaluations of 20 more “high priority” chemicals and to have identified 20 low-priority chemicals requiring no further action). With any luck, these deadlines, and the prospect of judicial review, will work together to lead EPA to make adequately protective decisions regarding clear risks and to avoid regulating tiny risks or making highly precautionary, policy-driven decisions in the face of scientific uncertainty. And, crucially, the deadlines should also override EPA’s reluctance to make controversial decisions.

As noted, amended TSCA replaced the law’s previous “least burdensome” standard with a softer mandate to consider (but not weigh) costs and benefits before choosing restrictions. As a practical matter, EPA will still have to undertake a cost-benefit analysis to comply with an executive order issued by President Clinton in 1993 requiring “regulatory review” of rules that may have significant economic or policy impacts. Amended TSCA also requires the agency to explicitly consider the risk posed by chemical substitutes, which arguably is the same as requiring a “no net harm” test.

Some have predicted that the pace of the program is likely to remain an issue because the sheer number of existing chemicals is so large and the prioritization-evaluation-regulation process can take up to six years for a single chemical.

As to the first issue, amended TSCA requires EPA to “reset” its inventory of existing chemicals into “active” and “inactive” chemicals (depending on whether a chemical was manufactured or processed commercially within the 10 years preceding enactment) and to prioritize from the active inventory; the list of active chemicals in substantial use is likely to be considerably smaller. EPA can also improve matters by leveraging the risk analysis expertise of external parties. For example, the act requires the agency to develop guidance for any interested third party that wishes to submit a risk evaluation for agency consideration, which should reduce the agency’s workload, although EPA would retain discretion regarding whether and how to use such evaluations. But there is no reason to limit this idea to just risk evaluation. EPA could do the same with respect to the risk analysis component of prioritization and the substitution risk component of regulation. To make this work, the agency must first specify transparent and replicable procedures that it plans to follow and then indicate a willingness to consider external submissions that follow the agency’s own internal procedures. EPA could also provide incentives for such crowdsourcing of risk analysis through reduced user fees.

The TSCA amendments require the agency to be transparent in its procedures and methods. The degree to which EPA lays out replicable methods and follows objective frameworks in each step of risk analysis (formulating problems, reviewing and analyzing the literature, conducting a weight-of-evidence analysis across multiple toxicological studies, and identifying and characterizing uncertainty) will go a long way toward determining whether EPA can achieve the objectives of the new law. Such transparency and consistency will enable manufacturers and processors to forecast how their own products will fare under EPA review and to take actions to minimize regulatory consequences. Anticipating regulatory agency behavior will improve the cost-effectiveness of the regulatory program and increase net public benefits.

By its sweeping amendment of TSCA, Congress has addressed the major legal deficiencies of the statute while leveraging modern techniques of risk analysis to create a level playing field for all chemicals in commerce. The success or failure of the new law will depend on the policy choices the agency makes during the first two to three years of implementation. Only after this implementation phase is over will we begin to see how a risk paradigm bolstered by emerging scientific capabilities fares against the considerable expectations of stakeholders who supported the new law.

Keith B. Belton is Principal of Pareto Policy Solutions LLC (www.paretopolicysolutions.com), a policy analysis and advocacy firm with a focus on making federal regulation more efficient and effective. James W. Conrad Jr. is Principal of Conrad Law & Policy Counsel (www.conradcounsel.com), through which he represents a range of associations and companies in environmental, security, and other legal areas.

Recommended reading

James W. Conrad, Jr., “Reconciling the Scientific and Regulatory Timetables,” in Institutions and Incentives in Regulatory Science, Jason S. Johnston, ed. (Plymouth, UK: Lexington Books, 2012), 150-159.

National Research Council, Review of the Environmental Protection Agency’s Draft IRIS Assessment of Formaldehyde (Washington, DC: National Academies Press, 2011), 112-123.

National Research Council, Science and Decisions: Advancing Risk Assessment (Washington, DC: National Academies Press, 2009), 240-256.

National Research Council, Toxicity Testing in the 21st Century: A Vision and Strategy (Washington, DC: National Academies Press, 2007), 35-52, 98-113.

Daniel Sarewitz, “How Science Makes Environmental Controversies Worse,” Environmental Science & Policy, no. 7 (2004): 396-97.

Cass R. Sunstein, “Your Money or Your Life,” The New Republic (March 15, 2004); available online: https://newrepublic.com/article/63184/your-money-or-your-life

US Environmental Protection Agency, “The Frank R. Lautenberg Chemical Safety for the 21st Century Act.”

Cite this Article

Belton, Keith B., and James W. Conrad Jr. “A Second Act for Risk-Based Chemicals Regulation.” Issues in Science and Technology 33, no. 1 (Fall 2016).
