Reputation and Power: Organizational Image and Pharmaceutical Regulation at the FDA by Daniel Carpenter. Princeton, NJ: Princeton University Press, 2010, 856 pp.
Henry I. Miller
Daniel Carpenter’s magnum opus about the origins, operations, and organizational nuances of the Food and Drug Administration (FDA) represents in many respects superb historical scholarship. A former chief counsel for the regulatory agency praised it as “the best pure history of the development of FDA regulation of new drugs ever written.” Yet I found it less useful in offering insight into what makes regulators tick, and it harbors some surprising omissions.
Carpenter, a professor of government at Harvard University, emphasizes the importance of the “organizational reputation” of entities ranging from military and diplomatic bodies to disaster relief groups and regulatory agencies. The central concept of his perspective on regulation is the influence of “audiences,” the entities that either affect the regulatory agency (the judiciary, Congress, nongovernmental organizations, the media, and the regulators’ political masters) or are affected by it (regulated industry and the public). This formulation views the FDA as a receptacle in which patients, pharmaceutical companies, the media, and legislators deposit their trust or mistrust.
Carpenter dissects the various components of reputation: “performative (did the agency get it right?), technical (does the agency have the know-how and methods to get it right?), procedural (does the FDA follow accepted procedures suggested by law and science?), and moral (does the agency show compassion to those affected by its decisions? Is it captured or inappropriately influenced?).”
What is the benefit to a regulatory agency of enjoying a good reputation? In Carpenter’s view it “supports” or legitimizes power, which he subdivides into three species: directive, gatekeeping, and conceptual.
Directive power is the ability to require that a company or researcher do—or refrain from doing—something. Gatekeeping power, which is enjoyed by few federal agencies, derives from the need for regulators to approve a product (a drug or medical device, for example) before it can be legally sold. (The other major gatekeepers for similar kinds of products are the Environmental Protection Agency, which licenses pesticides; and the Department of Agriculture’s Animal and Plant Health Inspection Service, which oversees animal vaccines and genetically engineered plants.) Conceptual power is the ability to mold the ambient methods, practices, and jargon associated with drug development and its regulation; in other words, many of the terms used by regulators (“new drug application,” “new molecular entity,” and “food additive,” for example) have become legal “terms of art.”
Carpenter believes that in order to enhance their reputations, bureaucrats seek others’ esteem, which is enhanced by association with organizations of good repute and vice versa. In his opinion, the FDA has had the advantage of employees dedicated to enhancing its reputation for scientific competence. This model postulates incentives for bureaucrats both to arrogate power and to do good work.
I’ll buy that there’s an inclination to arrogate power—FDA Commissioner Frank Young used to quip that “dogs bark, cows moo, and regulators regulate”—but during my decade and a half at the FDA, I discovered potent incentives for regulators to take actions that are not best for individual patients or public health but serve only bureaucratic self-interest, which is best realized by policies and actions that give rise to new responsibilities, bigger budgets, and more expansive bureaucracies.
Other critical incentives derive from the asymmetry of outcomes from the two types of mistakes that regulators can make. A regulator can err by permitting something bad to happen, such as approving a harmful product; or by preventing something good from becoming available by not approving or by delaying a beneficial product. Both outcomes are bad for the public, but the consequences for the regulator are very different. The FDA’s approval process for new drugs has long struggled with this dichotomy.
The first kind of error is highly visible, making the regulators susceptible to attacks by the media and patient groups and to congressional investigations; for the individuals involved, it can be a career killer. But the second kind of error—keeping a potentially important product out of consumers’ hands—is usually a nonevent and elicits little attention, let alone outrage. As a result, regulators introduce highly risk-averse policies and make decisions defensively, avoiding approvals of potentially harmful products at all costs and tending to delay or reject new products ranging, in the FDA’s case, from fat substitutes to vaccines, painkillers, and prosthetic joints. If a regulator does not understand or is vaguely uneasy about a new product or technology, his or her instinct is to delay or interdict. In Carpenter-speak, regulators feel that they enhance their reputations and gain colleagues’ esteem by not approving a potentially harmful product.
Carpenter believes that “the reputation and the power of the Food and Drug Administration in the governance of pharmaceuticals have waned appreciably,” which he ascribes to a political “rightward shift” that dates from the passage of the Food and Drug Administration Modernization Act (FDAMA) of 1997. He goes so far as to characterize the (unsuccessful) bills that preceded the 1997 act as flirting “with the evisceration and privatization of FDA gatekeeping power—by authorizing experiments with ‘third party’ review.”
These assertions are puzzling for several reasons. First, it is difficult to comprehend how legislation as tepid as FDAMA could be considered any sort of milestone or inflection point. For the most part, it merely codified longstanding practices and procedures that were already in place; there was virtually nothing in the legislation that regulators didn’t want or couldn’t live with. Second, third-party review had actually been tried successfully several years earlier. In a two-year pilot program (1992–1994) undertaken at the urging of President George H. W. Bush’s Council on Competitiveness, the FDA contracted out reviews of new drug application (NDA) supplements and compared the results of these evaluations to in-house analyses. The contractor was the Mitre Corporation, a nonprofit technical consulting company. In all five of the supplements reviewed by Mitre, the recommendations were completely congruent with the FDA’s own evaluations. Moreover, the time required for the reviews was two to four months, and the cost ranged from $20,000 to $70,000—fast and cheap compared to federal regulators.
That experiment was hardly unprecedented. Except for the final sign-off of marketing approval, the FDA has at one time or another delegated virtually every part of its various review and evaluation functions to outside expert advisors, consultants, or other entities. Far from savaging the powers of the FDA, third-party review was merely intended to make regulation more efficient and less expensive, using an approach that is similar to pharmaceutical regulation in the European Union and that had been experimented with successfully in the United States.
Third, most FDA watchers, myself included, would argue that the agency’s power has increased, not decreased, in recent years. During the past decade, legislative expansion of the FDA’s discretion permits regulators to dictate the content of drug labeling (instead of having it arrived at through a process of discussion and negotiation), to require post-marketing (phase 4) clinical trials as a condition of approval, and to impose various “risk evaluation and mitigation strategies” that can profoundly limit the ultimate sales of a drug. These developments represent a marked enhancement of the FDA’s directive and gatekeeping powers.
Moreover, the FDA’s leadership has unilaterally and sometimes extralegally introduced what amount to new requirements for the approval of drugs in addition to the statutory ones of safety and efficacy. These new requirements include a demonstration of superiority to existing therapies, which is often a far more difficult (and expensive) standard to meet. Also, the FDA’s budgets have seen stunning, unprecedented increases in recent years. Nevertheless, Carpenter postulates that “the rise of libertarian models and conservative politics in the United States, the accretion of power to the global pharmaceutical industry, and the globalization of economic regulation have all weakened the authority and force of the Administration’s capacities and actions.”
I do agree that the FDA’s reputation has withered, but for reasons different from those cited by Carpenter. He notes that the FDA has been widely criticized for being too lax and overly collaborative with industry (and seems to agree with this assessment), but much of the quantitative evidence suggests that the agency has become increasingly risk-averse, hyperregulatory, and problematic to industry during the past 20 years. Metrics that support this view include increases in the number of clinical trials, patients, and procedures reported in NDAs; lengthening of the time required for the average clinical trial; and, especially, skyrocketing costs to bring a drug to market.
For a work of this magnitude, there are surprising omissions. One is the phenomenon of the information cascade, the way in which incorrect ideas gain acceptance by being parroted until eventually we assume they must be true even in the absence of persuasive evidence. Obviously, it is intimately related to reputation, and arguably many of the misconceptions about the FDA stem from the constant drumbeat of dubious accusations from media, congressional, and advocacy-group sources that regulators are insufficiently risk-averse and are too lenient and collaborative toward the drug industry. The information cascade concept was popularized by Carpenter’s Harvard colleague Cass Sunstein, who now heads the regulatory side of President Obama’s Office of Management and Budget.
Another surprise is the incompleteness of Carpenter’s discussion of FDA leaders’ willingness to accede to undue political influence on product-specific regulatory decisions. Justifiably, he excoriates then-Commissioner Lester Crawford for contravening both the data and agency consensus by overruling a decision to approve the day-after contraceptive pill, known as Plan B, for over-the-counter status, but he neglects the equally egregious interference by then-Commissioner David Kessler, who obeyed orders from above about which products should be expedited and which delayed by regulators. For example, the agency approved a dubious female condom after being informed by the secretary of Health and Human Services that it was a “feminist product” and that delay was not acceptable; and at Kessler’s direction, FDA officials went to extraordinary lengths to look for reasons not to approve biotechnology-derived bovine somatotropin, a veterinary drug, because Vice President Al Gore considered it to be politically incorrect. These omissions and Carpenter’s repeated caviling about the actions of Dan Troy, the brilliant FDA general counsel during the administration of George W. Bush, raise the specter of a political agenda.
I found Carpenter to be overly sympathetic in his characterization of FDA epidemiologist David Graham, who appears to harbor an idée fixe about the supposed dangers of many widely prescribed and FDA-approved medicines. FDA managers permit Graham to publicly contradict agency policy (and the consensus of the extragovernmental medical community), presumably because he is regarded by some members of Congress and their staffs as a whistleblower. Talk about compromising the agency’s reputation.
Finally, Carpenter observes that in constructing this magnum opus he has “incorporated methods and insights from many disciplines—history, pharmacology, political science, law, medicine, public health, mathematical finance and economics, sociology, mathematical statistics, and anthropology. To be frank, I have mastered none of the trades, and this book represents a highly imperfect combination of research methods.” He is correct, and in the end he will please few practitioners from any one discipline. Carpenter’s polymathic approach added more weight than insight to his 800-page tome.
Henry I. Miller, a physician and molecular biologist, is a fellow at Stanford University’s Hoover Institution and at the Competitive Enterprise Institute. He was formerly a reviewer, manager, and office director at the FDA.