A New System for Moving Drugs to Market
Today’s system for developing and approving drugs is fundamentally flawed. Fixing it will require new technological tools and new regulatory approaches.
The pharmaceutical industry is one of the most successful components of the U.S. economy. In recent years, however, critics have increasingly blamed the industry for setting prices too high, for earning too much profit, and for developing more “me too” drugs than truly innovative therapies. High prices have led private citizens, organizations, municipalities, and states to purchase prescription drugs from Canada, and they have prompted Congress to consider legalizing the reimportation of drugs, a serious threat to the future viability of the industry.
The industry justifies its product prices in several ways. First, the industry points out that its R&D costs are enormous. Its trade organization, the Pharmaceutical Research and Manufacturers of America, estimates that bringing the average drug to market costs more than $800 million. Second, the industry says that getting a new drug to market takes a long time, typically 12 to 15 years, which leaves companies with only 2 to 5 years of patent life remaining before competition from generic drugs begins. Thus, the initial return on investment and the bulk of profits must be made during a relatively short period, and those profits in turn are used to fund more R&D. These factors are often cited as pushing companies to invest mainly in drugs that have a good chance of success (that is, drugs in a therapeutic class that already has demonstrated clinical value and large profit potential) rather than to explore untested therapeutic areas.
Where the industry sees good business sense, however, we see fundamental flaws in the process by which drugs are developed. Moreover, these problems are due, in large measure, to flaws in how the federal government currently regulates drug development and the introduction of new drugs into the market.
Today’s drug development process, which has come to be characterized by high costs and slow output, has evolved during the past 50 years. The process is built on the best of intentions: providing the highest standards for assessing the efficacy and safety of drugs. But it is not appropriately structured for the way drugs are marketed and used today. Its framework rests on the principle that the U.S. Food and Drug Administration (FDA) should require drug companies to conduct the entire scope of work necessary to establish the absolute safety and efficacy of a drug before it is marketed. In practice, however, this approach has not always worked and is inconsistent with our current understanding of the biologic diversity of humans. Experience has shown that some investigational drugs that appeared safe and effective before they were approved turned out to have unacceptable toxicity after they reached the market and were used by millions of people.
The framework also relies on the principle that providing warnings about a drug’s potential risks, either on the product label or in package inserts, will protect people from being harmed. But here again, experience has shown otherwise. Consider three prescription drugs once in common use: Seldane, Hismanal, and Propulsid. Each carried a warning on its product label that it should not be taken along with certain other drugs, such as erythromycin, because the combination could trigger life-threatening arrhythmias. But some health care providers either did not read the warnings or ignored them, and many patients died as a result. The manufacturers finally removed the drugs from the mass market. Consider another three widely prescribed drugs that were known to carry a risk of liver toxicity: Rezulin, Duract, and trovafloxacin. All were removed from the open market because physicians were failing to adhere to warning labels indicating that patients should have their liver function monitored during therapy. The drugs, when used as directed on the label, were considered safe.
R&D up; drugs down
The FDA itself has now identified problems with the drug development process. In March 2004, the agency issued a white paper, Innovation/Stagnation: Challenge and Opportunity on the Critical Path to New Medical Products, which concludes that the development process has stagnated. Known as the Critical Path report, it declares that the drug pipeline, as measured by the number of applications submitted to the FDA for new medical therapies, is not growing in proportion to the national investment in pharmaceutical R&D.
The national investment comprises three parts: pharmaceutical industry investment, National Institutes of Health (NIH) investment, and venture investment. Industry and the NIH account for the lion’s share. Both sectors have increased their investments dramatically in recent years, with their combined totals rising from roughly $30 billion in 1998 to nearly $60 billion in 2003. (This total is expected to top $60 billion in 2004.) The NIH budget for research has doubled during this period, while industry expenditures for R&D have risen 250 percent during the past 10 years. However, venture investment has not kept pace and has even declined during the past four years, for reasons that will soon become apparent.
In light of such large investments, many people expected to see a host of exciting new treatments for human illness emerge. But the number of applications to the FDA for new chemical drugs has not changed, and the number of applications for new biological drugs has actually declined. Some observers have blamed this shortfall on a slow review of applications by the FDA. But this is not the case. Since Congress authorized the FDA to charge a “user fee” in the early 1990s, the agency has been able to hire more staff to review applications and has cleared the backlog that had accumulated. Review times now stand at an average of eight months for important new drugs.
Further evidence also points to problems in the drug development process. Despite a host of technological and scientific advances in such areas as drug discovery, imaging, genome sequencing, biomarkers, and nanotechnology, failure rates in drug development have increased. In fact, new drugs entering early clinical development today have only an 8 percent chance of reaching the market, compared with a 14 percent chance 15 years ago. In addition, failure rates during final clinical development are now as high as 50 percent, compared to 20 percent a decade ago.
The long, expensive, and risky development process helps explain the declining investment by the venture sector. Venture capitalists typically invest in smaller companies. But smaller drug companies cannot promise returns on investment for a decade or more, which makes them less attractive as investment opportunities. Problems are greatest for companies working on drugs for small medical markets, because most large pharmaceutical companies will license products from venture firms only when those products have a market potential of at least half a billion dollars. Thus, many companies are left to fail, which discourages further investment in new drugs.
Promoting partnerships
The FDA’s Critical Path report calls for innovations to speed the development of new drugs, with the agency declaring: “We must modernize the critical development path that leads from scientific discovery to the patient.” Modernization, the report adds, will require conducting research to develop a new product development toolkit. This kit should contain, among other things, powerful new scientific and technical methods, such as animal- or computer-based predictive models, biomarkers for safety and effectiveness, and new clinical evaluation techniques, to improve predictability and efficiency along the path from laboratory concept to commercial product. Toward such goals, the FDA has invited the pharmaceutical industry and academia to join with the agency in conducting research that will provide “patients with more timely, affordable, and predictable access to new therapies.”
This idea is not without precedent. As early as the 1980s, the FDA began working closely with the pharmaceutical industry on innovative ways to develop new drugs for HIV and AIDS. These efforts resulted in average development times as short as 2 to 3 years, while the average for all drugs was growing to 12 years. This experience clearly demonstrates the feasibility of accelerating drug development without taking dangerous shortcuts. In fact, if drug developers are thoroughly innovative, accelerated drug development could be more informative than the current process and lead to greater understanding of the safety and effectiveness of marketed drugs.
The FDA also has joined in partnership with the food industry and the University of Illinois to create the Center for Food Safety Technology, and with the industry and the University of Maryland to create the Joint Institute for Food Safety and Applied Nutrition. Based at the respective universities, these centers are intended to serve as neutral ground where the partners can participate in research of common value to the food industry and the public. Today, the FDA and SRI International (formerly Stanford Research Institute) are joining with the University of Arizona in developing the first partnership aimed specifically at accelerating the development of new drugs by creating the innovative tools called for in the Critical Path report. The new Critical Path to Accelerate Therapies Institute, or C-PATH Institute, will serve as a forum where the partners can discuss how to shorten drug development times without increasing the risk of harm to patients, and then set about bringing these plans to fruition.
Fast—and safe
How will safety be addressed when drug development is accelerated? The answer may not be one that many people expect.
Recent experiences with drugs that were found to cause serious adverse events after they had entered the market, such as Vioxx, which was linked to cardiovascular problems, have convinced many people that a major weakness in the current system is its failure to adequately assess safety before drugs are marketed. In the past eight years, 16 drugs have been removed from the market. Were these drugs inadequately tested? Not by current standards. Today, more is known about a new drug reaching the market than was known about any previously approved drug in its class. However, the current drug development system mistakenly assumes that drug safety can be adequately ascertained during development.
To better explain, it is first necessary to look at the drug development process. After a prototypical new drug spends several years in the discovery process and preclinical testing, it enters the first of three phases of clinical development. Phase I is intended to determine how well humans tolerate the drug and whether it is generally safe. The drug is given to volunteers in single doses, beginning at low dosages and increasing to higher dosages, and then in multiple doses. Most companies choose to use healthy volunteers in these trials, because this approach is easier and far less expensive than enrolling patients with the target illness, and the trials typically include only a few dozen people. Such constraints limit the amount of information that can be gained in this phase.
Phase II is conducted in patients with the target indication. This phase often lasts one to three years and involves a few hundred patients. Doses are increased over the anticipated clinical range, and the trials provide the first substantial evidence of pharmacologic activity and demonstrate that the drug has the desired efficacy. The trials are followed by an “end of phase II” meeting between FDA reviewers and representatives of the sponsoring company, in which the parties agree on a tentative plan for phase III trials. The conundrum facing the FDA reviewers is deciding how much additional safety data to require in phase III, recognizing that every requirement they impose further delays patient access to what may be a valuable new therapy.
Phase III often lasts 8 to 10 years and involves 1,000 to 3,000 patients. The trials serve as a proof of concept, in which patients are treated under conditions that more closely resemble the real world of clinical medicine. Data are examined to be sure that the drug will continue to demonstrate the type of efficacy observed in phase II and maintain an acceptable safety profile. The FDA’s goal is to determine whether the drug’s benefits outweigh any known or suspected risks. In recent years, this also has become the phase in which the company is expected to verify the possible consequences of drug interactions and determine whether dosage adjustments will be required in patients with concomitant diseases, such as renal failure, liver disease, and heart failure.
If all goes well in phase III, the FDA approves the drug and it enters the marketplace, often in a major way. With today’s aggressive marketing and direct-to-consumer advertising, new drugs frequently are taken by millions of people soon after launch, often in ways not anticipated during development or intended in the labeling. This sudden surge in use leaves the FDA and the manufacturer little time to detect any serious medical risks before the number of people affected has grown quite high.
And when millions are exposed, risks are almost certain to arise. The current drug approval system assumes that safety can be adequately ascertained during clinical trials that typically test a drug on several thousand people at most. This is simply not a valid assumption, because of the biologic diversity that exists in humans and the fact that marketed drugs are not always used in the same way as when they were being developed. The types of adverse reactions that result in drugs being removed from the market occur at a rate of less than 1 per 10,000 patients treated. Only if the investigational database for a new drug included more than 30,000 people could such rare events have a 95 percent chance of being detected before approval. No drug company could afford to conduct development programs of such magnitude. Asking companies to increase their investment in phase III without addressing the flaws that exist in the overall development process will only further delay development and increase the price of drugs. Furthermore, doing so is unlikely to detect adverse events that are relatively rare.
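The arithmetic behind the 30,000-patient figure is straightforward: if an adverse reaction occurs at rate p, the chance of observing at least one case among n patients is 1 − (1 − p)^n, and a 95 percent chance of detection requires roughly 3/p patients. The short Python sketch below simply works through the numbers for the 1-in-10,000 rate cited above.

```python
import math

# Chance of observing at least one case of an adverse reaction that occurs
# at rate p when n patients are studied: 1 - (1 - p)**n.
p = 1 / 10_000  # rate cited above: 1 per 10,000 patients treated

for n in (3_000, 10_000, 30_000):
    prob = 1 - (1 - p) ** n
    print(f"n = {n:>6,}: {prob:.0%} chance of detecting at least one event")

# Solving 1 - (1 - p)**n >= 0.95 for n gives n >= ln(0.05) / ln(1 - p),
# which is roughly 3/p -- about 30,000 patients, as stated above.
print(math.ceil(math.log(0.05) / math.log(1 - p)))  # 29,956
```

A typical phase III program of 3,000 patients would have only about a one-in-four chance of seeing even a single such reaction, which is why these events routinely surface only after marketing.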
Further, even when adverse events are recognized and the FDA issues warnings, poor adherence by many health care providers often means that the warnings do little to limit the harm. In such cases, the FDA’s only realistic option is to request that the drug be removed from the market. There also have been recent examples in which investigational drugs (ximelagatran and sorivudine) that demonstrated adverse events due to drug interactions during clinical trials were not approved by the FDA because the agency could not be assured that the manufacturer would be able to effectively manage the risk once the drugs were on the market.
Blueprint for action
What is needed is an alternative approach to developing and regulating new drugs. We propose a system that provides earlier approval for new prescription drugs but requires more gradual growth in their use and comprehensive assessment of their safety as they spread through the marketplace. These changes will allow time for more complete real-world safety testing and for assimilating drugs into the daily practice of medicine before millions of people are exposed to them.
The first suggested change is to phase II clinical trials. The trials would be expanded to include more complete characterization of the drug’s dose-response relationship in the intended population and subpopulations (for example, the very elderly, people with renal insufficiency, and people with co-morbid conditions) and to include more thorough drug interaction studies. Such studies would make use of modern computing techniques, biomarkers, adaptive trial design, and other advanced tools, as suggested in the Critical Path report. Trials typically would take about four years, at which time the drug could be approved for marketing to a carefully defined population of patients. This approach is similar to the way in which several AIDS drugs, such as the protease inhibitors, were developed and translated into clinical practice in two to four years.
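To make the idea of adaptive trial design concrete, the sketch below simulates one simple variant: Bayesian response-adaptive allocation, in which each new phase II patient is assigned to the candidate dose that currently looks most promising. The dose labels, response rates, and trial size are invented for illustration; real adaptive designs are far more sophisticated and must be prespecified and agreed on with regulators.

```python
import random

# Illustrative only: response-adaptive dose allocation via Thompson sampling.
# The doses and their "true" response rates below are hypothetical; in a real
# trial these rates are unknown and the design must be prespecified.
TRUE_RESPONSE = {"low dose": 0.20, "mid dose": 0.40, "high dose": 0.45}

def run_adaptive_trial(n_patients=300, seed=1):
    random.seed(seed)
    # Beta(1, 1) prior on each dose's response rate, stored as
    # [successes + 1, failures + 1] pseudo-counts.
    posterior = {dose: [1, 1] for dose in TRUE_RESPONSE}
    for _ in range(n_patients):
        # Sample a plausible response rate for each dose from its posterior
        # and treat the next patient at the dose with the highest draw.
        draws = {d: random.betavariate(a, b) for d, (a, b) in posterior.items()}
        dose = max(draws, key=draws.get)
        responded = random.random() < TRUE_RESPONSE[dose]  # simulated outcome
        posterior[dose][0 if responded else 1] += 1
    return posterior

if __name__ == "__main__":
    for dose, (a, b) in run_adaptive_trial().items():
        treated = a + b - 2  # subtract the two prior pseudo-counts
        print(f"{dose}: {treated} patients, estimated response rate {a / (a + b):.2f}")
```

A design of this general kind steers patients toward the doses that appear to be working while still sampling the full dose range, which is the sort of efficiency gain the Critical Path report envisions.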
To make the early release of a drug rational, it will be essential to have an intensive plan for post-marketing safety assessment and risk management. Here, academic groups may have an important role to play. Groups such as the Centers for Education and Research on Therapeutics (CERTs), funded by the Agency for Healthcare Research and Quality, can develop risk management programs and conduct outcomes research on large databases and registries to confirm the efficacy and safety predicted from phase II trials. The groups also can use similar methods to evaluate the drug’s potential efficacy in new indications. This would be a very appropriate use of the CERTs, whose congressional authorization includes the mandate to improve the outcomes of prescription drugs and other therapeutics.
In most cases, newly approved drugs should be given to a defined population under observed conditions, perhaps in a manner similar to the “yellow card” system in the United Kingdom, in which physicians report the outcome of therapy (on a yellow card, of course) for each patient receiving a specific drug. Indeed, the modern electronic medical record systems available in many health care delivery systems should make it possible to track the outcome of every treated patient in that system. The FDA and the pharmaceutical manufacturer would have to employ measures to assure that the drug is initially used as directed in the labeling. Manufacturers could be encouraged to follow the lead of at least one innovative company that pays commissions to sales representatives based on how well doctors in their region use the company’s drug rather than on how often it is prescribed.
This system would enable a company to begin marketing a new product earlier, with less total capital investment, and at a time when much more of the drug’s patent life remains in effect. The system also should make it possible to detect any serious life-threatening problems earlier, and certainly before millions of people have been exposed. In addition, serious consideration should be given to indemnifying companies that use this track against lawsuits over adverse events, in return for the companies paying any medical expenses that such reactions cause. This would provide drug companies and patients alike with some relief from the harm caused by a new drug and would recognize that adverse drug reactions are inevitable.
After a period of careful observation, drugs that appear safe and effective could be approved for expanded markets, with fewer or no restrictions on their use. This situation would effectively be the same as the current market, in which licensed physicians can prescribe a marketed drug for any indication, as long as the physician has evidence that such use has a scientific basis. If a marketed prescription drug is found to be relatively safe and is used for a condition that patients can diagnose themselves, it customarily has been given nonprescription or “over-the-counter” (OTC) status. But this is a significant change in status and therefore poses a difficult challenge for regulators. Canada and many other countries have introduced an intermediate status that allows for a more gradual transition: drugs often move from prescription-only to “behind-the-counter” status, in which a patient must ask the pharmacist for the drug. The pharmacist can then perform prescreening or counseling that makes it more likely the drug will be used safely. This additional step could widen the therapeutic benefit to patients, make better use of pharmacists’ expertise, and reduce the risks of therapy. After a period of safe use in this status, a drug could be recommended for full OTC status when justified.
Unfortunately, many people in the pharmaceutical industry and the FDA may be reluctant to change the system that has evolved. But in today’s rapidly changing scientific environment, the current rigid, one-dimensional system serves the FDA, industry, patients, and society poorly. Not only must the FDA be given a better opportunity to protect the public from unsafe drugs, but it must be given the tools to expedite the availability of new therapies. This process must be transparent and take place in an environment of openness, risk sharing, and scientific excellence that is in the best interest of everyone. Only in this way can the FDA become a full partner in developing the critical path for new drug approvals.