This text contains Dr. Gottlieb's prepared remarks. It should be used with the understanding that some material may have been added or deleted during actual delivery.
2006 Conference on Adaptive Trial Design, Washington, DC
Scott Gottlieb, MD
Deputy Commissioner for Medical and Scientific Affairs
Food and Drug Administration
July 10, 2006
In the span of a few years, we have made major strides in advancing the science of drug development, incorporating sophisticated tools for screening drugs for effectiveness and safety, tools that were unimaginable even a few decades ago -- things such as toxicogenomic assays that mimic the activity of human liver or heart cells or proteomic panels that can measure minute changes in the regulation and activity of genes, to mention just a few advances.
It’s clear that all of these advances, made just a short time ago, could in the near future have significant, practical benefits not only for the public health but for individual patients’ health: real help for real patients.
These new scientific advances and, in particular, the tools they enable allow those engaged in development to learn more about the safety and potential benefits of new medicines earlier in the development process, exposing fewer patients to experimental treatments in the process. These tools also allow drug developers to spend less time and money discovering that a new molecule didn’t work or that it had some troubling side effect. This latter benefit, while not readily evident, is particularly crucial. The ability to fail faster is an important advance in science.
An initiative we announced over two years ago, called the Critical Path initiative and led by FDA’s Dr. Janet Woodcock in the Office of the Commissioner, and Dr. Shirley Murphy in our drug center, is aimed at catalyzing the creation of these tools and at finding effective ways to incorporate them into the development process. Drug developers all have a common bottom line in mind: FDA approval. If new scientific tools and approaches could help us learn more about the safety and efficacy of new medicines, perhaps more quickly and earlier in the development process, FDA wants to make sure that our own regulatory requirements recognize these tools and provide the appropriate flexibility so sponsors can make use of them.
A lot of the early work under our Critical Path initiative has been focused on developing better markers and evaluative tools for measuring safety and toxicity of new medicines. In fact, one of the very first initiatives that we announced under the Critical Path was the formation of the Predictive Safety Testing Consortium between C-Path and -- at the time we announced it in March -- five of America’s largest pharmaceutical companies.
Under the consortium, firms made a commitment to share internally developed laboratory methods to predict the safety of new treatments before they are tested in humans. Companies committed to sharing knowledge and resources to determine which of the lab tests that they have developed individually should be recommended by the FDA to screen drugs so everyone can better understand the potential side effects before the drugs enter clinical testing in humans. The results of the comparison will be collected and summarized by the non-profit C-Path Institute, which is led by Dr. Ray Woosley. The data will be developed for submission to the FDA.
Now, getting better drug development tools from advances in proteomics and genomics is one thing, and a very good thing, too. But harnessing those tools for drug testing, for the new drug approval process, and for delivery to patients is another matter entirely. The end goal of all of these efforts is the development and more rapid delivery to the clinics of better treatments themselves. That requires us to not only develop better tools and approaches but to also develop the flexibility and scientific approaches that allow us to incorporate them into the development process, and into the FDA’s product approval benchmarks.
A big part of that step requires us to work not only on developing the measurement tools themselves, but also on developing better approaches to the design of clinical trials, trials that can be adapted based on the new and improved scientific information these tools generate. This is what I want to focus on today: how we design trials that generate and incorporate information that helps us guide the more effective use of medicines.
First, we need to agree on where things stand right now. Right now, clinical trials are generally highly empirical. By that, I mean that we test drugs on general populations and then we look for a clinical response and a treatment effect that is -- statistically speaking -- not likely to be a chance result.
So for example, when drug developers are testing a new cancer drug, they will define the cancer anatomically. A new drug – more often than not -- will be tested on everyone with a specific anatomical tumor type – lung cancer, colon cancer, breast cancer. Then, generally speaking, we will look for overall response rates that tie or beat existing therapy, without creating many new side effects. A new lung cancer drug may be deemed safe and effective if it shrinks tumors in 20 percent of patients while having a side effect profile that is not much more severe than other existing drugs that shrink tumors by about the same amount.
This traditional, highly empiric statistical approach has had the dominant, and often exclusive, role in drug development. The empiric approach is rigorous and focused. But a side effect of these characteristics is inflexibility, which in turn limits innovation in the design and analysis of clinical trials and our ability to incorporate pre-formed scientific information that would help us learn more about which new treatments have a better chance of benefiting which patients.
Because of this, clinical trials tend to be overly large, which increases the cost of developing new therapeutic approaches. Some patients are unnecessarily exposed to inferior experimental therapies. In some cases they are also exposed to control arm therapies already strongly suspected to be substantially inferior to the experimental therapy.
Another problem with the empirical approach is that it yields statistical information about how large populations with the same or similar conditions are likely to respond to a treatment. But doctors don’t treat populations; they treat individual patients. Doctors need information about the characteristics that predict which patients are more likely to respond well, or to suffer certain side effects. The empirical approach doesn’t tell doctors how to personalize their care to their individual patients.
As a result, doctors are forced to take a highly empiric approach to their medical practice. After all, a large part of the practice of medicine can only be as smart and as finely tuned to patients’ needs as the information generated during drug development and post-market drug research. Doctors prescribe treatments knowing full well that only a certain percentage of their patients will receive a benefit from any given medicine. The information they have only tells them how populations respond to a treatment. Often, when a patient shows no signs of benefit from a particular drug, doctors will assume that the patient fell into the percentage that doesn’t benefit, and will switch to another treatment.
Getting back to the case of a highly empirical trial of a new cancer drug: when a placebo-controlled, blinded trial demonstrates that a new cancer drug shrinks tumors well in 20 percent of patients with lung cancer, doctors are often given little information about which 20 percent of their lung cancer patients are going to respond. Establishing that an active drug treats a cancer better than a placebo pill, as our highly empiric approach does, often doesn’t maximize our learning about a new medicine, and it surely doesn’t help the patient who gets the sugar pill. If there are better, scientifically rigorous alternatives to these kinds of trials, we are obligated by science and by ethics to try and pursue them.
There are potentially better alternatives: enabling more trials to be adapted based on knowledge about gene and protein markers, or patient characteristics, that can help predict whether patients will respond well to a new medicine.
These new approaches to clinical trials can result in trial designs that tell us more about safety and benefits of drugs, in potentially shorter time frames, exposing fewer people to experimental treatments, and resulting in clinical trials that may not only be more efficient but are more attractive to patients and their physicians to enroll in. FDA is taking some new steps right now to help facilitate the continued development of these newer, more adaptive clinical trials, and the opportunity couldn’t be riper.
If you go back 40 years, we seldom had a good understanding of drug mechanisms. Today that kind of knowledge is commonplace and even a pre-requisite for funding a new molecule or advancing a drug into development. Technology is the great enabler of process change. This growing mastery of the molecular basis of disease and of how drugs intervene on biological processes is allowing us to approach how we test molecules in new ways. In this case, these process changes are the ability to use more adaptive sampling designs, including response-adaptive designs for statistical experiments, where the accruing data from experiments -- the observations -- are used to adjust the experiment as it is being run. Advances in computational techniques and power are also making possible the more appropriate and rigorous application of these methods.
Typically, decisions such as how to sample during an experiment are made and fixed in advance. In a classical clinical trial, patients are allocated to one of two different treatment options with half being assigned to each therapy. At the end of the experiment a decision is made as to which treatment is more effective.
In contrast, in an adaptive clinical trial, patient outcomes can be used as they become available to adjust the allocation of future patients or some other aspect of the study design. This allows researchers to improve expected patient outcomes during the experiment, while still being able to reach good statistical decisions in a timely fashion.
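As a rough sketch of what "using outcomes to adjust allocation" can mean in practice, consider the classic randomized play-the-winner rule, one well-studied response-adaptive scheme (the function and parameters below are my own illustration, not any design endorsed in these remarks). Each success on an arm adds weight to that arm in an urn, so later patients are more likely to be assigned to the treatment that is performing better:

```python
import random

def randomized_play_the_winner(p_true, n_patients, seed=0):
    """Simulate a two-arm trial under the randomized play-the-winner rule.

    p_true: dict mapping each of two arm names to its true (unknown to
    the trial) response probability.  Returns arm -> patients allocated.
    """
    rng = random.Random(seed)
    arms = list(p_true)                     # exactly two arms assumed
    urn = {arm: 1 for arm in arms}          # start with one ball per arm
    allocated = {arm: 0 for arm in arms}
    for _ in range(n_patients):
        # Draw an arm with probability proportional to its balls in the urn.
        total = sum(urn.values())
        r = rng.uniform(0, total)
        arm = arms[0] if r < urn[arms[0]] else arms[1]
        allocated[arm] += 1
        responded = rng.random() < p_true[arm]
        if responded:
            urn[arm] += 1                   # success: reinforce this arm
        else:
            other = arms[1] if arm == arms[0] else arms[0]
            urn[other] += 1                 # failure: shift weight away
    return allocated
```

In a simulation where arm A truly responds in 70 percent of patients and arm B in 30 percent, the urn drifts so that most patients end up on the better arm, which is exactly the ethical advantage the adaptive approach promises.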
Adaptive procedures can offer significant ethical and cost advantages over standard fixed procedures. One setting in which adaptive procedures have become particularly relevant is AIDS clinical trials, where interim analyses have been common: many new drugs are coming to market, yet classical randomized studies may pose ethical dilemmas.
One well-known form of adaptive trial design allows a scientifically predetermined outcome to be measured and allows randomization to be allocated proportionally toward patient populations that are enriched for the characteristics likely to predict a positive outcome. In the case of cancer, for example, this might involve a type of tumor or a specific tumor marker.
What does this mean for a patient with cancer? The benefits could be that fewer patients are exposed to the less effective therapy. Presumably more safety information can be collected from the more effective therapy. It may also require that fewer patients be studied overall before determining the statistical and clinical significance of a therapy.
It could also mean that products make it to the market more efficiently, and more people benefit from effective therapies earlier than today. It could mean that at the end of the trial, doctors have more information to guide treatments to those patients more likely to experience a benefit.
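A back-of-the-envelope calculation (my own illustration, not part of these remarks) shows why marker-based enrichment can shrink trials so dramatically. If only marker-positive patients benefit, an unselected trial sees the average treatment effect diluted by the marker's prevalence, and required sample size scales roughly with one over the squared effect:

```python
def relative_trial_size(marker_prevalence):
    """Ratio of patients needed by an unselected trial versus a trial
    enriched to marker-positive patients only, assuming only
    marker-positive patients benefit.  The unselected trial sees the
    effect diluted by the prevalence, and since required sample size
    scales roughly with 1/effect^2, the dilution is squared."""
    diluted_effect_fraction = marker_prevalence
    return (1.0 / diluted_effect_fraction) ** 2
```

Under these simplifying assumptions, if the predictive marker is present in half of patients, the unselected trial needs about four times as many patients; if it is present in one patient in five, about twenty-five times as many.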
A second type of adaptive trial design involves ongoing assessment of the sample size, to avoid under- or over-allotment of patients. For example, if the statistical power of a trial is based upon a particular variable and an estimate of its variance, it is easy to see how an increase or decrease in the variability of the sample could affect the power. By continuously monitoring such a critical factor, it is possible to adjust the sample size of a trial for the power that is desired.
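To make the dependence on variance concrete, here is the standard normal-approximation sample-size formula for comparing two means, written as a small function (my own sketch; the names and numbers are illustrative). Re-running it with an interim estimate of the standard deviation is the essence of sample-size re-estimation:

```python
import math
from statistics import NormalDist

def per_arm_n(sigma, delta, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-arm comparison of means, using the
    standard normal-approximation formula
        n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2,
    where sigma is the outcome's standard deviation and delta is the
    clinically meaningful difference the trial must detect."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return math.ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)
```

Planned with a standard deviation of 10 and a target difference of 5, the trial needs 63 patients per arm; if an interim look suggests the true standard deviation is closer to 14, maintaining the same power requires 124 per arm. Monitoring the variance lets the trial adjust upward mid-course rather than discover at the end that it was underpowered.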
Clinical trials often take years to recruit. Adequate follow up of patients can take additional years. Even the best knowledge from a carefully planned phase II program can leave uncertainty at the beginning of phase III concerning important aspects of design or analysis.
There is much interest, therefore, in being able to carry out interim assessments of long-running trials, to ensure that the design is still appropriate to the trial's needs and that accumulating safety and efficacy data do not indicate that the trial should be modified or even stopped. This involves much more than the now-traditional concept of predetermined stopping point criteria.
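Even the traditional predetermined stopping rules illustrate why interim looks must be handled carefully: each look spends some of the trial's type I error. A classical example is Pocock's group-sequential design, which uses the same inflated critical value at every look so the overall error rate stays at the nominal level (this sketch and its function name are my own illustration; the boundary constants are the classical tabulated two-sided values for alpha = 0.05):

```python
def pocock_stop(z_interim, n_looks=2):
    """Check an interim z-statistic against a Pocock-style boundary.

    Pocock's design applies one constant critical value at every equally
    spaced look, chosen so the overall two-sided type I error stays at
    0.05.  With a single look this reduces to the familiar 1.96; with
    more looks the bar at each look is higher.
    """
    boundaries = {1: 1.960, 2: 2.178, 3: 2.289}  # two-sided, alpha = 0.05
    return abs(z_interim) > boundaries[n_looks]
```

The point of the comparison: an interim z of 2.0 would clear a single fixed look but not a two-look Pocock boundary, which is precisely the kind of multiplicity accounting that any richer adaptive modification must also get right.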
Adaptive designs could have other benefits. They can help us fill in the frustrating white spaces between phases, enabling seamless designs that allow learning to be more iterative and less method-limited, and that allow continuous discovery defined not by phases but by what we learn as we go. An adaptive design can also be more effective than standard designs at identifying 'the right dose', and it usually identifies the right dose with a smaller sample size. Another advantage is that many more doses can be considered in an adaptive design, even though some may be little used or even never used.
Clearly, adaptive trials designs are especially appealing when a credible response can be observed at an early stage. This is especially true of diseases such as cancer in which there are both a burgeoning number of ways to assess early response and a growing number of ways to assess tumor genomic or proteomic features that might identify subsets of patients who may be more likely to respond. These biomarkers could allow the study to be "enriched" with such patients or allow the primary endpoint to become the effect in those patients.
Even if responses cannot be predicted by such genomic or proteomic markers, an early screen for responses may allow patients with or without responses to be separately randomized, with the possibility that the responder subset could be shown to benefit in a much smaller study than could an unselected population. But adaptive approaches are not a panacea to all of our challenges, and enabling them is not a sure thing. Adaptive procedures are more complicated to design and to analyze, and in some settings are more difficult to implement. Sponsors need consensus and clarification on pivotal scientific questions related to when adaptive trial design is most appropriate. This is where I think we need to work together.
Currently, the result of this uncertainty is trepidation about the use of adaptive features and reluctance to consider a variety of enrichment and adaptive designs. In many cases, researchers are still unaware of the option to use adaptive designs because standard statistical courses and packages do not include them, or FDA has not laid out clear guidelines on how to use them. Another reason for the trepidation may be that product developers believe that FDA has not yet demonstrated its own readiness for adaptive trial designs, endpoints and approval criteria based on data from adaptive trials.
It's true that the flexibility of these approaches can lead to complicated trial decisions and uncertainty about the best approach for data analysis. It may also be true that making many decisions during a trial's course can increase the rate of making an erroneous decision. But the advantages of these approaches, rigorously designed, are becoming more evident, including among the ranks of our experts at FDA. It’s essential that we at the FDA do all we can to facilitate their appropriate use in modern drug development.
To encourage the use of these newer trial methodologies, FDA leadership, including Drs. Doug Throckmorton, Bob Temple, Shirley Murphy, ShaAvhree Buckman, Bob O’Neill, Bob Powell, and many others inside FDA’s drug center, are working on a series of guidance documents – up to five in all – that will help articulate the pathway for developing adaptive approaches to clinical trials.
The guidance documents we are developing include one to help guide sponsors on how to look at multiple endpoints in the same trial. This guidance document is currently being drafted and we hope to be able to discuss that work as soon as January. Another guidance document that we are also working on now deals with enrichment designs, designs that can help increase the power of a trial to detect a treatment effect, potentially with fewer subjects.
Among the other three guidance documents, which will take a little longer to draft, are one on non-inferiority trial design, one that deals with adaptive designs, and one on how to deal with missing clinical trial data. For this latter effort, we have been collaborating with a working group established by the National Academy of Sciences to better develop these scientific approaches to dealing with missing data through adaptive designs.
FDA will also be participating in a two-day workshop this November to discuss the many issues around adaptive designs. One question regarding the use of adaptive design strategies is how best to empower the data safety monitoring board, or some other expert panel, to take an interim look at results and make decisions about how best to adapt the trial.
Companies are also beginning to blend the concept of adaptive design with the goals of Phase 2b (dose-ranging in patients) and Phase 3 confirmatory trials, into what are being called 'seamless adaptive' trial designs. This type of trial aims to be even more time-efficient, but may carry enhanced risk for drugs where less prior knowledge exists (e.g., first in class).
We hope to have additional public meetings to help us with the development of these guidances. Incorporating these guidances into the practice of drug development will require flexibility from FDA and from sponsors, as well as additional scientific work on the part of the larger community.
Finally, inside FDA, we have also established special teams to provide consults to divisions contemplating an adaptive trial design. We are also looking at the previous work we have done in earlier drug development programs. For instance, we are working on compiling examples inside FDA of where adaptive approaches have been used in the design of existing or previous clinical trials—where they have been useful and where there have been challenges.
This review will help us better centralize knowledge about when these approaches make the most sense, and what models have succeeded in the past.
The good news is we are already seeing a lot of interest in adaptive approaches, especially in early-stage clinical trials. While we don’t have accumulated statistics, it is also clear that adaptive designs are becoming a more common element in Special Protocol Assessments, which are agreements reached between the FDA and sponsors early in the clinical development process that lay out the likely requirements for later stages of development.
The bottom line is this: We are open to new scientific advances in clinical trial design that enable us to learn more about how to safely guide clinical decisions. Clearly we will need help from the broader community, including many of you here today. We also need help from patients, who have grappled with the uncertainty of the empirical model and have grappled with the uncertainty of a healthcare delivery system that does not have the right information to guide treatment more effectively based on the characteristics of their individual diseases.
We need help from product developers, who in many cases will need to take the chance on doing more work up front in order to develop trial designs that could pay off in the end by yielding more useful information to guide clinical decisions.
Finally, these goals will also require more scientific work on the part of all of us to make sure these models are fully qualified. The re-authorization of the Prescription Drug User Fee Act may provide an important vehicle for enabling the resources and management focus needed to continue to develop the infrastructure necessary to enable the adoption of more qualified adaptive approaches.
The goal of all of these efforts is to allow sponsors to develop more approaches to trial design that utilize information on mechanistic properties of patient response to medicines, where new medicines are tested on patients based on characteristics that are likely to heighten good responses and limit bad ones.
If we are successful in better adapting scientifically rigorous information that helps predict response into the clinical trials themselves, the results will be clinical development programs that tell doctors a lot more about which patients are likely to benefit from a new medicine. It will enable the day when we are able to reliably deliver the right drug in the right dose to the right patient.
Indications could be defined more by what we know about how a drug is targeted against a specific disease rather than narrow approval endpoints that sponsors sometimes pursue to streamline their way through FDA’s statistical hurdles -- endpoints that sometimes are not tied closely enough to how the drug actually works or how it is likely to be used.
We have learned a great deal about the mechanistic basis of disease, and more about how medicines interact in disease processes, and leveraging this kind of information into medical decision making is not just a goal, but a necessity.
Ultimately, technology enables advances in trial design, but it is the creativity of people that really moves things forward. And so we are going to need your help to better develop these scientific approaches, and look forward to pursuing academic partnerships that can help advance the science of clinical trial design.
By working together, I am confident that we can develop better science about how to test new drugs, and in turn, better science on how to prescribe them.