CDER investigators are exploring clinical trial designs that allow researchers to make use of Real-World Data in a rigorous and transparent fashion in situations where it may not be possible to conduct a pivotal randomized trial to determine efficacy.
Well-conducted, randomized, controlled clinical trials (RCTs) are the gold standard for evaluating a new drug treatment and are designed to eliminate systematic biases when comparing treatments. However, for rare diseases, or for disease subtypes defined by specific genetic differences or other factors, it may not be feasible to recruit the number of patients needed to conduct trials with adequate statistical power. There may also be scenarios, for example a rapidly progressing form of cancer for which there is promising evidence that a drug candidate could be effective, where it would be unethical to assign patients to no treatment. Even when patients can be recruited, for many rare diseases it may take years for large RCTs to be completed.
Box 1. Definitions of Real-World Data (RWD) and Real-World Evidence (RWE). RWD are data relating to patient health status and/or the delivery of health care that are routinely collected from a variety of sources. These sources may include electronic health records (EHRs), claims or billing activities, medical product or disease registries, patient-generated data (including from in-home-use settings), and health data gathered from other sources such as mobile devices. RWE is clinical evidence about the usage and potential benefits or risks of a medical product derived from analysis of RWD. (See guidance.)
Because of these difficulties and other obstacles to conducting traditional RCTs, there is strong interest in leveraging Real-World Data as evidence to support the evaluation of potential drug treatments. These data might include information on disease outcomes collected from patient medical records during routine care or produced in the course of medical transactions such as prescriptions and insurance claims. Data from already completed trials are also being considered by investigators as information that can inform drug evaluation in a new trial. However, there are important challenges to using clinical data that are external to a randomized trial, most notably the bias introduced when there are differences in patient characteristics between the groups being compared that are not mitigated by random assignment. For example, investigators might seek to infer disease progression from a natural history study and compare this progression to what is observed in a small single-arm study conducted because few patients were available. Yet patients in the new study may differ in important ways from the patients in the natural history study. If one had information on factors such as patient age, disease stage when treatment was initiated, and the presence of co-morbidities, one could adjust for these variables in the analyses, but there may also be many important variables that were not collected (for example, information about the standard of care in a particular institution). Real-World Data can also introduce biases because the data were collected in a different way, for example, using somewhat different tests and standards, or because the data were misclassified as they were transported from one electronic data system to another.
FDA engages in a broad effort to help the research community make use of Real-World Data and Real-World Evidence to support clinical trials, especially in situations where typical trials are not feasible. It recently issued a draft guidance that clarifies the Agency’s expectations concerning clinical studies that use RWD to support the effectiveness and safety of a new drug.
CDER statisticians are actively conducting research to develop innovative trial designs that address critical obstacles to clinical evaluation and to understand how to rigorously and transparently incorporate real world evidence in the clinical evaluation of new drug candidates. Recent examples of this research include the following:
The feasibility of assembling an external control arm from historical data
To provide insight into how developers could use data from previously completed clinical trials to develop a control arm for a single-arm study (for example, for a rare disease), CDER researchers made use of control subjects in several recently completed trials for non-small cell lung cancer (NSCLC). For each of the patients in these trials, they estimated the probability that the patient would be assigned to the control therapy of a specific target trial based on baseline characteristics (for example, age, sex, disease stage, years from diagnosis, and smoking history). This probability, or propensity score, was used as the metric for selecting (using an algorithm called greedy nearest-neighbor matching) a similar control patient from those enrolled in other NSCLC trials for each of the patients on the experimental arm of the target trial. The goal was to create a cohort of patients that were similar to the control patients in the target trial in terms of baseline characteristics that could be associated with disease outcomes. The survival outcomes using this external control arm (Figure 1) were very similar to what had been observed in the target trial, where balance was achieved by random assignment, suggesting that an external control arm may be feasible using previously completed clinical trials if important baseline characteristics were well measured and characterized. The researchers suggested that future research should include determining whether similar results can be obtained in other disease areas, evaluating other methods for balancing patient baseline characteristics, and investigating the properties of hybrid trials that use a combination of randomized patients and external control patients.
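The matching step can be sketched in a few lines. This is a minimal illustration of greedy nearest-neighbor matching on propensity scores, assuming the scores have already been estimated (e.g., by logistic regression on baseline characteristics); all patient IDs, scores, and the caliper value are hypothetical, not data or settings from the study.

```python
# Minimal sketch: greedy 1:1 nearest-neighbor matching on propensity scores.
# All IDs and scores below are synthetic illustrations.

def greedy_nearest_neighbor_match(target_scores, pool_scores, caliper=0.1):
    """For each target-trial patient, pick the closest unused external
    patient by propensity score, skipping matches worse than the caliper."""
    available = dict(pool_scores)  # id -> score, external candidates
    matches = {}
    # Matching order can affect results; here we match targets in sequence.
    for tid, tscore in target_scores.items():
        if not available:
            break
        best_id = min(available, key=lambda pid: abs(available[pid] - tscore))
        if abs(available[best_id] - tscore) <= caliper:
            matches[tid] = best_id
            del available[best_id]  # greedy: each external patient used once
    return matches

# Hypothetical propensity scores (probability of assignment to control).
target = {"t1": 0.42, "t2": 0.65, "t3": 0.30}
pool = {"p1": 0.40, "p2": 0.70, "p3": 0.31, "p4": 0.90}

matched = greedy_nearest_neighbor_match(target, pool)
print(matched)  # each target patient paired with its nearest external control
```

In practice the caliper (the maximum tolerated score difference) and the matching ratio are design choices that affect how closely the external arm resembles the target-trial controls.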
Designing a trial for an ultra-rare and rapidly progressing disease
CDER statisticians considered the scenario of a rare, rapidly progressing disease for which there is no available effective therapy and for which promising data suggest that a drug treatment could be effective. They described how real-world evidence can be used to set, in advance of a single-arm study, a reasonable performance goal (e.g., in terms of the percent survival of patients treated with the drug), and how, using a Bayesian approach, a prior probability distribution for the survival rate (either a noninformative prior or an informative distribution based on real-world evidence) can be constructed and updated based on the results of the new trial. With the help of computer simulations, they illustrated how the trial success rate depends on the assumed survival rate and the number of patients enrolled in the single-arm study.
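Because survival past a milestone in a single-arm trial can be treated as a binomial outcome, the Bayesian updating described above has a simple conjugate form. The sketch below updates a Beta prior on the survival rate with hypothetical trial counts; the specific priors and counts are illustrative assumptions, not those used by the CDER statisticians.

```python
# Sketch: Bayesian update of a survival rate in a single-arm trial.
# A Beta prior (noninformative, or informative from real-world evidence)
# is updated with the number of survivors observed in the new trial.

def update_beta(prior_a, prior_b, survivors, n):
    """Conjugate Beta-Binomial update: returns the posterior parameters."""
    return prior_a + survivors, prior_b + (n - survivors)

def beta_mean(a, b):
    return a / (a + b)

# Noninformative prior: Beta(1, 1), i.e., uniform on [0, 1].
a1, b1 = update_beta(1, 1, survivors=14, n=20)
print(beta_mean(a1, b1))  # posterior mean survival rate: 15/22, about 0.682

# Informative prior built from real-world evidence, e.g. a natural history
# study suggesting ~40% survival (Beta(8, 12) has mean 0.40).
a2, b2 = update_beta(8, 12, survivors=14, n=20)
print(beta_mean(a2, b2))  # the prior pulls the estimate toward 0.40: 22/40 = 0.55
```

A trial could then be declared successful if, say, the posterior probability that the survival rate exceeds the prespecified performance goal crosses a preset threshold.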
Dose finding in a small patient population
CDER investigators demonstrated how to design a dose-finding trial in a pediatric population to maximize efficiency and potentially minimize the assignment of pediatric patients to doses that are not effective. (Enrolling enough pediatric patients to adequately power clinical studies can be difficult due to factors such as low incidence of disease, resistance to enrolling pediatric patients in trials, and the availability of treatment outside of a clinical trial.) Prior information is used to construct models of successful response to treatment among the control group and among patients receiving different doses of the drug, with the option of basing the parameters of these models on real-world evidence (for example, from adult trials of the same drug). In an approach called Bayesian adaptive randomization, only small numbers of patients are initially allocated to the control group and to each of the different doses. The patient responses are used to update the probability that a given dose is efficacious using a computational approach that relies on random sampling (Markov chain Monte Carlo sampling). In subsequent assignments, more patients are assigned to the doses found more likely to be effective. The trial stops when a predefined threshold for success or futility is reached or when all patients have been assigned to a treatment.
Simulations conducted by the researchers under various scenarios for the dose-response relationship demonstrated that, when reliable real-world evidence on dose response is available, the Bayesian adaptive randomization approach is much more efficient at arriving at the best effective dose than a conventional dose-finding trial.
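A stripped-down version of the allocation logic can illustrate the idea. The sketch below uses simple Beta posteriors and Monte Carlo draws (in place of the full MCMC response models in the published design) to estimate the probability that each arm has the highest response rate, which then drives how the next patients are allocated; all response counts are hypothetical.

```python
import random

# Sketch of Bayesian adaptive randomization for dose finding.
# Each arm's response rate gets a Beta posterior; Monte Carlo draws from
# the posteriors estimate the probability each arm is best, and subsequent
# patients are allocated in proportion to those probabilities.
# Counts are hypothetical; the published design uses MCMC with response
# models that may borrow strength from adult (real-world) data.

def allocation_probs(arms, n_draws=10_000, rng=None):
    """arms: {name: (responders, non_responders)} observed so far.
    Returns {name: estimated probability this arm has the highest rate}."""
    rng = rng or random.Random()
    wins = {name: 0 for name in arms}
    for _ in range(n_draws):
        draws = {name: rng.betavariate(1 + r, 1 + nr)  # Beta(1,1) prior
                 for name, (r, nr) in arms.items()}
        wins[max(draws, key=draws.get)] += 1
    return {name: w / n_draws for name, w in wins.items()}

observed = {"control": (2, 8), "low_dose": (4, 6), "high_dose": (7, 3)}
probs = allocation_probs(observed, rng=random.Random(0))
print(probs)  # high_dose should receive the largest share of new patients
```

Arms that perform poorly see their allocation probability shrink toward zero, which is how the design limits the number of children assigned to ineffective doses.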
Addressing the problem of high placebo response rates
In evaluating drugs to treat chronic conditions (e.g., psychiatric diseases), investigators may have to cope with high rates of placebo response as well as difficulties recruiting patients. CDER statisticians explored the usefulness of a “single parallel comparison design” platform trial in which the control arm is shared by two trial sponsors. Patients assigned to placebo who are found to be non-responders are re-randomized at stage 2 to treatment A, treatment B, or placebo, and information from historical controls can be incorporated using meta-analytic priors to add information about the response rate in the new control arm (Figure 2). The treatment effect is calculated as a weighted sum of the effects determined in each of the two trial stages. Simulations conducted by the researchers show that the platform design can increase power even in the absence of historical information, and that further increases in power can be obtained when information from historical trials is used to identify true non-responders.
Figure 2. CDER researchers proposed a two-stage trial design to address the problem of high placebo response in many psychiatric trials, as well as recruitment obstacles.
In this platform trial, sponsors of potential treatments (A and B) pool their placebo control patients. Non-responders are identified based on outcomes in the first stage of the trial as well as historical data used to develop prior distributions for placebo response. The non-responders are randomly assigned to treatment or placebo in stage two. In this design, estimated treatment effect would be a weighted average of results from both stages of the trial.
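The combined estimate described above, a weighted sum of the stage-specific effects, can be illustrated as follows. The weight and effect sizes are hypothetical placeholders; in practice the weighting would be prespecified in the analysis plan.

```python
# Sketch of the combined treatment-effect estimate in the two-stage design:
# a weighted average of the stage-1 effect (all randomized patients) and
# the stage-2 effect (placebo non-responders re-randomized). The weight is
# illustrative; in practice it is prespecified, e.g. based on sample sizes.

def combined_effect(effect_stage1, effect_stage2, weight_stage1):
    """Weighted sum of the two stage-specific treatment effects."""
    return weight_stage1 * effect_stage1 + (1 - weight_stage1) * effect_stage2

# Hypothetical effects (e.g., difference in response rate vs. placebo).
stage1, stage2 = 0.10, 0.25  # stage 2 often larger: non-responders enriched
print(combined_effect(stage1, stage2, weight_stage1=0.6))  # approx. 0.16
```

Enriching stage 2 with identified non-responders raises the apparent drug-placebo separation in that stage, which is where the design gains its power.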
FDA recognizes that designing novel trials and analyzing their data is scientifically challenging, especially for rare diseases and pediatric populations. When conducting a randomized, adequately powered clinical trial is not possible, a single-arm study, or a hybrid study with both a concurrent control and an external control utilizing real-world data, may be an option. However, many challenges regarding comparability between the study data and the real-world data need to be addressed. The agency continues to explore potential uses of RWD and is committed to providing clear and transparent feedback to sponsors on proposed designs and analyses. More detailed information on the proper use of real-world data in clinical trials is provided in published FDA guidance on the use of real-world data from registries to support regulatory decision-making.
How does this research advance drug development? CDER research on innovative trial designs and how they can incorporate real world evidence can help us overcome critical obstacles that arise when traditional randomized studies are not feasible and help to ensure that the evaluation of potential drug treatments in these scenarios is statistically sound and as informative as possible.
1. Yin, X., Mishra-Kalyan, P.S., Sridhara, R., Stewart, M.D., Stuart, E.A., and Davi, R.C., 2022. Exploring the potential of external control arms created from patient level data: a case study in non-small cell lung cancer. Journal of Biopharmaceutical Statistics, pp. 1-15.
2. Jiao, F., Chen, Y.F., Min, M., and Jimenez, S., 2022. Challenges and potential strategies utilizing external data for efficacy evaluation in small-sized clinical trials. Journal of Biopharmaceutical Statistics, pp. 1-13.