Harnessing the Curative Potential of Genomic Technologies
September 27, 2018
Speech by Scott Gottlieb, M.D.
The Cost of a Cure: Creating Sustainable Solutions for Gene and Cell Therapies
Symposium at the Leonard Davis Institute, University of Pennsylvania
(Remarks as prepared for delivery)
A decade ago, people lamented the idea that we weren’t seeing a public health payoff from our investment in genomics and proteomics and the mapping of the human genome.
Now the common lament is that the targeted treatments that can alter the underlying course of intractable illness are coming so fast that it’s overwhelming our payment and delivery systems. And yet we’re just in the early innings of this revolution.
A lot has changed. But in most cases, what hasn’t changed nearly as fast is our system for generating the clinical evidence needed to evaluate these novel technologies, or the payment system into which these therapies are being delivered.
We’re compounding these challenges by stacking the uncertainties of very novel discovery on top of the uncertain development paths that can accompany very new science, and on top of payment approaches that aren’t yet suited to these technologies. We need to focus on how we can innovate in all three of these endeavors to make the underlying scientific opportunities more accessible to patients.
These issues are some of the biggest potential obstacles to the widespread deployment of genomic medicine.
Efficient product development through innovative clinical trial designs in both the pre- and postmarket setting can lower the cost of bringing new products to market.
They can unlock opportunities to make more effective use of proteomic and genomic technologies. And they can do all these things -- and more -- while strengthening the FDA’s gold standard for demonstrating safety and efficacy.
Increased efficiency in how we develop new medicines can also make it more economical for companies to launch products that compete with an entrenched competitor, and enable manufacturers to offer new drugs at a lower price point.
We’ll be publishing data soon showing that, in these targeted, rare disease drug categories, it’s taking much longer for the second or third innovation to reach the market. That means first to market drugs are enjoying monopolies for much longer periods of time, without the price pressure that comes from competition. And in more and more cases, competition isn’t coming to the market at all. This is keeping costs to patients high.
In some of these areas of significant unmet medical need, especially in inherited disorders, we see companies pull out of development programs if they think they’ll be third or even second to market. Once the prevalence population is captured by a first to market drug – these are patients who already are diagnosed with a disease – then the economic returns from being a second to market drug that competes to treat the incidence population – those who will be newly diagnosed in the future – may not be enough to support the costs of developing that second to market medicine in the first place. This can hurt patient care. It means there’s less therapeutic variety. But it also means prices stay higher much longer, since competition emerges much more slowly, if it enters the market at all.
It’s making access more difficult. If we can make the development of these drugs more efficient, especially when it comes to follow-on innovation, it can increase price competition without reducing incentives to innovate.
Payment schemes also need to change to provide proper incentives for these technologies, while making sure that these new medical advances remain accessible to patients.
Innovative payment vehicles for curative therapies can accelerate patient access, and lower long-term societal costs.
When the kinds of technologies that can enable substantial improvements in care are not aligned with the approaches we use to develop medicines, and the systems to pay for them, then the pace of innovation can suffer. And opportunities can go unrealized. We need to make sure that the framework we use to develop drugs is efficient, and that we’re not remaining wed to outdated methods that may not be the most effective ways of developing the new generations of drugs that we’re seeing. How we develop drugs can also impact how much they cost.
Now I know drugs aren’t priced to the cost of developing a new medicine. Any good economist in this room will affirm this basic principle. Drugs are priced to some measure of what the market will bear, and to the value that a new medicine delivers.
But development costs aren’t completely divorced from these economics. These costs have a direct impact on the amount of competition in the market. That’s because the willingness of entrepreneurs to invest in developing a new drug is directly related to the risk and expense of these endeavors.
Investors need to clear a “hurdle rate,” where the cost of developing a new drug can be justified by the expected return. And that return is based on prevalence and pricing models – how many people will eventually benefit from a new drug you’re trying to develop, and how much you can charge for it.
The risk-adjusted return needs to exceed the expected costs by a margin that’s as good as or better than the next best opportunity for that capital. And those costs aren’t just the direct costs of development – the money you’ll need to lay out to fund the research and the clinical trials. They’re most heavily influenced by the time it takes to undertake these endeavors – the time cost of capital – and the uncertainty of developing a new medicine. The time cost of capital, and the risk of these efforts, make these endeavors much costlier than developing other kinds of technology.
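To make that arithmetic concrete, here is a minimal sketch of a risk-adjusted net present value calculation. All figures are hypothetical, invented for illustration – they are not actual development costs or returns – but the structure shows why development time and probability of success dominate the expected return:

```python
# Hypothetical illustration of a risk-adjusted "hurdle rate" calculation.
# Every number below is invented for the example.

def risk_adjusted_npv(dev_cost_per_year, dev_years, p_success,
                      annual_revenue, revenue_years, discount_rate):
    """Expected net present value of a drug development program."""
    # Development outlays are spent whether or not the drug succeeds.
    pv_costs = sum(dev_cost_per_year / (1 + discount_rate) ** t
                   for t in range(dev_years))
    # Revenues are earned only on success, and only after development ends.
    pv_revenue = sum(annual_revenue / (1 + discount_rate) ** (dev_years + t)
                     for t in range(revenue_years))
    return p_success * pv_revenue - pv_costs

# Same hypothetical program, three years shorter: the NPV improves, because
# costs are discounted less and revenues arrive sooner.
slow = risk_adjusted_npv(100, 8, 0.1, 500, 10, 0.12)
fast = risk_adjusted_npv(100, 5, 0.1, 500, 10, 0.12)
print(f"8-year program NPV: {slow:.0f}, 5-year program NPV: {fast:.0f}")
```

Shortening development time – the lever that clearer, more efficient regulatory policies can pull – raises the expected return without changing the drug’s price at all.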
Early stage drug development is one of the most high-risk endeavors, marked by lots of dead ends.
Yet the risk and time are also the factors that we can most easily influence if we have clear, modern, efficient policies to govern how we develop new innovations.
We already know that clinical trials are becoming more costly and complex to administer. A 2016 study from the Tufts Center for the Study of Drug Development, sponsored by Medidata, found that biopharmaceutical companies spend an estimated $4 to $6 billion annually in direct costs for non-core procedures. This consumes about 20 percent of trial budgets. And it imposes significant administrative burdens on patients and clinical trial site personnel. Rising trial costs and complexity undoubtedly impact market competition and drug pricing. They can also be a significant barrier to getting timely competition to newly approved branded innovator drugs. But we also know that there are a lot of chances to use modern science to make this process more efficient.
These are the opportunities that I want to focus on today. How we can adopt modern approaches to designing and conducting clinical trials – and use modern tools to continue to learn about products after they reach the market.
We want to make sure the premarket process is more efficient. And we want to ensure that the postmarket process is more informative, so we can continue to learn how to best target new treatments and inform ourselves about their safety.
And once drugs reach the market, we want to advance more modern ways for paying for the resulting innovations; to make sure new treatments remain affordable and accessible, and that we’re capturing the value they deliver to patients.
First, when it comes to modernizing how we develop new drugs, and structure clinical trials, the FDA is working across its medical product centers to facilitate innovative trial designs that can make trials not only more efficient and less costly, but also more rigorous. Today, we’re releasing two important, new draft guidance documents that’ll advance these policy goals.
The first guidance is titled “Master Protocols: Efficient Clinical Trial Design Strategies to Expedite Development of Oncology Drugs and Biologics.” The second is titled “Adaptive Designs for Clinical Trials of Drugs and Biologics.”
We believe these new approaches have the potential to achieve a lot of our goals: making drug development more efficient, lower cost, and much more effective. And we believe these approaches will become increasingly routine.
When it comes to these methods for modernizing the design of clinical trials, a lot of the early focus has been in using these practices in oncology trials. What I’m saying today is that there’s a concerted effort underway to make sure these same scientific principles can be applied consistently in other therapeutic areas. Guidance is one part of that process. So are the steps we’re taking to modernize the overall structure of the drug review process as part of the Office of New Drugs reform. A key part of that reform is to make it more efficient for effective approaches and principles adopted in one clinical setting to be more widely adopted across different therapeutic areas.
Master protocols can sharply increase the efficiency of clinical trials by creating stable, flexible infrastructure that can last for decades. Think of it like a hub-and-spoke airport serving multiple airlines and destinations. This comes from using a common screening platform for biomarker identification; common review structures, like a centralized Institutional Review Board; and common trial processes, like a shared comparator arm or historical control.
Master protocols can take several forms. They can include umbrella trials – an approach where you’re testing multiple targeted products against a single disease in biomarker-selected populations. Or they can take the form of basket trials, where you test a single targeted therapy against multiple diseases that share a common biomarker.
Take chimeric antigen receptor therapy (CAR-T) as one example where these approaches can be effectively applied.
When CAR-T therapies are specific to a cancer antigen, like CD19 in blood cancers, a basket trial could allow multiple rare B-cell malignancies to be tested using a single CAR-T therapy. This enables sponsors to develop evidence that supports approval of the therapy against multiple malignancies in a single trial.
The draft guidance we’re releasing today on master protocols gives recommendations to sponsors who are developing drugs or biologics for the treatment of cancer. We address the design and conduct of clinical trials intended to evaluate more than one investigational drug at the same time, or more than one cancer type or both within the same overall trial structure – the same master protocol. The draft guidance addresses adult and pediatric cancers.
This draft guidance also describes some related considerations. These include the co-development of biomarkers, and considerations on how the information gleaned from these approaches should be evaluated. The new guidance document also provides advice on how sponsors should format and submit the data to FDA and how they can interact with the agency to enable an efficient review.
The second draft guidance we’re releasing today addresses adaptive trial designs. Like master protocols, adaptive trial designs can enable more efficient development programs to make it easier to learn more about genetically targeted drugs.
We know much more about the biological rationale for targeted medicines that are being put into development today. And we know much more about how new drugs intervene in disease. These advances lend themselves to trial designs that allow us to fashion studies in ways that we can test many hypotheses at the same time. As we know more about drugs being developed, we should be able to use clinical trials to learn more about how they can impact disease.
So, the new draft guidance provides recommendations to sponsors on the use of adaptive designs for clinical trials. It provides some key considerations for designing, conducting, and reporting the results of an adaptive trial. It also advises sponsors on the types of information that the FDA anticipates needing to evaluate the results of clinical trials that use adaptive designs, including Bayesian adaptive and complex trials that rely on computer simulations for their design.
With adaptive approaches, as the trial evolves, and data accrue, the adaptive design changes based on what’s learned about the safety and benefits of the experimental drug. The accumulating information from the clinical trial is used to modify the ongoing trial. This design has lots of potential advantages.
It can let investigators improve the study power, and reduce the sample size and total cost. It can let them treat more patients with more effective treatments, correctly identify efficacious drugs for specific subgroups of patients based on their biomarker profiles, and shorten the time for drug development.
These approaches can make it much more efficient to develop targeted drugs aimed at rare conditions. They can offer a greater chance of detecting a drug’s treatment effect in cases where you have a smaller sample size. They can also offer ethical advantages, like giving researchers the chance to stop a trial early if the data isn’t consistent with a new drug’s expected efficacy, or if there’s convincing evidence of effectiveness.
An adaptive design may also be more attractive to stakeholders than a non-adaptive design because of the added flexibility. Sponsors may be more willing to commit to a trial that allows planned design modifications based on accumulating information. And patients may be more willing to enroll in trials that use adaptive randomization based on response, because these trials can increase the probability that patients will be assigned to the most effective treatment.
Adaptive designs may also be helpful when evaluating interventions for small patient populations. They can allow researchers to leverage additional data sources when randomization may not be practical or ethical; or when strong and reliable prior information is available.
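One way to picture response-adaptive randomization is with a small simulation. This is a toy Thompson-sampling scheme with invented response rates – not any specific trial’s algorithm – in which the probability of assigning the next patient to an arm grows with the accumulating evidence that the arm works:

```python
import random

# Toy response-adaptive randomization via Thompson sampling.
# The arms' true response rates are invented for illustration only.

def run_trial(true_rates, n_patients, seed=0):
    rng = random.Random(seed)
    # Beta(1, 1) prior on each arm's response probability
    successes = [1] * len(true_rates)
    failures = [1] * len(true_rates)
    assignments = [0] * len(true_rates)
    for _ in range(n_patients):
        # Sample a plausible response rate for each arm from its posterior,
        # then assign the patient to the arm with the highest draw.
        draws = [rng.betavariate(successes[a], failures[a])
                 for a in range(len(true_rates))]
        arm = draws.index(max(draws))
        assignments[arm] += 1
        if rng.random() < true_rates[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return assignments

# As data accrue, more patients are steered toward the better-performing arm.
counts = run_trial(true_rates=[0.2, 0.6], n_patients=300)
print(counts)
```

The ethical appeal is visible in the assignment counts: the design learns as it goes, so a shrinking share of patients is randomized to the weaker arm.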
Some of the most innovative trials we’re seeing today combine aspects of master protocols and adaptive designs. These include I-SPY 2 for breast cancer, Lung-MAP for non-small cell lung cancer, and DIAN-TU for Alzheimer’s.
Although these trials can require more planning to set up and launch, they can be far more cost and time efficient than traditional one-drug, one-trial designs. And these approaches – which were once used only in cancer – are now being effectively adapted to many different therapeutic areas.
We have an obligation to patients to make sure trials are efficient and effective. When patients commit to a clinical trial, they assume that their contributions may enable better outcomes for themselves, but also for the patients that follow them. Inefficient, overly complex trials delay our ability to learn from clinical research. Outdated methods can force patients to wait longer for access to potentially better treatment options.
The impetus to make medical product development more efficient doesn’t stop at the time of approval. As we try to make breakthrough drugs available to patients sooner, we need to make sure we have good tools and systems for continuing to learn about risks and benefits after drugs reach the market.
This is the second area I want to briefly discuss: Innovation in the collection of data, and development of postmarket evidence. Here too, new methods and new technologies are making these tasks much more effective and efficient. The ability to more efficiently conduct post-approval safety studies -- or track a product’s safety profile through real world data compiled from insurance claims or patient registries -- can all help provide added assurance to patients, providers, and payors that the benefits of a new product continue to outweigh risks.
I don’t have a lot of time to get into the details of these efforts today. There’s a lot that we’re already doing, and more new efforts that we’ll announce soon. Our tools for capturing reliable data in the postmarket setting are improving. We need to continue to invest in new approaches that’ll make this information more reliable, more robust, and more useful.
I’ve talked about our need to advance how we gather data and develop drugs to accommodate the new technologies that we’re seeing. Technologies that have a higher potential to intervene and arrest the underlying triggers of disease, and perhaps even cure intractable illness.
We’re starting to see the fruits of this science. And it’s starting to have an impact on trends in survival when it comes to diseases like cancer.
But as we advance our approach to developing these drugs, we also need to consider how we’re going to pay for them. These innovations are coming at a high cost. And they’re challenging our conventional model for how we pay for healthcare.
In the past, medicines treated the symptoms of disease, or reduced their ongoing impact. Having a disease often obligated a person to a lifetime of chronic medicines. In a way, developing a medicine was a financial annuity to the entrepreneur. They could recoup the upfront costs of the innovation over the course of many years, and even the lifetime of a patient.
Now, when drugs exist that can be administered in just a few doses, and can reverse or even cure an illness, that payment model doesn’t apply. The challenging question is how we adapt our system to help finance these new opportunities.
Curative technologies can be inherently cost effective if they reduce the burden of mortality, disability, and overall treatment costs facing patients and payors. But they’re also costly under a pay-as-you-go health care financing scheme.
Collapsing even a fraction of decades worth of health care spending for a chronic illness into just a few years -- or even just a few doses of a drug -- can produce extraordinary up-front budget pressures on payors. This is especially true for states that must balance their annual budgets. It’s also true for patients who are facing rising co-pays and high deductibles.
The price for curative therapies can appear especially daunting. And so, without innovation in financial engineering and financial arrangements to overcome the chasm between current patient need and available cash flow, the U.S. will not be able to reap the full benefits of genomic technologies.
The problem is well understood. A clear majority of Americans don’t buy houses with one lump sum payment. Parents don’t finance four years of college with a single check. Companies use a variety of equity or bond offerings to finance large capital investments. And insurers use reinsurance to cover unexpected, high cost events above a given threshold.
We need a similar type of financial approach for curative technologies. Patients, manufacturers, and payors can all benefit from creating financing models for restorative technologies.
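The mortgage analogy can be made concrete with the standard level-payment amortization formula. The figures here are invented for illustration – a hypothetical $1 million one-time therapy spread over five years – not an actual price or financing arrangement:

```python
# Standard level-payment amortization formula, applied (with invented
# numbers) to spreading a one-time therapy cost over annual installments.

def annual_payment(principal, annual_rate, years):
    """Level payment that retires `principal` over `years` at `annual_rate`."""
    if annual_rate == 0:
        return principal / years
    r = annual_rate
    return principal * r / (1 - (1 + r) ** -years)

# A hypothetical $1M one-time therapy, amortized over 5 years at 4 percent
pay = annual_payment(1_000_000, 0.04, 5)
print(f"Annual payment: ${pay:,.0f}")  # roughly $224,627
```

In an outcomes-based variant, the remaining installments could be reduced or forgiven if the therapy’s response does not prove durable – tying the payment stream to the value actually delivered.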
Many proposals have already been shared for how we’d pursue these approaches. So, I’m not going to repeat all of them.
We’ve heard proposals for loans from manufacturers to providers, for large multi-year 1115 Medicaid waivers, or for licensing agreements that allow insurers and patients to amortize the costs of drugs over several years at low or even zero-dollar co-pays. All these ideas have been proposed. All these approaches have some challenges. And they all have some attractive features. Different arrangements may be needed for different types of technologies, depending on the performance of a specific medical product, or the patients’ medical need.
But what’s undoubtedly true is that public and private payors, and patients, could reap enormous long-term savings by adopting one or more of these approaches. Manufacturers and their investors are also likely to accept a large discounted up-front payment given the likelihood of rapid competition from rival technologies. These arrangements could be coupled with outcomes-based rebates tied to the durability of response.
It seems obvious that the financial arrangements that gave rise to one of the country’s greatest achievements – the development of our biomedical sector – are going to need to evolve. Not only because of the scrutiny on the costs; but also, the nature of the innovative products that are being developed.
For many decades, the model for how the nation financed pharmaceutical innovation allowed entrepreneurs to substantially raise the cost of direct care for a disease with a new drug therapy when those new treatments delivered superior returns that could be captured through reduced morbidity, mortality, and lower long-term healthcare costs.
The system wasn’t set up to let those buying the drugs directly capture those downstream savings. But as a nation, we made a pact that innovators would be able to re-price the direct treatment costs for a debilitating ailment if they could help patients live better lives and achieve truly better outcomes.
That ability to re-price the treatment of a disease is what’s under pressure. We’re reaching a point where payors are rightly pushing back on higher costs, and there are a lot of “good enough” generics. More and more, the aim may be to replace old treatments with new medicines that can deliver superior outcomes at an equivalent or even reduced cost of total care.
That’s been the model for reimbursement in the medical device space for some time. New devices come to market with the promise of reducing the total cost of care, or improving the efficiency of providers, while at the same time delivering superior outcomes for patients. But in this model, we always looked at the total cost of care. In the medical device space, benefits like reduced hospitalizations, faster recovery times, or improved provider efficiency are all factors in how products are priced and reimbursed -- because these benefits are often captured by the hospitals that also pay for the new technology.
This same arrangement is becoming more common when it comes to drugs as well. As the healthcare system consolidates, those buying the drugs are increasingly the same entities paying for the downstream care if the new medicines can reduce long-term morbidity or even cure an intractable ailment. The question is whether the payment and pricing schemes will adapt, and be better integrated, to take advantage of these opportunities.
Now these approaches won’t fit every product. But the benefits associated with having access to curative technologies for even a handful of severely disabling and fatal chronic diseases like Alzheimer’s or Parkinson’s would be transformational – reducing crippling burdens on family caregivers and helping seniors age with dignity, to say nothing of the benefits to Medicare and state Medicaid programs.
We should also expect that as the technology advances, costs will come down. When you look at a lot of the new technologies like gene therapy or CAR-T, the cost of goods isn’t trivial. The cost of manufacturing a single course of CAR-T therapy could easily top $50,000. While gene editing platforms like TALENs and zinc-finger nucleases have been used for years, it was a scant five years ago that researchers successfully demonstrated that CRISPR/Cas9 could be used to enable precise and highly efficient edits in eukaryotic cells. And it won’t be the last version of gene editing we see in coming years. At the same time, the costs of reading, writing, and interpreting the human genome through next generation sequencing, synthetic biology, and advanced analytics are all falling rapidly – in some cases, faster than the pace of Moore’s Law.
We need to take measure of all these opportunities, and advances, and pursue policies that try to leverage the best of the changes that are underway, so we can develop modern models for how we create incentives for improved innovation.
If we try to constrain profits too much for these curative technologies today, because we don’t have modern payment schemes that are suited to these kinds of curative types of treatments, then we run the risk of limiting their application to last-line, salvage therapies -- keeping prices artificially high for longer periods of time. We have a system now where we know we pay too much for a lot of older drugs that aren’t delivering a value that’s equal to their cost. It’s a system where a lot of drugs that are off patent, and have no other blocking exclusivities, still don’t face generic competition. But at the other end of that barbell, there are curative treatments that have the potential to reap significant long-term healthcare savings and to reduce a lot of human suffering. I don’t think the system we have right now is dealing well with either of these challenges.
That’s what we should all be focused on fixing.
To help advance creative payment options, the FDA finalized our Payors Guidance last June, which addressed drug and device manufacturers’ communications with payors. Payors often seek a range of information on the effectiveness, safety, and cost-effectiveness of approved or cleared medical products.
Our final guidance includes recommendations that are designed to enable truthful, non-misleading and appropriate company communications with insurers across a product’s lifecycle to help advance public health benefits. These benefits can include increased cost savings from informed and appropriate coverage and reimbursement decisions. The goal of this information is to also help companies and payors establish pricing structures that benefit patients as well as health plans.
At the same time, we’re also working closely with federal partners, including the Centers for Medicare and Medicaid Services. The aim here is to ensure that coverage decisions don’t lag behind FDA approval or clearance for drugs and devices.
With more attention to how we can modernize our policies and science to address the risk stacking across the continuum from discovery to development to commercialization, we can seize the opportunity to accelerate more advances to patients.
As we look for new ways to harness the full potential of genomic medicines, we’ll continue to work with our public and private sector partners to find new ways to maintain that balance through improved innovation and market competition. We want to ensure efficient product development programs, and at the same time, pursue together better financial vehicles that can help spread the costs of deploying these technologies over longer periods of time while increasing patient access to important medical advances. And we look forward to working with you to advance these public health efforts.
Before I conclude, I’d like to talk briefly about another topic related to efficient innovation – the FDA’s utilization of global clinical trial data.
The FDA is working diligently with our international partners to harmonize regulatory and technical standards related to drug and device review.
While the regulations governing drug approvals can vary across national boundaries, harmonizing scientific and technical standards can lower the costs of drug development by allowing sponsors to use the same trials and assays – for instance, for bioequivalence testing for generic drugs – for regulatory submission to multiple national regulatory agencies.
We’ll be saying more about this shortly, as we discuss how to increase harmonization with our international regulatory partners in the EU, Japan, Canada, and Australia for data submissions and technical standards related to the review of generic drugs.
But I’d also like to make it clear that, when assessing the safety and effectiveness of drugs and devices, the FDA already may, and frequently does, accept data from clinical trials conducted partially, and sometimes entirely, outside of the United States for drugs, biologics, and medical devices, as long as the data is valid and other applicable approval or clearance criteria for U.S. marketing approval are met.
The FDA also ensures that foreign clinical trials are conducted in accordance with good clinical practice, including ethical principles that ensure the protection of human subjects, and that there are no important differences in the studied patient population or clinical practice, as compared to the U.S., that would impact the benefit-risk profile of the product.