
Speech

Remarks by FDA Commissioner Robert M. Califf to the Coalition for Health AI (CHAI)
March 5, 2024

Speech by
Robert M. Califf, M.D., MACC

(Remarks as prepared for delivery)

Good morning.  I’m delighted to be with you today and to support your efforts to work together to ensure the safe, equitable, and effective application of artificial intelligence (AI) to health care.  I want to congratulate Dr. [Brian] Anderson, CHAI’s new CEO, as well as the members of your inaugural Board of Directors.  Your organizational preparation in constructing this group has resulted in the inclusion of key members of the ecosystem: large and small companies, academic and community healthcare institutions, professional societies, and government.  That’s a good start.  But I want to especially congratulate you on including the patient voice, and I’m sure that your next speaker, Ms. Goldsack, will make that point eloquently.  And it’s really nice to see so many old friends joining together in such an important mission.

As you understand, our biomedical science, clinical, and public health enterprises are replete with achievement today, but we are on the verge of revolutionary change. Genetics and genomics and the digitization of information are building blocks upon which much larger advances in biomedicine and health care can occur.  Without a doubt, AI will become a much larger integrating factor in changing the way we think about health and healthcare, the technologies and products we use, and the way we regulate the vast industries that are involved.

AI already has a profound impact on medical product innovation, with the potential to transform how the scientific and commercial communities develop, manufacture, use, and evaluate interventions and how healthcare systems, clinicians, patients, and consumers make critical decisions. This will surely expand in the coming months and years, with even more significant advances in pharmaceutical and device manufacturing as a result of AI, including in process measurement, modeling, control of distribution and monitoring systems, and decision support to inform which products and technologies are used, among other areas.

These changes in our information ecosystem are also profoundly impacting the FDA, as they involve products the Agency is tasked with reviewing for safety and effectiveness.

Eventually our reviewers, compliance assessors and inspectorate, and the industries they regulate, will use AI in everything from data collection to regulatory assessments, coding and postmarket surveillance.  For us to be most effective and do our jobs as protectors of public health, it is essential that we embrace these groundbreaking technologies, not only to keep pace with the industries we regulate, but also to use regulatory oversight to improve the chance that they will be applied effectively, consistently, and fairly.  

We’ve been following the progress of artificial intelligence at the FDA for many years now.  The prescience of Jeff Shuren and our Center for Devices and Radiological Health (CDRH) has been demonstrated by the formation of the Digital Health Center of Excellence.  Well before the Digital Health COE, there was a program in digital health that served as a focal point within the FDA for policies in this area, but the need for an expanded institutional focus quickly became even clearer. The goals of the COE are consistent with the stated mission of CHAI: foster responsible and high-quality digital health innovation and provide new options for facilitating prevention, early diagnosis, and management of chronic conditions outside of traditional care settings.  Establishing the Digital Health COE is part of the FDA’s work to ensure that the most cutting-edge digital health technologies are rapidly developed and reviewed in the U.S. and to provide centralized expertise on digital health technologies and policy for digital health innovators, the public, and FDA staff.

While CDRH and the Digital Health Center of Excellence have a specific responsibility for regulating AI when it is a device or a component of a device, all Centers at the FDA are reviewing applications that involve a component of AI.  Thus, we have begun a cross-center initiative to create an environment in which knowledge and best practices are shared and guidance occurs at the right level—from specific Centers for specific products, but also from across the broader FDA when the guidance addresses principles that apply to multiple types of products across Centers.  We are aware that many people are watching us and hoping we will be able to regulate while also supporting innovation.  My comments today are general, and you can count on a lot of collaboration and outreach as all parts of the FDA work on their specific issues and our common issues at the same time.

AI is but one element of a digital health strategy, and the general concept of using algorithms to inform decisions and to provide processed information is not new.  But, by design, AI is built on a digital infrastructure, and to the extent the digital infrastructure is sound and connected, AI has tremendous capability.  Given the ubiquity of the digital intersection with human existence inside and outside of the traditional healthcare delivery system, it’s clear that FDA’s policies will need to be made with awareness of regulation of AI across the government as reflected in the President’s Executive Order on the Safe, Secure and Trustworthy Development and Use of AI. 

To give you an idea of the impact that AI is already having on medical products across the span of drugs, biologics and devices, consider that the FDA has received over 300 submissions for drugs and biological products with AI components, and more than 700 submissions for AI-enabled devices.  These submissions have included aspects related to drug discovery and repurposing, enhancing clinical trial design elements, dose optimization, endpoint/biomarker assessment, postmarketing surveillance, and a growing diversity of medical devices that leverage AI that are meant to improve clinical workflows and patient experiences or outcomes. In other words, they are having an impact on the entire medical product development and healthcare delivery systems.

We also need to consider that the benefits of this technology are not confined to medical products; significant impact will also be seen in the area of nutrition and food safety, which is on the verge of a revolutionary improvement due to the combination of digitization, AI and computing power.  And, of course, AI is a basic tool for advertising, a key tool of the tobacco industry in tailored approaches to marketing products that cause harm.  And I probably don’t need to tell this audience about the evolving potential of AI in the $60 billion cosmetics industry.

As AI continues to advance, and as more data become available and algorithms grow more sophisticated, the FDA’s approach to enabling continued AI innovation includes developing and executing a strategy with multiple components: building out infrastructure, methods, and tooling to identify safe operating parameters; establishing standards, best practices, and risk-based frameworks; and creating operational tooling for AI lifecycle management, including safety monitoring and management.

There are a number of ways we’re working to achieve this.  They include considering the creation of an assurance lab network to enable AI lifecycle management and a governance model.  Stakeholder education, including for healthcare professionals, patients, the developer community, and regulators, will be essential to support adoption. We are also considering an audit framework.  Many experts have advocated for a flexible certification process that enables evaluation of medical AI products throughout the lifecycle of the product, from before full market release through post-market performance monitoring.

For medical devices under our authority, the Agency takes a risk-based approach to regulation. This approach considers the potential risks and benefits associated with adaptive and generative AI applications.  We will explore new regulatory approaches to better suit the needs and pace of current and emerging technologies that are being worked into medical devices, considering them in the context of their intended uses and impact on patients and public health.   Our policies will be based on principles of ethical AI, including, though not limited to transparency, accountability, operational discipline and tooling, and human oversight—all recognized in your mission at CHAI.

While it is important for regulatory frameworks to accommodate new technologies, it is also important for such frameworks to be robust and useful across a broad range of devices and applications. The Agency continues to consider what new approaches or flexibilities may be needed to ensure that regulatory oversight does not inhibit innovation (note that I did not use the word “stifle”) and is in the interest of patients and the public health. Generative AI applications are a prime example of a technology with novel needs, and we look forward to working with communities like CHAI to inform the Agency’s thinking.

As part of our AI strategy, the Agency is collaborating with public/private partners to develop a framework for assessing the potential risks and benefits of healthcare AI—this issue is too large to be contained within the FDA.  We’re also developing guidelines for the responsible deployment and ongoing monitoring of AI-driven health care solutions, including those using both adaptive and generative AI methods. The aim is to adapt general AI regulation and standards where needed to the unique characteristics of the health care sector. For instance, general AI regulations often stress the importance of accountability and transparency, which are also crucial in the health care domain due to the sensitive nature of health-related data.

As with any promising new technological or scientific development, the promise is accompanied by new and unique challenges. There are plenty of logistical issues, including questions of how to establish testing laboratories and registries and ensure transparency of the processes.  There are also ethical and security considerations, such as improper data sharing or cybersecurity risk.  And, of course, there are the underlying questions of fairness, reliability, and safety in the application of these tools. 

Finally, I want to call your attention to an emerging concern about where all this is headed that extends beyond the primary FDA remit.  At the FDA we have a general responsibility to sort out products that are safe and effective from products for which the risks outweigh the benefits.  And your statement of mission says: “We believe that by working together with multiple stakeholders, including technology innovators, academic research teams, healthcare organizations, government agencies, and patients, we can help to drive the development and broad adoption of approaches that guarantee the safe and effective use of AI.”  A key question is this: how will we determine together what the standard should be for determining that an AI application is safe and effective?

We know that our country is already spending 4.5 trillion dollars a year for an inferior health outcome compared with other high-income countries.  Our life expectancy lags Europe, Singapore, and Japan by more than five years, and this gap is widening.  Will we use AI to fine-tune the pursuit of margin by our health systems, payers, and technology companies, or will we use it to optimize the health and longevity of the people we care for?  What will happen when there is a conflict between better outcome and financial margin?  What can we do to clarify organizational and individual decisions so that better health outcomes and financial margin are more aligned?

For medical products, the FDA lives in a special societal bubble in which our charge is to make regulatory decisions based on the balance of benefits and risks of a product without consideration of cost.  The benefit is either a measured health outcome benefit—live longer, feel or function better—or a biomarker or intermediate outcome on the pathway to that health outcome, either proven or reasonably likely to predict a health benefit, depending on the circumstance.

My concern is that our health systems do not have the infrastructure and tools to make the most important determinations about whether an AI application is “effective” for health outcomes.  In order to know whether an algorithm of any kind is truly effective for health, we need two conditions to be supported with a functional infrastructure.  

First, we need to monitor the algorithm and test its operating characteristics over time.  An algorithm’s ability to provide accurate assessments will drift if left untended, often in unpredictable and sometimes dangerous ways.  With traditional medical products, this responsibility primarily rests with the manufacturer of the product, with the FDA overseeing the process.  Given the proliferation of AI applications and the fact that they evolve over time, it is unclear how the performance of the models will be monitored at the scale that will be needed.

Second, we need complete follow-up on the population to whom the algorithm is applied, at least in a valid sample so that the monitoring of the algorithm is based on a valid inference for its use in a particular population or clinical circumstance.  The lack of an interoperable national approach to enabling follow-up of patients leads to a situation in which none of our health systems have a systematic ability to do the requisite monitoring of the model except for the duration of an acute care hospital admission.  With traditional medical products, when a post-market study is required, the manufacturer pays for an expensive collection of follow-up data as a “one-off” activity.  

Not surprisingly, I’m hearing that the “effectiveness” metric being used by health systems to make decisions about incorporating an AI implementation is a financial metric—will the algorithm improve the bottom line of the part of the health system making the purchase?  I worry that the main use of AI algorithms will be decisions that optimize the bottom line rather than optimizing the longevity and well-being of patients.  This is counter to the mission of the FDA, where effectiveness means an improvement in a health outcome.   

I highly recommend two New England Journal of Medicine articles as context as we work on this issue: first, “On Suboptimization—Cadillac Care at the Mecca” by Brendan Reilly, and second, “The Financialization of Healthcare in the United States” by Bruch and colleagues.  The combination of the primacy of finance over clinical outcomes and the optimization of those finances in a balkanized manner creates a problem that may explain our poor health status in the midst of such financial success.  The compelling need for, and advantage of, sharing data and automating access to analysis could mean that AI disrupts these unhealthy attributes.

Imagine an AI system that is connected by a data infrastructure that enables a continuous learning loop between current practice, decisions, and outcomes.  At its best, AI could radically change our understanding of what practices—organizational and clinical—would lead to the best outcomes for patients and clinicians—and then the payment system could be aligned to reward best practice and outcomes.

Broad education and collaboration will be needed to ensure that these challenges are met.  This coalition is in an excellent position to make an important difference, and I look forward to working with you toward a world in which decisions guided by AI systems rely on best practices in testing, deployment, and evaluation, so that we can maximize innovation in health care and ensure that innovation is applied to improve health as a primary goal.

Thank you and I look forward to great accomplishments from CHAI! 
