Blog: A Lifecycle Management Approach toward Delivering Safe, Effective AI-enabled Health Care
By: Troy Tazbaz, Director, and John Nicol, PhD, Digital Health Specialist, Digital Health Center of Excellence (DHCoE), Center for Devices and Radiological Health, U.S. Food and Drug Administration
As global interest in the transformative potential of artificial intelligence (AI) in health care soars, ensuring the safety and effectiveness of AI-enabled medical devices, as well as their trustworthiness, fairness, and performance, becomes increasingly urgent.
This is particularly challenging because of the nature of AI applications, which are designed to continuously learn and adapt in real-world health care settings. While this adaptability can enhance performance, it also poses significant risks, such as exacerbating biases in data or algorithms, potentially harming patients and further disadvantaging underrepresented populations.
To navigate the complexity and risks associated with AI software in health care, let’s take a look at Lifecycle Management (LCM). Since the 1960s, LCM has been essential to delivering reliable software. Modern Software Development Lifecycles (SDLCs) embody LCM principles, offering a structured framework for planning, designing, implementing, testing, integrating, deploying, maintaining, and eventually retiring software.
Continuing the discussion from our previous blog, The Promise Artificial Intelligence Holds for Improving Health Care, this article focuses on the potential of leveraging LCM to address the unique challenges of generative AI in health care, along with practices to help ensure these systems meet real-world needs while managing their inherent risks across the software lifecycle.
An AI Lifecycle Concept
The DHCoE initiated an effort to map the phases of a traditional SDLC to the specifics of AI software development, which we are calling the AI lifecycle (AILC). Our initial mapping identifies key activities for each phase of the AILC, compiled from the literature and from reviews of consensus standards.
In the AILC concept diagram provided below, we highlight systematic methods for data and model evaluation during the Data Collection and Management and Model Building and Tuning phases. The diagram also illustrates post-deployment monitoring of AI software in the Operation and Monitoring and Real-World Performance Evaluation phases.
The AI Lifecycle Concept Diagram Illustrating Per-Phase Considerations
(PDF version including descriptive text)
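To make the post-deployment phases above more concrete, here is a minimal Python sketch of one common monitoring technique, the Population Stability Index (PSI), which compares a model input’s live distribution against its training baseline. The function, sample data, and thresholds are illustrative assumptions for this blog, not FDA-endorsed methods or criteria.

```python
"""Hypothetical sketch of a post-deployment drift check, as might occur in
the Operation and Monitoring phase. Names and thresholds are illustrative."""
import math
from typing import Sequence

def population_stability_index(expected: Sequence[float],
                               observed: Sequence[float],
                               bins: int = 10) -> float:
    """Measure how far the live (observed) distribution of a feature has
    shifted from its training (expected) baseline."""
    lo = min(min(expected), min(observed))
    hi = max(max(expected), max(observed))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def fractions(values: Sequence[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Small epsilon avoids log(0) for empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    e, o = fractions(expected), fractions(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))

# Illustrative rule of thumb from industry practice (assumption, not guidance):
# PSI < 0.1 stable; 0.1-0.25 moderate shift; > 0.25 significant shift.
baseline = [0.2, 0.4, 0.5, 0.6, 0.8, 0.3, 0.55, 0.45]
live = [0.7, 0.9, 0.85, 0.6, 0.95, 0.8, 0.75, 0.88]
psi = population_stability_index(baseline, live)
if psi > 0.25:
    print(f"PSI={psi:.2f}: significant input drift; trigger re-evaluation")
```

A check along these lines could feed the Real-World Performance Evaluation phase by flagging when inputs have drifted enough to warrant re-evaluating the model.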
AI Lifecycle Management Could Provide a Powerful Playbook
DHCoE’s review of early AI standards documents found that they often provide general lifecycle considerations but lack specific details. This AILC example incorporates a broad set of technical and procedural considerations for each phase. One possible use of this AILC model is as a guide, or playbook, to help assess standards, tools, metrics, and best practices for the considerations identified by the boxes in each phase (column).
For instance, for the “Data Suitability” element within the Data Collection & Management phase, one could work to identify the relevant standards and applicable metrics, such as data quality, population coverage, and provenance. One could also explore operational tools for tasks such as data preprocessing, augmentation, and bias detection.
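As one illustration of how such metrics might be operationalized, a developer could start with simple subgroup-coverage and missingness checks, as in the sketch below. The dataset fields, helper names, and the 5% coverage threshold are hypothetical, not regulatory criteria.

```python
"""Hypothetical sketch of a 'Data Suitability' check: subgroup coverage and
missingness in a training dataset. All names and thresholds are illustrative."""
from collections import Counter

records = [
    {"age_group": "18-39", "sex": "F", "label": 1},
    {"age_group": "40-64", "sex": "M", "label": 0},
    {"age_group": "65+",   "sex": "F", "label": None},  # missing label
    # a real dataset would have many more records
]

def coverage_report(rows, field, min_fraction=0.05):
    """For each subgroup of `field`, report its share of the data and whether
    it meets a minimum-representation threshold (population coverage)."""
    counts = Counter(r[field] for r in rows)
    total = sum(counts.values())
    return {group: (n / total, n / total >= min_fraction)
            for group, n in counts.items()}

def missingness(rows, field):
    """Fraction of records missing a value for `field` (a basic data-quality metric)."""
    return sum(1 for r in rows if r.get(field) is None) / len(rows)

print(coverage_report(records, "sex"))
print(f"label missingness: {missingness(records, 'label'):.0%}")
```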
Standards play a role in the AILC by helping to ensure quality, facilitate interoperability, and promote ethical practices. They also help guide development, enhance transparency, support compliance, encourage innovation, and build trust. An AILC concept like this one can contribute to the promotion and continued development of these standards by industry and the health care community. An AILC lens can also help identify specific gaps in AI standards, especially in the medical device and health care domain, driving progress in this critical area and contributing to the ongoing quality management of AI models.
We hope this concept of the AILC can help spur development of other activities in this space, such as:
- A comprehensive checklist to aid developers in the systematic creation and evaluation of AI used as a medical device and in health care solutions.
- Establishment of a robust foundation for developing AI models rooted in high-quality, reliable, and ethically sound data and AI practices.
- Development of a systematic approach for evaluating relevant standards, tools, metrics, and best practices that cater to a wide range of stakeholders through the entire AILC – including the post-deployment phase.
- Adoption of a harmonized approach to unify development strategies, techniques, and discipline.
Making it Happen as a Community
We encourage the health care community, who have a vested interest in ensuring the safety and effectiveness of AI in health care, to engage with, iterate on, and refine these concepts.
We invite you to consider this AILC concept in standards development efforts for AI. Your involvement will help advance safer AI in health care.
Our “virtual” doors at the DHCoE are always open. We welcome your comments and feedback related to AI use in health care. Email us at digitalhealth@fda.hhs.gov with “Attn: AI in health care” in the subject line.
In our next blog, we’ll discuss the importance of access to high-quality data, which enables developers to build innovative, safe, and effective AI models for health care.