Understanding FDA’s 2025 Draft Guidance on AI-Enabled Medical Devices

Key Life-Cycle Management and Submission Recommendations for Manufacturers

The rapid advancement of artificial intelligence (AI) in healthcare has led to a surge in AI-enabled medical devices, transforming diagnostics, treatment, and patient monitoring. Recognizing the complexities and unique risks associated with these technologies, the US Food and Drug Administration (FDA) released a draft guidance document in early 2025 titled “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations.” This blog post explores the highlights of the guidance and its implications for medical device manufacturers aiming to bring safe, effective, and trustworthy AI devices to market.

Terminology Matters: Bridging the AI and Regulatory Divide

One of the first issues addressed in FDA’s guidance is the difference in terminology and definitions between the AI community and regulatory standards. For instance, in the AI world, “validation” often refers to data curation or model tuning, typically during training. To ensure clarity and compliance, manufacturers must recognize that the regulatory definition of validation refers to confirming – through objective evidence – that the final device consistently fulfills its specified intended use for the patient or user. Submissions should therefore refrain from using “validation” to describe internal development activities like AI model training or tuning, which must be treated as part of the overall design and development process.
Total Product Life Cycle (TPLC) Approach

FDA advocates for a holistic, life-cycle-based approach to risk management for AI-enabled devices. This means considering risks not just during design and development but also throughout deployment and real-world use. Early integration of risk management strategies, continuous performance monitoring, and proactive updates help ensure ongoing safety and effectiveness.

Transparency, Bias Control, and Data Drift

Transparency

The guidance emphasizes the importance of making critical information about AI systems understandable and accessible to users. Given the complexity and potential “black box” nature of AI, clear communication builds trust and supports safe device use.

Bias Control

AI devices risk amplifying biases, which can lead to inaccurate results, particularly for underrepresented patient populations. FDA recommends addressing bias throughout the life cycle – from data collection to postmarket monitoring – by ensuring that development and test data reflect the intended use population and proactively identifying disparities.
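
To make this concrete, here is a minimal sketch of one way to check dataset representativeness: a chi-square goodness-of-fit test comparing the subgroup mix of a development dataset against the expected mix of the intended use population. The subgroup labels, counts, and proportions below are hypothetical, and the guidance does not prescribe any particular statistical test.

```python
# A minimal sketch of a dataset-representativeness check: compare the
# demographic mix of a development dataset against the expected mix of
# the intended use population with a chi-square goodness-of-fit test.
# Subgroup labels, counts, and proportions are hypothetical.
import numpy as np
from scipy.stats import chisquare

# Observed subgroup counts in the development dataset (hypothetical).
observed = np.array([520, 310, 120, 50])
labels = ["group_a", "group_b", "group_c", "group_d"]

# Expected proportions in the intended use population (hypothetical).
expected_props = np.array([0.45, 0.30, 0.15, 0.10])
expected = expected_props * observed.sum()  # scale to the same total

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")

# Flag subgroups that are materially under-represented.
for label, miss in zip(labels, expected - observed):
    if miss > 0:
        print(f"{label}: under-represented by ~{int(miss)} samples")
```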

Managing Data Drift

AI models are sensitive to shifts in input data, which can degrade performance over time. FDA suggests that manufacturers implement strategies for detecting and mitigating data drift, including performance monitoring plans and the use of a predetermined change control plan (PCCP). The PCCP framework allows certain preapproved software updates to be implemented without a new marketing submission, thus streamlining the regulatory process.
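
As an illustration of what drift detection might look like in practice, the sketch below applies a two-sample Kolmogorov-Smirnov test to compare an input feature's training-time distribution against recent production data. The simulated feature values and the alert threshold are illustrative assumptions, not something the guidance prescribes.

```python
# A minimal sketch of input data drift detection: a two-sample
# Kolmogorov-Smirnov test comparing a numeric input feature's training
# reference distribution against recently collected production data.
# Feature values and the alert threshold are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5000)   # training-time distribution
production = rng.normal(loc=0.4, scale=1.1, size=800)   # recent live inputs (shifted)

stat, p_value = ks_2samp(reference, production)
ALERT_P = 0.01  # illustrative significance threshold from the monitoring plan

if p_value < ALERT_P:
    print(f"Drift alert: KS={stat:.3f}, p={p_value:.2e} -- investigate inputs")
else:
    print(f"No drift detected: KS={stat:.3f}, p={p_value:.2e}")
```

In a real monitoring plan, a detected drift would feed back into the risk management process and, where covered by a PCCP, could trigger a preauthorized model update.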

Submission Essentials: What Manufacturers Need to Know

Device Description

Manufacturers must provide detailed descriptions of the AI-enabled device, including its intended use, inputs and outputs, AI functionalities, user configuration options, intended users, and workflow integration. This helps FDA assess how the device operates and its suitability for the target population.

User Interface and Labeling

The submission should include comprehensive details and representations of the user interface, covering all interactions, controls, displays, alarms, and outputs. Labeling must clearly outline the device’s use, AI’s role, inputs and outputs, risks, intended environment, and any limitations or customization options. For patient-facing devices, instructions should be tailored to the appropriate reading level. Labeling must also account for version control; any modification to the AI functionality that changes the safety, performance, or interpretation of data requires a new device identifier (DI) segment of the unique device identifier (UDI).

Risk Assessment

A thorough risk management file is essential, referencing standards like ISO 14971 and AAMI CR34971. Risk assessments should address hazards across the entire life cycle, including user errors, informational risks, and challenges in interpreting AI outputs. Managing these risks ensures safer deployment and use of complex AI algorithms.

Data Management

FDA reviewers require clear documentation on data collection, processing, annotation, storage, security, and independence between training and validation datasets. Diversity and representativeness are vital to support generalizable, effective AI performance. Controls against data leakage and robust external validation provide evidence for safety and effectiveness.
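
One common leakage control is splitting data at the patient level so that no individual contributes samples to both the training and test partitions. The sketch below shows this with placeholder data and hypothetical column semantics; it is one illustration, not the guidance's prescribed method.

```python
# A minimal sketch of leakage-safe data splitting: grouping by patient ID
# so that no patient contributes samples to both the training set and
# the held-out test set. All data here are placeholders.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

n_samples = 1000
X = np.random.rand(n_samples, 8)                          # feature matrix (placeholder)
y = np.random.randint(0, 2, size=n_samples)               # binary labels (placeholder)
patient_ids = np.random.randint(0, 200, size=n_samples)   # ~5 samples per patient

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(splitter.split(X, y, groups=patient_ids))

# Verify independence: no patient appears in both partitions.
overlap = set(patient_ids[train_idx]) & set(patient_ids[test_idx])
assert not overlap, "data leakage: patients shared across splits"
print(f"train={len(train_idx)} samples, test={len(test_idx)} samples")
```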

Model Development and Validation

Model Description and Development

Submissions must detail model architecture, input/output features, customization options, quality control methods, training processes, performance metrics, and calibration. For ensemble or pretrained models, manufacturers should explain integration and threshold determination processes.
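
Because calibration is among the items reviewers expect to see documented, the following sketch shows one conventional way to assess it: a reliability curve plus a Brier score on held-out predictions. The predicted probabilities here are simulated stand-ins for a real model's outputs.

```python
# A minimal sketch of a model calibration check: a reliability curve and
# Brier score computed from predicted probabilities on held-out data.
# The probabilities below are simulated stand-ins for real model output.
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=2000)
# Simulated probabilities: informative but imperfectly calibrated.
y_prob = np.clip(0.3 * y_true + 0.35 + rng.normal(0, 0.15, size=2000), 0.0, 1.0)

frac_positive, mean_predicted = calibration_curve(y_true, y_prob, n_bins=10)
print(f"Brier score: {brier_score_loss(y_true, y_prob):.4f}")
for pred, obs in zip(mean_predicted, frac_positive):
    print(f"predicted {pred:.2f} -> observed {obs:.2f}")
```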

Validation

Performance validation should use independent datasets, include subgroup analyses, and assess repeatability and reproducibility. Human factors and usability studies are crucial to ensure that users can safely and effectively operate the device. FDA also encourages “human-AI team” performance evaluation, such as reader studies for diagnostic support tools. Manufacturers must document the model version tested, study protocols, and comprehensive results.
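
A subgroup analysis might look like the sketch below, which computes sensitivity and specificity separately per subgroup. The labels, predictions, and subgroup assignments are simulated; a real submission would use independent clinical test data and report appropriate confidence intervals.

```python
# A minimal sketch of subgroup performance analysis: sensitivity and
# specificity computed separately for each subgroup on a test set.
# Labels, predictions, and subgroup assignments are simulated.
import numpy as np

rng = np.random.default_rng(2)
n = 3000
y_true = rng.integers(0, 2, size=n)
y_pred = np.where(rng.random(n) < 0.88, y_true, 1 - y_true)  # ~88% accurate
subgroup = rng.choice(["site_a", "site_b", "site_c"], size=n)

for g in np.unique(subgroup):
    m = subgroup == g
    tp = np.sum((y_true[m] == 1) & (y_pred[m] == 1))
    fn = np.sum((y_true[m] == 1) & (y_pred[m] == 0))
    tn = np.sum((y_true[m] == 0) & (y_pred[m] == 0))
    fp = np.sum((y_true[m] == 0) & (y_pred[m] == 1))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"{g}: n={m.sum()}, sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
```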

Ongoing Performance Monitoring

Postmarket monitoring is vital for AI-enabled devices, given their sensitivity to data shifts and changing clinical environments. Manufacturers should implement continuous monitoring plans, address performance drift, and be prepared to update devices and communicate changes to users. Monitoring plans are mandatory for some submission types; for others, FDA recommends including them voluntarily.
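
As one illustrative shape for such a plan, the sketch below tracks a rolling agreement rate between device outputs and adjudicated ground truth, alerting when the rate falls below a preset action limit. The window size and action limit are assumptions a manufacturer would justify in its own plan.

```python
# A minimal sketch of postmarket performance monitoring: a rolling
# agreement rate between device outputs and adjudicated ground truth,
# with an alert when it drops below a preset action limit.
# Window size and action limit are illustrative assumptions.
from collections import deque

WINDOW = 200          # number of recent adjudicated cases to track
ACTION_LIMIT = 0.85   # minimum acceptable rolling agreement rate

recent = deque(maxlen=WINDOW)

def record_case(device_output: int, ground_truth: int) -> None:
    """Log one adjudicated case and alert if rolling agreement degrades."""
    recent.append(int(device_output == ground_truth))
    if len(recent) == WINDOW:
        rate = sum(recent) / WINDOW
        if rate < ACTION_LIMIT:
            print(f"ALERT: rolling agreement {rate:.2%} below {ACTION_LIMIT:.0%}")
```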

Cybersecurity: Protecting AI in Healthcare

AI-enabled medical devices face unique cybersecurity threats, such as data poisoning, model inversion, evasion, and performance drift due to malicious attacks. FDA guidance calls for robust cybersecurity risk assessments, threat modeling, and controls tailored to AI components. Continuous monitoring and timely updates are essential to safeguard device performance and patient safety.
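
Many of the needed controls are system-level, but as one small AI-specific example, the sketch below verifies a model artifact's SHA-256 digest against a known-good value before loading, so a tampered model file is rejected. The file path and expected digest are hypothetical placeholders.

```python
# A minimal sketch of one AI-specific cybersecurity control: verifying a
# model artifact's SHA-256 digest against a trusted manifest value before
# loading, so a tampered model file is rejected.
# The path and expected digest are hypothetical placeholders.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "0" * 64  # placeholder: real value comes from a signed release manifest

def verify_model_file(path: Path, expected: str = EXPECTED_SHA256) -> None:
    """Raise if the on-disk model artifact does not match its known digest."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected:
        raise RuntimeError(f"model integrity check failed for {path}")

# verify_model_file(Path("model.onnx"))  # call before deserializing the model
```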

Conclusion: Preparing for the Future of AI-Enabled Devices

FDA’s January 2025 draft guidance provides a comprehensive roadmap for manufacturers navigating the complexities of AI-enabled medical device development and submission. By clarifying terminology, emphasizing life-cycle risk management, requiring transparency and bias control, and prioritizing cybersecurity, FDA aims to ensure that AI innovations in healthcare remain safe, effective, and trustworthy. Manufacturers should carefully review the guidance, integrate its recommendations into their development processes, and prepare detailed, transparent submissions to meet regulatory standards and enhance patient outcomes.

To build and maintain trustworthy AI-enabled medical devices, your organization needs a quality system tailored to these unique demands. Future-proof your quality and risk management frameworks by aligning with the new global standard for responsible AI.

Our team is here to help. Contact us online or call:

US Office: Washington DC

1.800.472.6477

EU Office: Cork, Ireland

+353 21 212 8530
