The rapid advancement of artificial intelligence (AI) in healthcare has led to a surge in AI-enabled medical devices, transforming diagnostics, treatment, and patient monitoring. Recognizing the complexities and unique risks associated with these technologies, the US Food and Drug Administration (FDA) released a draft guidance document in early 2025 titled “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations.” This blog post explores the highlights of the guidance and its implications for medical device manufacturers aiming to bring safe, effective, and trustworthy AI devices to market.
Transparency
The guidance emphasizes the importance of making critical information about AI systems understandable and accessible to users. Given the complexity and potential “black box” nature of AI, clear communication builds trust and supports safe device use.
Bias Control
AI devices risk amplifying biases, which can lead to inaccurate results, particularly for underrepresented patient populations. FDA recommends addressing bias throughout the life cycle – from data collection to postmarket monitoring – by ensuring that development and test data reflect the intended use population and proactively identifying disparities.
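One concrete way to make "proactively identifying disparities" operational is a subgroup performance comparison. The sketch below is illustrative, not from the guidance: the record format, the use of sensitivity as the metric, and the 0.10 disparity threshold are all assumptions.

```python
# Hypothetical sketch: compare sensitivity across patient subgroups and flag
# any subgroup that trails the best-performing one by more than a set gap.
# The (subgroup, y_true, y_pred) record format and 0.10 gap are assumptions.

def subgroup_sensitivity(records):
    """records: iterable of (subgroup, y_true, y_pred) tuples with 0/1 labels."""
    stats = {}
    for group, y_true, y_pred in records:
        s = stats.setdefault(group, {"tp": 0, "fn": 0})
        if y_true == 1:                       # sensitivity only counts true positives
            s["tp" if y_pred == 1 else "fn"] += 1
    return {g: s["tp"] / (s["tp"] + s["fn"])
            for g, s in stats.items() if s["tp"] + s["fn"] > 0}

def flag_disparities(sensitivities, max_gap=0.10):
    """Return subgroups whose sensitivity trails the best subgroup by > max_gap."""
    best = max(sensitivities.values())
    return sorted(g for g, v in sensitivities.items() if best - v > max_gap)
```

In practice the same comparison would be run for each clinically relevant metric and each subgroup defined in the intended use population.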
Managing Data Drift
AI models are sensitive to shifts in input data, which can degrade performance over time. FDA suggests that manufacturers implement strategies for detecting and mitigating data drift, including performance monitoring plans and predetermined change control plans (PCCPs). A PCCP allows certain preauthorized software updates to be made without a new marketing submission, streamlining the regulatory process.
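As a sketch of what "detecting data drift" can mean in code, the example below computes the Population Stability Index (PSI), a common way to quantify shift between a reference (training) distribution and live inputs. The bin edges and the 0.2 alert threshold are conventional rules of thumb, not values prescribed by the FDA guidance.

```python
# Illustrative drift check: Population Stability Index (PSI) between a
# reference distribution and live production inputs. Higher PSI = larger shift.
import math

def psi(reference, live, edges):
    """PSI over pre-defined bin edges (sorted ascending)."""
    def proportions(values):
        counts = [0] * (len(edges) + 1)
        for v in values:
            i = sum(v > e for e in edges)     # index of the bin v falls into
            counts[i] += 1
        n = len(values)
        return [max(c / n, 1e-6) for c in counts]  # small floor avoids log(0)

    p, q = proportions(reference), proportions(live)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

def drift_alert(reference, live, edges, threshold=0.2):
    """By a common convention, PSI > 0.2 indicates a significant shift."""
    return psi(reference, live, edges) > threshold
```

A monitoring plan would typically run this per input feature on a schedule, with drift alerts feeding the change-management process described in the PCCP.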
Device Description
Manufacturers must provide detailed descriptions of the AI-enabled device, including its intended use, inputs and outputs, AI functionalities, user configuration options, intended users, and workflow integration. This helps FDA assess how the device operates and its suitability for the target population.
User Interface and Labeling
The submission should include comprehensive details and representations of the user interface, covering all interactions, controls, displays, alarms, and outputs. Labeling must clearly outline the device’s use, AI’s role, inputs and outputs, risks, intended environment, and any limitations or customization options. For patient-facing devices, instructions should be written at an appropriate reading level. Labeling must also account for version control: any modification to the AI functionality that changes the safety, performance, or interpretation of data requires a new device identifier (DI) segment of the unique device identifier (UDI).
Risk Assessment
A thorough risk management file is essential, referencing standards like ISO 14971 and AAMI CR34971. Risk assessments should address hazards across the entire life cycle, including user errors, informational risks, and challenges in interpreting AI outputs. Managing these risks ensures safer deployment and use of complex AI algorithms.
Data Management
FDA reviewers require clear documentation on data collection, processing, annotation, storage, security, and independence between training and validation datasets. Diversity and representativeness are vital to support generalizable, effective AI performance. Controls against data leakage and robust external validation provide evidence for safety and effectiveness.
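One leakage control that the independence requirement implies is verifying that no patient contributes data to both the training and the test set (the same patient's images or records in both splits is a classic source of optimistic bias). This is a minimal sketch; the record format and `patient_id` field name are hypothetical.

```python
# Minimal sketch of a patient-level leakage check between dataset splits.
# The dict-based record format and "patient_id" key are assumptions.

def patient_overlap(train_records, test_records, key="patient_id"):
    """Return the sorted list of patient IDs present in both splits (should be empty)."""
    train_ids = {r[key] for r in train_records}
    test_ids = {r[key] for r in test_records}
    return sorted(train_ids & test_ids)

def assert_independent(train_records, test_records):
    """Fail fast in a data pipeline if the splits share any patient."""
    overlap = patient_overlap(train_records, test_records)
    if overlap:
        raise ValueError(f"Patient-level leakage detected: {overlap}")
```

A check like this would run automatically whenever datasets are assembled, and its output would be part of the data management documentation in the submission.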
Model Description and Development
Submissions must detail model architecture, input/output features, customization options, quality control methods, training processes, performance metrics, and calibration. For ensemble or pretrained models, manufacturers should explain integration and threshold determination processes.
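To illustrate the calibration piece, the sketch below computes expected calibration error (ECE), one common summary of how well predicted probabilities match observed outcome rates. Ten equal-width bins are a typical but arbitrary choice; the guidance does not mandate a specific calibration metric.

```python
# Hedged sketch: expected calibration error (ECE) over equal-width
# probability bins. Bin count is an illustrative choice, not a requirement.

def expected_calibration_error(probs, labels, n_bins=10):
    """probs: predicted probabilities in [0, 1]; labels: 0/1 outcomes."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        i = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into the last bin
        bins[i].append((p, y))
    n = len(probs)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(p for p, _ in bucket) / len(bucket)
        accuracy = sum(y for _, y in bucket) / len(bucket)
        ece += (len(bucket) / n) * abs(avg_conf - accuracy)  # weighted gap
    return ece
```

A well-calibrated model scores near zero; a model that reports 90% confidence while being right far less often scores high, which is exactly the kind of gap a submission's calibration section should surface.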
Validation
Performance validation should use independent datasets, include subgroup analyses, and assess repeatability and reproducibility. Human factors and usability studies are crucial to ensure that users can safely and effectively operate the device. FDA also encourages “human-AI team” performance evaluation, such as reader studies for diagnostic support tools. Manufacturers must document the model version tested, study protocols, and comprehensive results.
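The kind of result a validation report needs is a point estimate with a confidence interval, computed per subgroup as well as overall. The sketch below uses the 95% Wilson score interval for sensitivity and specificity; the confusion-matrix counts are assumed inputs, and Wilson is one reasonable interval choice among several.

```python
# Illustrative validation summary: sensitivity and specificity with 95%
# Wilson score intervals. Input counts (tp, fn, tn, fp) are assumptions.
import math

def wilson_interval(successes, total, z=1.96):
    """Return (point estimate, lower, upper) for a binomial proportion."""
    if total == 0:
        return (0.0, 0.0, 0.0)
    p = successes / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return (p, center - half, center + half)

def diagnostic_summary(tp, fn, tn, fp):
    return {
        "sensitivity": wilson_interval(tp, tp + fn),
        "specificity": wilson_interval(tn, tn + fp),
    }
```

Running the same summary on each subgroup from the bias analysis, and on repeated measurements for repeatability and reproducibility, yields the tabulated results a submission would document alongside the model version and study protocol.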
Ongoing Performance Monitoring
Postmarket monitoring is vital for AI-enabled devices, given their sensitivity to data shifts and changing clinical environments. Manufacturers should implement continuous monitoring plans, address performance drift, and be prepared to update devices and communicate changes to users. While mandatory for some submissions, voluntary inclusion of monitoring plans is recommended for others.
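As a sketch of what a continuous monitoring hook might look like, the class below tracks a rolling window of predictions against confirmed outcomes and raises an alert when observed sensitivity falls below a preset floor. The window size and the 0.85 floor are illustrative values, not thresholds from the guidance.

```python
# Hedged sketch of a postmarket monitoring hook: rolling-window sensitivity
# with an alert floor. Window size and floor are illustrative assumptions.
from collections import deque

class PerformanceMonitor:
    def __init__(self, window=500, sensitivity_floor=0.85):
        self.window = deque(maxlen=window)   # (predicted, confirmed) pairs
        self.floor = sensitivity_floor

    def record(self, predicted_positive, confirmed_positive):
        """Log one case once its ground-truth outcome is confirmed."""
        self.window.append((predicted_positive, confirmed_positive))

    def sensitivity(self):
        positives = [(p, c) for p, c in self.window if c]
        if not positives:
            return None                      # no confirmed positives yet
        return sum(1 for p, _ in positives if p) / len(positives)

    def alert(self):
        s = self.sensitivity()
        return s is not None and s < self.floor
```

An alert from a monitor like this would trigger the drift investigation, device update, and user communication steps the guidance describes, with preauthorized changes handled under the PCCP.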
FDA’s January 2025 draft guidance provides a comprehensive roadmap for manufacturers navigating the complexities of AI-enabled medical device development and submission. By clarifying terminology, emphasizing life-cycle risk management, requiring transparency and bias control, and prioritizing cybersecurity, FDA aims to ensure that AI innovations in healthcare remain safe, effective, and trustworthy. Manufacturers should carefully review the guidance, integrate its recommendations into their development processes, and prepare detailed, transparent submissions to meet regulatory standards and enhance patient outcomes.
To build and maintain trustworthy AI-enabled medical devices, your organization needs a quality system tailored to these unique demands. Future-proof your quality and risk management frameworks by aligning with the new global standard for responsible AI.