FDA - AI based SaMD

Post Market Monitoring System (PMS)


Last updated 2 years ago

This compliance category contains details about the Post Market Monitoring System that must be put in place for FDA AI/ML-based SaMD.

This compliance category supports section 7.5 of IMDRF/SaMD N23, which states:

Measurement of quality characteristics of software products and processes is used to manage and improve product realization and use. An effective measurement of key factors, often associated with issues related to risk, can help identify the capabilities needed to deliver safe and effective SaMD. Opportunities to monitor, measure, and analyze for improvement exist before, during, and after SaMD lifecycle processes, activities and tasks, and are completed with the intent to objectively demonstrate the quality of the SaMD. Post market surveillance including monitoring, measurement and analysis of quality data can include logging and tracking of complaints, clearing technical issues, determining problem causes and actions to address, identify, collect, analyze, and report on critical quality characteristics of products developed. For SaMD, monitoring to demonstrate through objective measurement that processes are being followed does not itself guarantee good software, just as monitoring software quality alone does not guarantee that the objectives for a process are being achieved.
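The surveillance activities quoted above — logging and tracking complaints, clearing technical issues, determining root causes, and reporting on quality characteristics — can be sketched as a minimal record-keeping structure. This is only an illustration of the workflow; every class and field name here is an assumption, not part of the IMDRF/SaMD N23 text or any FDA requirement:

```python
from collections import Counter
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Complaint:
    """One post-market quality record (illustrative fields only)."""
    received: date
    category: str                     # e.g. "technical", "usability", "safety"
    root_cause: Optional[str] = None  # filled in once the cause is determined
    resolved: bool = False

@dataclass
class SurveillanceLog:
    """Logs and tracks complaints, per the activities named in section 7.5."""
    complaints: List[Complaint] = field(default_factory=list)

    def log(self, complaint: Complaint) -> None:
        self.complaints.append(complaint)

    def open_issues(self) -> List[Complaint]:
        """Complaints whose technical issues have not yet been cleared."""
        return [c for c in self.complaints if not c.resolved]

    def by_category(self) -> Counter:
        """Aggregate counts per category for periodic quality reporting."""
        return Counter(c.category for c in self.complaints)
```

A real quality management system would add traceability (links to CAPAs, device versions, and reports), but the analyze-and-report loop follows the same shape.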

Point four in the FDA AI/ML discussion paper revolves around the transparency and real-world performance monitoring of AI/ML-based SaMD. It states:

To fully adopt a TPLC (Total Product Lifecycle) approach in the regulation of AI/ML-based SaMD, manufacturers can work to assure the safety and effectiveness of their software products by implementing appropriate mechanisms that support transparency and real-world performance monitoring. Transparency about the function and modifications of medical devices is a key aspect of their safety. This is especially important for devices, like SaMD that incorporate AI/ML, which change over time. Further, many of the modifications to AI/ML-based SaMD may be supported by collection and monitoring of real-world data. Gathering performance data on the real-world use of the SaMD may allow manufacturers to understand how their products are being used, identify opportunities for improvements, and respond proactively to safety or usability concerns. Real-world data collection and monitoring is an important mechanism that manufacturers can leverage to mitigate the risk involved with AI/ML-based SaMD modifications, in support of the benefit-risk profile in the assessment of a particular AI/ML-based SaMD.

Through this framework, manufacturers would be expected to commit to the principles of transparency and real-world performance monitoring for AI/ML-based SaMD. FDA would also expect the manufacturer to provide periodic reporting to FDA on updates that were implemented as part of the approved SPS (SaMD Pre-Specification) and ACP (Algorithm Change Protocol), as well as performance metrics for those SaMD. This commitment could be achieved through a variety of mechanisms.
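The periodic reporting described above compares real-world performance metrics against what was committed to in the approved SPS. A minimal sketch of that comparison, assuming each metric has a pre-specified acceptable range (the metric names and bounds below are hypothetical examples, not FDA-defined values):

```python
from typing import Dict, List, Tuple

def within_sps_bounds(value: float, lower: float, upper: float) -> bool:
    """True if a real-world metric stays inside its pre-specified range."""
    return lower <= value <= upper

def flag_for_review(metrics: Dict[str, float],
                    sps_bounds: Dict[str, Tuple[float, float]]) -> List[str]:
    """Names of metrics falling outside the ranges committed to in the SPS.

    Any flagged metric would feed into the periodic report and, depending on
    the ACP, could trigger a deeper benefit-risk assessment.
    """
    return [name for name, value in metrics.items()
            if name in sps_bounds
            and not within_sps_bounds(value, *sps_bounds[name])]
```

For example, a real-world sensitivity of 0.80 against an SPS bound of (0.85, 1.0) would be flagged for review, while metrics inside their bounds simply appear in the routine report.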

From the guiding principles published by the FDA as Good Machine Learning Practice (GMLP), this category supports the following principle:

Principle 10. Deployed Models Are Monitored for Performance and Re-training Risks Are Managed: Deployed models have the capability to be monitored in “real world” use with a focus on maintained or improved safety and performance. Additionally, when models are periodically or continually trained after deployment, there are appropriate controls in place to manage risks of overfitting, unintended bias, or degradation of the model (for example, dataset drift) that may impact the safety and performance of the model as it is used by the Human-AI team.
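One concrete way to monitor for the dataset drift that Principle 10 warns about is a distribution-comparison statistic such as the Population Stability Index (PSI). PSI is a common industry heuristic, not a metric mandated by GMLP; the sketch below compares binned input distributions between a reference (e.g. training) dataset and recent real-world data:

```python
import math
from typing import Sequence

def population_stability_index(expected: Sequence[float],
                               actual: Sequence[float]) -> float:
    """PSI between two binned distributions (each a list of bin proportions).

    A common rule of thumb (not an FDA requirement): PSI above ~0.2 signals
    meaningful drift and warrants investigation before any re-training.
    """
    eps = 1e-6  # guard against empty bins
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))
```

Identical distributions give a PSI of zero; the further real-world data shifts from the reference, the larger the index, which makes it a simple trigger for the "appropriate controls" the principle calls for.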

This compliance category supports the following controls/checks:

PMS01 - Post Market Monitoring System in Place
PMS02 - Data Collection Assessment
PMS03 - Post Market Monitoring Plan