Human Oversight (HO)



This compliance category contains details on the Human Oversight mechanisms required for FDA AI/ML-based SaMD.

High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use.

Human oversight shall be ensured through one or both of the following measures:

  1. identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service;

  2. identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the user.

The measures above shall enable the individuals to whom human oversight is assigned to do the following:

  1. fully understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible;

  2. remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;

  3. be able to correctly interpret the high-risk AI system’s output, taking into account in particular the characteristics of the system and the interpretation tools and methods available;

  4. be able to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system;

  5. be able to intervene in the operation of the high-risk AI system or interrupt the system through a “stop” button or a similar procedure (a minimal sketch of such an override and stop control follows this list).
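
The override and interruption abilities above map onto a simple software pattern: route every model output through a human reviewer who must take an explicit action (accept, override, or disregard), and expose a stop control that a human can use to interrupt the system entirely. The sketch below is a minimal Python illustration of that pattern; the `OverseenModel`, `ReviewAction`, and `ReviewedDecision` names are hypothetical and do not reflect any FDA-prescribed interface.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Optional

class ReviewAction(Enum):
    ACCEPT = "accept"        # use the model output as-is
    OVERRIDE = "override"    # replace or reverse the output with the reviewer's decision
    DISREGARD = "disregard"  # proceed without using the AI output at all

@dataclass
class ReviewedDecision:
    model_output: float
    action: ReviewAction
    final_value: Optional[float]  # None when the output is disregarded
    reviewer_id: str

class OverseenModel:
    """Wraps a predictive model so every output passes through a human
    reviewer and so a human can interrupt operation at any time."""

    def __init__(self, predict: Callable[[dict], float]):
        self._predict = predict
        self._stopped = False

    def stop(self) -> None:
        # The "stop" button: once set, no further outputs are produced.
        self._stopped = True

    def decide(self, case: dict,
               review: Callable[[float], ReviewedDecision]) -> ReviewedDecision:
        if self._stopped:
            raise RuntimeError("system interrupted by a human overseer")
        # The reviewer sees the raw output and must choose an action explicitly;
        # there is deliberately no default of ACCEPT, to counter automation bias.
        return review(self._predict(case))
```

In practice the `review` callable would present the output in the clinical user interface and block until the clinician records a decision; persisting each `ReviewedDecision` also supports the logging and traceability controls covered elsewhere in this documentation.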

This compliance category covers the following principles from the FDA GMLP:

Principle 7. Focus Is Placed on the Performance of the Human-AI Team: Where the model has a “human in the loop,” human factors considerations and the human interpretability of the model outputs are addressed with emphasis on the performance of the Human-AI team, rather than just the performance of the model in isolation.

Principle 8. Testing Demonstrates Device Performance During Clinically Relevant Conditions: Statistically sound test plans are developed and executed to generate clinically relevant device performance information independently of the training data set. Considerations include the intended patient population, important subgroups, clinical environment and use by the Human-AI team, measurement inputs, and potential confounding factors.
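
One way to make such a test plan concrete is to report performance for each important subgroup separately rather than only in aggregate. The sketch below is a hedged illustration rather than a prescribed FDA format: it computes per-subgroup sensitivity and specificity on a held-out test set that is independent of the training data, and the function name and output layout are illustrative choices.

```python
import numpy as np

def subgroup_performance(y_true, y_pred, groups):
    """Sensitivity and specificity per subgroup (e.g., age band, sex, site),
    so device performance is reported for important subpopulations rather
    than only in aggregate."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    results = {}
    for g in np.unique(groups):
        m = groups == g
        tp = np.sum((y_true[m] == 1) & (y_pred[m] == 1))
        fn = np.sum((y_true[m] == 1) & (y_pred[m] == 0))
        tn = np.sum((y_true[m] == 0) & (y_pred[m] == 0))
        fp = np.sum((y_true[m] == 0) & (y_pred[m] == 1))
        results[g] = {
            "n": int(m.sum()),
            "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
            "specificity": tn / (tn + fp) if tn + fp else float("nan"),
        }
    return results
```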

Principle 10. Deployed Models Are Monitored for Performance and Re-training Risks Are Managed: Deployed models have the capability to be monitored in “real world” use with a focus on maintained or improved safety and performance. Additionally, when models are periodically or continually trained after deployment, there are appropriate controls in place to manage risks of overfitting, unintended bias, or degradation of the model (for example, dataset drift) that may impact the safety and performance of the model as it is used by the Human-AI team.
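
Dataset drift, one of the re-training risks named in Principle 10, is commonly tracked by comparing the distribution of a feature or model score in production against a training-time reference. The sketch below uses the Population Stability Index (PSI), a widely used drift statistic; it is a minimal illustration, and the thresholds quoted in the docstring are conventional rules of thumb rather than FDA-mandated limits.

```python
import numpy as np

def population_stability_index(reference: np.ndarray,
                               production: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a training-time reference sample and a production sample
    of the same feature or model score. Common rule of thumb: < 0.1 stable,
    0.1-0.25 moderate shift, > 0.25 significant shift worth investigating."""
    # Bin edges are fixed from the reference so both samples are compared
    # on the same grid; production values outside the range are dropped.
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    prod_frac = np.histogram(production, bins=edges)[0] / len(production)
    eps = 1e-6  # avoid log(0) and division by zero in empty bins
    ref_frac = np.clip(ref_frac, eps, None)
    prod_frac = np.clip(prod_frac, eps, None)
    return float(np.sum((prod_frac - ref_frac) * np.log(prod_frac / ref_frac)))
```

A monitoring job might compute this statistic on each production batch and open an investigation once it crosses the chosen threshold, feeding the result back into the re-training risk controls described above.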

Below is the list of controls that are part of this compliance category:

HO01 - Human Oversight Mechanism
HO02 - Human Oversight Details
HO03 - Human Oversight - Biometric Identification Systems