FDA - AI based SaMD

Accuracy, Robustness and Cybersecurity (ARC)

This compliance category contains details of the accuracy, robustness, and cybersecurity requirements that FDA AI/ML-based SaMD must meet. In the context of this category, the terms accuracy and performance, and likewise robustness and safety, are used interchangeably.

According to the IMDRF SaMD guidance, the requirements for this category include:

Accuracy and Robustness:

All appropriate SaMD lifecycle support processes, as well as SaMD realization and use processes, should be considered. Maintenance activities should preserve the integrity of the SaMD without introducing new safety, effectiveness, performance, and security hazards.

Within the context of SaMD it is important to understand how systems, software, context of use, usability, data, and documentation might be affected by changes, particularly with regards to safety, effectiveness, and performance.

The SaMD manufacturer should take into account the implications of changes to architecture and code, including any patient safety risks such changes may introduce.

Cybersecurity:

Building quality into SaMD requires that safety and security should be evaluated within each phase of the product lifecycle and at key milestones. Security threats and their potential effect on patient safety should be considered as possible actors on the system in all SaMD lifecycle activities.

The goal is to engineer a system that: a) maintains patient safety and the confidentiality, availability, and integrity of critical functions and data; b) is resilient against intentional and unintentional threats; and c) is fault-tolerant and recoverable to a safe state in the presence of an attack.
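
The fault-tolerance and safe-state goal described above can be sketched as a wrapper around model inference. This is a minimal illustration, not part of the FDA guidance: the function names and the content of the safe output are assumptions.

```python
import logging

# Hypothetical safe state: on any fault, the device stops making
# recommendations and defers to the clinician. The exact fields are
# an illustrative assumption, not a prescribed format.
SAFE_OUTPUT = {
    "recommendation": None,
    "status": "degraded",
    "requires_clinician_review": True,
}

def run_with_safe_state(inference_fn, inputs):
    """Run a SaMD inference function; on any fault, log the event and
    recover to a safe state rather than emit an untrusted result."""
    try:
        result = inference_fn(inputs)
        # Integrity check on the output before releasing it downstream.
        if not isinstance(result, dict) or "recommendation" not in result:
            raise ValueError("malformed model output")
        return result
    except Exception as exc:
        logging.error("inference fault, entering safe state: %s", exc)
        return dict(SAFE_OUTPUT)
```

The design choice illustrated here is that the failure path is explicit and logged, so the system remains recoverable to a known safe state in the presence of an attack or an internal fault.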

This compliance category covers the following principles from the FDA GMLP:

Principle 2. Good Software Engineering and Security Practices Are Implemented: Model design is implemented with attention to the “fundamentals”: good software engineering practices, data quality assurance, data management, and robust cybersecurity practices. These practices include methodical risk management and design process that can appropriately capture and communicate design, implementation, and risk management decisions and rationale, as well as ensure data authenticity and integrity.
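
Data authenticity and integrity, one of the "fundamentals" named in Principle 2, is commonly enforced by fingerprinting datasets at design time and re-checking before training. A minimal sketch, assuming a file-based dataset (the helper name is hypothetical):

```python
import hashlib

def dataset_fingerprint(path, chunk_size=8192):
    """SHA-256 fingerprint of a dataset file, recorded at design time
    and re-verified before each training run, so any silent change to
    the data is detected before it influences the model."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large datasets do not need to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```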

Principle 3. Clinical Study Participants and Data Sets Are Representative of the Intended Patient Population: Data collection protocols should ensure that the relevant characteristics of the intended patient population (for example, in terms of age, gender, sex, race, and ethnicity), use, and measurement inputs are sufficiently represented in a sample of adequate size in the clinical study and training and test datasets, so that results can be reasonably generalized to the population of interest. This is important to manage any bias, promote appropriate and generalizable performance across the intended patient population, assess usability, and identify circumstances where the model may underperform.
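
The representativeness check in Principle 3 can be illustrated with a simple comparison of subgroup shares in the dataset against the intended patient population. This is a sketch under assumed names and a hypothetical tolerance; real protocols would use formal statistical tests and sample-size calculations.

```python
from collections import Counter

def underrepresented_groups(sample_labels, population_props, tolerance=0.05):
    """Flag demographic groups whose share in the dataset deviates from
    the intended patient population by more than `tolerance`.
    `tolerance` is an illustrative threshold, not a regulatory value."""
    n = len(sample_labels)
    counts = Counter(sample_labels)
    flagged = {}
    for group, expected in population_props.items():
        observed = counts.get(group, 0) / n
        if abs(observed - expected) > tolerance:
            flagged[group] = {"observed": round(observed, 3), "expected": expected}
    return flagged
```

For example, a dataset that is 30% female against an intended population of 50% would be flagged, prompting a review of the data collection protocol.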

Principle 6. Model Design Is Tailored to the Available Data and Reflects the Intended Use of the Device: Model design is suited to the available data and supports the active mitigation of known risks, like overfitting, performance degradation, and security risks. The clinical benefits and risks related to the product are well understood, used to derive clinically meaningful performance goals for testing, and support that the product can safely and effectively achieve its intended use. Considerations include the impact of both global and local performance and uncertainty/variability in the device inputs, outputs, intended patient populations, and clinical use conditions.
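
One of the known risks Principle 6 names, overfitting, is often surfaced during development by tracking the gap between training and validation performance. A minimal sketch with a hypothetical gap threshold:

```python
def overfit_gap(train_scores, val_scores, max_gap=0.05):
    """Return the indices of training epochs where training performance
    exceeds validation performance by more than `max_gap`, a simple
    overfitting signal. The 0.05 threshold is an illustrative assumption."""
    return [
        i
        for i, (tr, va) in enumerate(zip(train_scores, val_scores))
        if tr - va > max_gap
    ]
```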

Principle 7. Focus Is Placed on the Performance of the Human-AI Team: Where the model has a “human in the loop,” human factors considerations and the human interpretability of the model outputs are addressed with emphasis on the performance of the Human-AI team, rather than just the performance of the model in isolation.

Principle 8. Testing Demonstrates Device Performance During Clinically Relevant Conditions: Statistically sound test plans are developed and executed to generate clinically relevant device performance information independently of the training data set. Considerations include the intended patient population, important subgroups, clinical environment and use by the Human-AI team, measurement inputs, and potential confounding factors.
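
The subgroup considerations in Principle 8 can be sketched as a per-subgroup performance breakdown alongside the overall figure, so that a model that performs well on average but poorly on an important subgroup is caught during testing. Names here are illustrative.

```python
from collections import defaultdict

def subgroup_accuracy(y_true, y_pred, subgroups):
    """Compute accuracy overall and per subgroup, so underperforming
    subgroups surface during clinically relevant testing."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, subgroups):
        total[group] += 1
        correct[group] += int(truth == pred)
    per_group = {g: correct[g] / total[g] for g in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, per_group
```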

Principle 9. Users Are Provided Clear, Essential Information: Users are provided ready access to clear, contextually relevant information that is appropriate for the intended audience (such as health care providers or patients) including: the product’s intended use and indications for use, performance of the model for appropriate subgroups, characteristics of the data used to train and test the model, acceptable inputs, known limitations, user interface interpretation, and clinical workflow integration of the model. Users are also made aware of device modifications and updates from real-world performance monitoring, the basis for decision-making when available, and a means to communicate product concerns to the developer.

Principle 10. Deployed Models Are Monitored for Performance and Re-training Risks Are Managed: Deployed models have the capability to be monitored in “real world” use with a focus on maintained or improved safety and performance. Additionally, when models are periodically or continually trained after deployment, there are appropriate controls in place to manage risks of overfitting, unintended bias, or degradation of the model (for example, dataset drift) that may impact the safety and performance of the model as it is used by the Human-AI team.
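
Dataset drift, the example risk named in Principle 10, is commonly monitored with the Population Stability Index (PSI) between the training-time and deployed distributions of a feature. A minimal sketch; the 0.2 alert threshold is a common industry convention, not an FDA requirement.

```python
import math

def population_stability_index(expected_props, actual_props, eps=1e-6):
    """PSI between a feature's binned distribution at training time
    (`expected_props`) and in deployment (`actual_props`). Values above
    roughly 0.2 are commonly treated as significant drift."""
    psi = 0.0
    for expected, actual in zip(expected_props, actual_props):
        # Clamp to avoid log(0) for empty bins.
        expected = max(expected, eps)
        actual = max(actual, eps)
        psi += (actual - expected) * math.log(actual / expected)
    return psi
```

For example, a binary feature that shifts from a 50/50 split in training to 80/20 in deployment yields a PSI well above 0.2, which would trigger a review of model performance on the drifted population.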

Below is the list of the controls that are part of this compliance category:


  • ARC01 - Accuracy Levels
  • ARC02 - Robustness Assessment
  • ARC03 - Continuous Learning Feedback Loop Assessment
  • ARC04 - Cyber Security Assessment