
FDA AI-based SaMD Risk Management



In the United States, the FDA (Food and Drug Administration) regulates medical devices, including Software as a Medical Device (SaMD), defined as software intended for one or more medical purposes that performs those purposes without being part of a hardware medical device.

For AI-based SaMD, the FDA has established risk management guidelines intended to ensure the safety and efficacy of these products and to provide a framework for product designers and testers.

The FDA's risk management approach involves identifying and evaluating potential hazards associated with AI-based SaMD. This includes evaluating the algorithm's accuracy, its performance under various conditions, and the potential impact of output errors or inaccuracies. Additionally, developers must implement measures to mitigate any identified hazards.
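
As a hedged illustration of this kind of hazard evaluation, the sketch below uses pandas and scikit-learn to stratify test-set accuracy by an operating condition and flag strata whose performance falls below a threshold. The model, column names, and threshold are hypothetical examples, and such a check is only one input to a full device hazard analysis, not an FDA-prescribed procedure.

```python
# Illustrative sketch: evaluate a classifier's accuracy across operating
# conditions as one input to a device hazard analysis. The dataset,
# 'condition_col', and 'min_accuracy' threshold are hypothetical.
import pandas as pd
from sklearn.metrics import accuracy_score

def accuracy_by_condition(model, X: pd.DataFrame, y: pd.Series,
                          condition_col: str, min_accuracy: float = 0.95):
    """Report accuracy per operating condition and flag potential hazards."""
    findings = []
    for condition, idx in X.groupby(condition_col).groups.items():
        acc = accuracy_score(y.loc[idx], model.predict(X.loc[idx]))
        findings.append({
            "condition": condition,
            "accuracy": acc,
            # Strata below the threshold are flagged for mitigation review.
            "potential_hazard": acc < min_accuracy,
        })
    return pd.DataFrame(findings)

# Example: stratify test data by a hypothetical 'image_quality' column.
# report = accuracy_by_condition(model, X_test, y_test, "image_quality")
```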

The FDA recommends a process of continuous monitoring to ensure that the SaMD continues to function as intended and that any new hazards are promptly identified and addressed. This includes continuous testing and evaluation of the SaMD and monitoring user and medical professional feedback.
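
As an illustration of such a monitoring loop (an assumed workflow, not an FDA-specified one), the sketch below compares the accuracy of a window of labelled production cases against a hypothetical validated baseline and escalates when performance degrades beyond a tolerance.

```python
# Illustrative post-deployment monitoring check. The baseline and
# tolerance figures are hypothetical placeholders.
from sklearn.metrics import accuracy_score

BASELINE_ACCURACY = 0.96   # hypothetical accuracy from premarket validation
TOLERANCE = 0.02           # hypothetical allowed degradation before review

def monitoring_check(y_true, y_pred) -> dict:
    """Evaluate one window of labelled production cases against the baseline."""
    acc = accuracy_score(y_true, y_pred)
    degraded = acc < BASELINE_ACCURACY - TOLERANCE
    return {
        "window_accuracy": acc,
        "degraded": degraded,
        # A degraded window should prompt hazard re-assessment and, where
        # needed, mitigation before normal operation continues.
        "action": "escalate for review" if degraded else "continue monitoring",
    }
```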

In conclusion, the FDA's AI-based SaMD risk management guidelines emphasise identifying and mitigating potential risks and implementing a continuous monitoring process to ensure ongoing safety and efficacy. By adhering to these guidelines, developers can ensure that their products comply with FDA regulations and provide patients with safe and effective medical solutions.

The FDA guidance on what to submit with a 510(k), Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices, includes the following:

  • Level of Concern Document
  • Software Description
  • AI Expertise (AIE)
  • Device Hazard Analysis
  • Software Requirement Specifications (SRS)
  • Training and Test Dataset (TTD)
  • Software Design Specifications (SDS)
  • AI Fairness (AIF)
  • AI Transparency and Explainability (ATE)
  • Traceability Analysis
  • Verification and Validation (V&V) Documentation
  • Revision Level History

More details on FDA AI-based SaMD risk management can be found here.