Seclea User Documentation
ISO AI risk management


Last updated 2 years ago

ISO/IEC 23894 is an international standard providing guidance on risk management for artificial intelligence (AI) systems. It was published in 2023 as "ISO/IEC 23894:2023 — Information technology — Artificial intelligence — Guidance on risk management."

By providing a framework for risk management, ISO 23894 aims to assist organisations in managing the risks associated with AI systems. The standard offers guidance on the following critical areas:

  • Risk assessment: This entails the identification and evaluation of potential AI system risks, such as bias, errors, and security vulnerabilities.

  • Risk treatment: Once risks have been identified, organisations must determine the most effective course of action to manage them. This could involve mitigating the risks, transferring them to another party, or accepting them.

  • Risk communication: Organisations must inform relevant stakeholders, such as consumers, customers, and regulators, of the risks associated with AI systems.

  • Risk monitoring and review: To ensure that the risk management framework remains effective, organisations must monitor the performance of AI systems and conduct regular reviews of the risk management framework.
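The four activities above can be sketched as a simple risk-register data model. This is an illustrative example only, not part of the Seclea API; every name in it (the `Risk` class, the `Treatment` options, the `score` rating) is a hypothetical assumption for the sketch.

```python
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    # Risk-treatment options named in the text: mitigate, transfer, or accept
    MITIGATE = "mitigate"
    TRANSFER = "transfer"
    ACCEPT = "accept"

@dataclass
class Risk:
    # One entry in a hypothetical AI risk register
    identifier: str
    description: str
    likelihood: int  # e.g. 1 (rare) .. 5 (almost certain)
    impact: int      # e.g. 1 (negligible) .. 5 (severe)
    treatment: Treatment = Treatment.MITIGATE
    stakeholders_notified: bool = False  # tracks the risk-communication step

    @property
    def score(self) -> int:
        # Simple likelihood x impact rating, used to prioritise reviews
        return self.likelihood * self.impact

# Risk assessment: identify and evaluate a potential risk
bias_risk = Risk("R-001", "Training data bias against minority groups", 4, 5)

# Risk monitoring and review: re-evaluate the rating periodically
print(bias_risk.score)  # 20
```

In practice a register like this would be re-scored on a schedule, with high-scoring entries escalated for treatment and stakeholder communication.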

Principal advantages of implementing ISO 23894 include:

  • Enhanced risk management: The standard provides a comprehensive framework for managing the risks associated with AI systems, allowing organisations to identify and mitigate potential risks more effectively.

  • Increased transparency: By communicating to stakeholders the risks associated with AI systems, organisations can increase transparency and establish trust with customers, regulators, and other stakeholders.

  • Compliance with regulatory requirements: Implementing ISO 23894 can assist organisations in meeting regulatory requirements for AI risk management.

  • Better decision-making: With a comprehensive understanding of the associated risks, organisations can make more informed decisions about the use and deployment of AI systems.

Overall, ISO 23894 provides organisations with a valuable framework for managing the risks associated with AI systems, thereby facilitating the safe and efficient application of these technologies.

The Seclea Risk Management template for ISO AI Risk Management (ISO 23894) is structured around the following core categories and sub-categories, along with relevant checks and controls where appropriate.

More details on ISO risk management can be found here.
  • General Risk (GER)
  • AI Accountability (ACC)
  • AI Expertise (AIE)
  • Training and Test Dataset (TTD)
  • Environmental Impact (ENI)
  • AI Fairness (AIF)
  • AI Maintainability (AIM)
  • AI Privacy (AIP)
  • AI Robustness (AIR)
  • AI Safety (AIS)
  • AI Security (ASE)
  • AI Transparency and Explainability (ATE)