OECD AI Principles
  • OECD AI Principles
  • 1. Inclusive growth, sustainable development and well-being (ISW)
    • ISW01 - AI Governance
    • ISW02 - Responsible AI Policy
    • ISW03 - AI Oversight Process
  • 2. Human-centred values and fairness (HVF)
    • HVF01 - Define Sets
    • HVF02 - Human Oversight Mechanism
    • HVF03 - Human Oversight - Biometric Identification Systems
    • HVF04 - Human Oversight Details
    • HVF05 - Dataset Governance Policies
    • HVF06 - Dataset Design Choices
    • HVF07 - Dataset Source Information
    • HVF08 - Dataset Annotations Information
    • HVF09 - Dataset Labels Information
    • HVF10 - Dataset Cleaning
    • HVF11 - Dataset Enrichment
    • HVF12 - Dataset Aggregation
    • HVF13 - Dataset Description, Assumptions and Purpose
    • HVF14 - Dataset Transformation Rationale
    • HVF15 - Dataset Bias Identification
    • HVF16 - Dataset Bias Mitigation
    • HVF17 - Dataset Bias Analysis Action and Assessment
    • HVF18 - Dataset Gaps and Shortcomings
    • HVF19 - Dataset Bias Monitoring - Ongoing
    • HVF20 - Dataset Bias Special/Protected Categories
  • 3. Transparency and Explainability (TAE)
    • TAE01 - Technical Documentation Generated
    • TAE02 - Additional Technical Documentation
    • TAE03 - Technical Details
    • TAE04 - Development steps and methods
    • TAE05 - Pre-trained or Third party tools/systems
    • TAE06 - Design specification
    • TAE07 - System Architecture
    • TAE08 - Computational Resources
    • TAE09 - Data Requirements
    • TAE10 - Human Oversight Assessment
    • TAE11 - Pre Determined Changes
    • TAE12 - Continuous Compliance
    • TAE13 - Validation and Testing
    • TAE14 - Monitoring, Function and Control
    • TAE15 - Risk Management System
    • TAE16 - Changes
    • TAE17 - Other Technical Standards
    • TAE18 - Ongoing Monitoring System
    • TAE19 - Reports Signed
    • TAE20 - Transparency of the AI System
    • TAE21 - Instructions for Use
  • 4. Accuracy, Robustness and Cybersecurity (ARC)
    • ARC01 - Accuracy Levels
    • ARC02 - Robustness Assessment
    • ARC03 - Continuous Learning Feedback Loop Assessment
    • ARC04 - Cyber Security Assessment
  • 5. Accountability (ACC)
    • ACC01 - Logging Capabilities
    • ACC02 - Logging Traceability
    • ACC03 - Logging - Situations that may cause AI Risk
    • ACC04 - Logging - Biometric systems requirements
    • ACC05 - Details of Off-the-Shelf AI/ML Components
    • ACC06 - Evaluation Process of Off-the-Shelf Components
    • ACC07 - Quality Control Process of Off-the-Shelf Components
    • ACC08 - Internal Audit Reports
    • ACC09 - Risk Management System in Place
    • ACC10 - Risk Management System capabilities and processes
    • ACC11 - Risk Management Measures
    • ACC12 - Testing
    • ACC13 - Residual Risks
    • ACC14 - Full Track of Mitigation Measures
    • ACC15 - Quality Management System in Place
    • ACC16 - Compliance Strategy stated
    • ACC17 - Design Processes
    • ACC18 - Development and QA (Quality Assurance) processes
    • ACC19 - Test and Validation Procedures
    • ACC20 - Technical Standards
    • ACC21 - Data Management Procedures
    • ACC22 - Risk Management System
    • ACC23 - Ongoing Monitoring System
    • ACC24 - Post Market Monitoring System in Place
    • ACC25 - Data Collection Assessment
    • ACC26 - Post Market Monitoring Plan
    • ACC27 - Incident Reporting Procedures
    • ACC28 - Communications with Competent Authorities
    • ACC29 - Record Keeping Procedures
    • ACC30 - Resource Management Procedures
    • ACC31 - Accountability Framework

4. Accuracy, Robustness and Cybersecurity (ARC)



This OECD principle deals with accuracy, robustness and cybersecurity requirements. According to the OECD, AI systems must be robust, secure and safe throughout their lifetimes, and potential risks should be continually assessed and managed.

  • AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose an unreasonable safety risk.

  • To this end, AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable an analysis of the AI system’s outcomes and responses to inquiry appropriate to the context and consistent with state of the art.

  • AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias.

Addressing the safety and security challenges of complex AI systems is critical to fostering trust in AI. In this context, robustness signifies the ability to withstand or overcome adverse conditions, including digital security risks. This principle further states that AI systems should not pose unreasonable safety risks, including to physical security, in conditions of normal or foreseeable use or misuse throughout their lifecycle. Existing laws and regulations in areas such as consumer protection already identify what constitutes an unreasonable safety risk. Governments, in consultation with stakeholders, must determine to what extent these apply to AI systems.

AI actors can employ a risk management approach (see below) to identify and protect against foreseeable misuse, as well as against risks associated with the use of AI systems for purposes other than those for which they were originally designed. Issues of robustness, security and safety of AI are interlinked. For example, a digital security failure can affect the safety of connected products such as automobiles and home appliances if the associated risks are not appropriately managed.

The Recommendation highlights two ways to maintain robust, safe and secure AI systems:

  1. traceability and subsequent analysis and inquiry, and

  2. applying a risk management approach.
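As an illustrative sketch only (the Recommendation does not prescribe any implementation, and all names here are assumptions), a per-lifecycle-phase risk register covering the risk categories named above — privacy, digital security, safety and bias — might look like:

```python
from dataclasses import dataclass, field
from enum import Enum

class Phase(Enum):
    DESIGN = "design"
    DATA = "data collection"
    TRAINING = "training"
    DEPLOYMENT = "deployment"
    MONITORING = "monitoring"

class Category(Enum):
    PRIVACY = "privacy"
    DIGITAL_SECURITY = "digital security"
    SAFETY = "safety"
    BIAS = "bias"

@dataclass
class Risk:
    phase: Phase
    category: Category
    description: str
    mitigation: str
    residual: str  # residual risk level after mitigation, e.g. "low" / "medium" / "high"

@dataclass
class RiskRegister:
    risks: list = field(default_factory=list)

    def add(self, risk: Risk) -> None:
        self.risks.append(risk)

    def open_risks(self, phase: Phase) -> list:
        # Risks in a given lifecycle phase whose residual level is not yet "low",
        # i.e. risks that still need continuous management.
        return [r for r in self.risks if r.phase is phase and r.residual != "low"]

register = RiskRegister()
register.add(Risk(Phase.DATA, Category.BIAS,
                  "Under-representation of a demographic group",
                  "Re-sample and re-weight training data", "medium"))
print(len(register.open_risks(Phase.DATA)))  # 1
```

Revisiting such a register at every phase, rather than once at design time, is one way to make the "continuous basis" requirement operational.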

Like explainability (see 3. Transparency and Explainability), traceability can help analysis and inquiry into the outcomes of an AI system and is a way to promote accountability. Traceability differs from explainability in that the focus is on maintaining records of data characteristics, such as metadata, data sources and data cleaning, but not necessarily the data themselves. In this way, traceability can help to understand outcomes, prevent future mistakes, and improve the trustworthiness of the AI system.
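To make this concrete, a minimal traceability record (a hypothetical sketch; every field name here is an assumption, not an OECD requirement) would log characteristics of the data — source, collection date, cleaning steps, a fingerprint of the schema — rather than the data themselves:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

def fingerprint(columns: list) -> str:
    # Stable, order-independent hash of the dataset's column names,
    # so schema drift can be detected without storing the data.
    return hashlib.sha256(",".join(sorted(columns)).encode()).hexdigest()[:12]

@dataclass
class DatasetTraceRecord:
    # Metadata about the data, not the data themselves.
    name: str
    source: str              # where the data came from
    collected_on: str        # ISO date of collection
    cleaning_steps: list     # e.g. deduplication, normalisation
    schema_fingerprint: str  # hash of the column layout

record = DatasetTraceRecord(
    name="loan-applications-v2",                       # hypothetical dataset
    source="internal CRM export",
    collected_on="2023-01-15",
    cleaning_steps=["dropped duplicate rows", "normalised country codes"],
    schema_fingerprint=fingerprint(["age", "income", "outcome"]),
)
print(json.dumps(asdict(record), indent=2))
```

Records like this, kept for each dataset and processing step across the lifecycle, are what enable the after-the-fact analysis and inquiry the Recommendation calls for.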

Below is the list of controls/checks that form part of Accuracy, Robustness and Cybersecurity (ARC):

  • ARC01 - Accuracy Levels
  • ARC02 - Robustness Assessment
  • ARC03 - Continuous Learning Feedback Loop Assessment
  • ARC04 - Cyber Security Assessment

The source material for this section is https://oecd.ai/en/dashboards/ai-principles/P8