OECD AI Principles

TAE21 - Instructions for Use

The AI application shall be accompanied by instructions for use, in an appropriate digital format or otherwise, that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to users. An illustrative sketch of how the checklist items below could be captured in structured form appears after the list.

  • Detail or upload a document detailing the Instructions for Use that will be provided to the users and how they will be accessed.

  • Detail or upload a document detailing the identity and contact details of the provider and, where applicable, of its authorised representative.

  • Detail or upload a document detailing the expected lifetime of the AI system.

  • Detail or upload a document detailing the necessary maintenance and care measures to ensure the proper functioning of the AI system, including with regard to software updates.

  • Describe or upload a document describing in detail the purpose of the application.

  • Detail or upload a document detailing your assessment of the accuracy of the AI system and whether it reaches an appropriate level given its intended purpose. List any known or foreseeable circumstances that may have an impact on that.

  • Detail or upload a document detailing your assessment of the extent to which the AI system is resilient to errors, faults or inconsistencies that may occur within the system or the environment in which it operates, in particular due to its interaction with natural persons or other systems. List any known or foreseeable circumstances that may have an impact on that.

  • Detail or upload a document detailing your assessment of the measures in place to ensure that the AI system is resilient to attempts by unauthorised third parties to alter its use or performance by exploiting system vulnerabilities, and whether those measures are appropriate to the relevant circumstances and risks. List any known or foreseeable circumstances that may have an impact on that.

  • Detail or upload a document detailing your assessment of the measures in place to address AI-specific vulnerabilities. Where appropriate, this includes measures to prevent and control attacks that attempt to manipulate the training dataset (‘data poisoning’), inputs designed to cause the model to make a mistake (‘adversarial examples’), and model flaws. List any known or foreseeable circumstances that may have an impact on that.

  • Detail or upload a document detailing the sources of risks to health and safety in view of the intended purpose of the AI system. This should include any known or foreseeable circumstances, including under conditions of reasonably foreseeable misuse.

  • Detail or upload a document detailing the sources of risks to fundamental rights in view of the intended purpose of the AI system. This should include any known or foreseeable circumstances, including under conditions of reasonably foreseeable misuse.

  • Detail or upload a document detailing the capabilities and limitations in performance, including the degrees of accuracy for specific persons or groups of persons on which the system is intended to be used and the overall expected level of accuracy in relation to its intended purpose.

  • Detail or upload a document detailing the specifications of the input data for the AI system.

  • Detail or upload a document detailing any changes to the high-risk AI system and its performance which have been pre-determined by the provider at the moment of the initial conformity assessment, including how they affect its performance.
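The checklist above effectively enumerates the items that an instructions-for-use document needs to cover. As a purely illustrative aid, the sketch below shows one hypothetical way to track those items in structured form, together with a simple completeness check. The class name, field names and groupings are assumptions made for this example only; TAE21 does not prescribe any particular schema or format beyond the requirements listed above.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical structured view of the TAE21 checklist. Field names are
# illustrative assumptions, not a schema prescribed by TAE21 or any regulator.
@dataclass
class InstructionsForUse:
    provider_identity: str                                             # identity and contact details of the provider
    authorised_representative: str = ""                                # where applicable
    intended_purpose: str = ""                                         # purpose of the application
    expected_lifetime: str = ""                                        # expected lifetime of the AI system
    maintenance_measures: List[str] = field(default_factory=list)      # care measures, incl. software updates
    accuracy_assessment: str = ""                                      # accuracy level vs. intended purpose
    robustness_assessment: str = ""                                    # resilience to errors, faults, inconsistencies
    cybersecurity_measures: List[str] = field(default_factory=list)    # incl. data poisoning, adversarial examples, model flaws
    health_safety_risks: List[str] = field(default_factory=list)       # incl. reasonably foreseeable misuse
    fundamental_rights_risks: List[str] = field(default_factory=list)  # incl. reasonably foreseeable misuse
    capabilities_and_limitations: str = ""                             # incl. accuracy for specific persons or groups
    input_data_specification: str = ""                                 # specifications of the input data
    pre_determined_changes: List[str] = field(default_factory=list)    # changes fixed at initial conformity assessment

    def missing_items(self) -> List[str]:
        """Return the names of checklist items that are still undocumented."""
        return [name for name, value in self.__dict__.items() if value in ("", [], None)]


# Example: a record that only names the provider so far.
ifu = InstructionsForUse(provider_identity="Example Provider Ltd, contact@example.org")
print(ifu.missing_items())  # every other item still needs to be detailed or uploaded
```

In practice these items are typically delivered as prose documents or uploads; the sketch only shows that each checklist entry maps to a concrete, verifiable piece of information that can be checked for completeness before release.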

