EC Artificial Intelligence Act

Article 13 - Transparency and provision of information to user (ART13)

High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately (Article 13(1)).

High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to users (Article 13(2)). The information shall include (see the sketch after this list for one structured representation):

  1. the identity and the contact details of the provider and, where applicable, of its authorised representative;

  2. the characteristics, capabilities and limitations of performance of the high-risk AI system, including:

    1. its intended purpose;

    2. the level of accuracy, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity;

    3. any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights;

    4. its performance as regards the persons or groups of persons on which the system is intended to be used;

    5. when appropriate, specifications for the input data, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purpose of the AI system.

  3. the changes to the high-risk AI system and its performance which have been pre-determined by the provider at the moment of the initial conformity assessment, if any;

  4. the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the users;

  5. the expected lifetime of the high-risk AI system and any necessary maintenance and care measures to ensure the proper functioning of that AI system, including as regards software updates.
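
Taken together, these five items form a checklist that a provider could track as structured metadata alongside the system's technical documentation. The following Python sketch shows one hypothetical way to represent the required fields in machine-readable form; the class and field names are illustrative assumptions, not terms defined by the Act.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical record mirroring the contents of the Article 13
# "instructions for use". All names are illustrative assumptions,
# not identifiers mandated by the Act.

@dataclass
class ProviderIdentity:
    name: str
    contact_details: str
    authorised_representative: Optional[str] = None  # where applicable

@dataclass
class SystemCharacteristics:
    intended_purpose: str
    accuracy_robustness_cybersecurity: str              # tested levels per Article 15
    foreseeable_risk_circumstances: List[str] = field(
        default_factory=list)                           # health, safety, fundamental rights
    performance_on_intended_groups: str = ""
    input_data_specifications: Optional[str] = None     # when appropriate

@dataclass
class InstructionsForUse:
    provider: ProviderIdentity                          # item 1
    characteristics: SystemCharacteristics              # item 2
    predetermined_changes: List[str] = field(default_factory=list)    # item 3
    human_oversight_measures: List[str] = field(default_factory=list) # item 4, Article 14
    expected_lifetime: str = ""                         # item 5
    maintenance_measures: List[str] = field(default_factory=list)     # incl. software updates
```

A record like this is one way to keep the instructions-for-use content reviewable and versionable alongside the technical documentation required under Article 11, though the Act itself prescribes no particular format beyond "an appropriate digital format or otherwise".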

Below is the list of controls/checks that form part of Article 13.

  • 13.01 - Transparency of the AI System
  • 13.02 - Instructions for Use