EC Artificial Intelligence Act

09.04 - Testing

Testing of AI applications should be robust and designed in a manner that helps reduce AI risks. The testing mechanism shall ensure that:

  • High-risk AI systems shall be tested for the purposes of identifying the most appropriate risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in compliance with the requirements set out in Chapter 2 of the EC AIA.

  • Testing procedures shall be suitable to achieve the intended purpose of the AI system and need not go beyond what is necessary to achieve that purpose.

  • The testing of high-risk AI systems shall be performed, as appropriate, at any point in time throughout the development process and, in any event, prior to placing on the market or putting into service. Testing shall be made against preliminarily defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system (see the sketch below).
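
As an illustration only, the following minimal Python sketch shows one way a pre-deployment check against preliminarily defined metrics and probabilistic thresholds might be implemented. The metric selection and threshold values here are hypothetical assumptions for the example; the EC AIA does not prescribe specific metrics, values, or tooling, and each organisation must define thresholds appropriate to its system's intended purpose.

```python
# A minimal sketch of testing a model against preliminarily defined
# metrics and probabilistic thresholds. The metrics and threshold
# values below are illustrative assumptions, not values prescribed
# by the EC AIA.
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score

# Preliminarily defined metrics and thresholds (hypothetical values).
THRESHOLDS = {
    "accuracy": 0.90,   # minimum acceptable overall accuracy
    "recall": 0.85,     # minimum recall on the positive class
    "roc_auc": 0.92,    # minimum probabilistic discrimination score
}

def run_compliance_tests(y_true, y_pred, y_score):
    """Evaluate model outputs against predefined thresholds.

    Returns the computed metric values and a dict of any metrics
    that fell below their threshold.
    """
    results = {
        "accuracy": accuracy_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "roc_auc": roc_auc_score(y_true, y_score),
    }
    failures = {
        name: (value, THRESHOLDS[name])
        for name, value in results.items()
        if value < THRESHOLDS[name]
    }
    return results, failures

# Example usage with hypothetical validation-set outputs:
# results, failures = run_compliance_tests(
#     y_val, model.predict(X_val), model.predict_proba(X_val)[:, 1]
# )
# if failures:
#     raise RuntimeError(f"Thresholds not met: {failures}")
```

A check of this kind can be run at any point during development and, crucially, as a gate before placing the system on the market or putting it into service, with the results retained as part of the testing documentation.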

The Seclea Platform provides a manual check, 'Testing Procedures', that documents the testing procedures adopted by an organisation.
