Article 15 - Accuracy, Robustness and Cybersecurity (ART15)

High-risk AI systems shall achieve an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle. The levels of accuracy and the relevant accuracy metrics (15.01) shall be declared in the accompanying instructions of use (Article 13).
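As an illustrative sketch only (the Act does not prescribe particular metrics or code), the accuracy metrics declared under 15.01 might be computed from held-out evaluation data along these lines; the labels and predictions here are hypothetical placeholders:

```python
def accuracy_metrics(y_true, y_pred):
    """Return overall accuracy plus per-class precision and recall,
    e.g. for declaration in the instructions of use."""
    assert len(y_true) == len(y_pred) and y_true, "need matched, non-empty lists"
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    per_class = {}
    for c in sorted(set(y_true) | set(y_pred)):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        per_class[c] = {
            "precision": tp / (tp + fp) if tp + fp else 0.0,
            "recall": tp / (tp + fn) if tp + fn else 0.0,
        }
    return {"accuracy": correct / len(y_true), "per_class": per_class}
```

The choice of metrics (accuracy, precision, recall, or others) depends on the system's intended purpose; the point is that whatever is declared should be reproducibly measured.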

High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way as to ensure that possibly biased outputs, arising from outputs being used as an input for future operations (‘feedback loops’), are duly addressed with appropriate mitigation measures (15.02).
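One common mitigation measure for feedback loops (an assumption for illustration, not a requirement named in the Act) is to cap the share of model-generated records admitted into future training data, so that self-reinforcing outputs cannot dominate retraining:

```python
def filter_feedback(records, max_self_fraction=0.1):
    """records: list of (features, label, source) tuples, where source is
    'human' or 'model'. Keeps all human-labelled records and at most
    enough model-labelled ones that their share stays <= max_self_fraction."""
    human = [r for r in records if r[2] == "human"]
    model = [r for r in records if r[2] == "model"]
    # If m model records are kept alongside h human records, requiring
    # m / (h + m) <= f gives m <= f * h / (1 - f).
    allowed = int(max_self_fraction * len(human) / (1 - max_self_fraction))
    return human + model[:allowed]
```

The threshold and the record schema here are hypothetical; other mitigations (drift monitoring, periodic revalidation against human-labelled data) serve the same control.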

High-risk AI systems shall be resilient against attempts by unauthorised third parties to alter their use or performance by exploiting system vulnerabilities (15.03).

The technical solutions aimed at ensuring the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks (15.04).
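As one hedged example of such a technical solution (an assumption for illustration; the Act leaves the concrete measures to the provider), a deployment might verify the integrity of a model artifact before loading it, as a resilience measure against unauthorised alteration:

```python
import hashlib

def verify_artifact(path, expected_sha256):
    """Return True if the file at `path` matches the expected SHA-256
    digest recorded at release time, False otherwise."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large model files do not need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

Checksum verification addresses only tampering with stored artifacts; measures against data poisoning, adversarial inputs and model evasion would be layered on top, proportionate to the risks.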

Below is the list of controls/checks that form part of Article 15.
