QMS20 - Cyber Security Assessment

Resilience against attempts by unauthorised third parties to alter the use or performance of a high-risk AI system

  1. The technical solutions to ensure the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks.

  2. The technical solutions to address AI-specific vulnerabilities shall include, where appropriate, measures to prevent and control attacks trying to manipulate the training dataset (‘data poisoning’), inputs designed to cause the model to make a mistake (‘adversarial examples’), or model flaws.
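The AI-specific measures named in point 2 can be supported by automated checks in the development pipeline. The following is a minimal sketch of one possible 'data poisoning' control, not a prescribed implementation: it assumes the training dataset is stored as files under a hypothetical `training_data/` directory and that an approved SHA-256 manifest (`manifest.json`, also hypothetical) was recorded when the dataset was signed off, and it blocks the training run if any file has been added, removed, or altered since sign-off.

```python
"""Illustrative training-dataset integrity check (one possible data-poisoning control)."""
import hashlib
import json
from pathlib import Path

DATA_DIR = Path("training_data")       # hypothetical dataset location
MANIFEST_PATH = Path("manifest.json")  # hypothetical approved hash manifest

def sha256_of(path: Path) -> str:
    """Hash one file in chunks so large artefacts do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(data_dir: Path, manifest_path: Path) -> list[str]:
    """Compare the dataset on disk against the approved manifest; return findings."""
    manifest = json.loads(manifest_path.read_text())
    current = {str(p.relative_to(data_dir)): sha256_of(p)
               for p in sorted(data_dir.rglob("*")) if p.is_file()}
    findings = []
    for rel_path, expected in manifest.items():
        actual = current.pop(rel_path, None)
        if actual is None:
            findings.append(f"missing file: {rel_path}")
        elif actual != expected:
            findings.append(f"modified file: {rel_path}")
    findings.extend(f"unexpected file: {rel_path}" for rel_path in current)
    return findings

if __name__ == "__main__":
    issues = verify_dataset(DATA_DIR, MANIFEST_PATH)
    if issues:
        raise SystemExit("Dataset integrity check failed:\n" + "\n".join(issues))
    print("Dataset matches the approved manifest.")
```

For the 'adversarial examples' measure, a comparable automated check is a robustness probe that measures how far model accuracy falls under small, worst-case input perturbations. The sketch below is a self-contained toy using a logistic-regression model and an FGSM-style perturbation; the data, model, and perturbation budget `eps` are all illustrative assumptions, and a real assessment would run an equivalent probe against the actual high-risk AI system.

```python
"""Toy robustness probe: accuracy under FGSM-style perturbations (illustrative only)."""
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: two features, label set by their sum.
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(float)

# Fit a small logistic-regression model by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

def accuracy(inputs: np.ndarray, labels: np.ndarray) -> float:
    preds = (1.0 / (1.0 + np.exp(-(inputs @ w + b)))) > 0.5
    return float((preds == labels).mean())

# FGSM-style perturbation: step each input in the sign of the loss gradient.
eps = 0.3                                   # illustrative perturbation budget
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
grad_x = (p - y)[:, None] * w[None, :]      # d(logistic loss)/d(input)
X_adv = X + eps * np.sign(grad_x)

print(f"clean accuracy:       {accuracy(X, y):.2f}")
print(f"adversarial accuracy: {accuracy(X_adv, y):.2f}")
```

The gap between the two accuracy figures gives a simple, repeatable indicator that can be recorded as evidence for this assessment; how much degradation is acceptable remains a judgement to be made against the relevant circumstances and risks, as point 1 requires.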
