QMS20 - Cyber Security Assessment
Resilience against attempts by unauthorised third parties to alter the use or performance of a high-risk AI system
The technical solutions to ensure the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks.
The technical solutions to address AI-specific vulnerabilities shall include, where appropriate, measures to prevent and control attacks that attempt to manipulate the training dataset (‘data poisoning’), inputs designed to cause the model to make a mistake (‘adversarial examples’), or the exploitation of model flaws.
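As one illustration of how such a measure might be assessed, the sketch below probes a model's resilience to adversarial examples using a single-step Fast Gradient Sign Method (FGSM) perturbation. This is a minimal, hedged example only: the PyTorch framework, the stand-in model, the dummy data, and the epsilon value are all assumptions for illustration and are not prescribed by this requirement.

```python
# Minimal sketch of an adversarial-example robustness check (FGSM).
# The model, data, and epsilon below are illustrative placeholders only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.05) -> torch.Tensor:
    """Return an adversarially perturbed copy of x (single-step FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()


def adversarial_accuracy(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                         epsilon: float = 0.05) -> float:
    """Fraction of inputs still classified correctly after perturbation."""
    model.eval()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    with torch.no_grad():
        preds = model(x_adv).argmax(dim=1)
    return (preds == y).float().mean().item()


if __name__ == "__main__":
    # Stand-in model and data; replace with the system under assessment.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(16, 1, 28, 28)       # dummy inputs
    y = torch.randint(0, 10, (16,))     # dummy labels
    print(f"Adversarial accuracy @ eps=0.05: {adversarial_accuracy(model, x, y):.2%}")
```

A check of this kind could feed the assessment with a measurable indicator (accuracy under perturbation); the appropriate attack methods, perturbation budgets, and acceptance thresholds remain to be determined per system in line with the relevant circumstances and the risks.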