14.02 - Human Oversight details

This document details the requirements for the human oversight monitoring system (per Article 14 of the EU AI Act):

  1. Signs of anomalies, dysfunctions and unexpected performance can be detected and notified to the authorised (human oversight) personnel, who must fully understand the relevant capabilities and limitations of the system

  2. Remain aware of the possible tendency to automatically rely or over-rely on the output produced by a high-risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons

  3. The output of the high-risk AI system can be correctly interpreted, taking into account in particular the characteristics of the system and the interpretation tools and methods available

  4. Ability to disregard, override or reverse the output of the high-risk AI system

  5. Ability to intervene in the operation of the high-risk AI system or interrupt the system through a “stop” button or a similar procedure.
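To make these requirements concrete for an engineering team, the monitoring, override, and stop requirements (items 1, 4 and 5 above) could be sketched as a thin oversight wrapper around a model. This is purely an illustrative sketch under assumed names: `OversightWrapper`, `anomaly_threshold`, and the callable `model` are hypothetical, not taken from the AI Act or any standard library.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

# Illustrative only: names and threshold logic are assumptions,
# not prescribed by the AI Act.
@dataclass
class OversightWrapper:
    """Wraps a high-risk AI model so authorised personnel can monitor,
    override, and stop it (items 1, 4 and 5 above)."""
    model: Callable[[str], float]
    anomaly_threshold: float = 0.99          # scores above this are flagged
    notifications: list = field(default_factory=list)
    stopped: bool = False                    # item 5: "stop" state

    def stop(self) -> None:
        # Item 5: interrupt the system via a "stop" button or similar.
        self.stopped = True

    def predict(self, x: str, override: Optional[float] = None) -> Optional[float]:
        if self.stopped:
            return None                      # system interrupted, no output
        if override is not None:
            return override                  # item 4: human overrides the output
        score = self.model(x)
        if score > self.anomaly_threshold:
            # Item 1: detect unexpected performance and notify personnel.
            self.notifications.append(f"anomalous score {score} for input {x!r}")
        return score
```

A usage pattern might be: an overseer reviews `notifications`, passes `override=` to reverse an output they disagree with, and calls `stop()` to interrupt the system entirely. The anomaly rule here is deliberately trivial; a real deployment would define anomaly criteria from the system's documented capabilities and limitations.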
