MRE 2.5 - Regular Evaluation of AI Systems

The AI system is evaluated regularly for safety. The deployed product is demonstrated to be safe and to fail safely and gracefully if it is made to operate beyond its knowledge limits. Safety metrics cover system reliability and robustness, real-time monitoring, and response times for AI system failures.

The NIST AI RMF (in its Playbook companion) has not defined MRE 2.5; however, the Seclea Platform defines the checks relevant to this control, requiring:

  • A detailed description of the organisation's regular evaluation of its AI systems. If the organisation uses the Seclea Platform, additional checks can be added to track system evaluations in risk management (see the sketch after this list for one way such a check might be recorded).
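
As an illustration of how a regular evaluation might be tracked, the sketch below records one scheduled evaluation run and compares it against agreed safety thresholds before emitting a record for risk management. All names (`SafetyEvaluation`, `evaluate_model`, `record_check`) and the metric values are hypothetical assumptions for illustration only and do not reflect the Seclea Platform API.

```python
# Minimal sketch of a recurring safety evaluation check.
# All identifiers and thresholds are illustrative assumptions,
# not part of any specific platform API.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class SafetyEvaluation:
    """One scheduled evaluation run for a deployed AI system."""
    system_id: str
    accuracy: float                  # reliability proxy on a held-out test set
    robustness_score: float          # e.g. accuracy under perturbed inputs
    mean_failure_response_s: float   # time to detect and contain a failure
    timestamp: str


def evaluate_model(system_id: str) -> SafetyEvaluation:
    # Placeholder metrics; in practice these would come from test suites,
    # monitoring dashboards, and incident logs.
    return SafetyEvaluation(
        system_id=system_id,
        accuracy=0.94,
        robustness_score=0.88,
        mean_failure_response_s=42.0,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )


def record_check(evaluation: SafetyEvaluation, minimums: dict) -> dict:
    """Compare a run against agreed minimum safety thresholds and build a
    record that can be stored or uploaded to a risk-management tracker."""
    failures = {
        name: {"observed": getattr(evaluation, name), "minimum": minimum}
        for name, minimum in minimums.items()
        if getattr(evaluation, name) < minimum
    }
    return {
        "evaluation": asdict(evaluation),
        "passed": not failures,
        "failed_thresholds": failures,
    }


if __name__ == "__main__":
    run = evaluate_model("example-system-v1")
    record = record_check(run, {"accuracy": 0.90, "robustness_score": 0.85})
    print(json.dumps(record, indent=2))
```

A scheduler (for example, a weekly CI job) could run this check and attach the resulting record to the corresponding risk item, giving an auditable trail of regular evaluations.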
