09.04 - Testing

Testing of AI applications should be robust and designed in a manner that helps reduce AI risks. The testing mechanism shall ensure that:

  • High-risk AI systems shall be tested for the purposes of identifying the most appropriate risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and are in compliance with the requirements set out in Chapter 2 of the EC AIA.

  • Testing procedures shall be suitable to achieve the intended purpose of the AI system and need not go beyond what is necessary to achieve that purpose.

  • The testing of high-risk AI systems shall be performed, as appropriate, at any point throughout the development process and, in any event, prior to placing on the market or putting into service. Testing shall be made against preliminarily defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system (see the illustrative sketch after this list).

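As an illustration of how testing against preliminarily defined metrics and probabilistic thresholds can be expressed in practice, the minimal sketch below shows a pytest-style check in Python that compares model metrics with thresholds fixed before testing begins. The dataset, model, metric names, threshold values, and the `evaluate_model` helper are hypothetical examples for illustration only; they are not prescribed by the EC AIA and are not part of the Seclea Platform.

```python
# Illustrative sketch only: checking a model against preliminarily defined
# metrics and thresholds. All names and values here are hypothetical examples.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

# Metrics and thresholds agreed in advance, appropriate to the intended purpose.
THRESHOLDS = {"accuracy": 0.90, "recall": 0.85}


def evaluate_model(model, X_test, y_test):
    """Return the metric values needed for the threshold comparison."""
    y_pred = model.predict(X_test)
    return {
        "accuracy": accuracy_score(y_test, y_pred),
        "recall": recall_score(y_test, y_pred),
    }


def test_model_meets_thresholds():
    # Stand-in data and model; a real high-risk AI system would use its own
    # representative test set and production training pipeline.
    X, y = make_classification(n_samples=2000, class_sep=2.0, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    results = evaluate_model(model, X_test, y_test)
    for metric, threshold in THRESHOLDS.items():
        assert results[metric] >= threshold, (
            f"{metric} = {results[metric]:.3f} is below threshold {threshold}"
        )
```

A check of this kind can be run at any point during development and again before placing the system on the market, so that the same predefined thresholds are applied consistently across the lifecycle.
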
The Seclea Platform provides a manual check, 'Testing Procedures', which documents the testing procedures adopted by an organisation.