AI Transparency and Explainability (ATE)
Both the organizational practices of a company that operates AI systems and the characteristics of those systems themselves are relevant to transparency. Organizations may or may not be transparent about how they apply such systems, how they use collected data (such as consumer and user data, public data, and other collected data sets), which measures they put in place to manage AI systems and to understand and control their risks, and other relevant information.

For an AI system, transparency means providing stakeholders with appropriate information about the system, such as its capabilities and limitations, so that they can evaluate its development, operation, and use against their own goals. Explainability, in turn, refers to the capacity to rationalize, and to help stakeholders understand, how a particular system's outcome was generated.
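As an illustration of the explainability idea above, the minimal sketch below uses permutation feature importance, one common model-agnostic technique, to estimate how much a trained model relies on each input feature. The scikit-learn calls and the synthetic dataset are illustrative assumptions and do not correspond to any specific ATE control.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real training set (assumption for illustration).
X, y = make_classification(
    n_samples=500, n_features=6, n_informative=3, random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in score;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

for i, (mean, std) in enumerate(
    zip(result.importances_mean, result.importances_std)
):
    print(f"feature_{i}: importance {mean:.3f} +/- {std:.3f}")
```

Permutation importance treats the model as a black box, which makes it a convenient baseline explanation before reaching for model-specific explainers.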
The controls in this risk category are listed below:
ATE 01 - Evaluation of AI Models
ATE 02 - Measurement Approaches for Trustworthiness
ATE 03 - Transparency of the AI System