OECD AI Principles

3. Transparency and Explainability (TAE)


Last updated 2 years ago

According to the OECD, this principle is about transparency and responsible disclosure around AI systems to ensure that people understand when they are engaging with them and can challenge outcomes.

AI actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context and consistent with the state of the art:

  • to foster a general understanding of AI systems,

  • to make stakeholders aware of their interactions with AI systems, including in the workplace,

  • to enable those affected by an AI system to understand the outcome, and,

  • to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision.

The term transparency carries multiple meanings. In the context of this principle, the focus is first on disclosing when AI is being used (in a prediction, recommendation or decision, or when the user is interacting directly with an AI-powered agent, such as a chatbot). Disclosure should be proportionate to the importance of the interaction. The growing ubiquity of AI applications may influence the desirability, effectiveness or feasibility of disclosure in some cases.

Transparency further means enabling people to understand how an AI system is developed, trained, operated and deployed in the relevant application domain, so that consumers, for example, can make more informed choices. Transparency also refers to the ability to provide meaningful information and clarity about what information is provided and why. Thus, transparency does not generally extend to the disclosure of source code or other proprietary code, or to the sharing of proprietary datasets, all of which may be too technically complex to be feasible or useful for understanding an outcome. Source code and datasets may also be subject to intellectual property protections, including trade secrets.

An additional aspect of transparency concerns facilitating public, multi-stakeholder discourse and the establishment of dedicated entities, as necessary, to foster general awareness and understanding of AI systems and increase acceptance and trust.

Explainability means enabling people affected by the outcome of an AI system to understand how it was arrived at. This entails providing easy-to-understand information to people affected by an AI system’s outcome, so that those adversely affected can challenge it, notably – to the extent practicable – on the basis of the factors and logic that led to the outcome. Nevertheless, explainability can be achieved in different ways depending on the context (such as the significance of the outcomes). For example, for some types of AI systems, requiring explainability may negatively affect the accuracy and performance of the system (as it may require reducing the solution variables to a set small enough for humans to understand, which could be suboptimal for complex, high-dimensional problems), or its privacy and security. It may also increase complexity and costs, potentially putting AI actors that are SMEs at a disproportionate disadvantage.

Therefore, when AI actors provide an explanation of an outcome, they may consider providing – in clear and simple terms, and as appropriate to the context – the main factors in a decision, the determinant factors, the data, logic or algorithm behind the specific outcome, or explaining why similar-looking circumstances generated a different outcome. This should be done in a way that allows individuals to understand and challenge the outcome while respecting personal data protection obligations, if relevant.

The requirements of this OECD principle concern the technical documentation that must be recorded for transparency purposes, and the explainable-AI measures needed to ensure that an AI application's decisions can be explained.

Below is the list of controls/checks that form part of the Transparency and Explainability (TAE) principle:


  • TAE01 - Technical Documentation Generated
  • TAE02 - Additional Technical Documentation
  • TAE03 - Technical Details
  • TAE04 - Development Steps and Methods
  • TAE05 - Pre-trained or Third-party Tools/Systems
  • TAE06 - Design Specification
  • TAE07 - System Architecture
  • TAE08 - Computational Resources
  • TAE09 - Data Requirements
  • TAE10 - Human Oversight Assessment
  • TAE11 - Pre-Determined Changes
  • TAE12 - Continuous Compliance
  • TAE13 - Validation and Testing
  • TAE14 - Monitoring, Function and Control
  • TAE15 - Risk Management System
  • TAE16 - Changes
  • TAE17 - Other Technical Standards
  • TAE18 - Ongoing Monitoring System
  • TAE19 - Reports Signed
  • TAE20 - Transparency of the AI System
  • TAE21 - Instructions for Use
The source material of this section is https://oecd.ai/en/dashboards/ai-principles/P7.