4. Accuracy, Robustness and Cybersecurity (ARC)
This OECD principle deals with accuracy, robustness and cybersecurity requirements. According to the OECD, AI systems must function in a robust, secure and safe manner throughout their entire lifecycles, and potential risks should be continually assessed and managed.
AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose an unreasonable safety risk.
To this end, AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable an analysis of the AI system’s outcomes and responses to inquiry appropriate to the context and consistent with state of the art.
AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias.
Addressing the safety and security challenges of complex AI systems is critical to fostering trust in AI. In this context, robustness signifies the ability to withstand or overcome adverse conditions, including digital security risks. This principle further states that AI systems should not pose unreasonable safety risks, including risks to physical security, in conditions of normal or foreseeable use or misuse throughout their lifecycle. Existing laws and regulations in areas such as consumer protection already identify what constitutes unreasonable safety risks. Governments, in consultation with stakeholders, must determine to what extent they apply to AI systems.
AI actors can employ a risk management approach (see below) to identify and protect against foreseeable misuse, as well as against risks associated with the use of AI systems for purposes other than those for which they were originally designed. Issues of robustness, security and safety of AI are interlinked. For example, digital security risks can affect the safety of connected products such as automobiles and home appliances if those risks are not appropriately managed.
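To make the systematic risk management approach more concrete, the sketch below models a minimal risk register covering the four risk categories the Recommendation names (privacy, digital security, safety, bias) across lifecycle phases. It is an illustrative assumption only: the class names, the 1-5 likelihood/impact scales, and the acceptance threshold are hypothetical choices, not anything prescribed by the OECD.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

class LifecyclePhase(Enum):
    DESIGN = "design"
    DATA_COLLECTION = "data collection"
    TRAINING = "training"
    DEPLOYMENT = "deployment"
    MONITORING = "monitoring"

class RiskCategory(Enum):
    # The four risk types named in the OECD Recommendation.
    PRIVACY = "privacy"
    DIGITAL_SECURITY = "digital security"
    SAFETY = "safety"
    BIAS = "bias"

@dataclass
class Risk:
    phase: LifecyclePhase
    category: RiskCategory
    description: str
    likelihood: int   # hypothetical scale: 1 (rare) .. 5 (almost certain)
    impact: int       # hypothetical scale: 1 (negligible) .. 5 (severe)
    mitigation: str

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def review(register: List[Risk], threshold: int = 12) -> List[Risk]:
    """Return risks above the (assumed) acceptance threshold, worst first."""
    return sorted(
        (r for r in register if r.score >= threshold),
        key=lambda r: r.score,
        reverse=True,
    )

# Usage: revisit the register at each lifecycle phase, on a continuous basis.
register = [
    Risk(LifecyclePhase.DEPLOYMENT, RiskCategory.DIGITAL_SECURITY,
         "model API exposed without rate limiting", 4, 4,
         "add authentication and throttling"),
    Risk(LifecyclePhase.DATA_COLLECTION, RiskCategory.BIAS,
         "training data under-represents one region", 3, 3,
         "re-sample or collect additional data"),
]
for risk in review(register):
    print(risk.phase.value, risk.category.value, risk.score)
```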
The Recommendation highlights two ways to maintain robust, safe and secure AI systems:
1. traceability and subsequent analysis and inquiry, and
2. applying a risk management approach.
Like explainability (see Section 3, Transparency and Explainability), traceability can help analysis and inquiry into the outcomes of an AI system and is a way to promote accountability. Traceability differs from explainability in that the focus is on maintaining records of data characteristics, such as metadata, data sources and data cleaning, but not necessarily the data themselves. In this way, traceability can help to understand outcomes, prevent future mistakes, and improve the trustworthiness of the AI system.
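As one possible shape for such a traceability record, the Python sketch below keeps metadata about a dataset's source and the processing decisions applied to it, without storing the data themselves. The class and field names (DatasetTraceRecord, ProcessingStep, and so on) are illustrative assumptions, not part of the OECD Recommendation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class ProcessingStep:
    """One recorded operation applied to the data (e.g. a cleaning step)."""
    name: str
    description: str
    performed_at: datetime

@dataclass
class DatasetTraceRecord:
    """Traceability record: metadata about a dataset, not the data itself."""
    dataset_id: str
    source: str                      # where the data came from
    schema_summary: str              # e.g. column names and types
    steps: List[ProcessingStep] = field(default_factory=list)

    def log_step(self, name: str, description: str) -> None:
        self.steps.append(
            ProcessingStep(name, description, datetime.now(timezone.utc))
        )

# Usage: record provenance and cleaning decisions for later analysis/inquiry.
record = DatasetTraceRecord(
    dataset_id="claims-2024-q1",                      # hypothetical dataset
    source="internal warehouse export",
    schema_summary="claim_id:int, amount:float, approved:bool",
)
record.log_step("deduplicate", "dropped duplicate claim_id rows")
record.log_step("impute", "filled missing amounts with median by region")
```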
Below is the list of controls/checks that form part of the Accuracy, Robustness and Cybersecurity (ARC) principle:
The source material for this section is https://oecd.ai/en/dashboards/ai-principles/P8