GOVERN 1.4 - Risk Management Monitoring

The NIST AI RMF (in the Playbook companion) states:

GOVERN 1.4

Ongoing monitoring and periodic review of the risk management process and its outcomes are planned, with organizational roles and responsibilities clearly defined.

About

AI systems are dynamic and may perform in unexpected ways once deployed. Continuous monitoring is a risk management process for tracking system performance and unexpected issues, in real time or at a defined frequency, across the AI system lifecycle.
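Monitoring at a defined frequency can be as simple as comparing a rolling performance measure against a baseline. The sketch below is illustrative only; the function names, thresholds, and window size are assumptions, not anything prescribed by the NIST AI RMF.

```python
from collections import deque


def make_monitor(baseline_accuracy, tolerance, window_size=100):
    """Return a callable that records outcomes and flags performance drift.

    All names and thresholds here are illustrative assumptions.
    """
    window = deque(maxlen=window_size)

    def record(correct):
        """Record one outcome; return True once rolling accuracy drifts
        below (baseline_accuracy - tolerance)."""
        window.append(1 if correct else 0)
        if len(window) < window_size:
            return False  # not enough observations yet
        accuracy = sum(window) / len(window)
        return accuracy < baseline_accuracy - tolerance

    return record


# Example: 95% baseline, alert when rolling accuracy drops below 90%
monitor = make_monitor(baseline_accuracy=0.95, tolerance=0.05, window_size=10)
```

In practice the flagged condition would feed an alerting or incident response workflow rather than a return value, but the pattern is the same: a baseline, a tolerance, and a review trigger with a defined owner.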

Incident response and “appeal and override” are commonly used processes in information technology management that are often overlooked for AI systems. These processes enable real-time flagging of potential incidents and human adjudication of system outcomes.
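An appeal-and-override process can be sketched as flagging low-confidence or contested outcomes for a human reviewer who may overturn them. This is a minimal illustration; the `Decision` fields and the confidence threshold are assumptions, not a standard schema.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    """One AI system outcome about a subject (illustrative fields)."""
    subject_id: str
    outcome: str
    confidence: float
    overridden: bool = False


def flag_for_review(decisions, confidence_threshold=0.8):
    """Return decisions whose confidence falls below the threshold."""
    return [d for d in decisions if d.confidence < confidence_threshold]


def adjudicate(decision, human_outcome):
    """Apply a human reviewer's outcome, recording any override."""
    if human_outcome != decision.outcome:
        decision.outcome = human_outcome
        decision.overridden = True
    return decision
```

The key governance point is the audit trail: the `overridden` flag preserves evidence that a human adjudicated the outcome, which supports the monitoring and review obligations described above.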

Establishing and maintaining incident response plans can reduce the likelihood of additive impacts during an AI incident. Smaller organizations, which may not have comprehensive governance programs, can use incident response plans to address system failures, abuse, and misuse.

Actions
  • Establish policies and procedures for monitoring AI system performance, and for addressing bias and security problems, across the lifecycle of the system.

  • Establish policies for AI system incident response, or confirm that existing incident response policies address AI systems.

  • Establish policies to define organizational functions and personnel responsible for AI system monitoring and incident response activities.

  • Establish mechanisms to enable the sharing of feedback from impacted individuals or communities about negative impacts from AI systems.

  • Establish mechanisms to provide recourse for impacted individuals or communities to contest problematic AI system outcomes.
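The last two actions call for channels through which impacted individuals can report harms and contest outcomes. A minimal feedback log sketch is below; the field names and statuses are assumptions for illustration, not a prescribed format.

```python
import datetime


def log_feedback(log, system_id, reporter, description):
    """Append a timestamped feedback entry with an open status.

    A minimal sketch; the schema is an assumption, not a standard.
    """
    entry = {
        "system_id": system_id,
        "reporter": reporter,
        "description": description,
        "received": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "status": "open",
    }
    log.append(entry)
    return entry


def resolve(entry, resolution):
    """Close a feedback entry, recording the resolution for audit."""
    entry["status"] = "resolved"
    entry["resolution"] = resolution
    return entry
```

Whatever the mechanism, the actions above imply that each entry has a defined owner and that outcomes of contests are recorded, so the review process itself can be monitored.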

Transparency and Documentation

Organizations can document the following:

  • To what extent does the system/entity consistently measure progress towards stated goals and objectives?

  • Did your organization implement a risk management system to address risks involved in deploying the identified AI solution (e.g. personnel risk or changes to commercial objectives)?

  • Did your organization address usability problems and test whether user interfaces served their intended purposes? Did it consult the community or end users at the earliest stages of development to ensure transparency about the technology used and how it is deployed?
