GRN 4.3 - Organizational Information Sharing Mechanism

The NIST AI RMF (in its Playbook companion) states:

GOVERN 4.3

Organizational practices are in place to enable testing, identification of incidents, and information sharing.

About

Organizations committed to risk management acknowledge the importance of identifying AI system limitations, detecting and tracking negative impacts and incidents, and sharing information about these issues with appropriate AI actors. Building organizational capacity requires policies and procedures connected to testing and inquiry.

Issues such as concept drift, AI bias and discrimination, shortcut learning, or underspecification are difficult to identify using standard AI testing processes. Organizations can institute in-house use and testing policies and procedures to identify and manage such issues. Efforts can take the form of pre-alpha or pre-beta testing, or deploying internally developed systems or products within the organization. Testing may entail limited and controlled in-house AI system testbeds, or publicly available ones.
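For instance, an in-house testing procedure might include a statistical check for concept drift between training-time and test-time model scores. The sketch below is a minimal illustration, assuming SciPy is available and using a two-sample Kolmogorov-Smirnov test; the function name, threshold, and synthetic data are illustrative, not prescribed by the AI RMF.

```python
# Minimal sketch of a drift check for in-house testing; names and the
# alpha threshold are assumptions, not part of the NIST AI RMF.
import numpy as np
from scipy.stats import ks_2samp

def detect_score_drift(reference_scores, test_scores, alpha=0.01):
    """Flag distribution shift between reference and test-time model scores.

    Uses a two-sample Kolmogorov-Smirnov test; a p-value below alpha
    suggests the test-time score distribution has drifted.
    """
    statistic, p_value = ks_2samp(reference_scores, test_scores)
    return {"statistic": statistic, "p_value": p_value, "drifted": p_value < alpha}

# Illustrative usage with synthetic data standing in for real model scores.
rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-era scores
shifted = rng.normal(loc=0.4, scale=1.0, size=5_000)    # in-house test scores
print(detect_score_drift(reference, shifted))
```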

Without policies and procedures that enable consistent testing practices, risk management efforts may be bypassed or ignored, exacerbating risks or leading to inconsistent risk management activities.

Information sharing about impacts or incidents detected during testing or deployment can (see the incident-record sketch after this list):

  • draw attention to AI system risks, failures, abuses and misuses,

  • allow organizations to benefit from insights based on a wide range of AI applications and implementations, and

  • allow organizations to be more proactive in avoiding known failure modes.
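One way to make such sharing consistent is a structured incident record that organizations can track internally and pass to appropriate AI actor groups. The following is a hypothetical sketch; the field names and schema are assumptions for illustration, not a format defined by the NIST AI RMF or any incident database.

```python
# Hypothetical structured AI incident record; every field name here is
# an assumption for illustration, not a mandated schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIIncidentRecord:
    system_name: str
    failure_mode: str       # e.g. "concept drift", "shortcut learning"
    impact_summary: str
    detected_during: str    # e.g. "pre-beta testing", "production"
    severity: str           # e.g. "low" / "medium" / "high"
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIIncidentRecord(
    system_name="resume-screening-model",
    failure_mode="AI bias and discrimination",
    impact_summary="Lower recommendation rates for one applicant group.",
    detected_during="pre-beta testing",
    severity="high",
)
# Serialize for sharing with appropriate AI actor groups.
print(json.dumps(asdict(record), indent=2))
```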

Actions
  • Establish policies and procedures to facilitate and equip AI system testing.

  • Establish organizational commitment to identifying AI system limitations and sharing insights about those limitations with appropriate AI actor groups.

  • Establish policies for incident response.

  • Establish guidelines for handling, and controlling access to, information about AI system risks and performance.

Transparency and Documentation

Organizations can document the following:

  • Did your organization address usability problems and test whether user interfaces served their intended purposes? Did you consult the community or end users at the earliest stages of development to ensure transparency about the technology used and how it is deployed?

  • Did your organization implement a risk management system to address risks involved in deploying the identified AI solution (e.g. personnel risk or changes to commercial objectives)?

  • To what extent can users or parties affected by the outputs of the AI system test the AI system and provide feedback?
