MAP 1.5 - Organizational Risk Tolerance

The NIST AI RMF (in its Playbook companion) states:

MAP 1.5

Organizational risk tolerances are determined.

About

Risk tolerance reflects the level and type of risk the organization will accept while conducting its mission and carrying out its strategy.

Deployment should not be pre-determined. Rather, it should result from a clearly defined process based on organizational risk tolerances.

Go/no-go decisions should be incorporated throughout the AI system’s lifecycle. For systems deemed “higher risk,” such decisions should include approval from relevant technical or risk-focused executives.

Go/no-go decisions related to AI system risks should take stakeholder feedback into account, but remain independent from stakeholders' vested financial or reputational interests.

Actions
  • Establish risk tolerance levels for AI systems and allocate the appropriate oversight resources to each level.

  • Identify maximum allowable risk thresholds above which the system will not be deployed, or will need to be prematurely decommissioned, within the contextual or application setting.

  • Attempts to use a system for “off-label” purposes should be approached with caution, especially in settings that organizations have deemed as high-risk. Document decisions, risk-related trade-offs, and system limitations.
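The actions above — fixed tolerance levels, maximum allowable thresholds, and documented go/no-go decisions — can be sketched as a simple deployment gate. This is an illustrative sketch only: the level names, threshold values, and the `deployment_gate` function are hypothetical, and real values must come from the organization's own risk governance process.

```python
from dataclasses import dataclass

# Hypothetical tolerance levels and maximum allowable risk thresholds
# (0-1 scale). Real values are set by organizational risk governance.
TOLERANCE_THRESHOLDS = {
    "low": 0.2,     # minimal-impact applications
    "medium": 0.5,  # standard business applications
    "high": 0.8,    # high-risk settings; extra oversight required
}

@dataclass
class GoNoGoDecision:
    deploy: bool
    requires_executive_approval: bool
    rationale: str  # documented per MAP 1.5 transparency guidance

def deployment_gate(risk_score: float, tolerance_level: str) -> GoNoGoDecision:
    """Compare an assessed risk score against the maximum allowable
    threshold for the chosen tolerance level; above it, no-go."""
    threshold = TOLERANCE_THRESHOLDS[tolerance_level]
    if risk_score > threshold:
        return GoNoGoDecision(
            deploy=False,
            requires_executive_approval=False,
            rationale=(f"risk {risk_score:.2f} exceeds {tolerance_level} "
                       f"threshold {threshold:.2f}; do not deploy"),
        )
    # Systems deemed higher risk need sign-off from relevant
    # technical or risk-focused executives (per the About section).
    return GoNoGoDecision(
        deploy=True,
        requires_executive_approval=(tolerance_level == "high"),
        rationale=(f"risk {risk_score:.2f} within {tolerance_level} "
                   f"threshold {threshold:.2f}"),
    )
```

The point of the sketch is that the decision and its rationale are explicit outputs, so each go/no-go call leaves a documented record rather than being pre-determined.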

Transparency and Documentation

Organizations can document the following:

  • What justifications, if any, has the entity provided for the assumptions, boundaries, and limitations of the AI system?

  • How has the entity identified and mitigated potential impacts of bias in the data, including inequitable or discriminatory outcomes?

  • To what extent are the established procedures effective in mitigating bias, inequity, and other concerns resulting from the system?
