GRN 4.2 - AI Organisational Documentation

The NIST AI RMF (in its Playbook companion) states:

GOVERN 4.2

Organizational teams document the risks and impacts of the technology they design, develop, or deploy and communicate about these impacts more broadly.

About

Impact assessments are an approach for driving responsible and ethical technology development practices. Within a specific use case, these assessments can provide a high-level structure for organizations to frame the risks of a given algorithm or deployment. Impact assessments can also serve as a mechanism for organizations to articulate risks and to generate documentation for mitigation and oversight activities when harms do arise.

Impact assessments should be applied at the beginning of a process, and then iteratively and at regular intervals, since goals and outcomes can evolve over time. It is also important to consider conflicts of interest, or undue influence, related to the organizational team being assessed.

Actions
  • Establish impact assessment policies and processes for AI systems used by the organization.

  • Verify that impact assessment policies are commensurate with a system's potential negative impacts and with how quickly the system changes, and that assessments are applied on a regular basis.

  • Utilize impact assessments to inform broader evaluations of AI system risk.
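The actions above imply a recurring assessment cycle. The sketch below illustrates one way such a cycle could be tracked in code; the `ImpactAssessment` class, its field names, and the 90-day review interval are all hypothetical illustrations, not anything prescribed by the framework:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical impact-assessment record; field names are illustrative only.
@dataclass
class ImpactAssessment:
    system_name: str
    assessed_on: date
    negative_impacts: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    review_interval_days: int = 90  # assessments are reapplied on a regular basis

    def next_review_due(self) -> date:
        """Date by which the assessment should be repeated."""
        return self.assessed_on + timedelta(days=self.review_interval_days)

    def is_overdue(self, today: date) -> bool:
        return today > self.next_review_due()

# Example record for an assumed AI system.
assessment = ImpactAssessment(
    system_name="loan-approval-model",
    assessed_on=date(2024, 1, 15),
    negative_impacts=["potential disparate impact across applicant groups"],
    mitigations=["quarterly fairness audit"],
)
```

A scheduler or compliance dashboard could then flag any record where `is_overdue(...)` returns true, operationalizing the "applied on a regular basis" requirement.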

Transparency and Documentation

Organizations can document the following:

  • How has the entity identified and mitigated potential impacts of bias in the data, including inequitable or discriminatory outcomes?

  • How has the entity documented the AI system’s data provenance, including sources, origins, transformations, augmentations, labels, dependencies, constraints, and metadata?

  • To what extent has the entity clearly defined technical specifications and requirements for the AI system?

  • To what extent has the entity documented the AI system’s development, testing methodology, metrics, and performance outcomes?

  • Has the entity documented and explained that machine errors may differ from human errors?
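The data-provenance question above can be made concrete with a simple metadata record. The structure and field names below are a hypothetical sketch of one way to capture the listed elements (sources, transformations, augmentations, labels, dependencies, constraints, and metadata), not a format prescribed by the framework:

```python
from dataclasses import dataclass, field

# Hypothetical data-provenance record; every field name is illustrative only.
@dataclass
class DataProvenance:
    dataset_name: str
    sources: list[str] = field(default_factory=list)          # origins of the data
    transformations: list[str] = field(default_factory=list)  # cleaning, normalization
    augmentations: list[str] = field(default_factory=list)    # synthetic additions
    label_scheme: str = ""                                    # how labels were produced
    dependencies: list[str] = field(default_factory=list)     # upstream datasets/feeds
    constraints: list[str] = field(default_factory=list)      # permitted-use limits
    metadata: dict[str, str] = field(default_factory=dict)    # owner, version, etc.

# Example record for an assumed training dataset.
provenance = DataProvenance(
    dataset_name="credit-applications-2023",
    sources=["internal CRM export", "third-party bureau feed"],
    transformations=["deduplication", "income normalization"],
    label_scheme="binary approve/deny, human-reviewed",
    constraints=["no use outside lending decisions"],
    metadata={"owner": "data-governance-team", "version": "1.2"},
)
```

Serializing such records (e.g., to JSON alongside the dataset) gives reviewers a consistent artifact to check during the impact-assessment and oversight activities described earlier.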
