GOVERN 1.3 - Transparent Risk Management
The NIST AI RMF Playbook companion states:
GOVERN 1.3: The risk management process and its outcomes are established through transparent mechanisms, and all significant risks are measured.
About
Clear policies and procedures are necessary to communicate roles and responsibilities for the Map, Measure and Manage functions across the AI lifecycle.
Standardized documentation can operationalize how organizational AI risk management processes are implemented and recorded. Systematizing documentation can also enhance accountability efforts. By adding their contact information to a work product document, AI actors can improve communication, increase ownership of work products, and potentially enhance consideration of product quality. Documentation may generate downstream benefits related to improved system replicability and robustness. Proper documentation storage and access procedures allow for quick retrieval of critical information during a negative incident.
Actions
Establish and regularly review documentation policies that address information related to the items below (a minimal documentation-record sketch follows the Actions list):
AI actor contact information
Business justification
Scope and usage
Assumptions and limitations
Description of training data
Algorithmic methodology
Evaluated alternative approaches
Description of output data
Testing and validation results
Down- and up-stream dependencies
Plans for deployment, monitoring, and change management
Stakeholder engagement plans
Verify that documentation policies for AI systems are standardized across the organization and kept up to date.
Establish policies for a model documentation inventory system and regularly review its completeness, usability, and efficacy.
Establish mechanisms to regularly review the efficacy of risk management processes.
Identify AI actors responsible for evaluating efficacy of risk management processes and approaches, and for course-correction based on results.
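The documentation fields and inventory described in the actions above can be made concrete as a standardized, machine-readable record. The sketch below is a minimal illustration in Python; the class, field names, and inventory helpers are assumptions chosen for this example, not a schema prescribed by the NIST AI RMF.

```python
from dataclasses import dataclass
from datetime import date
from typing import Dict, List, Optional


@dataclass
class AISystemDocumentation:
    """One standardized documentation record per AI system.

    Field names mirror the policy items listed above; they are
    illustrative, not a NIST-prescribed schema.
    """
    system_id: str
    ai_actor_contacts: List[str]          # AI actor contact information
    business_justification: str
    scope_and_usage: str
    assumptions_and_limitations: str
    training_data_description: str
    algorithmic_methodology: str
    evaluated_alternatives: List[str]     # alternative approaches considered
    output_data_description: str
    testing_and_validation_results: str
    downstream_dependencies: List[str]
    upstream_dependencies: List[str]
    deployment_monitoring_change_plan: str
    stakeholder_engagement_plan: str
    last_reviewed: Optional[date] = None


# Hypothetical documentation inventory: an index keyed by system ID so that
# critical information can be retrieved quickly during a negative incident.
_inventory: Dict[str, AISystemDocumentation] = {}


def register(doc: AISystemDocumentation) -> None:
    """Add or update a record in the documentation inventory."""
    _inventory[doc.system_id] = doc


def retrieve(system_id: str) -> Optional[AISystemDocumentation]:
    """Look up a system's documentation, e.g., during incident response."""
    return _inventory.get(system_id)
```

Keeping records in a structured form like this also makes it straightforward to review the inventory for completeness and staleness, for example by flagging records whose last_reviewed date exceeds the review interval set by policy.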
Transparency and Documentation
Organizations can document the following:
To what extent has the entity clarified the roles, responsibilities, and delegated authorities to relevant stakeholders?
What are the roles, responsibilities, and delegation of authorities of personnel involved in the design, development, deployment, assessment and monitoring of the AI system?
How will the appropriate performance metrics, such as accuracy, of the AI be monitored after the AI is deployed? How much distributional shift or model drift from baseline performance is acceptable?
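One way to operationalize the monitoring question above is to compare the distribution of a deployed model's inputs or scores against a baseline captured at deployment, and to alert when drift exceeds an agreed tolerance. The sketch below uses the population stability index (PSI) as an example drift measure; the function, the 10-bin discretization, and the 0.2 alert threshold are illustrative assumptions, not values specified by the NIST AI RMF.

```python
import numpy as np


def population_stability_index(baseline: np.ndarray,
                               current: np.ndarray,
                               bins: int = 10) -> float:
    """Population stability index between baseline and current samples.

    Bin edges come from baseline quantiles; a small epsilon avoids
    division by zero in sparsely populated bins.
    """
    edges = np.quantile(baseline, np.linspace(0.0, 1.0, bins + 1))
    current = np.clip(current, edges[0], edges[-1])   # keep values in range
    base_counts, _ = np.histogram(baseline, bins=edges)
    curr_counts, _ = np.histogram(current, bins=edges)

    eps = 1e-6
    base_pct = base_counts / base_counts.sum() + eps
    curr_pct = curr_counts / curr_counts.sum() + eps
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))


# Example: compare scores logged at deployment with scores observed later.
rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.0, 1.0, 5_000)   # baseline performance period
current_scores = rng.normal(0.5, 1.2, 5_000)    # later production period

psi = population_stability_index(baseline_scores, current_scores)
threshold = 0.2                                  # organization-defined tolerance
status = "exceeds tolerance, trigger review" if psi > threshold else "within tolerance"
print(f"PSI = {psi:.3f}: {status}")
```

The acceptable amount of drift (the threshold above) is exactly the kind of value the documentation policy should record alongside the monitoring plan, so reviewers can verify it rather than reconstruct it after an incident.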