GOVERN 5.2 - Stakeholder Feedback Integration
The NIST AI RMF (in the playbook companion) states:
GOVERN 5.2: Mechanisms are established to enable AI actors to regularly incorporate adjudicated stakeholder feedback into system design and implementation.
About
Organizational policies and procedures should be established to ensure that AI actors have the processes, knowledge, and expertise required to inform collaborative decisions about system deployment. These decisions are closely tied to AI system and organizational risk tolerance.
Risk tolerance, established by organizational leadership, reflects the level and type of risk the organization will accept while conducting its mission and carrying out its strategy. When risks arise, resources are allocated based on the assessed risk of a given AI system. Organizations should apply a risk tolerance approach where higher-risk systems receive larger allocations of risk management resources and lower-risk systems receive fewer resources.
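As a minimal sketch of this tiered approach, assuming hypothetical tier names, weights, and budget figures that are not prescribed by the NIST AI RMF, risk management resources can be split across a portfolio in proportion to each system's assessed risk level:

```python
# Sketch of risk-tolerance-based resource allocation. The tier names,
# weights, and budget figure are illustrative assumptions, not values
# prescribed by the NIST AI RMF.

RESOURCE_WEIGHTS = {
    "high": 0.6,    # higher-risk systems receive the largest share
    "medium": 0.3,
    "low": 0.1,     # lower-risk systems receive fewer resources
}

def allocate_budget(systems: dict[str, str], total_budget: float) -> dict[str, float]:
    """Split a risk management budget across AI systems in proportion
    to the weight of each system's assessed risk tier."""
    total_weight = sum(RESOURCE_WEIGHTS[tier] for tier in systems.values())
    return {
        name: total_budget * RESOURCE_WEIGHTS[tier] / total_weight
        for name, tier in systems.items()
    }

portfolio = {"chatbot": "low", "credit-scoring": "high", "forecasting": "medium"}
print(allocate_budget(portfolio, total_budget=100_000.0))
# {'chatbot': 10000.0, 'credit-scoring': 60000.0, 'forecasting': 30000.0}
```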
Actions
Explicitly acknowledge that AI systems, and the use of AI, present inherent costs and risks along with potential benefits.
Define reasonable risk tolerances for AI systems informed by laws, regulation, best practices, or industry standards.
Establish policies that define how to assign AI systems to established risk tolerance levels by combining system impact assessments with the likelihood that an impact occurs (a minimal sketch follows this list). Such assessment often entails some combination of:
Econometric evaluations of impacts and impact likelihoods to assess AI system risk.
Red-amber-green (RAG) scales for impact severity and likelihood to assess AI system risk.
Establish policies for allocating risk management resources along established risk tolerance levels, with higher-risk systems receiving more risk management resources and oversight.
Establish policies for approval, conditional approval, and disapproval of the design, implementation, and deployment of AI systems.
Establish policies facilitating the early decommissioning of an AI system that is deemed risky beyond practical mitigation.
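To make the impact-severity-by-likelihood assessment above concrete, the sketch below combines red-amber-green (RAG) scales into a risk matrix and maps each risk level onto an approval outcome. The scale labels, cell assignments, and decision wording are illustrative assumptions, not part of the NIST AI RMF:

```python
# Illustrative RAG risk matrix: combine impact severity with likelihood
# to assign a red/amber/green risk level, then map that level to an
# approval decision. All labels and cell values are assumptions.

SEVERITY = ["negligible", "moderate", "severe"]   # impact severity scale
LIKELIHOOD = ["rare", "possible", "likely"]       # impact likelihood scale

# Rows: severity (low to high); columns: likelihood (low to high).
RAG_MATRIX = [
    ["green", "green", "amber"],   # negligible impact
    ["green", "amber", "red"],     # moderate impact
    ["amber", "red",   "red"],     # severe impact
]

DECISIONS = {
    "green": "approve",
    "amber": "conditional approval, mitigations required",
    "red": "disapprove, or decommission if mitigation is impractical",
}

def assess(severity: str, likelihood: str) -> tuple[str, str]:
    """Return the RAG risk level and the corresponding decision."""
    level = RAG_MATRIX[SEVERITY.index(severity)][LIKELIHOOD.index(likelihood)]
    return level, DECISIONS[level]

print(assess("severe", "possible"))  # ('red', 'disapprove, or decommission ...')
```

In practice, leadership would set the matrix cells and decision mapping to reflect the organization's documented risk tolerance.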
Transparency and Documentation
Organizations can document the following:
Who is ultimately responsible for the decisions of the AI and is this person aware of the intended uses and limitations of the analytic?
Who will be responsible for maintaining, re-verifying, monitoring, and updating this AI once deployed?
Who is accountable for the ethical considerations during all stages of the AI lifecycle?
To what extent are the established procedures effective in mitigating bias, inequity, and other concerns resulting from the system?
Does the AI solution provide sufficient information to assist personnel in making informed decisions and taking action accordingly?
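One lightweight way to operationalize these questions is to record the answers in a structured artifact that travels with the system through its lifecycle. The record below is a hypothetical sketch; the field names and example values are assumptions, not a schema defined by the playbook:

```python
# Hypothetical accountability record answering the documentation
# questions above. Field names are assumptions, not a NIST schema.
from dataclasses import dataclass, field

@dataclass
class AccountabilityRecord:
    system_name: str
    decision_owner: str      # ultimately responsible for the AI's decisions
    maintenance_owner: str   # maintains, re-verifies, monitors, and updates
    ethics_owner: str        # accountable for ethical considerations
    intended_uses: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    bias_mitigation_notes: str = ""  # effectiveness of bias/inequity procedures

record = AccountabilityRecord(
    system_name="loan-underwriting-model",
    decision_owner="Chief Risk Officer",
    maintenance_owner="ML Platform Team",
    ethics_owner="AI Governance Board",
    intended_uses=["consumer loan pre-screening"],
    known_limitations=["not validated for small-business loans"],
)
print(record.decision_owner)  # Chief Risk Officer
```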