GOVERN 4.1 - AI Risk Organizational Practices
The NIST AI RMF (in its companion Playbook) states:
GOVERN 4.1: Organizational practices are in place to foster a critical thinking and safety-first mindset in the design, development, and deployment of AI systems to minimize negative impacts.
About
A strong risk culture and accompanying practices can help organizations effectively triage the most critical risks. Organizations in some industries implement three (or more) “lines of defense,” in which separate teams are held accountable for different aspects of the system lifecycle, such as development, risk management, and auditing. While a traditional three-lines approach may be impractical for smaller organizations, leadership can commit to cultivating a strong risk culture through other means. For example, “effective challenge” is a culture-based practice that encourages critical thinking about, and questioning of, important design and implementation decisions by experts with the authority and stature to effect changes.
Red-teaming is another risk management approach. It consists of adversarial testing of AI systems under stress conditions to surface failure modes or vulnerabilities in the system. Red teams are composed of external experts, or of personnel who are independent of the internal AI actors.
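In practice, part of a red-team exercise can be automated: seed inputs are systematically perturbed and the system's outputs are checked for unexpected flips. The sketch below is illustrative only; the `classify` function is a hypothetical stand-in for the AI system under test, and the perturbation strategies are assumptions, not part of the NIST guidance.

```python
def classify(text: str) -> str:
    """Toy stand-in for the AI system under test (assumed, for illustration)."""
    return "positive" if "good" in text.lower() else "negative"

def perturb(text: str):
    """Yield simple adversarial variants: casing, spacing, homoglyphs."""
    yield text.upper()             # case change
    yield " ".join(text)           # insert spaces between characters
    yield text.replace("o", "0")   # homoglyph substitution

def red_team(seed_inputs):
    """Report (original, variant) pairs where a perturbation flips the output."""
    failures = []
    for text in seed_inputs:
        baseline = classify(text)
        for variant in perturb(text):
            if classify(variant) != baseline:
                failures.append((text, variant))
    return failures

failures = red_team(["This product is good", "Service was bad"])
```

Each recorded failure is a candidate finding for the red team's report; a real exercise would combine such automated probing with human adversarial testing and independent review.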
Actions
Establish policies that require inclusion of oversight functions (legal, compliance, risk management) from the outset of the system design process.
Establish policies that promote effective challenge of AI system design, implementation, and deployment decisions, via mechanisms such as the three lines of defense, model audits, or red-teaming, to ensure that workplace risks such as groupthink do not take hold.
Establish policies that incentivize a safety-first mindset and general critical thinking and review at the organizational and procedural levels.
Establish whistleblower protections for insiders who report on perceived serious problems with AI systems.
Transparency and Documentation
Organizations can document the following:
To what extent has the entity documented the AI system’s development, testing methodology, metrics, and performance outcomes?
To what extent has the entity identified and mitigated potential bias—statistical, contextual, and historical—in the data?
Will the dataset be updated? How often and by whom? How will updates/revisions be documented and communicated (e.g., mailing list, GitHub)? Is there an erratum?
Did your organization’s board and/or senior management sponsor, support, and participate in your organization’s AI governance?
Does your organization have an existing governance structure that can be leveraged to oversee the organization’s use of AI?