NIST AI Risk Management Framework

GRN 5.1 - External Stakeholder Policies

The NIST AI RMF (in its companion Playbook) states:

GOVERN 5.1

Organizational policies and practices are in place to collect, consider, prioritize, and integrate external stakeholder feedback regarding the potential individual and societal impacts related to AI risks.

About

Beyond internal and laboratory-based system testing, organizational policies and practices should also consider AI system fitness-for-purpose related to the intended context of use.

Participatory stakeholder engagement is one type of qualitative activity to help AI actors answer questions such as whether to pursue a project or how to design with impact in mind. The consideration of when and how to convene a group and the kinds of individuals, groups, or community organizations to include is an iterative process connected to the system purpose and its level of risk. Other factors relate to how to collaboratively and respectfully capture stakeholder feedback and insight that is useful, without being a solely perfunctory exercise.

These activities are best carried out by personnel with expertise in participatory practices, qualitative methods, and translation of contextual feedback for technical audiences.

Participatory engagement is not a one-time exercise and should be carried out from the very beginning of AI system commissioning through the end of the lifecycle. Organizations can consider how to incorporate engagement when beginning a project and as part of their monitoring of systems. Engagement is often utilized as a consultative practice, but this perspective may inadvertently lead to “participation washing.” Organizational transparency about the purpose and goal of the engagement can help mitigate that possibility.

Organizations may also consider targeted consultation with subject matter experts as a complement to participatory findings. Experts may assist internal staff in identifying and conceptualizing potential negative impacts that were previously not considered.

Actions
  • Establish AI risk management policies that explicitly address mechanisms for collecting, evaluating, and incorporating stakeholder and user feedback; such mechanisms could include (see the intake sketch after this list):

    • Recourse mechanisms for faulty AI system outputs.

    • Bug bounties.

    • Human-centered design.

    • User-interaction and experience research.

    • Participatory stakeholder engagement with individuals and communities that may experience negative impacts.

  • Verify that stakeholder feedback, including environmental concerns, is considered and addressed across the entire population of intended users, including historically excluded populations, people with disabilities, older people, and those with limited access to the internet and other basic technologies.

  • Clarify the organization’s principles as they apply to AI systems (considering those that have been proposed publicly) to inform external stakeholders of the organization’s values. Consider publishing or adopting AI principles.
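
To illustrate the first action above, here is a minimal Python sketch of a feedback-intake record and triage queue. It assumes a hypothetical internal tool; all names (FeedbackItem, Channel, prioritize) and the severity scale are illustrative, not prescribed by the NIST Playbook.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Channel(Enum):
    """Feedback channels named in the Actions list above."""
    RECOURSE_REQUEST = "recourse_request"   # faulty AI system outputs
    BUG_BOUNTY = "bug_bounty"
    UX_RESEARCH = "ux_research"             # user-interaction/experience research
    PARTICIPATORY_SESSION = "participatory_session"


class ImpactCategory(Enum):
    """Illustrative split of GOVERN 5.1's individual and societal impacts."""
    INDIVIDUAL = "individual"
    SOCIETAL = "societal"
    ENVIRONMENTAL = "environmental"


@dataclass
class FeedbackItem:
    """One unit of external stakeholder feedback awaiting triage."""
    summary: str
    channel: Channel
    impact: ImpactCategory
    affected_group: str        # e.g. "users with limited internet access"
    severity: int              # hypothetical scale: 1 (minor) to 5 (critical)
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    resolved: bool = False


def prioritize(items: list[FeedbackItem]) -> list[FeedbackItem]:
    """Order open items by severity, then by age, for risk-review meetings."""
    open_items = [i for i in items if not i.resolved]
    return sorted(open_items, key=lambda i: (-i.severity, i.received_at))


# Example: a recourse request enters the queue and surfaces for review.
queue = [FeedbackItem(
    summary="Screening model mislabels applicants with sparse records",
    channel=Channel.RECOURSE_REQUEST,
    impact=ImpactCategory.INDIVIDUAL,
    affected_group="applicants with sparse records",
    severity=4,
)]
for item in prioritize(queue):
    print(item.channel.value, "-", item.summary)
```

The design point is that each feedback channel feeds a common structured record, so collection, evaluation, and prioritization happen in one policy-governed pipeline rather than ad hoc.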

Transparency and Documentation

Organizations can document the following (a sketch of a machine-readable transparency record follows this list):

  • What type of information is accessible on the design, operations, and limitations of the AI system to external stakeholders, including end users, consumers, regulators, and individuals impacted by use of the AI system?

  • To what extent has the entity clarified the roles, responsibilities, and delegated authorities to relevant stakeholders?

  • How easily accessible and current is the information available to external stakeholders?

  • What was done to mitigate or reduce the potential for harm?

  • Stakeholder involvement: Include diverse perspectives from a community of stakeholders throughout the AI life cycle to mitigate risks.
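
One way to keep answers to these questions current and accessible is to maintain them as a machine-readable record published alongside the system. The sketch below is hypothetical; the schema, field names, and values are illustrative, and NIST does not prescribe any particular format.

```python
import json
from datetime import date

# Hypothetical transparency record answering the documentation questions
# above; schema and values are illustrative, not a NIST-prescribed format.
transparency_record = {
    "system": "example-screening-assistant",        # assumed system name
    "last_reviewed": date.today().isoformat(),      # shows how current the info is
    "design_and_limitations": {
        "intended_use": "decision support, not final decisions",
        "known_limitations": ["reduced accuracy on sparse input records"],
    },
    "roles_and_authorities": {
        "risk_owner": "AI governance board",
        "recourse_contact": "appeals@example.org",  # placeholder address
    },
    "harm_mitigations": [
        "human review of adverse outcomes",
        "quarterly bias and robustness evaluation",
    ],
    "stakeholders_consulted": [
        "consumer advocates",
        "accessibility testers",
    ],
}

# Publishing the record as JSON keeps it easy for end users, regulators,
# and other external stakeholders to access and to diff between reviews.
print(json.dumps(transparency_record, indent=2))
```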
