NIST AI Risk Management Framework

MAP 5.1 - AI Positive or Negative Impacts

The NIST AI RMF Playbook (the companion to the framework) states:

MAP 5.1

Potential positive or negative impacts to individuals, groups, communities, organizations, or society are regularly identified and documented.

About

AI systems are socio-technical in nature and can have positive, neutral, or negative implications that extend beyond their stated purpose. Negative impacts can be wide-ranging and affect individuals, groups, communities, organizations, and society, as well as the environment and national security.

The MAP function provides an opportunity for organizations to assess potential AI system impacts based on identified risks. This enables organizations to create a baseline for system monitoring and increases their opportunities for detecting emergent risks. Impact assessments also help to identify new benefits and purposes that may arise from AI system use. After an AI system is deployed, engaging different stakeholder groups (who may be aware of, or experience, benefits or negative impacts unknown to AI actors) allows organizations to understand and monitor system benefits and impacts more readily.
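
The baseline idea above can be made concrete. Below is a minimal sketch, assuming a single scalar quality metric sampled at deployment time and again post-deployment; the z-score rule and the threshold are illustrative assumptions, not part of the NIST AI RMF.

```python
# Minimal sketch of a deployment-time baseline used to detect emergent risk.
# The metric, window size, and z-score threshold are illustrative assumptions;
# the NIST AI RMF does not prescribe an implementation.
from statistics import mean, stdev

class ImpactBaseline:
    """Capture a baseline for one metric at deployment and flag drift later."""

    def __init__(self, baseline_samples, z_threshold=3.0):
        self.mu = mean(baseline_samples)
        self.sigma = stdev(baseline_samples)
        self.z_threshold = z_threshold

    def is_emergent(self, window_samples):
        """True when a post-deployment window drifts beyond the baseline."""
        if self.sigma == 0:
            return mean(window_samples) != self.mu
        z = abs(mean(window_samples) - self.mu) / self.sigma
        return z > self.z_threshold

# Example: error rates observed at deployment, then a later weekly window.
baseline = ImpactBaseline([0.04, 0.05, 0.04, 0.06, 0.05])
if baseline.is_emergent([0.09, 0.11, 0.10]):
    print("Flag for impact re-assessment and stakeholder review (MAP 5.1).")
```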

Actions
  • Establish and document stakeholder engagement processes at the earliest stages of system formulation to identify potential impacts from the AI system on individuals, groups, communities, organizations, and society.

  • Employ methods such as value sensitive design (VSD) to identify misalignments between organizational and societal values, and system implementation and impact.

  • Identify approaches to engage, capture, and incorporate input from system users and other key stakeholders to assist with continuous monitoring for impacts and emergent risks.

  • Incorporate quantitative, qualitative, and mixed methods in the assessment and documentation of potential impacts to individuals, groups, communities, organizations, and society.

  • Identify a team (internal or external) that is independent of AI design and development functions to assess AI system benefits, positive and negative impacts, and their likelihood.

  • Develop impact assessment procedures that incorporate socio-technical elements and methods, and plan to normalize these assessments across the organizational culture. Regularly review and refine impact assessment processes (a minimal impact-register sketch follows this list).
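
Here is the impact-register sketch referenced above. The field names, the 1-5 likelihood and magnitude scales, and the multiplicative priority score are illustrative assumptions; MAP 5.1 does not mandate a particular schema (likelihood and magnitude are treated in depth in MAP 5.2).

```python
# Hypothetical impact-register entry; the fields and 1-5 scales are
# illustrative assumptions, not a schema defined by the NIST AI RMF.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImpactRecord:
    description: str   # what the impact is
    affected: str      # individuals, groups, communities, organizations, society
    direction: str     # "positive" or "negative"
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    magnitude: int     # 1 (negligible) .. 5 (severe)
    identified_on: date = field(default_factory=date.today)

    @property
    def priority(self) -> int:
        """Likelihood x magnitude triage score (anticipates MAP 5.2)."""
        return self.likelihood * self.magnitude

# Example register, sorted so the highest-priority impacts surface first.
register = [
    ImpactRecord("Approval rates skew against one demographic group",
                 affected="groups", direction="negative",
                 likelihood=3, magnitude=4),
    ImpactRecord("Faster service for all applicants",
                 affected="individuals", direction="positive",
                 likelihood=4, magnitude=2),
]
register.sort(key=lambda r: r.priority, reverse=True)
```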

Transparency and Documentation

Organizations can document the following (a structured example follows the list):

  • If the AI system relates to people, does it unfairly advantage or disadvantage a particular social group? In what ways? How was this mitigated?

  • If the AI system relates to other ethically protected subjects, have appropriate obligations been met? (e.g., medical data might include information collected from animals)

  • If the AI system relates to people, could this dataset expose people to harm or legal action (e.g., financial, social, or otherwise)? What was done to mitigate or reduce the potential for harm?
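
Answers to these questions can travel with the system's documentation as a structured record. The sketch below writes a hypothetical JSON stub; the keys and sample answers are illustrative assumptions, not a NIST-defined template.

```python
# Hypothetical documentation stub mirroring the three questions above.
# Keys, answers, and the JSON format are illustrative assumptions.
import json

map_5_1_record = {
    "unfair_advantage_or_disadvantage": {
        "applies": True,
        "ways": "Lower approval rates for one demographic group.",
        "mitigation": "Re-weighted training data; quarterly fairness audit.",
    },
    "ethically_protected_subjects": {
        "applies": False,
        "obligations_met": None,
    },
    "harm_or_legal_exposure": {
        "applies": True,
        "harm_types": ["financial"],
        "mitigation": "Human review of all adverse decisions.",
    },
}

with open("map_5_1_impacts.json", "w") as f:
    json.dump(map_5_1_record, f, indent=2)
```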
