NIST AI Risk Management Framework

GRN 1.2 - Trustworthy AI Characteristics

The NIST AI RMF (in its Playbook companion) states:

GOVERN 1.2

The characteristics of trustworthy AI are integrated into organizational policies, processes, and procedures.

About

Policies, processes, and procedures are a central component of effective AI risk management and fundamental to individual and organizational accountability.

Organizational policies and procedures will vary based on available resources and risk profiles, but they can help systematize AI actor roles and responsibilities throughout the AI model lifecycle. Without such policies, risk management can be subjective across the organization and can exacerbate rather than minimize risks over time.

Individuals and organizations cannot be held accountable to unwritten, unknown or unrecognized policies. Lack of clear information about responsibilities and chains of command will limit the effectiveness of risk management.

Actions

Establish and maintain formal AI risk management policies that address AI system trustworthy characteristics throughout the system’s lifecycle. Organizational policies should (see the tracking sketch after this list):

  • Define key terms and concepts related to AI systems and the scope of their intended use.

  • Address the use of sensitive or otherwise risky data.

  • Detail standards for experimental design, data quality, and model training.

  • Outline and document risk mapping and measurement processes and standards.

  • Detail model testing and validation processes.

  • Detail review processes for legal and risk functions.

  • Establish the frequency of and detail for monitoring, auditing and review processes.

  • Outline change management requirements.

  • Outline processes for internal and external stakeholder engagement.

  • Establish whistleblower policies to facilitate reporting of serious AI system concerns.

  • Detail and test incident response plans.

  • Verify that formal AI risk management policies align with existing legal standards and with industry best practices and norms.

  • Establish AI risk management policies that broadly align with AI system trustworthy characteristics.

  • Verify that formal AI risk management policies cover currently deployed and third-party AI systems.
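
The policy checklist above can also be tracked as a machine-readable record, so coverage per AI system is auditable rather than living only in prose. The sketch below is one illustrative way to do that in Python, assuming hypothetical names (PolicyItem, POLICY_CHECKLIST, coverage_report) that merely paraphrase the actions listed here; nothing about this format is prescribed by the NIST AI RMF or its Playbook.

```python
# Illustrative sketch only: track the policy checklist above as structured data.
# Names and fields are hypothetical, not part of the NIST AI RMF or its Playbook.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class PolicyItem:
    topic: str                          # what the written policy must cover
    owner: str = "unassigned"           # accountable role or team
    documented: bool = False            # is a written, approved policy in place?
    last_reviewed: Optional[str] = None # ISO date of the last review, if any


POLICY_CHECKLIST = [
    PolicyItem("Key terms, concepts, and intended scope of AI use"),
    PolicyItem("Use of sensitive or otherwise risky data"),
    PolicyItem("Experimental design, data quality, and model training standards"),
    PolicyItem("Risk mapping and measurement processes and standards"),
    PolicyItem("Model testing and validation"),
    PolicyItem("Legal and risk function review"),
    PolicyItem("Frequency and detail of monitoring, auditing, and review"),
    PolicyItem("Change management"),
    PolicyItem("Internal and external stakeholder engagement"),
    PolicyItem("Whistleblower reporting"),
    PolicyItem("Incident response plans"),
    PolicyItem("Coverage of currently deployed and third-party AI systems"),
]


def coverage_report(items: List[PolicyItem]) -> str:
    """Summarize which checklist topics still lack a written, owned policy."""
    gaps = [i.topic for i in items if not i.documented or i.owner == "unassigned"]
    done = len(items) - len(gaps)
    lines = [f"{done}/{len(items)} policy topics documented and owned"]
    lines += [f"  GAP: {topic}" for topic in gaps]
    return "\n".join(lines)


if __name__ == "__main__":
    POLICY_CHECKLIST[0].documented = True
    POLICY_CHECKLIST[0].owner = "AI governance lead"
    print(coverage_report(POLICY_CHECKLIST))
```

A gap report like this can feed the monitoring, auditing, and review cadence that the policies themselves establish.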

Transparency and Documentation

Organizations can document the following (a sketch of one possible record format follows these questions):

  • To what extent do these policies foster public trust and confidence in the use of the AI system?

  • What policies has the entity developed to ensure the use of the AI system is consistent with its stated values and principles?

  • To what extent are the model outputs consistent with the entity’s values and principles to foster public trust and equity?
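
Answers to these questions can be kept as a structured record and versioned alongside other governance artifacts. The sketch below assumes a hypothetical TrustworthinessDocumentation dataclass and to_json helper; the field names and example values are illustrative only and are not prescribed by the NIST AI RMF.

```python
# Illustrative sketch only: record answers to the documentation questions above.
# The record format and field names are assumptions, not a NIST-prescribed template.
from dataclasses import dataclass, asdict
import json


@dataclass
class TrustworthinessDocumentation:
    system_name: str
    public_trust_rationale: str       # how policies foster public trust and confidence
    values_alignment_policies: str    # policies keeping use consistent with stated values/principles
    output_consistency_evidence: str  # how outputs are checked against values and equity goals


def to_json(record: TrustworthinessDocumentation) -> str:
    """Serialize the record so it can be versioned with other governance artifacts."""
    return json.dumps(asdict(record), indent=2)


if __name__ == "__main__":
    record = TrustworthinessDocumentation(
        system_name="example-credit-scoring-model",
        public_trust_rationale="Policy summary published; customer-facing notice provided.",
        values_alignment_policies="Responsible AI policy v2; pre-deployment ethics review.",
        output_consistency_evidence="Quarterly fairness audits compared against stated principles.",
    )
    print(to_json(record))
```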

