NIST AI Risk Management Framework

MAP 4.2 - Internal Risk Controls for 3rd Party Risk

The NIST AI RMF Playbook (the companion to the framework) states:

MAP 4.2

Internal risk controls for third-party technology risks are in place and documented.

About

In the course of their work, AI actors often use open-source or otherwise freely available third-party technologies, some of which have been reported to carry privacy, bias, and security risks. Organizations may consider tightening internal risk controls for these technology sources.
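To make the documentation side of this concrete, the sketch below shows one possible shape for a third-party technology inventory record. It is a minimal example only: the ThirdPartyTech dataclass, its field names, and the sample values are illustrative assumptions, not a schema prescribed by NIST.

```python
# One possible shape for a third-party technology inventory record.
# The class and field names are illustrative, not prescribed by NIST.
from dataclasses import dataclass, field

@dataclass
class ThirdPartyTech:
    name: str                    # e.g. an open-source model or library
    version: str
    source_url: str
    license: str
    approved: bool = False       # outcome of the internal review
    known_risks: list[str] = field(default_factory=list)  # privacy, bias, security findings

# Hypothetical entry for an acquired open-source model.
entry = ThirdPartyTech(
    name="example-sentiment-model",
    version="0.3.1",
    source_url="https://example.org/models/sentiment",
    license="Apache-2.0",
    known_risks=["training data provenance unclear"],
)
```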

Actions
  • Supply resources such as model documentation templates and software safelists to assist in third-party technology inventory and approval activities (a minimal safelist check is sketched after this list).

  • Review third-party material (including data and models) for risks related to bias, data privacy, and security vulnerabilities.

  • Apply procurement, security, and data privacy controls to all acquired third-party technologies.
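
To illustrate the first action, the following sketch checks the installed Python distributions against a software safelist. It is a minimal sketch under assumed conventions: the safelist.json file, its schema, and the function names are hypothetical, not part of the NIST AI RMF or its Playbook.

```python
# Minimal safelist gate for third-party Python packages.
# safelist.json and its schema are assumptions for illustration,
# e.g. {"numpy": "1.26.4", "torch": "2.2.0"}.
import json
from importlib import metadata

def load_safelist(path: str = "safelist.json") -> dict:
    """Load approved package names and their permitted version pins."""
    with open(path) as f:
        return json.load(f)

def audit_installed(safelist: dict) -> list[str]:
    """Flag installed distributions that are absent from the safelist
    or installed at a version other than the approved pin."""
    findings = []
    for dist in metadata.distributions():
        name = dist.metadata["Name"]
        approved = safelist.get(name)
        if approved is None:
            findings.append(f"{name} {dist.version}: not on safelist")
        elif dist.version != approved:
            findings.append(f"{name} {dist.version}: safelist pins {approved}")
    return findings

if __name__ == "__main__":
    for finding in audit_installed(load_safelist()):
        print(finding)
```

A gate like this can run in continuous integration so that unapproved third-party packages are caught before deployment rather than during a later audit.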

Transparency and Documentation

Organizations can document the following:

  • Did you ensure that the AI system can be audited by independent third parties?

  • To what extent do these policies foster public trust and confidence in the use of the AI system?

  • Did you establish mechanisms that facilitate the AI system’s auditability (e.g., traceability of the development process, the sourcing of training data, and logging of the AI system’s processes, outcomes, and positive and negative impacts)?
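
As one illustration of the last question, the sketch below appends traceable inference events to a JSON-lines audit trail. The log location, event schema, and record_event function are assumptions for illustration; a real auditability mechanism would follow the organization’s own logging and retention standards.

```python
# Minimal audit-trail logger for an AI system. The file name and
# event schema are illustrative assumptions, not a NIST requirement.
import datetime
import hashlib
import json
import uuid

AUDIT_LOG = "ai_audit_log.jsonl"  # assumed log location

def record_event(model_id: str, model_version: str,
                 inputs: dict, outputs: dict,
                 training_data_ref: str) -> str:
    """Append one traceable event: which model version ran, on what
    inputs (digested rather than stored raw), with what outputs and
    what claimed training-data lineage."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "training_data_ref": training_data_ref,  # e.g. dataset hash or URI
        "input_digest": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "outputs": outputs,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(event) + "\n")
    return event["event_id"]

# Hypothetical usage: log one scoring decision for later review.
record_event("example-scorer", "1.4.2",
             inputs={"applicant_id": 123, "features": [0.2, 0.7]},
             outputs={"score": 0.81, "decision": "approve"},
             training_data_ref="sha256:ab12...")
```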
