MAP 4.2 - Internal Risk Controls for Third-Party Risk

The NIST AI RMF Playbook (companion to the AI RMF) states:

MAP 4.2

Internal risk controls for third-party technology risks are in place and documented.

About

In the course of their work, AI actors often use open-source or otherwise freely available third-party technologies, some of which have been reported to carry privacy, bias, and security risks. Organizations may consider tightening internal risk controls for these technology sources.

Actions
  • Supply resources such as model documentation templates and software safelists to assist in third-party technology inventory and approval activities.

  • Review third-party material (including data and models) for risks related to bias, data privacy, and security vulnerabilities.

  • Apply controls – such as procurement, security, and data privacy controls – to all acquired third-party technologies.
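As one illustration of the safelist and inventory actions above, the approval step can be partly automated. The sketch below (the safelist contents, package names, and versions are hypothetical examples, not a prescribed control) compares a project's declared third-party dependencies against an approved list:

```python
# Minimal sketch of a software-safelist check for third-party dependencies.
# The safelist, package names, and versions below are hypothetical examples.

APPROVED = {
    "numpy": {"1.26.4"},          # approved versions per package
    "scikit-learn": {"1.4.2"},
}

def check_dependencies(requirements: list[str]) -> list[str]:
    """Return findings for dependencies not covered by the safelist."""
    findings = []
    for line in requirements:
        name, _, version = line.partition("==")
        name, version = name.strip(), version.strip()
        if name not in APPROVED:
            findings.append(f"{name}: not on the approved safelist")
        elif version not in APPROVED[name]:
            findings.append(f"{name}=={version}: version not approved")
    return findings

if __name__ == "__main__":
    reqs = ["numpy==1.26.4", "scikit-learn==1.3.0", "leftpadlib==0.1"]
    for finding in check_dependencies(reqs):
        print(finding)
```

A check like this would typically run in continuous integration, so that unapproved third-party technologies surface before procurement, security, and data-privacy review rather than after deployment.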

Transparency and Documentation

Organizations can document the following:

  • Did you ensure that the AI system can be audited by independent third parties?

  • To what extent do these policies foster public trust and confidence in the use of the AI system?

  • Did you establish mechanisms that facilitate the AI system’s auditability (e.g., traceability of the development process, sourcing of training data, and logging of the AI system’s processes, outcomes, and positive and negative impacts)?
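The auditability mechanisms named in the last question (traceability, logging of processes and outcomes) can be sketched as an append-only audit record. The field names and values below are illustrative assumptions, not a NIST-prescribed schema:

```python
# Minimal sketch of a tamper-evident audit record supporting traceability.
# Field names (model version, data source, outcome) are illustrative
# assumptions, not a prescribed NIST schema.

import json
import hashlib
from datetime import datetime, timezone

def make_audit_record(model_version: str, data_source: str, outcome: str) -> dict:
    """Build a timestamped, content-hashed record of one AI-system event."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "training_data_source": data_source,
        "outcome": outcome,
    }
    # A content hash lets independent auditors detect later tampering.
    payload = json.dumps(record, sort_keys=True).encode()
    record["sha256"] = hashlib.sha256(payload).hexdigest()
    return record

if __name__ == "__main__":
    rec = make_audit_record("clf-v2.1", "internal-data-lake", "decision logged")
    print(json.dumps(rec, indent=2))
```

Records like this, appended to write-once storage, give independent third parties a verifiable trail from training-data sourcing through system outcomes.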
