MAP 1.6 - Stakeholder Engagements

The NIST AI RMF, in its Playbook companion, states:

MAP 1.6

Practices and personnel for design activities enable regular engagement with stakeholders, and integrate actionable user and community feedback about unanticipated negative impacts.

About

Risk management should include processes for regular and meaningful communication with stakeholder groups. Stakeholders can provide valuable input related to system gaps and limitations. Organizations may differ in the types and number of stakeholders with which they engage.

Participatory approaches such as human-centered design (HCD) and value-sensitive design (VSD) can help AI teams engage broadly with stakeholder communities. This type of engagement can enable AI teams to learn how a given technology may cause impacts, both positive and negative, that were not originally considered or intended.

Actions
  • Maintain awareness and documentation of the individuals, groups, or communities who make up the system’s internal and external stakeholders.

  • Verify that appropriate skills and practices are available in-house for carrying out stakeholder engagement activities such as eliciting, capturing, and synthesizing stakeholder feedback, and translating it for AI design and development functions.

  • Establish mechanisms for regular communication and feedback between relevant AI actors and internal or external stakeholders related to system design or deployment decisions.

  • Define which AI actors, beyond AI design and development teams, will review system design, implementation, and operation tasks. Define which AI actors will administer and implement test, evaluation, verification, and validation (TEVV) tasks across the AI lifecycle.
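The first two actions above amount to maintaining a stakeholder registry and a feedback log that can be routed to design, development, and TEVV functions. A minimal sketch of such records is shown below; the record types and field names are illustrative assumptions, not a schema prescribed by the AI RMF.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Stakeholder:
    """An individual, group, or community with a stake in the system."""
    name: str
    category: str                     # "internal" or "external"
    interests: list[str] = field(default_factory=list)

@dataclass
class FeedbackRecord:
    """Elicited stakeholder feedback, synthesized for AI design teams."""
    stakeholder: Stakeholder
    received: date
    summary: str
    routed_to: str                    # AI actor responsible for follow-up

# Hypothetical registry entries
registry = [
    Stakeholder("End users", "external", ["usability", "unanticipated harms"]),
    Stakeholder("TEVV team", "internal", ["test coverage"]),
]

feedback = FeedbackRecord(
    stakeholder=registry[0],
    received=date(2024, 1, 15),
    summary="Model outputs are confusing for non-English speakers",
    routed_to="design team",
)
```

In practice such records might live in a risk register or issue tracker; the point is that feedback is captured in a form that names its source and assigns an accountable AI actor.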

Transparency and Documentation

Organizations can document the following:

  • What type of information is accessible on the design, operations, and limitations of the AI system to external stakeholders, including end users, consumers, regulators, and individuals impacted by use of the AI system?

  • To what extent is this information sufficient and appropriate to promote transparency, that is, to enable external stakeholders to access information on the design, operation, and limitations of the AI system?

  • To what extent has relevant information been disclosed regarding the use of AI systems, such as (a) what the system is for, (b) what it is not for, (c) how it was designed, and (d) what its limitations are? (Documentation and external communication can offer a way for entities to provide transparency.)

  • What metrics has the entity developed to measure performance of the AI system?

  • What justifications, if any, has the entity provided for the assumptions, boundaries, and limitations of the AI system?
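The documentation questions above can be operationalized as a transparency artifact with a completeness check. The sketch below assumes an illustrative set of fields mapped to those questions; it is not an official NIST schema.

```python
# Hypothetical transparency artifact covering the documentation questions:
# intended use, out-of-scope use, design, limitations, metrics, assumptions.
transparency_doc = {
    "intended_use": "Summarize customer support tickets",
    "out_of_scope_use": ["Medical or legal advice"],
    "design_summary": "Fine-tuned transformer on curated support data",
    "limitations": ["Accuracy degrades on inputs over 4,000 tokens"],
    "performance_metrics": {"accuracy": 0.91, "false_positive_rate": 0.04},
    "assumptions_and_boundaries": "English-language tickets only",
}

def missing_fields(doc: dict) -> list[str]:
    """Return the required transparency fields that are absent or empty."""
    required = [
        "intended_use", "out_of_scope_use", "design_summary",
        "limitations", "performance_metrics", "assumptions_and_boundaries",
    ]
    return [k for k in required if not doc.get(k)]

print(missing_fields(transparency_doc))  # → []
```

A check like this could gate release: if any required field is empty, the documentation is not yet sufficient to support the external disclosure the questions describe.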
