MAP 2.2 - AI Usage by Humans

The NIST AI RMF Playbook (the companion to the AI RMF) states:

MAP 2.2

Information about the system’s knowledge limits and how system output may be utilized and overseen by humans is documented.

About

Once deployed and in use, AI systems may sometimes perform poorly, manifest unanticipated negative impacts, or violate legal or ethical norms. These risks and incidents can result from a variety of factors, including developing systems in highly controlled environments that differ considerably from the deployment context. Regular stakeholder engagement and feedback can provide enhanced contextual awareness about how an AI system may interact in its real-world setting. Example practices include broad stakeholder engagement with potentially impacted community groups, consideration of user interface and user experience (UI/UX) factors, and regular system testing and evaluation in non-optimized conditions.

Actions
  • Extend documentation beyond system and task requirements to include possible risks due to deployment contexts and human-AI configurations.

  • Follow stakeholder feedback processes to determine whether a system achieved its documented purpose within a given use context, and whether users can correctly comprehend system outputs or results.

  • Document dependencies on upstream data and other AI systems, including whether the system in question is itself an upstream dependency for other AI systems or data products.

  • Document connections the AI system or data will have to external networks (including the internet), financial markets, and critical infrastructure that have potential for negative externalities. Identify and document potential negative impacts as part of setting broader risk thresholds and informing go/no-go deployment decisions as well as post-deployment decommissioning decisions. (A machine-readable sketch of such documentation follows this list.)
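
One lightweight way to act on these items is to keep the documentation in a machine-readable record that lives alongside the system. The sketch below is illustrative only, not part of the NIST guidance: the record type and all field names (`SystemRecord`, `knowledge_limits`, `human_oversight`, and so on) are hypothetical, and a real record would follow whatever schema your organization has adopted.

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical record type; field names are illustrative, not a NIST schema.
@dataclass
class SystemRecord:
    system_name: str
    documented_purpose: str
    knowledge_limits: list[str]          # what the system does NOT know or cover
    human_oversight: str                 # how humans use and oversee outputs
    upstream_dependencies: list[str]     # data sources and AI systems we depend on
    downstream_dependents: list[str]     # systems or data products that depend on us
    external_connections: list[str]      # internet, financial markets, infrastructure
    potential_negative_impacts: list[str] = field(default_factory=list)

record = SystemRecord(
    system_name="loan-triage-model",
    documented_purpose="Rank loan applications for human review",
    knowledge_limits=["Trained only on 2019-2023 domestic applications"],
    human_oversight="Analysts review every ranking before a decision is made",
    upstream_dependencies=["credit-bureau-feed", "income-verification-service"],
    downstream_dependents=["quarterly-risk-report pipeline"],
    external_connections=["credit bureau API (internet)"],
    potential_negative_impacts=["Stale rankings if the bureau feed lags"],
)

# Persist next to the system so reviewers can consult it at go/no-go time
# and again when weighing post-deployment decommissioning.
print(json.dumps(asdict(record), indent=2))
```

Keeping the record in code rather than a standalone document makes it easier to review dependency and external-connection changes in the same workflow as system changes.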

Transparency and Documentation

Organizations can document the following:

  • Does the AI solution provide sufficient information to assist personnel in making informed decisions and taking appropriate actions?

  • To what extent is the output of each component appropriate for the operational context?

  • What type of information is accessible on the design, operations, and limitations of the AI system to external stakeholders, including end users, consumers, regulators, and individuals impacted by use of the AI system?

  • Based on the assessment, did your organization implement the appropriate level of human involvement in AI-augmented decision-making? (WEF Assessment)

  • How will the accountable AI actor(s) address changes in accuracy and precision caused by an adversary’s attempts to disrupt the AI system, or by unrelated changes in the operational or business environment? (A minimal monitoring sketch follows this list.)
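
Detecting such accuracy changes typically starts with routine comparison against a documented baseline. The sketch below is a minimal illustration, assuming labeled outcomes become available after deployment; the `baseline`, `window`, and `tolerance` values are hypothetical placeholders for whatever the system's documented performance requirements specify.

```python
from collections import deque

class AccuracyMonitor:
    """Flags drops in rolling accuracy against a documented baseline.

    Hypothetical sketch: baseline, window, and tolerance would come from
    the system's documented performance requirements, not these defaults.
    """

    def __init__(self, baseline: float, window: int = 500, tolerance: float = 0.05):
        self.baseline = baseline
        self.tolerance = tolerance
        self.outcomes: deque[bool] = deque(maxlen=window)

    def record(self, prediction, actual) -> None:
        # Store whether each post-deployment prediction matched the outcome.
        self.outcomes.append(prediction == actual)

    def check(self) -> bool:
        """Return True if rolling accuracy has fallen below the tolerated band."""
        if not self.outcomes:
            return False
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance

monitor = AccuracyMonitor(baseline=0.92)
# In production this loop would be fed by labeled outcomes as they arrive.
for prediction, actual in [("approve", "approve"), ("deny", "approve")]:
    monitor.record(prediction, actual)
if monitor.check():
    print("Accuracy below documented threshold; escalate to the accountable AI actor.")
```

Whether the degradation stems from adversarial activity or ordinary environmental drift, the same documented threshold can trigger the escalation path named in the organization's oversight plan; diagnosing the cause is a separate, human-led step.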
