MAP 3.2 - AI Potential Costs

The NIST AI RMF Playbook (the companion resource to the AI RMF) states:

MAP 3.2

Potential costs, including non-monetary costs, which result from expected or realized errors or system performance are examined and documented.

About

Anticipating the negative impacts of AI systems is a difficult task. Negative impacts can stem from many factors, such as poor system performance, and may range from minor annoyance to serious injury, financial loss, or regulatory enforcement action. AI actors can work with a broad set of stakeholders to improve their capacity for assessing system impacts and, subsequently, system risks. Hasty or superficial impact assessments may result in erroneous "no-risk" determinations for more complex or higher-risk systems.

Actions
  • Perform a context analysis to map negative impacts arising from not integrating trustworthiness characteristics. When negative impacts are not direct or obvious, AI actors should engage with external stakeholders to investigate and document:

    • Who could be harmed?

    • What could be harmed?

    • When could harm arise?

    • How could harm arise?

  • Implement procedures for regularly evaluating the qualitative and quantitative costs of internal and external AI system failures. Develop actions to prevent, detect, and/or correct potential risks and related impacts. Regularly evaluate failure costs to inform go/no-go deployment decisions throughout the AI system lifecycle.
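The harm-mapping questions and the failure-cost evaluation described above could be captured in a lightweight record structure that feeds a go/no-go check against organizational risk tolerance. The sketch below is purely illustrative: the RMF does not prescribe any data format or thresholds, and every field name, severity scale, and limit here is a hypothetical assumption.

```python
from dataclasses import dataclass

@dataclass
class FailureCostRecord:
    """One documented potential or realized AI system failure cost (illustrative schema)."""
    description: str       # what failed, or could fail
    who_harmed: str        # which stakeholders could be harmed
    how_harm_arises: str   # mechanism by which the harm arises
    monetary_cost: float   # estimated cost in currency units (0.0 if non-monetary only)
    severity: int          # qualitative severity, 1 (minor) .. 5 (severe)

def go_no_go(records, max_total_cost, max_severity):
    """Return True (go) only if aggregate estimated cost stays within budget
    AND no single record reaches the severity cap (assumed risk-tolerance rule)."""
    total = sum(r.monetary_cost for r in records)
    worst = max((r.severity for r in records), default=0)
    return total <= max_total_cost and worst < max_severity

records = [
    FailureCostRecord(
        description="misclassification of loan applicants",
        who_harmed="applicants from underrepresented groups",
        how_harm_arises="bias learned from unrepresentative training data",
        monetary_cost=50_000.0,
        severity=4,
    ),
]

# Severity 4 reaches the (hypothetical) cap of 4, so deployment is a no-go
# even though the estimated cost is within budget.
print(go_no_go(records, max_total_cost=100_000.0, max_severity=4))  # False
```

Re-running this check at each lifecycle stage, as the action above suggests, keeps the documented costs tied directly to the deployment decision rather than living in a standalone report.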

Transparency and Documentation

Organizations can document the following:

  • To what extent does the system/entity consistently measure progress towards stated goals and objectives?

  • To what extent can users or parties affected by the outputs of the AI system test the AI system and provide feedback?

  • Have you documented and explained that machine errors may differ from human errors?
