MAP 1.7 - AI System Requirements

The NIST AI RMF (in its playbook companion) states:

MAP 1.7

System requirements (e.g., “the system shall respect the privacy of its users”) are elicited and understood from stakeholders. Design decisions take socio-technical implications into account to address AI risk.

About

AI system development requirements may outpace documentation processes for traditional software. When written requirements are unavailable or incomplete, AI actors may inadvertently overlook business and stakeholder needs, or over-rely on implicit human biases such as confirmation bias and groupthink. To mitigate the influence of these implicit factors, AI actors can seek input from, and develop transparent and actionable recourse mechanisms for, end users and operators. Engaging external stakeholders in this process integrates broader perspectives on socio-technical risk factors. Incorporating trustworthy characteristics early in the design phase should be a priority, rather than forcing them onto existing systems after the fact.

Actions
  • Proactively incorporate trustworthy characteristics into system requirements; one way to make these characteristics traceable in requirements is sketched after this list.

  • Consider risk factors related to Human-AI configurations and tasks.

  • Analyze dependencies between contextual factors and system requirements. List impacts that may arise from not fully considering the importance of trustworthiness characteristics in any decision making.

  • Follow responsible design techniques in tasks such as software engineering, product management, and participatory engagement. Techniques for eliciting and documenting stakeholder requirements include product requirement documents (PRDs), user stories, user interface/user experience (UI/UX) research, systems engineering, ethnography, and related field methods.

  • Conduct user research to understand the individuals, groups, and communities that will be impacted by the AI system, their values and context, and the role of systemic and historical biases. Integrate learnings into decisions about data selection and representation.
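The first action above can be made concrete by treating requirements as structured, reviewable records rather than free text. The sketch below shows one minimal way to do this in Python, assuming a team that tracks requirements alongside its code; the `SystemRequirement` class, its field names, and the validation rules are illustrative inventions, though the characteristic names themselves are the trustworthiness characteristics defined in the AI RMF.

```python
from dataclasses import dataclass, field

# Trustworthiness characteristics from NIST AI RMF 1.0.
TRUSTWORTHINESS_CHARACTERISTICS = {
    "valid_and_reliable",
    "safe",
    "secure_and_resilient",
    "accountable_and_transparent",
    "explainable_and_interpretable",
    "privacy_enhanced",
    "fair_with_harmful_bias_managed",
}

@dataclass
class SystemRequirement:
    """One elicited requirement, traceable to stakeholders and characteristics."""
    req_id: str
    statement: str              # e.g., "The system shall respect the privacy of its users."
    stakeholders: list[str]     # who elicited or owns this requirement
    characteristics: set[str]   # which trustworthiness characteristics it addresses
    acceptance_criteria: list[str] = field(default_factory=list)

    def validate(self) -> list[str]:
        """Return review findings; an empty list means the entry is complete."""
        findings = []
        if not self.stakeholders:
            findings.append(f"{self.req_id}: no stakeholder attached to this requirement")
        unknown = self.characteristics - TRUSTWORTHINESS_CHARACTERISTICS
        if unknown:
            findings.append(f"{self.req_id}: unrecognized characteristics {sorted(unknown)}")
        if not self.acceptance_criteria:
            findings.append(f"{self.req_id}: no acceptance criteria, so the requirement is untestable")
        return findings

# Example drawn from the MAP 1.7 text itself.
privacy_req = SystemRequirement(
    req_id="REQ-PRIV-001",
    statement="The system shall respect the privacy of its users.",
    stakeholders=["end users", "privacy officer"],
    characteristics={"privacy_enhanced"},
    acceptance_criteria=["No personal data leaves the inference boundary without consent."],
)
print(privacy_req.validate())  # [] -> complete enough for review
```

Making stakeholders and acceptance criteria required fields surfaces gaps at review time, before design decisions harden, rather than after deployment.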

Transparency and Documentation

Organizations can document the following:

  • What type of information is accessible on the design, operations, and limitations of the AI system to external stakeholders, including end users, consumers, regulators, and individuals impacted by use of the AI system?

  • To what extent is this information sufficient and appropriate to promote transparency, by enabling external stakeholders to access information on the design, operation, and limitations of the AI system?

  • To what extent has relevant information been disclosed regarding the use of AI systems, such as (a) what the system is for, (b) what it is not for, (c) how it was designed, and (d) what its limitations are? (Documentation and external communication offer entities a way to provide this transparency; a sketch of such a record follows this list.)

  • What metrics has the entity developed to measure performance of the AI system?

  • What justifications, if any, has the entity provided for the assumptions, boundaries, and limitations of the AI system?
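The questions above can be answered in a single machine-readable transparency record published alongside the model artifact. The sketch below is one minimal shape such a record could take; the field names, the loan-review scenario, and the example numbers are all invented for illustration, not a NIST-mandated schema.

```python
import json

# A minimal transparency record covering intended use, design, limitations,
# metrics, and justified assumptions. All values here are illustrative.
transparency_record = {
    "intended_use": "Rank loan applications for manual review; a human makes the final decision.",
    "out_of_scope_use": "Fully automated credit denial without human review.",
    "design_summary": "Gradient-boosted trees trained on 2019-2023 application data.",
    "limitations": [
        "Performance is unvalidated for applicants outside the training population.",
    ],
    "metrics": {
        "auc": 0.87,                      # overall discrimination, held-out data
        "false_positive_rate_gap": 0.03,  # largest gap observed across groups
    },
    "assumptions_and_justifications": [
        "Training data is assumed representative of the deployment population; "
        "this is re-checked quarterly against incoming applications.",
    ],
}

# Publishing the record gives external stakeholders access to information on
# the design, operation, and limitations of the AI system.
print(json.dumps(transparency_record, indent=2))
```

A structured record like this makes it straightforward to check that each documentation question has an answer, and to diff disclosures between system versions.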
