NIST AI Risk Management Framework

MAP 1.1 - Intended Purpose of AI Use

The NIST AI RMF (in the Playbook companion) states:

MAP 1.1

Intended purpose, prospective settings in which the AI system will be deployed, the specific set or types of users along with their expectations, and impacts of system use are understood and documented. Assumptions and related limitations about AI system purpose and use are enumerated, documented and tied to TEVV considerations and system metrics.

About

It is not always possible to have advance knowledge of all potential settings in which a system will be deployed. To help delineate the bounds of acceptable deployment, context mapping may include examination of the following (a documentation sketch follows the list):

  • intended, prospective, and actual deployment settings.

  • specific set or types of users.

  • operator or subject expectations.

  • concept of operations.

  • intended purpose and impact of system use.

  • requirements for system deployment and operation.

  • potential negative impacts to individuals, groups, communities, organizations, and society, as well as context-specific impacts such as legal requirements or impacts to the environment.

  • unintended, downstream, or other unknown contextual factors.
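
Because MAP 1.1 asks that assumptions be tied to TEVV considerations and system metrics, it can help to keep the context map machine-readable. Below is a minimal Python sketch, assuming a code-based documentation workflow; the class, field names, and example entry are illustrative assumptions, not a schema prescribed by the NIST AI RMF.

```python
from dataclasses import dataclass, field


@dataclass
class ContextMap:
    """Illustrative context-map record; fields mirror the list above."""
    deployment_settings: list[str]        # intended, prospective, and actual
    user_types: list[str]                 # specific set or types of users
    expectations: list[str]               # operator or subject expectations
    concept_of_operations: str
    intended_purpose: str
    operating_requirements: list[str]     # for system deployment and operation
    potential_negative_impacts: list[str]
    unknown_contextual_factors: list[str] = field(default_factory=list)
    # Per MAP 1.1, each documented assumption is tied to the TEVV metric
    # that exercises it.
    assumptions_to_tevv_metrics: dict[str, str] = field(default_factory=dict)


# Hypothetical entry for a loan-review support system.
example = ContextMap(
    deployment_settings=["retail loan origination (intended)"],
    user_types=["credit officers"],
    expectations=["scores inform, but do not replace, human review"],
    concept_of_operations="overnight batch scoring of new applications",
    intended_purpose="rank applications for manual review",
    operating_requirements=["audit logging", "pinned model version"],
    potential_negative_impacts=["disparate impact across protected groups"],
    assumptions_to_tevv_metrics={
        "applicants resemble the training population": "population-drift metric",
    },
)
```

Keeping the assumption-to-metric mapping explicit makes it easy to audit whether every documented assumption is actually exercised by a TEVV measurement.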

Actions

  • Pursue AI system design purposefully, after non-AI solutions are considered.

  • Define and document the task, purpose, minimum functionality, and benefits of the AI system to inform considerations about whether the project is worth pursuing.

  • Maintain awareness of industry, technical, and applicable legal standards.

  • Collaboratively consider intended AI system design tasks along with unanticipated purposes.

  • Determine the user and organizational requirements, including business and technical requirements.

  • Determine and delineate the expected and acceptable AI system context of use, including:

    • operational environment

    • impacts to individuals, groups, communities, organizations, and society

    • user characteristics and tasks

    • social environment.

  • Track and document existing AI systems held by the organization, and those maintained or supported by third-party entities (see the inventory sketch after this list).

  • Gain and maintain the ability to evaluate scientific claims about AI system performance and benefits before launching into system design.

  • Identify human-AI interaction and/or roles, such as whether the application will support or replace human decision making.

  • Plan for risks related to human-AI configurations, and document requirements, roles, and responsibilities for human oversight of deployed systems.
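
Two of the actions above, tracking existing AI systems (including third-party ones) and documenting human oversight roles, lend themselves to a simple system inventory. The sketch below shows one possible shape for such a record; all names, roles, and the example row are illustrative assumptions, not part of the framework.

```python
from dataclasses import dataclass
from enum import Enum


class HumanRole(Enum):
    """Whether the application supports or replaces human decision making."""
    SUPPORTS = "supports human decision making"
    REPLACES = "replaces human decision making"


@dataclass
class InventoryEntry:
    """One row of an AI system inventory; the schema is hypothetical."""
    system_name: str
    owner: str              # internal team or third-party entity
    third_party: bool       # maintained or supported by a third party
    human_role: HumanRole
    oversight_owner: str    # responsible for human oversight once deployed


inventory = [
    InventoryEntry("resume-screener", owner="HR Analytics", third_party=True,
                   human_role=HumanRole.SUPPORTS,
                   oversight_owner="hiring-panel lead"),
]

# Systems that replace human decisions may warrant extra oversight review.
needs_review = [e for e in inventory if e.human_role is HumanRole.REPLACES]
```

Even a flat list like this is enough to answer, during review, which AI systems the organization runs, who owns each one, and who oversees it.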

Transparency and Documentation

Organizations can document the following (one possible accountability record is sketched after the list):

  • Which AI actors are responsible for the decisions of the AI system, and are they aware of its intended uses and limitations?

  • Which AI actors are responsible for maintaining, re-verifying, monitoring, and updating the AI system once deployed?

  • Who is accountable for the ethical considerations across the AI lifecycle?
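
These questions translate naturally into a small accountability record that can be checked before deployment sign-off. A minimal sketch, assuming a Python-based workflow; the role keys and names are hypothetical, not NIST terminology.

```python
# Hypothetical accountability record answering the three questions above.
accountability = {
    "decision_owner": {        # responsible for the AI system's decisions
        "name": "credit-risk lead",
        "briefed_on_intended_use_and_limitations": True,
    },
    "operations_owner": {      # maintains, re-verifies, monitors, updates
        "name": "ML platform team",
        "monitoring_cadence": "monthly",
    },
    "ethics_owner": {          # ethical considerations across the lifecycle
        "name": "responsible-AI committee chair",
    },
}

# Simple pre-deployment completeness check: every role must name someone.
missing = [role for role, info in accountability.items() if not info.get("name")]
assert not missing, f"unassigned accountability roles: {missing}"
```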
