FDA AI based SaMD Risk Management
  • FDA - AI based SaMD Risk Management
  • Level of Concern Document (LCD)
  • Software Description (SDE)
  • AI Expertise (AIE)
  • Device Hazard Analysis (DHA)
  • Software Requirement Specifications (SRS)
  • Training and Test Dataset (TTD)
  • Software Design Specifications (SDS)
  • AI Fairness (AIF)
  • AI Transparency and Explainability (ATE)
  • Traceability Analysis (TRA)
  • Verification and Validation (VAV)
  • Revision Level History (RLH)

AI Fairness (AIF)

Using AI systems to make decisions automatically can result in unfair treatment of certain individuals or groups. Several factors can contribute to unfair outcomes, including human biases in the training data and in the feedback provided to systems, imbalanced or otherwise biased datasets, and biased objective functions. Unfairness can also stem from bias in the product concept, the problem formulation, or decisions about when and where to deploy AI systems.

Controls related to this risk category are listed below:

  • AIF 01 - Dataset Bias Identification

  • AIF 02 - Dataset Bias Mitigation

  • AIF 03 - Dataset Bias Analysis Action and Assessment

  • AIF 04 - Dataset Bias Special/Protected Categories

  • AIF 05 - Evaluation of Computational Bias
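
A dataset bias check of the kind AIF 01 calls for can be as simple as comparing outcome rates across protected groups. The sketch below is illustrative only, not an FDA-mandated method: it assumes records are `(protected_group, positive_label)` pairs, and the function names and threshold logic are hypothetical.

```python
# Illustrative dataset bias check (in the spirit of AIF 01); names are hypothetical.
from collections import defaultdict

def positive_rates(records):
    """Return the fraction of positive labels per protected group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, label in records:
        counts[group][0] += int(bool(label))
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_difference(records):
    """Largest gap in positive-label rate between any two groups (0 = balanced)."""
    rates = positive_rates(records)
    return max(rates.values()) - min(rates.values())

# Example: group "B" is under-represented among positive labels.
data = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
        ("B", 0), ("B", 0), ("B", 1), ("B", 0)]
gap = demographic_parity_difference(data)
# A gap above a predefined acceptance threshold would trigger
# mitigation and documented assessment (AIF 02 / AIF 03).
```

In this toy dataset, group "A" has a 75% positive rate and group "B" only 25%, giving a parity gap of 0.5; in practice the acceptable gap and the choice of fairness metric would be justified in the risk documentation.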
