AI Fairness (AIF)
Using AI systems to make decisions automatically can result in unfair treatment of individuals or groups. Several factors contribute to unfair outcomes, including human biases embedded in training data and in the feedback provided to systems, imbalanced data sets, and biased objective functions. Unfairness can also stem from bias in the product concept, the problem formulation, or decisions about when and where to deploy AI systems.
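As a minimal sketch of how such imbalance can be surfaced (the kind of check dataset bias identification implies), the snippet below inspects group representation and per-group label rates in a training set. The column names, values, and data are hypothetical illustrations, not part of any control.

```python
import pandas as pd

# Hypothetical training data; "gender" and "label" are illustrative
# column names, not prescribed by the framework.
df = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "M", "M", "M"],
    "label":  [0,   1,   1,   1,   0,   1,   1,   1],
})

# Representation: share of each group in the training data.
# A heavily skewed split can starve the model of signal for one group.
print(df["gender"].value_counts(normalize=True))

# Label balance per group: a large gap in positive rates may indicate
# that the data set encodes a historical bias.
print(df.groupby("gender")["label"].mean())
```

Checks like these are a starting point only; which attributes to examine is itself a policy decision, which is why special and protected categories are called out as a separate control below.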
Controls related to this risk category are listed below; a sketch of the kind of measurement AIF 05 implies follows the list:
AIF 01 - Dataset Bias Identification
AIF 02 - Dataset Bias Mitigation
AIF 03 - Dataset Bias Analysis Action and Assessment
AIF 04 - Dataset Bias Special/Protected Categories
AIF 05 - Evaluation of Computational Bias
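One common way to quantify computational bias, as AIF 05 suggests evaluating, is the demographic parity difference: the gap in positive-prediction rates between groups. The sketch below assumes binary predictions and a single group attribute; the function name and data are hypothetical, and this is one metric among many rather than the framework's prescribed method.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

# Hypothetical model outputs for eight individuals in two groups.
y_pred = np.array([1, 0, 1, 1, 1, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Group A's positive rate is 0.75, group B's is 0.25, so the gap is 0.5.
print(demographic_parity_difference(y_pred, group))
```

A value near zero indicates similar treatment across groups on this metric; other notions of fairness (equalized odds, calibration) can disagree, so the choice of metric should follow from the assessment done under AIF 03.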