AI Fairness 360 - Demo


3. Choose bias mitigation algorithm

A variety of algorithms can be used to mitigate bias. The choice of which to use depends on whether you want to fix the data (pre-process), the classifier (in-process), or the predictions (post-process).

Reweighing: weights the examples in each (group, label) combination differently to ensure fairness before classification.
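The idea behind reweighing can be sketched in a few lines: each example's weight is the ratio of the probability its (group, label) cell would have if group and label were independent to the probability actually observed. This is a minimal illustration of the technique, not AIF360's `Reweighing` class; the variable names and the toy data are invented for the example.

```python
from collections import Counter

def reweigh(groups, labels):
    """Compute one weight per example so each (group, label) cell counts
    as if group membership and label were statistically independent:
    weight(g, l) = P(g) * P(l) / P(g, l)."""
    n = len(labels)
    count_group = Counter(groups)
    count_label = Counter(labels)
    count_joint = Counter(zip(groups, labels))
    return [
        (count_group[g] / n) * (count_label[l] / n) / (count_joint[(g, l)] / n)
        for g, l in zip(groups, labels)
    ]

# Toy data: group 0 rarely receives the favorable label (1), so its
# favorable examples are up-weighted (1.5) and the rest down-weighted (0.75).
groups = [0, 0, 0, 1, 1, 1]
labels = [0, 0, 1, 1, 1, 0]
weights = reweigh(groups, labels)
```

Training a standard classifier with these sample weights then removes the dependence between group and label in the effective training distribution.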


Optimized Preprocessing: learns a probabilistic transformation that can modify the features and the labels in the training data.
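A toy version of the "probabilistic transformation of labels" idea: compute, per group, the probability with which to flip labels so that every group's expected favorable rate matches the overall rate. This is only a sketch of the concept; the real optimized pre-processing solves a constrained optimization over features and labels jointly, which this does not attempt.

```python
def label_flip_probs(groups, labels):
    """For each group, return the direction and probability of a random
    label flip that equalizes the expected favorable-label rate across
    groups (favorable label = 1)."""
    n = len(labels)
    target = sum(labels) / n  # overall favorable rate
    probs = {}
    for g in set(groups):
        idx = [i for i in range(n) if groups[i] == g]
        rate = sum(labels[i] for i in idx) / len(idx)
        if rate > target:    # demote some favorable labels
            probs[g] = ("1->0", (rate - target) / rate)
        elif rate < target:  # promote some unfavorable labels
            probs[g] = ("0->1", (target - rate) / (1 - rate))
        else:
            probs[g] = (None, 0.0)
    return probs

# Group 0 has a 25% favorable rate, group 1 has 75%; flipping with
# probability 1/3 in opposite directions moves both to the 50% overall rate.
probs = label_flip_probs([0, 0, 0, 0, 1, 1, 1, 1], [1, 0, 0, 0, 1, 1, 1, 0])
```

Applying the flips at these probabilities yields transformed training labels whose expected group rates are equal.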


Adversarial Debiasing: learns a classifier that maximizes prediction accuracy while simultaneously reducing an adversary's ability to determine the protected attribute from the predictions. This leads to a fair classifier because the predictions cannot carry any group-discrimination information that the adversary could exploit.
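The predictor/adversary interplay can be sketched with two tiny logistic models in NumPy: the adversary descends its loss for guessing the protected attribute from the prediction, while the predictor descends its own loss and ascends the adversary's. This is a simplified penalty formulation with made-up synthetic data and hyperparameters, not AIF360's `AdversarialDebiasing` (which uses a gradient-projection term and a neural network).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Synthetic data: x1 is genuinely informative, x2 leaks the protected group z.
n = 400
z = rng.integers(0, 2, n).astype(float)          # protected attribute
x = np.column_stack([rng.normal(0, 1, n), z + rng.normal(0, 0.3, n)])
y = (x[:, 0] + 0.5 * z + rng.normal(0, 0.5, n) > 0).astype(float)

w = np.zeros(2)        # predictor weights
u = np.zeros(1)        # adversary weight on the prediction
lr, alpha = 0.1, 1.0   # learning rate, adversarial penalty strength

for _ in range(300):
    p = sigmoid(x @ w)                 # predictor output
    a = sigmoid(u[0] * p)              # adversary's guess of z from p
    # Cross-entropy gradients for both players.
    grad_w_pred = x.T @ (p - y) / n
    dp = (a - z) * u[0] * p * (1 - p)  # chain rule: dL_adv/dp
    grad_w_adv = x.T @ dp / n
    grad_u = np.array([np.mean((a - z) * p)])
    u -= lr * grad_u                                # adversary: descend its loss
    w -= lr * (grad_w_pred - alpha * grad_w_adv)    # predictor: descend own loss,
                                                    # ascend the adversary's
```

At convergence the predictor is discouraged from relying on the leaky feature, since doing so would let the adversary recover the group from its predictions.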


Reject Option Classification: changes predictions from a classifier to make them fairer. Within a confidence band around the decision boundary, where uncertainty is highest, it gives favorable outcomes to unprivileged groups and unfavorable outcomes to privileged groups.
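The post-processing rule described above is simple enough to sketch directly: outside the uncertainty band, threshold as usual; inside it, assign outcomes by group. A minimal illustration assuming group 0 denotes the unprivileged group and label 1 the favorable outcome; the threshold and band width here are arbitrary, whereas AIF360's `RejectOptionClassification` searches for them to optimize a fairness metric.

```python
def reject_option_predict(scores, groups, threshold=0.5, band=0.1):
    """Post-process classifier scores: inside the band of highest
    uncertainty around the threshold, give the favorable label (1) to
    the unprivileged group (0) and the unfavorable label (0) to the
    privileged group (1); elsewhere apply ordinary thresholding."""
    preds = []
    for score, group in zip(scores, groups):
        if abs(score - threshold) <= band:
            preds.append(1 if group == 0 else 0)   # reject-option region
        else:
            preds.append(1 if score > threshold else 0)
    return preds

# Two uncertain scores (0.55, 0.45) are reassigned by group;
# two confident scores (0.9, 0.1) keep their usual predictions.
preds = reject_option_predict([0.55, 0.45, 0.9, 0.1], [1, 0, 0, 1])
```

Because only low-confidence predictions are changed, the accuracy cost of the intervention is kept small.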
