AI Fairness 360 Open Source Toolkit
This extensible open source toolkit can help you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle. Containing over 70 fairness metrics and 10 state-of-the-art bias mitigation algorithms developed by the research community, it is designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education. We invite you to use it and improve it.
Not sure what to do first? Start here!
Read More
Learn more about fairness and bias mitigation concepts, terminology, and tools before you begin.
Try a Web Demo
Step through the process of checking and remediating bias in an interactive web demo that shows a sample of capabilities available in this toolkit.
Watch Videos
Watch videos to learn more about AI Fairness 360.
Read a paper
Read a paper describing how we designed AI Fairness 360.
Use Tutorials
Step through a set of in-depth examples that introduce developers to code that checks and mitigates bias in different industry and application domains.
Ask a Question
Join our AIF360 Slack Channel to ask questions, make comments and tell stories about how you use the toolkit.
View Notebooks
Open a directory of Jupyter notebooks in GitHub that provide working examples of bias detection and mitigation in sample datasets. Then share your own notebooks!
Contribute
You can add new metrics and algorithms in GitHub. Share Jupyter notebooks showcasing how you have examined and mitigated bias in your machine learning application.
Learn how to put this toolkit to work for your application or industry problem. Try these tutorials.
These are ten state-of-the-art bias mitigation algorithms that can address bias throughout AI systems. Add more!
Optimized Preprocessing
Use to mitigate bias in training data. Modifies training data features and labels.
Reweighing
Use to mitigate bias in training data. Modifies the weights of different training examples.
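Of the two preprocessing entries above, Reweighing needs the least setup, so here is a minimal sketch on a toy dataset. The column names ('sex', 'score', 'label') and the tiny DataFrame are illustrative only:

    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.algorithms.preprocessing import Reweighing

    # Toy data; names and values are hypothetical.
    df = pd.DataFrame({
        'sex':   [0, 0, 0, 0, 1, 1, 1, 1],   # protected attribute (0 = unprivileged)
        'score': [1, 3, 2, 4, 5, 6, 7, 8],   # an ordinary feature
        'label': [0, 0, 0, 1, 1, 1, 1, 0],   # favorable outcome = 1
    })
    dataset = BinaryLabelDataset(df=df, label_names=['label'],
                                 protected_attribute_names=['sex'],
                                 favorable_label=1, unfavorable_label=0)

    rw = Reweighing(unprivileged_groups=[{'sex': 0}],
                    privileged_groups=[{'sex': 1}])
    dataset_transf = rw.fit_transform(dataset)

    # Features and labels are untouched; only instance_weights change.
    print(dataset_transf.instance_weights)

The resulting weights can then be passed to any downstream learner that accepts per-sample weights.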
Adversarial Debiasing
Use to mitigate bias in classifiers. Uses adversarial techniques to maximize accuracy while reducing evidence of protected attributes in predictions.
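A hedged sketch of AdversarialDebiasing, assuming train and test are BinaryLabelDataset objects like the toy one above, and that a TensorFlow 1.x-style session is available:

    import tensorflow.compat.v1 as tf
    from aif360.algorithms.inprocessing import AdversarialDebiasing

    tf.disable_eager_execution()
    sess = tf.Session()

    ad = AdversarialDebiasing(privileged_groups=[{'sex': 1}],
                              unprivileged_groups=[{'sex': 0}],
                              scope_name='adv_debias',  # TF variable scope
                              debias=True,              # False gives a plain baseline
                              sess=sess)
    ad.fit(train)                 # trains the classifier and the adversary jointly
    test_pred = ad.predict(test)  # dataset copy carrying debiased predictions
    sess.close()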
Reject Option Classification
Use to mitigate bias in predictions. Changes predictions from a classifier to make them fairer.
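A minimal sketch of RejectOptionClassification, assuming test holds true labels and test_pred holds a classifier's scores for the same rows (both hypothetical BinaryLabelDataset objects):

    from aif360.algorithms.postprocessing import RejectOptionClassification

    roc = RejectOptionClassification(unprivileged_groups=[{'sex': 0}],
                                     privileged_groups=[{'sex': 1}],
                                     metric_name='Statistical parity difference',
                                     metric_ub=0.05, metric_lb=-0.05)
    roc.fit(test, test_pred)            # test_pred.scores should hold probabilities
    fair_pred = roc.predict(test_pred)  # flips uncertain predictions near the boundary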
Disparate Impact Remover
Use to mitigate bias in training data. Edits feature values to improve group fairness.
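A minimal sketch of DisparateImpactRemover on a BinaryLabelDataset like the toy one above; repair_level=1.0 repairs features fully, 0.0 leaves them unchanged (this algorithm depends on the BlackBoxAuditing package):

    from aif360.algorithms.preprocessing import DisparateImpactRemover

    di = DisparateImpactRemover(repair_level=1.0, sensitive_attribute='sex')
    dataset_repaired = di.fit_transform(dataset)  # labels unchanged; features edited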
Learning Fair Representations
Use to mitigate bias in training data. Learns fair representations by obfuscating information about protected attributes.
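A hedged sketch of LFR; the number of prototypes k and the Ax/Ay/Az trade-off weights shown are illustrative values, not recommendations:

    from aif360.algorithms.preprocessing import LFR

    lfr = LFR(unprivileged_groups=[{'sex': 0}],
              privileged_groups=[{'sex': 1}],
              k=5,        # number of prototypes
              Ax=0.01,    # reconstruction weight
              Ay=1.0,     # prediction weight
              Az=2.0)     # fairness weight
    lfr.fit(dataset)
    dataset_transf = lfr.transform(dataset)  # features mapped to fair prototypes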
Prejudice Remover
Use to mitigate bias in classifiers. Adds a discrimination-aware regularization term to the learning objective.
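A minimal sketch of PrejudiceRemover, reusing the hypothetical train and test datasets; eta weights the fairness regularizer against accuracy:

    from aif360.algorithms.inprocessing import PrejudiceRemover

    pr = PrejudiceRemover(eta=25.0,              # regularization strength
                          sensitive_attr='sex',
                          class_attr='label')
    pr.fit(train)
    test_pred = pr.predict(test)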
Calibrated Equalized Odds Post-processing
Use to mitigate bias in predictions. Optimizes over calibrated classifier score outputs that lead to fair output labels.
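A hedged sketch of CalibratedEqOddsPostprocessing; cost_constraint selects which error rates to trade off ('fnr', 'fpr', or 'weighted'), and test/test_pred are the same hypothetical datasets as before:

    from aif360.algorithms.postprocessing import CalibratedEqOddsPostprocessing

    cpp = CalibratedEqOddsPostprocessing(privileged_groups=[{'sex': 1}],
                                         unprivileged_groups=[{'sex': 0}],
                                         cost_constraint='fnr', seed=42)
    cpp.fit(test, test_pred)            # test: true labels; test_pred: scores
    fair_pred = cpp.predict(test_pred)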
Equalized Odds Post-processing
Use to mitigate bias in predictions. Modifies the predicted labels using an optimization scheme to make predictions fairer.
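The uncalibrated variant follows the same fit/predict pattern, sketched here with the same hypothetical datasets:

    from aif360.algorithms.postprocessing import EqOddsPostprocessing

    eop = EqOddsPostprocessing(unprivileged_groups=[{'sex': 0}],
                               privileged_groups=[{'sex': 1}], seed=42)
    eop.fit(test, test_pred)
    fair_pred = eop.predict(test_pred)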
Meta Fair Classifier
Use to mitigate bias in classifiers. A meta-algorithm that takes the fairness metric as part of the input and returns a classifier optimized for that metric.
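A hedged sketch of MetaFairClassifier; the type argument names the fairness metric to optimize (for example 'fdr' for false discovery rate parity) and tau its target level:

    from aif360.algorithms.inprocessing import MetaFairClassifier

    mfc = MetaFairClassifier(tau=0.8,             # desired fairness level
                             sensitive_attr='sex',
                             type='fdr')          # metric passed as input
    mfc.fit(train)
    test_pred = mfc.predict(test)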
Are individuals treated similarly? Are privileged and unprivileged groups treated similarly? Find out by using metrics like these that measure individual and group fairness.
Statistical Parity Difference
The difference between the rate of favorable outcomes received by the unprivileged group and that received by the privileged group.
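A minimal sketch: dataset-level metrics like this one need only a single dataset plus group definitions (reusing the toy dataset sketched under Reweighing above):

    from aif360.metrics import BinaryLabelDatasetMetric

    metric = BinaryLabelDatasetMetric(dataset,
                                      unprivileged_groups=[{'sex': 0}],
                                      privileged_groups=[{'sex': 1}])
    # P(favorable | unprivileged) - P(favorable | privileged); 0 means parity.
    print(metric.statistical_parity_difference())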
Equal Opportunity Difference
The difference in true positive rates between the unprivileged and privileged groups.
Average Odds Difference
The average of the difference in false positive rates (false positives / negatives) and the difference in true positive rates (true positives / positives) between the unprivileged and privileged groups.
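A minimal sketch of these two classification metrics, assuming test (true labels) and test_pred (the same rows with a classifier's predictions) as in the earlier sketches:

    from aif360.metrics import ClassificationMetric

    cm = ClassificationMetric(test, test_pred,
                              unprivileged_groups=[{'sex': 0}],
                              privileged_groups=[{'sex': 1}])
    print(cm.equal_opportunity_difference())  # TPR_unpriv - TPR_priv; 0 is ideal
    print(cm.average_odds_difference())       # mean of FPR and TPR differences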
Disparate Impact
The ratio of the rate of favorable outcomes for the unprivileged group to that of the privileged group.
Theil Index
Measures the inequality in benefit allocation for individuals.
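Both values can be read off the metric objects sketched above; disparate impact is a ratio with ideal value 1.0, and the Theil index is 0 under perfectly equal benefit allocation:

    print(metric.disparate_impact())  # also available on ClassificationMetric
    print(cm.theil_index())           # ClassificationMetric only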
Euclidean Distance
The average Euclidean distance between the samples from the two datasets.
Mahalanobis Distance
The average Mahalanobis distance between the samples from the two datasets.
Manhattan Distance
The average Manhattan distance between the samples from the two datasets.
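A plain NumPy/SciPy illustration of the three averages over paired samples (for example, original rows versus their transformed counterparts); this sketches the underlying math rather than the AIF360 API, and the arrays are synthetic:

    import numpy as np
    from scipy.spatial.distance import mahalanobis

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 3))                  # e.g. original features
    Y = X + rng.normal(scale=0.1, size=(5, 3))   # e.g. transformed features

    VI = np.linalg.inv(np.cov(np.vstack([X, Y]).T))  # inverse covariance matrix
    print(np.linalg.norm(X - Y, axis=1).mean())                    # Euclidean
    print(np.mean([mahalanobis(x, y, VI) for x, y in zip(X, Y)]))  # Mahalanobis
    print(np.abs(X - Y).sum(axis=1).mean())                        # Manhattan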