fairMLHealth

Healthcare-specific tools for bias analysis


Publications

ICHI 2021 Tutorial

About

This updated tutorial covers concepts for measuring fairness in machine learning models as they relate to problems in healthcare: ICHI2021-FairnessInHealthcareML-Slides.pdf. It is best used with our tutorial notebooks: Tutorial-EvaluatingFairnessInBinaryClassification.ipynb and Tutorial-EvaluatingFairnessInRegression.ipynb.

Citation

Ahmad, M. A., Allen, C., Eckert, C., Hu, J., Kumar, V., & Teredesai, A. (2021, May). Fairness in Healthcare AI. In Proceedings of the 9th IEEE International Conference on Healthcare Informatics. (https://ichi2021.institute4hi.org/program/tutorial).

PAKDD 2021 Tutorial

About

This updated tutorial covers concepts for measuring fairness in machine learning models as they relate to problems in healthcare: PAKDD2021-FairnessInHealthcareML-Slides.pdf. It is best used with our tutorial notebooks: Tutorial-EvaluatingFairnessInBinaryClassification.ipynb and Tutorial-EvaluatingFairnessInRegression.ipynb.

Citation

Ahmad, M. A., Allen, C., Eckert, C., Hu, J., Kumar, V., Patel, A., & Teredesai, A. (2021, May). Fairness in Healthcare AI. In Proceedings of the 25th Pacific-Asia Conference on Knowledge Discovery and Data Mining. (https://www.pakdd2021.org/Programme/tutorials).

KDD 2020 Tutorial

About

From KDD 2020, this is our first tutorial covering concepts for measuring fairness in machine learning models as they relate to problems in healthcare (slides: KDD2020-FairnessInHealthcareML-Slides.pdf). Through the associated notebook FairnessInHealthcareML-KDD-2020-TutorialNotebook.ipynb, you will review the background introduced in the slides before generating a simple baseline model. This baseline is then used as an example for understanding common measures such as the Disparate Impact Ratio and Consistency Scores. The notebook will also introduce you to the Scikit-Learn-compatible tools available in AIF360 and Fairlearn, two of the most comprehensive and flexible Python libraries for measuring and addressing bias in machine learning models.
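For orientation, the sketch below shows one way the Disparate Impact Ratio might be computed with Fairlearn. It is a minimal illustration on toy arrays (y_true, y_pred, and sex are hypothetical placeholders), not code taken from the tutorial notebook:

    # Minimal sketch: Disparate Impact Ratio with Fairlearn (toy data).
    import numpy as np
    from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_ratio

    y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])  # hypothetical labels
    y_pred = np.array([0, 1, 0, 0, 1, 1, 1, 0])  # hypothetical predictions
    sex = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])  # hypothetical group

    # Selection (favorable-prediction) rate within each group
    rates = MetricFrame(metrics=selection_rate, y_true=y_true,
                        y_pred=y_pred, sensitive_features=sex)
    print(rates.by_group)

    # Disparate Impact Ratio: minimum group selection rate divided by the
    # maximum. A value of 1.0 indicates parity; the common "80% rule"
    # flags values below 0.8.
    print(demographic_parity_ratio(y_true, y_pred, sensitive_features=sex))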

The tutorial notebook uses data from the MIMIC-III critical care database. Although the data are freely available, gaining approval may take a few days. Please save the data using the default directory name (“MIMIC”). The notebook also requires the following Python libraries: AIF360, Fairlearn, SciPy, Pandas, NumPy, Scikit-Learn, and XGBoost. Basic knowledge of machine learning implementation in Python is assumed.
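As a quick smoke test of the environment, the hedged sketch below computes a dataset-level disparate impact and AIF360's kNN-based consistency score on a toy frame. The columns (sex, age, label) are illustrative placeholders, not the MIMIC-III schema:

    # Minimal AIF360 sketch on toy data (NOT the MIMIC-III schema).
    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric

    df = pd.DataFrame({
        "sex": [0, 0, 0, 0, 1, 1, 1, 1],       # hypothetical protected attribute
        "age": [34, 51, 28, 62, 45, 39, 70, 23],
        "label": [0, 1, 0, 0, 1, 1, 1, 0],     # hypothetical binary outcome
    })

    dataset = BinaryLabelDataset(df=df, label_names=["label"],
                                 protected_attribute_names=["sex"],
                                 favorable_label=1, unfavorable_label=0)

    metric = BinaryLabelDatasetMetric(dataset,
                                      unprivileged_groups=[{"sex": 0}],
                                      privileged_groups=[{"sex": 1}])

    print(metric.disparate_impact())  # ratio of favorable-label rates
    print(metric.consistency())       # kNN-based individual-fairness score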

Citation

Ahmad, M. A., Patel, A., Eckert, C., Kumar, V., & Teredesai, A. (2020, August). Fairness in Machine Learning for Healthcare. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 3529-3530).

@inproceedings{FMLH_KDD2020,
    title = {Fairness in Machine Learning for Healthcare},
    author = {Ahmad, M. A. and Patel, A. and Eckert, C. and Kumar, V. and Teredesai, A.},
    year = 2020,
    month = {August},
    booktitle = {Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery \& Data Mining},
    pages = {3529--3530}
}