We present a machine learning pipeline for fairness-aware machine learning (FAML) in finance that encompasses metrics for both fairness and accuracy. Whereas accuracy metrics are well understood and the principal ones are used routinely, there is no consensus on which of the many available fairness measures should be applied generically in the financial services industry. We survey these measures, discuss which ones to focus on at each stage of the ML pipeline (pre-training and post-training), and examine simple bias-mitigation approaches. Using a standard dataset, we show that the sequencing in our FAML pipeline offers a cogent approach to arriving at a fair and accurate ML model. We discuss the intersection of bias metrics with legal considerations in the US, and the case study exemplifies the entanglement of explainability and fairness. Finally, we discuss possible approaches for training ML models under constraints imposed by various fairness metrics, and the role of causality in assessing fairness.
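As an illustration of the kind of post-training fairness measures the abstract refers to, the sketch below computes two widely used ones, statistical (demographic) parity difference and disparate impact, from binary model predictions and a binary protected attribute. This is a minimal example under assumed inputs, not the paper's implementation; the function name `parity_metrics` and the toy data are hypothetical.

```python
def parity_metrics(y_pred, group):
    """y_pred: 0/1 model predictions; group: 0/1 protected-attribute flags.

    Returns (statistical parity difference, disparate impact) between
    the two groups. A sketch for illustration, not a production metric.
    """
    # Positive-prediction rate within each group
    rate = {}
    for g in (0, 1):
        members = [p for p, a in zip(y_pred, group) if a == g]
        rate[g] = sum(members) / len(members)
    # Statistical parity difference: gap in positive rates
    spd = rate[1] - rate[0]
    # Disparate impact: ratio of positive rates; US employment guidance
    # (the "four-fifths rule") flags ratios below 0.8
    di = rate[1] / rate[0] if rate[0] else float("inf")
    return spd, di

# Toy data: group 0 is predicted positive 3/4 of the time, group 1 only 1/4
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
spd, di = parity_metrics(y_pred, group)
print(spd, di)  # -0.5 0.333...
```

Analogous rate comparisons over the labels (rather than the predictions) give the corresponding pre-training metrics on the dataset itself.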
Fairness measures for machine learning in finance
2021