Monday, September 9, 2019
Stephanie Kelley & Anton Ovchinnikov, "(Anti-Discrimination) Laws, AI and Gender Bias"
We use state-of-the-art machine learning models trained on publicly available data to show that the data governance practices imposed by existing anti-discrimination laws, when applied to automated algorithmic (“AI”) decision-making systems, can lead to significantly less favourable outcomes for the minority classes they are supposed to protect. Our study is set in the domain of non-mortgage credit provision, where US and EU law prohibits the use of Gender variables in training credit-scoring models; US law further prohibits the collection of Gender data. We show that excluding Gender as a predictor has little impact on model accuracy and on outcomes for males (the majority), but leads to a 30-50% increase in credit-rejection rates for females (the minority). We further show that rebalancing the data with respect to Gender prior to training can significantly reduce the negative impact on females, without harming males, even when Gender is excluded from the credit-scoring models. Taken together, our findings provide insight into the value of transparency and accountability, as opposed to prohibition, for ethically managing data and AI systems as societies and legal systems adapt to the rapid advances in automated, AI-driven decision making. Additionally, we hope that performing the analyses in a verifiable, open-access way, as we did, will facilitate future inquiries into this critically important societal issue by other researchers and the interested public.
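The Gender-rebalancing step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: the function name, the dict-based data layout, and the choice of oversampling the minority group with replacement are all assumptions made for the example.

```python
import random

def rebalance_by_gender(rows, gender_key="gender", seed=0):
    """Oversample the minority gender group so all groups are equal in size.

    `rows` is a list of dicts; `gender_key` names the protected attribute.
    Illustrative sketch only -- not the paper's exact rebalancing method.
    """
    random.seed(seed)
    groups = {}
    for row in rows:
        groups.setdefault(row[gender_key], []).append(row)
    target = max(len(g) for g in groups.values())
    balanced = []
    for g in groups.values():
        balanced.extend(g)
        # Draw extra rows with replacement until the group reaches `target`.
        balanced.extend(random.choices(g, k=target - len(g)))
    random.shuffle(balanced)
    return balanced

# Hypothetical toy data: 6 male applicants, 2 female.
data = [{"gender": "M", "income": i} for i in range(6)] + \
       [{"gender": "F", "income": i} for i in range(2)]
balanced = rebalance_by_gender(data)
counts = {}
for row in balanced:
    counts[row["gender"]] = counts.get(row["gender"], 0) + 1
print(counts)  # both groups now have 6 rows
```

Note that, consistent with the paper's setting, the Gender column would still be dropped from the feature set after rebalancing and before fitting the credit-scoring model; rebalancing only changes the composition of the training sample.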