Journal of the Association for Information Systems, Vol. 25, pp. 804-866, 2024

Feedback Loops in Machine Learning: A Study on the Interplay of Continuous Updating and Human Discrimination

Machine learning (ML) models often endogenously shape the data available for future updates because they influence human decisions, which in turn generate new training data. For instance, if an ML prediction leads to a loan applicant's rejection, the bank forgoes the opportunity to observe that person's actual creditworthiness, removing this data point from future model updates and potentially affecting the model's performance. This paper examines the relationship between the continuous updating of ML models and algorithmic discrimination in environments where predictions endogenously influence the creation of new training data. Using comprehensive simulations based on secondary empirical data, we trace the dynamic evolution of an ML model's fairness and economic consequences in a setting that mirrors sequential interactions, such as loan approval decisions. Our findings indicate that continuous updating can help mitigate algorithmic discrimination and enhance economic efficiency over time. Most importantly, we provide evidence that human decision-makers in the loop, who have the authority to override ML predictions, may impede the self-correction of discriminatory models and even cause initially unbiased models to become discriminatory over time. These findings underscore the complex socio-technical nature of algorithmic discrimination and highlight the role humans play in addressing it when ML models undergo continuous updating. Our results carry important practical implications, especially considering impending regulations that require human involvement in ML-supported decision-making processes.
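
To make the feedback-loop mechanism concrete, the following minimal Python sketch simulates it under assumptions of our own choosing (a one-feature logistic lending model, a 0.5 approval threshold, and a random-override stand-in for the human decision-maker). It is an illustration of the selective-labels dynamic the abstract describes, not the paper's actual simulation design.

# Illustrative sketch (not the paper's simulation): a lending feedback loop
# with selective labels. All parameter values, the true repayment process,
# and the random-override mechanism are assumptions for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def applicants(n):
    """Draw n applicants: one observed feature x and a latent repayment outcome y."""
    x = rng.normal(size=(n, 1))
    p_repay = 1 / (1 + np.exp(-(1.5 * x[:, 0] - 0.2)))  # assumed true process
    y = rng.binomial(1, p_repay)
    return x, y

# Warm-start training set, e.g., historical loans granted to all applicants.
X_train, y_train = applicants(200)
model = LogisticRegression().fit(X_train, y_train)

OVERRIDE_RATE = 0.0  # set > 0 to let a "human" flip some ML decisions

for period in range(50):
    X, y = applicants(100)
    approve = model.predict_proba(X)[:, 1] >= 0.5

    # A human in the loop may override some ML decisions (here: random flips).
    flip = rng.random(len(approve)) < OVERRIDE_RATE
    approve ^= flip

    # Selective labels: repayment is observed only for approved applicants,
    # so only they enter the data available for the next model update.
    X_train = np.vstack([X_train, X[approve]])
    y_train = np.concatenate([y_train, y[approve]])

    # Continuous updating: retrain on the endogenously generated data.
    if len(np.unique(y_train)) > 1:
        model = LogisticRegression().fit(X_train, y_train)

Raising OVERRIDE_RATE above zero changes which applicants enter the training data, which is the channel through which human overrides can impede, or induce, discrimination in a continuously updated model.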