Algorithmic Discrimination

Project start: 01/2019
Status: Completed
Researchers: Benjamin M. Abdel-Karim, Oliver Hinz, Nicolas Winfried Pfeuffer
Area: Financial Markets
Funded by: LOEWE

Topic and Objectives

Machine-learning (ML) systems increasingly augment human decision-making in a multitude of domains, including marketing, social work, banking, and medicine. Already in 2018, the market for Big Data analytics software in Europe alone reached a revenue of 14.6 billion U.S. dollars. According to projections, this market will grow by about 8 percent annually over the next five years. At their core, these systems transform input information into predictions about an uncertain state of the world that can help reduce information asymmetries. The systems learn to make predictions – more or less autonomously – from large amounts of training data. Humans in the loop can leverage these predictions to make better decisions. Examples include recidivism predictions judges use to set bail, candidate performance forecasts HR managers use to make hiring decisions, and predictions about applicants' future performance in study programs used by universities to determine admissions. Against the background that machine predictions are faster, cheaper, and frequently more reliable and more scalable than subjective human forecasts, ML technologies promise to enhance economic efficiency and social welfare.

The central assumption of contemporary ML systems is that the same data generation process underlies the training data and the out-of-sample data for which they generate predictions. However, in dynamically changing, nonstationary environments, the data-generating process can change over time so that patterns learned from historical data may gradually become inaccurate – a phenomenon referred to as concept drift. When concept drift occurs, predictive models require retraining on augmented data sets that encode the changing data distribution. Hence, to prevent ML models from becoming increasingly inaccurate over time, one typically feeds newly generated data to the system and retrains it on a regular basis. While research shows that this continued learning approach mitigates concerns about systems' decreasing predictive performance, only a limited number of papers provide evidence on the potential downstream ramifications of feedback loops in the presence of discriminatory behaviors. Retraining biased ML systems on data that they helped to create, e.g., by influencing human decision-makers, can lead systems to reinforce previously learned, biased patterns and endogenously reshape the data generation process, decreasing performance over time. This self-reinforcing process may be particularly problematic if not only the ML system itself but also the humans in the loop discriminate.
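To illustrate the mechanism, the sketch below shows a continued learning loop in which a model's own recommendations shape the data it is later retrained on. This is a minimal, hypothetical example: the stylized approval setting, the data-generating process, the scikit-learn model choice, and all names and parameters are illustrative assumptions, not the project's actual setup.

```python
# Minimal sketch of continued learning with an endogenous feedback loop:
# outcomes are only observed for cases the model approves, so the data the
# model helped to create become the next round's training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def draw_population(n):
    """Hypothetical applicants: one group attribute and one noisy signal."""
    group = rng.integers(0, 2, n)                       # protected attribute (0/1)
    signal = rng.normal(loc=0.5 * group, scale=1.0, size=n)
    outcome = (signal + rng.normal(0.0, 1.0, n) > 0).astype(int)
    return np.column_stack([group, signal]), outcome

# Historical (possibly biased) training data
X_train, y_train = draw_population(1_000)
model = LogisticRegression().fit(X_train, y_train)

for period in range(10):                                # repeated retraining rounds
    X_new, y_new = draw_population(1_000)
    approve = model.predict(X_new) == 1                 # system recommendation

    # Feedback loop: only approved cases yield observed labels and are
    # appended to the training set used for the next retraining step.
    X_train = np.vstack([X_train, X_new[approve]])
    y_train = np.concatenate([y_train, y_new[approve]])

    model = LogisticRegression().fit(X_train, y_train)  # retrain on augmented data
```

Under such a selective-labels setup, an initially biased model can keep excluding the cases that would correct it, which is the self-reinforcing dynamic the project examines.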

With our project, we explore the dynamics involved in continued learning processes in an empirical setting. We examine how discrimination by ML systems, i.e., algorithmic discrimination, and ongoing discrimination by human users of the system, i.e., human discrimination, affect the outcome of economic transactions over time when the ML system undergoes repeated retraining using novel training data it helps to create.

Key Findings

  • Our exploration reveals that repeated retraining on endogenously created data can spur positive feedback loops that help mitigate algorithmic discrimination over time; however, this holds only if the human in the loop does not perpetuate discriminatory behaviors.
  • We demonstrate the pivotal role of human discrimination, which can impede the self-healing of initially biased models that learn on an ongoing basis and can adversely affect the behavior of unbiased ML models in the long run.

Policy Implications

  • Organizations may be well advised to implement a process ensuring the continued collection of new training examples and updating of employed ML systems.
  • The employment of continued learning approaches should be accompanied by safeguards ensuring that human decision-makers do not perpetuate discriminatory patterns.

Related Published Papers

Researchers: Kevin Bauer, Rebecca Heigl, Oliver Hinz, Michael Kosfeld
Title: The Terminator of Social Welfare? - The Economic Consequences of Algorithmic Discrimination (forthcoming in the Journal of the Association of Information Systems)
Year: 2024
Area: Financial Markets

Related Working Papers

No.: 287
Researchers: Benjamin M. Abdel-Karim, Kevin Bauer, Oliver Hinz, Michael Kosfeld, Nicolas Winfried Pfeuffer
Title: The Economic Consequences of Algorithmic Discrimination: Theory and Empirical Evidence
Year: 2020
Area: Financial Markets
Keywords: Algorithmic Discrimination, Artificial Intelligence, Game Theory, Economics, Batch Learning