Using a novel theoretical framework and data from a comprehensive three-year field study, we identify the causal effects of algorithmic discrimination on economic efficiency and social welfare in a strategic setting under uncertainty. We combine concepts from economics, game theory, and applied machine learning, allowing us to overcome the central challenge of missing counterfactuals, which generally impedes demonstrating the downstream economic consequences of algorithmic discrimination. Using our framework and unique data, we provide both theoretical and empirical evidence on the consequences of algorithmic discrimination. Our empirical setting allows us to precisely quantify efficiency and welfare ramifications relative to an ideal world without information asymmetries. Our results emphasize that the capacity of Artificial Intelligence systems to overcome information asymmetries, and thereby enhance welfare, declines with the degree of inherent algorithmic discrimination against specific groups in the population. This relation is particularly concerning in selective-labels environments, where outcomes are observed only if decision-makers take a particular action, so that the data are selectively labeled. The reason is that commonly used technical performance metrics, such as precision, can be highly deceptive and lead to erroneous conclusions. Finally, our results show that continued learning, by creating feedback loops, can help remedy algorithmic discrimination and its associated negative effects over time.
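The deceptiveness of precision under selective labels can be illustrated with a stylized simulation (not the paper's actual setting or data; group sizes, the bias penalty, and the approval threshold below are illustrative assumptions). Because outcomes are observed only for approved applicants, a discriminatory score that raises the bar for one group can show equal or even higher measured precision while denying many creditworthy applicants and thus reducing welfare:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)                 # 0 = majority, 1 = minority (hypothetical groups)
skill = rng.normal(0.0, 1.0, n)               # true creditworthiness, same distribution in both groups
repay = (skill + rng.normal(0.0, 0.5, n)) > 0  # true outcome; in practice observed only if approved

score_fair = skill                            # unbiased score
score_biased = skill - 1.0 * group            # discriminatory penalty applied to the minority group

def evaluate(score, threshold=0.0):
    """Approve applicants above the threshold and compute metrics
    on the selectively labeled (approved-only) sample."""
    approved = score > threshold
    precision = repay[approved].mean()        # measured only where labels exist
    good_loans = int((approved & repay).sum())  # welfare proxy: creditworthy applicants served
    return precision, good_loans

p_fair, g_fair = evaluate(score_fair)
p_biased, g_biased = evaluate(score_biased)

# The biased score screens the minority group more harshly, so its approved
# pool looks at least as "precise", even though fewer good loans are made.
print(f"fair:   precision={p_fair:.3f}, good loans={g_fair}")
print(f"biased: precision={p_biased:.3f}, good loans={g_biased}")
```

In this sketch, the biased decision-maker's precision is at least as high as the fair one's because rejected minority applicants never generate labels, yet total welfare (good loans made) is strictly lower, mirroring the abstract's point that technical metrics computed on selectively labeled data can mask discrimination.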
SAFE Working Paper No. 287