SAFE Finance Blog
06 Oct 2020

Algorithmic discrimination in the financial sector

Kevin Bauer: Besides its great potential, the use of algorithms for automated decision-making also bears risks that can lead to a loss of social welfare

Many public and private organizations use artificial intelligence (AI) systems to support or even automate human decision-making in their operational business. In the financial sector, too, progress in machine learning (ML) is expected to enable substantial efficiency gains. On the one hand, this trend holds enormous potential for increasing productivity and customer convenience while reducing costs. On the other hand, there is a risk that certain groups will be systematically favored or disadvantaged in algorithmically made decisions, which can lead to serious, irreversible social upheaval.

In essence, most of today's AI systems consist of ML algorithms that are trained on enormous datasets and thereby learn, more or less autonomously, patterns of real relationships between different variables. Based on the available information, they are supposed to predict a variable of interest for an individual as accurately as possible. These predictions can then be used to make decisions under uncertainty and in environments of asymmetric information.

Automated processes control bank lending

In the financial sector, for example, AI systems are increasingly used to manage risks at various levels. At the level of the individual client, ML algorithms use client information to predict the credit default risk of applicants and ultimately decide on the granting of credit. Nowadays, consumer lending at many banks is controlled by an almost completely automated process. This makes the selection process easier for the customer and more cost-effective for the bank, but it carries the risk of algorithmic discrimination: the systematic unequal treatment of individuals based on their membership in social categories.
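
To make this concrete, the following minimal sketch (in Python) illustrates how such an automated decision pipeline can look: a model is trained on historical applications and a fixed cutoff turns its default prediction into a credit decision. The features, numbers, and approval threshold are invented for illustration and do not describe any real bank's system.

# Hypothetical sketch of an automated credit decision: a model trained on
# historical applications scores a new applicant, and a fixed cutoff turns
# the score into an approve/deny decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic application history: monthly income, debt ratio, default outcome.
n = 1000
income = rng.normal(3.0, 0.8, n)         # hypothetical income in thousands
debt_ratio = rng.uniform(0.0, 0.9, n)    # share of income already committed
p_default = 1 / (1 + np.exp(-(3 * debt_ratio - income)))
defaulted = rng.binomial(1, p_default)

X = np.column_stack([income, debt_ratio])
model = LogisticRegression().fit(X, defaulted)

# Automated decision: grant the loan only if the predicted default
# probability stays below a fixed cutoff (0.3 here, purely illustrative).
applicant = np.array([[2.8, 0.35]])
default_prob = model.predict_proba(applicant)[0, 1]
print("approve" if default_prob < 0.3 else "deny", round(default_prob, 2))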

Algorithmic discrimination can arise in today's AI systems through different channels. Through automated, data-driven model development, ML algorithms can learn and reproduce social inequality, marginalization, and discrimination embedded in historical data. In addition, the identified patterns and learned relationships can be incorrect from the start and create new inequalities if the data on which the algorithmic systems are trained and tested are not sufficiently representative of the populations to which they are applied.

For example, data with a disproportionately high share of women who were unable to repay a consumer loan could lead to an ML model systematically predicting a lower repayment probability for women. In an automated system of credit allocation, women would then receive loans less frequently. Such an AI system could therefore increase social inequality and significantly reduce women's economic welfare.
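
A stylized sketch, again with invented numbers, shows how such an unrepresentative sample can translate into systematically lower scores for women even though the true repayment rates of both groups are identical:

# Hypothetical illustration: the true repayment rate is identical for women
# and men, but the training sample over-represents women who defaulted.
# A model that uses gender (or a close proxy) then scores women lower.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
TRUE_REPAY = 0.8  # identical true repayment rate in both groups

# Representative records for both groups ...
women_repaid = rng.binomial(1, TRUE_REPAY, 500)
men_repaid = rng.binomial(1, TRUE_REPAY, 500)
# ... plus an unrepresentative block of additional women who defaulted.
women_repaid = np.concatenate([women_repaid, np.zeros(300, dtype=int)])

gender = np.concatenate([np.ones(len(women_repaid)), np.zeros(len(men_repaid))])
repaid = np.concatenate([women_repaid, men_repaid])

model = LogisticRegression().fit(gender.reshape(-1, 1), repaid)
p_woman = model.predict_proba(np.array([[1.0]]))[0, 1]
p_man = model.predict_proba(np.array([[0.0]]))[0, 1]
print(f"predicted repayment probability: woman {p_woman:.2f}, man {p_man:.2f}")
# The gap arises purely from the biased sample, not from any true difference.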

AI systems require more stringent quality control

In addition to these direct effects, there is a risk that the distortions will be amplified over time by the AI system's automated learning. This algorithmic feedback effect can occur whenever the predictions an AI system generates endogenously influence the structure or nature of the data available in the future. In the consumer lending example, the algorithm would cause fewer women to receive a loan, and the existing data would therefore be enriched with fewer new examples of women, since repayment can only be observed if a loan is actually granted. This would further bias the already unbalanced dataset. The quality of the predictions for women would continuously decrease, which could lead to fewer and fewer loans being granted to women.
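
This dynamic can be illustrated with a stylized simulation under the same invented assumptions: because repayment is only observed for granted loans, the group that starts out with biased records receives fewer approvals and therefore contributes ever fewer new training examples, so the initial gap never closes:

# Stylized feedback-loop simulation: repayment is only observed for approved
# applicants, so a group that starts with biased data receives fewer
# approvals and contributes ever fewer new training examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
TRUE_REPAY = 0.8   # identical true repayment rate in both groups
THRESHOLD = 0.65   # loan granted if predicted repayment probability >= cutoff

# Initial training data: the records on women are biased toward defaults.
gender = np.concatenate([np.ones(100), np.zeros(100)])   # 1 = woman, 0 = man
repaid = np.concatenate([rng.binomial(1, 0.5, 100),      # biased history, women
                         rng.binomial(1, TRUE_REPAY, 100)])

for round_ in range(5):
    model = LogisticRegression().fit(gender.reshape(-1, 1), repaid)
    applicants = np.concatenate([np.ones(100), np.zeros(100)])  # new applicants
    p_repay = model.predict_proba(applicants.reshape(-1, 1))[:, 1]
    approved = p_repay >= THRESHOLD
    # Outcomes are only ever observed for the applicants who were approved.
    gender = np.concatenate([gender, applicants[approved]])
    repaid = np.concatenate([repaid, rng.binomial(1, TRUE_REPAY, approved.sum())])
    print(f"round {round_}: approved women {int(approved[applicants == 1].sum())}, "
          f"approved men {int(approved[applicants == 0].sum())}")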

The model's prediction thus also determines whether a new training example, including the corresponding outcome, can be collected at all (the selective labels problem). In addition, the true performance of ML models is difficult to measure precisely, and the impression of the "machine's" effectiveness quickly becomes too optimistic. This bears the risk that incorrect and discriminatory systems are not recognized as such and become widely used. As a consequence, AI systems become gatekeepers of economic welfare.

To counteract this dystopian scenario and instead reap the societal benefits of AI, effective quality assurance protocols must be developed that thoroughly test new AI systems for their societal consequences before they are launched on the market. This is where policymakers play a crucial role. Together with technology developers, they must draw up comprehensive ethical and technical guidelines to ensure that AI systems generate progress and social welfare that benefit society as a whole, not just specific groups.


Kevin Bauer holds a PhD in economics from Goethe University Frankfurt and works at SAFE in the field of digitization in the financial industry.

A longer version has been published as SAFE Working Paper No. 287: Bauer, Kevin / Pfeuffer, Nicolas / Abdel-Karim, Benjamin / Hinz, Oliver / Kosfeld, Michael (2020): “The Terminator of Social Welfare? The Economic Consequences of Algorithmic Discrimination”.