20 May 2021

Poorly trained AI harms certain consumer groups

Artificial intelligence is increasingly being used in the financial sector. However, if the underlying algorithms are trained on biased data, this can have serious consequences for certain groups of people.

Decisions in the financial industry are increasingly being made with the help of artificial intelligence (AI) rather than by humans alone. Advances in machine learning (ML) pay off here: productivity and customer convenience increase while costs decrease. However, a working paper by the Leibniz Institute for Financial Research SAFE shows that the use of AI runs the risk of systematically disadvantaging certain groups of consumers.

For an AI system’s algorithms to work in a targeted way, they must be trained on large data sets. However, these data may contain systematic biases against individuals belonging to a particular group, for example based on their gender, income, education level, or age. “Systematic biases against certain populations that have grown historically can be amplified in the age of smart algorithms and have increasingly undesirable social and economic consequences,” says Kevin Bauer, researcher in the SAFE research department “Financial Intermediation” and one of the study’s authors.

Biased data leads to biased decisions

If, for example, a bank’s historical lending data contains a disproportionately low share of women, this affects the predictive reliability of the AI used: an algorithm trained on these data will be systematically worse at adequately determining women’s creditworthiness. The algorithm thus runs the risk of predicting, on average, a lower probability for women to repay a loan, and such an automated AI system would be less likely to attribute creditworthiness to women. “Thus, the bias-trained AI system could reinforce societal inequality and further reduce women’s economic welfare,” Bauer explains.
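To make the mechanism concrete, the following minimal Python sketch (not the study’s code; the data, the numbers, and the make_applicants helper are entirely hypothetical) trains a simple credit-scoring model on a sample in which women account for only a small share of past loans. On representative test data, the model then underestimates women’s average repayment probability:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_applicants(n, female):
    # Synthetic applicants: in this toy data, women repay reliably at lower
    # recorded incomes than men, so a model fitted mostly to men misjudges them.
    income = rng.normal(2400, 600, n)
    threshold = 2000 if female else 2800
    repaid = (income + rng.normal(0, 300, n) > threshold).astype(int)
    return income.reshape(-1, 1), repaid

# Biased historical sample: only about 5 percent of loan histories come from women.
X_men, y_men = make_applicants(4750, female=False)
X_women, y_women = make_applicants(250, female=True)
X_train = np.vstack([X_men, X_women])
y_train = np.concatenate([y_men, y_women])
model = LogisticRegression().fit(X_train, y_train)

# On representative test data, the model underestimates women's repayment probability.
X_test_w, y_test_w = make_applicants(2000, female=True)
print("actual repayment rate of women:      ", y_test_w.mean())
print("mean predicted repayment probability:", model.predict_proba(X_test_w)[:, 1].mean())
```

Because the training data are dominated by men, the fitted decision boundary reflects the male repayment pattern, and creditworthy women near that boundary are systematically assigned too low a repayment probability.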

However, the problem can be significantly reduced, and possibly even solved, by continuously feeding representative, unbiased data back into the system. In this way, the algorithm keeps learning until it overcomes so-called algorithmic discrimination and functions without prejudice. “To measure the success of the AI system, it is still necessary for humans to monitor the performance, and thus the quality, of a trained algorithm,” Bauer says.
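Continuing the hypothetical sketch above, the loop below illustrates such a feedback process: newly collected, representative batches of data are repeatedly added and the model is retrained, while a human checks after each round how far predicted and actual repayment rates for women still diverge. This is only an illustration under the same toy assumptions, not the method used in the paper:

```python
for round_no in range(1, 4):
    # Feedback loop: collect a new, representative batch with equal numbers of
    # men and women, add it to the training data, and retrain the model.
    X_new_m, y_new_m = make_applicants(1000, female=False)
    X_new_w, y_new_w = make_applicants(1000, female=True)
    X_train = np.vstack([X_train, X_new_m, X_new_w])
    y_train = np.concatenate([y_train, y_new_m, y_new_w])
    model = LogisticRegression().fit(X_train, y_train)

    # Human monitoring step: compare predicted and actual repayment rates for
    # women to see how much of the gap remains after each retraining round.
    predicted = model.predict_proba(X_test_w)[:, 1].mean()
    print(f"round {round_no}: predicted {predicted:.2f} vs actual {y_test_w.mean():.2f}")
```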

Up to now, the financial industry has used AI systems to support or even automate human decision-making in operations, as well as to mitigate risk. Applications include chatbots, intelligent assistants for customers, automated high-frequency trading, automated fraud detection, and facial recognition to identify customers. For the SAFE Working Paper, the researchers collected original data in an experiment with more than 3,600 participants conducted between 2016 and 2019.

Download the SAFE Working Paper No. 287


Scientific contact

Dr. Kevin Bauer
Advanced Researcher in the SAFE research department Financial Intermediation
Email: bauerwhatever@safe-frankfurt.de
Phone: +49 69 798 30075