Credit decisions require an assessment of creditworthiness. Credit scores play an important role here: in Germany, reports on the creditworthiness of customers are issued by the General Credit Protection Agency (Schufa). The agency draws on, for example, personal data, payment history, and contractual compliance. Whether the storage of this data complies with the General Data Protection Regulation (GDPR) is currently the subject of a referral from the Wiesbaden Administrative Court.
With the enormous increase in computing capacity and available data, new business models have emerged. Their data sources range from smartphones to payment cards to social networks (“alternative data”). Artificial intelligence (AI) is used to establish correlations between different data and a specific target (e.g., a low risk of default). It is common for banks to have access to the payment behavior of their customers, insofar as it is visible from a current account. Variables such as purchasing behavior or recurring payments are useful for creating profiles. The evaluation of alternative data, by contrast, goes far beyond the relationship between banks and customers. Data miners specialize in data collection; scoring agencies create models and profiles. These may reveal, for example, that attending a university increases creditworthiness because the AI arrives at a positive forecast of future income. Conversely, a negative forecast may be calculated because a potential customer takes longer than average to fill out an online form.
New markets, new potential for AI scoring
Such AI applications based on alternative data can be used to refine the processes of traditional lending or to measure the risk of existing credit portfolios. In addition, a market has emerged for borrowers whose profile underperforms in traditional processes. If they turn out to be attractive customers based on alternative data, this makes it possible to build up a loan portfolio that could not have been priced adequately for the risk involved if (only) traditional data had been used. Therein lies not only the opportunity to open new markets but also the inclusive potential of AI scoring.
The strength of such AI scoring methods - namely the variety of alternative data - is at the same time their weakness. First, this concerns the extraction of alternative data. If we are not dealing with standardized financial data but, for example, with data gathered from social networks, the susceptibility to errors is higher. Another weakness results from the backward-looking view of the models. Data with which the AI is "trained" (i.e., from which the relevant correlations are derived) necessarily represent the past; the model infers the future from them.
If certain variables were particularly significant in the past (such as gender or marital status), the AI will attribute a higher weight to them than to other variables. If the circumstances in the world change, the variables in question no longer adequately represent the changed world (“historical bias”). The suggestion to delete the variable in question (e.g., gender) often does not help, because the AI will have given higher weight not only to gender but also to variables that correlate with it, such as height, first names, Google search terms, and part-time employment. Sorting all of these out, in turn, lowers the precision of the model.
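The proxy problem described above can be made concrete with a minimal, purely illustrative sketch. The data and the scoring rule below are hypothetical inventions for this example, not any real scoring model: a protected attribute is never fed into the score, yet the two groups still receive systematically different scores because a correlated proxy (here, loosely following the article's example, part-time employment) remains in the model.

```python
import random

random.seed(0)

# Hypothetical synthetic population: a protected attribute ("group")
# and a proxy variable ("part_time") that correlates with it.
population = []
for _ in range(10_000):
    group = random.random() < 0.5                       # protected attribute
    part_time = random.random() < (0.7 if group else 0.2)  # correlated proxy
    population.append((group, part_time))

def score(part_time):
    # A naive score that "deletes" the protected attribute entirely
    # and looks only at the proxy variable.
    return 500 if part_time else 700

avg = lambda rows: sum(score(pt) for _, pt in rows) / len(rows)
group_a = [r for r in population if r[0]]
group_b = [r for r in population if not r[0]]

# Despite never seeing the protected attribute, the model produces
# systematically different average scores for the two groups.
print(round(avg(group_a)), round(avg(group_b)))
```

Dropping the proxy as well would restore equal treatment in this toy setting, but, as the text notes, discarding every correlated variable also lowers the precision of the model.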
AI use in lending as a “high-risk application”
Discrimination risks of this kind have prompted the European Commission in its latest proposal for an AI regulation to classify AI credit scoring and AI-supported assessment of creditworthiness as a “high-risk application”. This goes hand in hand with an extensive catalog of compliance requirements as well as regulatory oversight and significant financial penalties.
The Commission has implicitly taken a first step toward regulating credit scoring agencies. Although these are common globally (such as FICO in the USA), they are by no means present in all EU member states. While credit scoring agencies have been regulated in the U.S. since the 1970s, Schufa’s activities in Germany are not explicitly covered. Only the more general, horizontal standards of trade and data protection rules catch their activities.
The proposed regulation adds another layer of regulation. Again, however, the approach is horizontal, not sectoral. This is consistent with the fact that it does not consolidate a bundle of common risks of credit scoring (these range from data collection, information, and correction rights to explicability and anti-discrimination), but only focuses on risks to fundamental rights that can arise precisely through AI scoring.
Intricate regulatory design
In the future, (newly appointed) AI authorities, anti-discrimination bodies (which will have their own information rights), and data protection authorities will be responsible. The regulatory design is made even more complicated by the fact that exceptions apply insofar as banks themselves carry out an AI-based creditworthiness assessment. Banking supervisors, not AI authorities, are responsible for CRR credit institutions.
Furthermore, the Proposal operates with a presumption: compliance with SREP’s risk and quality management requirements (Art. 74 CRD IV) is taken to extend to compliance with the AI Regulation. It seems quite probable that banking supervisory authorities will develop rules and guidance to specify these requirements. However, this regime does not apply to non-banks (i.e., also not to scoring agencies). AI supervisors will therefore develop their own rules and standards. Given the Proposal’s horizontal approach, with these supervisory bodies in charge of a wide variety of AI models (from medical devices and self-driving cars to criminal justice algorithms), their standards can be expected to differ significantly from those developed by financial supervisory authorities. This not only leads to considerable legal uncertainty; it also hinders effective law enforcement in consumer protection matters. Taking the EU proposal as an opportunity to regulate scoring agencies could remedy this.
Katja Langenbucher is Professor of Civil Law, Commercial Law, and Banking Law at the House of Finance of Goethe University Frankfurt and coordinates the LawLab – Fintech & AI as SAFE Research Professor.
This contribution was first published in issue 22/2021 of the European Journal of Business Law (in German).
Blog entries represent the authors’ personal opinion and do not necessarily reflect the views of the Leibniz Institute for Financial Research SAFE or its staff.