SAFE Finance Blog
25 Oct 2024

Artificial intelligence: New responsibilities for German financial supervisors?

Katja Langenbucher: The EU AI Act demands independent oversight of high-risk AI in the financial sector – a challenge for national implementation


The EU Regulation on Artificial Intelligence (AI Act) introduces a series of new compliance requirements for financial market players using “high-risk AI systems”. Responsibility for monitoring compliance falls to the financial supervisory authorities, which in Germany includes BaFin. At the same time, the regulation requires an “independent” authority. This could prove to be a problem.

The AI Act takes a risk-based approach. This means that many AI applications are subject to no or only minor requirements. However, the regulation classifies certain applications as high-risk. Where the risks are deemed unacceptable, for example in certain biometric identification procedures, emotion recognition in the work environment, or some social scoring procedures, the use of AI is prohibited outright. Several other applications are not prohibited but are classified as high-risk. These include the use of AI for certain decisions in personnel management, for credit scoring and creditworthiness assessment of natural persons, and for risk assessment and pricing of life and health insurance for natural persons.

What follows from a classification as high-risk AI? 

If a high-risk AI system is used, compliance requirements apply, for example regarding risk management, data governance, documentation, and cybersecurity. Under the regulation, providers and deployers must ensure that the system meets these requirements; market surveillance is carried out by a national AI authority. It is up to the national legislator to decide which authority is responsible: whether a new authority should be created, as in Spain, or whether existing authorities should be entrusted with this task, as in France. 

What supervisory structure will apply in the financial sector? 

For the use of AI in the financial sector, the EU recommends entrusting the national financial supervisory authority with this task. In Germany, this is BaFin. Bundling general financial supervision with the newly created AI supervision makes sense where financial institutions are concerned. In this way, an element of sectoral supervision, focused on specific areas of application, is introduced. This is a welcome departure from the otherwise horizontal approach of the AI Act, which, in the tradition of the General Data Protection Regulation, stipulates uniform requirements across all areas of application and sectors. However, there is one fly in the ointment: financial players such as AI-based credit scoring agencies, which are not subject to financial supervision, remain outside the scope of this allocation. This entails a risk that the same legal text will be interpreted differently, depending on which authority applies it.

What are the legal challenges for BaFin's jurisdiction regarding AI? 

The AI Act gives national legislators leeway in setting up a suitable supervisory structure. However, the competent authority must be “independent”. This does not just mean independence from the interests of the supervised companies. Following the example of the US independent agencies (which are themselves under fire from the conservative Supreme Court), it is also about independence from the government and from the legislator. 

This design already came under public scrutiny during the investigation into the Wirecard scandal (see SAFE White Paper No. 82, BaFin (in)dependence). In May 2022, the Federal Ministry of Finance published principles for its cooperation with BaFin, which refer to BaFin’s “operational independence” and “own responsibility”. However, the Ministry’s legal and technical supervision remains in place. 

From a German constitutional law perspective, BaFin’s supervision by the Federal Ministry of Finance is a necessary means of democratic legitimation. In this way, the citizen’s electoral decision runs, as it were, along a “thread of legitimation” via the Bundestag to the government, on to the political leadership of the ministries, and ultimately confers democratic legitimation on BaFin as well.

The design of some German public agencies has already been declared incompatible with European law. Relevant cases concerned data protection supervision and the Bundesnetzagentur (Federal Network Agency). The data protection authorities were subsequently redesigned as fully independent bodies; for the Bundesnetzagentur, smaller changes to policy goal-setting were made. In its decision on the banking union, the Federal Constitutional Court spelled out its view of detaching ECB supervision from the German thread of legitimation: this is compatible with constitutional requirements as long as it concerns only systemically important banks, a minimum of democratic legitimacy is maintained through reporting obligations to the Bundestag, and judicial control remains possible.

Conclusion 

BaFin's responsibility for market surveillance of AI applications in the financial sector is a welcome move towards integrating sectoral elements in supervision. However, European law requires an independent authority. The German legislator will have to respond to this. In the short term, the creation of an independent unit within BaFin is recommended. In the long term, consideration should be given to spinning it off as an institutionally separate digital authority.


Katja Langenbucher is Professor of Civil Law, Commercial Law, and Banking Law at the House of Finance of Goethe University Frankfurt and coordinates the LawLab – Fintech & AI as SAFE Bridge Professor.

Blog entries represent the authors’ personal opinion and do not necessarily reflect the views of the Leibniz Institute for Financial Research SAFE or its staff.