A little over a year ago, the SCHUFA judgment tightened the requirements for credit scoring under the EU General Data Protection Regulation (GDPR). On February 27, the European Court of Justice (ECJ) handed down further guidance on providing scored consumers with “meaningful information about the logic involved” as required by Art. 15(1)(h) of the GDPR.
The facts of the case
The case involves an Austrian plaintiff whose request to extend her mobile phone contract was denied. The company justified its decision by citing an automated creditworthiness evaluation conducted by Dun & Bradstreet (D&B), which indicated that her score was too low to support a monthly rate of €10. Initially, the plaintiff sought information from D&B regarding her score. Surprisingly, the score provided to her reflected excellent creditworthiness. Dissatisfied, she escalated the matter to the Austrian Data Protection Authority and pursued legal action in various courts, seeking a detailed explanation of the score that must have been transmitted to the mobile phone company. D&B refused to disclose further details, claiming that the requested information constituted trade secrets.
An information technology expert, appointed by the Austrian court, explained that D&B must provide: (i) the personal data that D&B used to build the “features” entering the computation of the score (raw data disclosure), (ii) exact insights into how the employed model transforms input features into the score (global explanation disclosure), (iii) the concrete value of each feature attributed to the plaintiff (feature value disclosure), and (iv) the concrete intervals of raw information for which the feature engineering process would result in the same feature value (feature engineering disclosure). Additionally, the expert argued, D&B should produce a list of other consumers that D&B had scored based on the same rules.
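To make categories (iii) and (iv) concrete, the Python sketch below shows what a feature engineering step of this kind might look like. The feature, the bin boundaries, and the values are purely hypothetical illustrations and bear no relation to D&B’s actual model.

```python
# Hypothetical feature engineering step: raw personal data is binned into
# discrete feature values. Disclosure (iv) would reveal the bin boundaries,
# i.e. the intervals of raw values that all map to the same feature value.

def income_feature(monthly_income_eur: float) -> int:
    """Map raw monthly income (EUR, hypothetical bins) to a feature value."""
    if monthly_income_eur < 1500:
        return 0   # interval [0, 1500) -> feature value 0
    elif monthly_income_eur < 3000:
        return 1   # interval [1500, 3000) -> feature value 1
    else:
        return 2   # interval [3000, inf) -> feature value 2

# Disclosure (iii): the concrete feature value attributed to the data subject.
print(income_feature(2200))  # -> 1
```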
What the Court decided
First, the ECJ reaffirms its earlier holding in SCHUFA, maintaining that the calculation of a credit score constitutes automated decision-making under Article 22 of the GDPR. While the mobile phone company might be considered the ultimate decision-maker, the ECJ recognizes that D&B also produces a decision in the form of a score. Arguably, the mobile phone company had a predefined threshold score, automatically denying continuation of the contract if a consumer’s score fell below that threshold.
Second, the ECJ provides further clarification on the interpretation of Art. 15(1)(h) GDPR, highlighting the limits of a textualist approach. The ECJ acknowledges the variations in terminology across different language versions of the GDPR. While some versions use terms like “useful,” “significant,” or “relevant,” the English and German versions employ “meaningful” (German: “aussagekräftig”). The ECJ reads Art. 15(1)(h) as granting consumers access to all information crucial to understanding the “procedures and principles” of automated decision-making that led to a specific result based on their data. Arguably, from a technical standpoint, this includes details on how raw data is turned into feature values and how those feature values translate into scores through mathematical models. The Court stresses the need to use clear and accessible language: explanations must be precise, transparent, and easy to understand.
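The second half of that pipeline, turning feature values into a score, can likewise be sketched. The toy model below is a hypothetical linear scorecard; its weights and base score are invented for illustration. Disclosing such weights would amount to the “global explanation disclosure” from the expert’s list above.

```python
# Hypothetical linear scorecard: the score is a base value plus a weighted
# sum of feature values. All weights are invented for illustration.

WEIGHTS = {"income_band": 25.0, "num_credit_cards": -10.0, "years_at_address": 5.0}
BASE_SCORE = 400.0

def credit_score(features: dict) -> float:
    """Compute the score from a consumer's feature values."""
    return BASE_SCORE + sum(WEIGHTS[name] * value for name, value in features.items())

consumer = {"income_band": 1, "num_credit_cards": 4, "years_at_address": 6}
print(credit_score(consumer))  # -> 400 + 25 - 40 + 30 = 415.0
```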
Access to information and property rights
Third, the ECJ underscores that access to information is crucial for enabling data subjects to exercise their rights under Art. 22(3) GDPR, particularly the right to obtain human intervention and to contest the decision. The ECJ clarifies that neither a complex mathematical formula (such as an algorithm) nor a detailed step-by-step explanation of the decision-making procedure is sufficient. Instead, the court suggests a pragmatic-sounding method for providing meaningful information. It proposes that data processors could demonstrate to consumers how changes in specific data points would have affected their score (note 62).
Fourth, the Court engages with D&B’s argument that the requested information qualifies as protected trade secrets. The ECJ acknowledges the need to balance the consumer’s right to access information against the scoring company’s intellectual property rights. At the same time, it emphasizes that trade secrets cannot be used as an absolute ground for refusal to provide information. It endorses a possible solution proposed by the Austrian court: D&B should provide the required information to that court, which would then determine the elements that can be disclosed to the plaintiff.
Implications of the decision
The decision follows through on the Court’s earlier reasoning, stressing consumers’ access rights while also protecting the trade secrets of data processors. However, it is unclear whether the Court’s interpretation of “meaningful information” does consumers a favor.
No market for domain experts
The ECJ rejects a right to access a “complex mathematical formula such as an algorithm” as too complicated for the consumer to follow (note 59). Arguably, this line of reasoning goes back to the Court’s repeated references to guidelines of the European Data Protection Board’s predecessor, the “Art. 29 Working Party” (notes 45, 60). It quotes these guidelines as requiring clear explanations, but “not necessarily details on the algorithm used”. This sounds as if the guidelines were concerned with trade-secret protection for the algorithm, but the ECJ sends a different message (note 61): if a complex algorithm is used, explanations must remain clear and comprehensible. This requirement does not stem from the algorithm’s status as a trade secret, but from the necessity to ensure the consumer’s understanding.
While it is crucial to ensure human interpretability, and although complex subsymbolic systems (e.g., neural networks) are inherently difficult for humans to understand, mathematical methods can help: they can condense such systems into forms that are intelligible to regular consumers or, at least, to domain experts. By denying access to the original underlying formula, the Court makes these simplifications impossible. This neglects the critical role professional intermediaries could play in auditing the algorithm and facilitating litigation.
Counterfactual explanations and trade secrets
The Court emphasizes the need to provide “concrete” explanations, enabling the consumer to double-check how “her data” was processed. In the case at hand, the consumer confronted two instances of automated decision-making: the computation of the numerical score, and the discontinuation of the mobile phone contract, arguably resting on a predefined minimum threshold for a consumer’s score. The latter was not part of the proceedings. As to the former, the consumer might be interested in two types of explanations:
(i) Information about how changes to feature values (for instance, the number of credit cards the consumer uses) translate into changes in the score. This could be achieved through “counterfactual explanations”, which show the minimal changes to input feature values (for instance, one credit card fewer) required to reverse a decision, here: the computation of the score. Intuitively, this form of explainability answers “what-if” questions such as “Would the mobile phone contract have been approved if the consumer had one credit card fewer?” Methods such as Model-Agnostic Counterfactual Explanations or Diverse Counterfactual Explanations can generate such explanations regardless of the type of the underlying scoring model (see the sketch after this list).
(ii) Information about how the score is computed. This is what the court-appointed Austrian expert might have had in mind when he insisted on “the disclosure of the mathematical formula and the valuation functions of all the values used in that formula” (opinion of the Advocate General (GA), note 17). Along similar lines, the GA interpreted the GDPR as requiring a description of “the method used and the criteria taken into account and their weighting” (GA opinion, note 76).
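As announced above, here is a minimal sketch of a counterfactual explanation of type (i). It reuses the hypothetical credit_score scorecard from the earlier sketch and brute-forces the smallest single-feature change that lifts the score above a decision threshold; the threshold and the feature range are, again, invented. Libraries such as dice-ml implement far more sophisticated, model-agnostic variants of the same idea.

```python
# Brute-force counterfactual search: find the smallest change to a single
# feature that lifts the hypothetical credit_score above the threshold.

THRESHOLD = 430.0  # hypothetical cut-off below which the contract is denied

def counterfactual(score_fn, features: dict, ranges: dict):
    """Return (feature, new_value, cost) of the cheapest decision-flipping change."""
    best = None
    for name, candidates in ranges.items():
        for value in candidates:
            changed = dict(features, **{name: value})
            cost = abs(value - features[name])
            if score_fn(changed) >= THRESHOLD and (best is None or cost < best[2]):
                best = (name, value, cost)
    return best

consumer = {"income_band": 1, "num_credit_cards": 4, "years_at_address": 6}
print(counterfactual(credit_score, consumer, {"num_credit_cards": range(0, 8)}))
# -> ('num_credit_cards', 2, 2): with two fewer credit cards, the score would
#    have been 435.0 instead of 415.0, clearing the hypothetical threshold.
```

A statement of exactly this form, “two fewer credit cards would have changed the outcome,” arguably satisfies the Court’s standard of showing to what extent a deviation in the data would have led to a different score, without revealing the weights themselves.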
Transparency focused on individual data
The Court does not require the disclosure of information of the latter type to the consumer. Instead, the solution the Court proposes seems to rest on models that provide counterfactual explanations for the individual score at hand. “It will be transparent enough,” the ECJ holds, “to inform the data subject to what extent a deviation in his data would have led to a different score.”
Modern methods for explaining how a score is derived do not necessarily reveal a company’s “secret sauce.” Here, it is important to distinguish between global and local explanations. Global explanations may indeed illuminate the overall functioning of the software and thereby potentially disclose a trade secret; local explanations, such as state-of-the-art counterfactual approaches, show only why a particular score or decision was reached. From a single local explanation, reconstructing the software’s entire logic is typically impossible.
A neutral intermediary is needed
Still, a notable tension exists between requiring counterfactual explanations and denying access to the complete model. Contemporary explainability tools allow considerable flexibility in how explanations are generated, especially counterfactual ones. If only the scoring entity can create these explanations, because doing so requires access to the underlying model, that company may craft examples that present its decisions in the most favorable light, thereby undermining consumer rights and transparency. Because these entities stand to benefit from the outcomes of their scoring practices, they face a clear conflict of interest: a more candid or comprehensive explanation might reveal shortcomings in their model or expose vulnerabilities to legal challenge. At the same time, the selective nature of counterfactual explanations can be especially difficult to detect. Without direct access to the underlying model, neither consumers nor independent observers can easily verify whether the offered explanation accurately reflects how the model operates or whether key data relationships have been obscured or omitted. This risk underscores the need for a neutral intermediary, arguably a governmental or regulatory body, with the power to view the model directly, produce standardized explanations, and disclose them to individuals upon request. Such an arrangement would protect the legitimate interests of both model developers and consumers by ensuring that explanations are neither selectively constructed nor manipulated, while also preserving proprietary logic.
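To illustrate the selectivity problem, the sketch below enumerates, again against the hypothetical scorecard and threshold from above, every single-feature change that would have flipped the decision. All of them are equally “valid” counterfactuals, and only the model owner is in a position to enumerate them and choose which one to disclose.

```python
# Sketch of the selectivity problem: several valid counterfactuals usually
# exist for the same denied consumer. Reuses the hypothetical credit_score
# scorecard and threshold from the sketches above.

def all_counterfactuals(score_fn, features: dict, ranges: dict, threshold=430.0):
    """Yield every single-feature change that would flip the decision."""
    for name, candidates in ranges.items():
        for value in candidates:
            if value != features[name] and score_fn(dict(features, **{name: value})) >= threshold:
                yield (name, value)

consumer = {"income_band": 1, "num_credit_cards": 4, "years_at_address": 6}
ranges = {"num_credit_cards": range(0, 8), "income_band": range(0, 3)}
print(list(all_counterfactuals(credit_score, consumer, ranges)))
# -> [('num_credit_cards', 0), ('num_credit_cards', 1), ('num_credit_cards', 2),
#     ('income_band', 2)]
# A self-interested scorer could disclose only the counterfactual that casts
# its model in the most favorable light and stay silent about the rest.
```

A neutral intermediary with direct access to the model could run exactly this kind of exhaustive enumeration and standardize which counterfactuals are reported, rather than leaving the choice to the scoring entity.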
This article was originally published on the Compliance & Enforcement Blog by the New York University School of Law Program on Corporate Compliance and Enforcement on 18 March 2025.
Katja Langenbucher is Professor of Civil Law, Commercial Law, and Banking Law at the House of Finance of Goethe University Frankfurt and coordinates the LawLab – Fintech & AI as SAFE Bridge Professor.
Kevin Bauer is a Professor of Game-Theoretic and Causal AI in Business and Economics at Goethe University Frankfurt, which is integrated into Hessian.AI.
Blog entries represent the authors’ personal opinion and do not necessarily reflect the views of the Leibniz Institute for Financial Research SAFE or its staff.