Lenders' search costs when evaluating potential borrowers are driven by the quality of the underwriting model and by access to data. Both have undergone radical change in recent years with the advent of big data and machine learning. For some, this holds the promise of inclusion: invisible prime applicants can perform better under AI than under traditional metrics, as broader data and more refined models help detect them without triggering prohibitive costs. However, not all applicants profit to the same extent. Historical training data shape algorithms, biases distort results, and data and model quality are not always assured. Against this background, a debate over algorithmic discrimination has developed. So far, it has centered on the US, with its legal framework dating back to the civil rights legislation of the 1960s and 1970s. With the AI Act and the reform of the Consumer Credit Directive, EU lawmakers have been catching up. This paper explores the EU and US legal frameworks on anti-discrimination law. It submits that both face fundamental difficulties in fitting algorithmic discrimination into the traditional regime. Against this background, the paper suggests reorienting the discussion on algorithmic underwriting towards a better design of financial regulation.
SAFE Working Paper No. 369