Credit-scoring ruling sets some AI rules ahead of wider EU regime

A ruling by the EU’s top court on credit scoring, and on the GDPR’s protections for consumers against automated decisions, has far-reaching implications for artificial intelligence systems. With AI regulation at the top of the news agenda (EU lawmakers have just thrashed out a deal on the world’s first AI law), the judgment shows that, for a while at least, the GDPR is the closest thing Europeans have to oversight of AI systems.

11 December 2023
By Sam Clark

EU negotiators spent days stuck in a room in Brussels last week, wrestling their way late on Friday night to a deal on the AI Act, the world’s first artificial intelligence legislation.

A couple of hours’ drive away in Luxembourg, judges in the EU Court of Justice issued a ruling that will get far less attention but which nonetheless has significant implications for companies operating AI systems or making decisions based on systems from other companies.

The ruling, in a case against German credit-scoring firm Schufa, said that scoring counts as an automated decision, which is prohibited under the General Data Protection Regulation unless specific conditions are met.

Crucially, experts say, the judgment has implications beyond the realm of credit scoring and into the wider world of AI. 

Article 22

Data protection advocates have long argued that AI is to some extent already regulated by the GDPR. Article 22 states that automated decision making that has a legal or “similarly significant” effect on a person is prohibited unless certain conditions are met. 

Last week's court ruling clarified several crucial aspects of this provision. Importantly, it said that issuing the “score” amounts to a decision under the GDPR, even if the credit-rating firm does not make the final decision on whether to issue credit. 

It clarified the relationship between the score and its consequence, saying that scoring constitutes “automated individual decision-making” if a third party — such as a bank — “draws strongly on [that score] to establish, implement or terminate a contractual relationship.”

What’s more, the ruling sets a framework for how these rules should be applied. Some had argued that individuals needed to proactively exercise their right not to be subject to automated decisions, but the judgment affirms a default prohibition unless the relevant conditions are met.

Exceptions and a human in the loop

The GDPR contains three exceptions to the prohibition on legally significant automated decisions: if the decision is necessary for the performance of a contract; if it is authorized by EU or member-state law; or if the person concerned gives explicit consent.

These exceptions, brought to the fore by the court’s ruling, place fresh significance on the interpretation of GDPR rules on the “performance of a contract” and consent. These have been at the center of long-running enforcement against Meta Platforms over the tech giant’s legal basis to process data for behavioral advertising — enforcement which has led to a split in opinion and a public fight between data protection authorities.

Even in cases where these exceptions apply, there is an extra layer of protection: people who believe an automated decision is wrong have the right to present their point of view and to request a human review.

After the judgment, Thomas Fuchs, the head of the Hamburg state data protection authority, stressed the importance of a human in the loop. “AI systems often resemble a black box in their decision-making and assess people in a way that is not comprehensible,” he said. “The same applies to artificial intelligence as to credit agencies: They must not be trusted blindly. Humans must always have the last word.”

AI implications

These protections, and the way they will be applied following last week’s ruling, have taken on sudden significance amid the proliferation and growing capability of AI systems, a development that has rapidly accelerated global efforts, many reaching the highest elected offices, to regulate the technology.

For all their complexities and advanced capabilities, many of the concerns about AI — from the prosaic to the far-fetched — ultimately boil down to whether and how those systems make automated decisions without human involvement.

Michael Will, the head of Bavaria's state data protection authority for private companies, told MLex that the ruling would have “far-reaching implications for our daily work.”

The court’s findings, he said, “naturally have consequences for other similar agencies, as well as for a large number of online services that are already suspected of carrying out scoring in their own specific way based on a wide range of personal data available online, for example when setting individual prices.” 

The deal that EU governments and lawmakers clinched on the AI Act last Friday is a high-level political one, with a number of technical details yet to be agreed. But its final form will certainly reach further than the GDPR’s Article 22.

AI Act enforcement, however, is unlikely to begin in earnest before 2025 at the earliest, suggesting that the significant protections against automated decision-making contained in that GDPR provision, and clarified by last week’s ruling, shouldn’t be ignored.

