OpenAI, Meta, Anthropic could face civil liability in EU over their AI models
25 June 2024
By Luca Bertuzzi
Rolling out a new rulebook regulating artificial intelligence has absorbed much of the regulatory firepower that EU tech lawmakers had over the past few years.
A question remains about who is responsible when something goes wrong. The AI Liability Directive, a piece of legislation meant to supplement the new AI Act, is now facing an uncertain future.
Yet while artificial intelligence companies focused their attention on this legislation, another enormously consequential law for them was quietly passed without many noticing: the Product Liability Directive.
The revamped law may not have AI in its name, but it now extends the EU's strict product liability rules to cover software.
The EU Product Liability Directive has been in place since the 1980s and forms one of the cornerstones of the bloc's internal market. With the rise of intangible products such as software, the liability regime needed an update.
AI, as a type of software, will be fully covered by the EU product liability regime, although each economic actor's responsibility will depend on its position in the value chain.
AI defectiveness
A relatively straightforward situation is when an AI application causes damage due to a defect. The law covers material damages such as personal injury, psychological harm and losses of property or data not used for professional purposes.
In these cases, the claimant can sue the application provider for compensation, as long as they can provide details of the product's defectiveness, the damage suffered, and the causal link between the two.
Where a claimant faces excessive technical or scientific complexity in proving the product's defectiveness or the causal link, the rules allow either to be presumed, provided the claimant shows that they are likely.
In a dramatic case in Belgium last year, a man committed suicide after an AI-powered chatbot convinced him to sacrifice himself to reduce his carbon footprint.
“If there is a market where this provision will become relevant, it is the AI market,” said Christiane Wendehorst, a professor at the University of Vienna and a former member of the EU expert group on liability and new technologies. The provision is a “loose cannon because it is hard to predict how national courts will apply it.”
In other words, even relatively simple cases might become complicated when artificial intelligence is involved, since a product is deemed defective when it does not provide the safety that a person is entitled to expect or that is required by law.
“What is the safety that people should be entitled to expect by a computer program that learns by itself?” said Andrew Tettenborn, a university professor in Swansea, Wales. “It is difficult enough to prove defectiveness for a normal product.”
Civil liability for AI models
An even more complex situation would arise when the damage results from an AI model that is integrated into a product as what could be deemed a safety component, such as the driving system of an automated car.
In these cases, if the AI component causes the car to hit someone, the likes of OpenAI or Anthropic would be liable alongside the car manufacturer unless the model was free and open-source software developed and provided outside a commercial activity.
For commercial AI models, companies might try to avoid liability in several ways. First and foremost, the model provider could show that the defect resulted from how the carmaker integrated the AI component.
Another way to avoid liability would be to prove that the manufacturer of the finished product substantially modified the model in a way that caused the defect. This principle typically applies to the finished product, but it could also be deemed applicable to components.
However, the tricky part for component providers, especially AI companies, is that they must prove the modification occurred outside their control. If, for example, the model is still receiving software updates, it is deemed to remain under their control.
“We have argued that this concept of ‘control’ ignores the intrinsic characteristics of software and related digital services: they can be deployed in various ways, they continue to be further developed over time, and their features are contingent on the way in which they are used,” Scévole de Cazotte, senior vice-president of the US Chamber of Commerce's Institute for Legal Reform, told MLex.
The AI companies might also argue that, since the AI model's potentially harmful output is information, they cannot be subject to the compensation claim as per the EU Court of Justice’s Krone judgment.
However, the Krone case concerned a product, not a component, and was a much simpler case involving a newspaper. AI models could instead be seen as a "related service," a specific component that allows the product to function, such as traffic data for a navigation system.
Value chain responsibilities
The product liability rules state that defectiveness can be presumed when the product does not comply with safety requirements under EU or national law.
The recently passed AI Act includes some duties for AI model providers, such as transparency and copyright obligations. Companies will be obliged to disclose information to downstream players, which might lead to a presumption of defectiveness if an AI company fails to disclose relevant model limitations.
By contrast, at the level of AI applications, the EU rulebook envisages a much stricter regime in terms of risk management and data governance, meaning that — as is usually the case for civil liability — it will be much easier to prove defectiveness for a concrete application than an AI model.
From the claimant's perspective, it does not matter whether the defectiveness resulted from the component, since they can always seek compensation from the manufacturer of the finished product.
However, the distribution of responsibility might matter a great deal for those in the complex AI value chain, since the EU product liability regime is designed to protect consumers and only partially considers the business-to-business dimension.
This means that smaller enterprises might often find themselves at the losing end of the bargain, left liable for a defect caused by a large firm upstream in the value chain.
In the age of AI, this power imbalance might be further exacerbated by the complexity intrinsic to this technology, which the product liability rules let claimants use to their advantage. Downstream players, on the other hand, might be stuck between a rock and a hard place.
While allocating responsibilities along the AI value chain was a critical issue in discussions about the AI Act, civil liability will depend heavily on how the courts interpret concepts such as substantial modification and the manufacturer's control.
AI companies may be unable to avoid liability vis-à-vis consumers, but they could ask downstream players to waive their right of recourse as part of licensing agreements.
This provision was meant to protect micro- and small-sized software companies, yet time will tell if it becomes the industry standard.
"Developers may wish to negotiate indemnities in their agreements where possible. Nevertheless, there is likely to be residual product liability risk for which they may want to consider purchasing insurance coverage," Peter Schildkraut, Adela Williams and Tom Fox of law firm Arnold & Porter told MLex.