The complete picture on AI regulation, with global insights from MLex®

As regulators worldwide grapple with the impact of artificial intelligence across all sectors, MLex correspondents continue to bring our subscribers specialist news and predictive insights from North America, Europe, the UK and Asia-Pacific. Highlights from the past week include:

Companies deploying AI must consider regulatory, business risks of data those tools need, experts say
2 February 2024

With the US Federal Trade Commission ordering companies to delete algorithms and data because of privacy violations, companies planning to launch AI products need to carefully weigh regulatory risks against the benefits of those products, experts agreed at a key legal gathering in New York this week. A new FTC order today underscored that regulatory risk.

UK must rebalance AI regulation from risk to innovation or fall behind, lawmakers warn
2 February 2024

Companies developing generative AI systems need more "steer" from regulators as their technology evolves, a report by a committee of UK lawmakers said today. The government should avoid catastrophizing and focus on promoting innovation, guarding against regulatory capture, and resolving emerging competition and copyright problems. If the UK does not adjust its regulatory approach, it could miss out on the opportunities AI brings, the lawmakers warned.

EU’s AI Act jumps closer to final approval after passing crucial stage
2 February 2024

The EU’s landmark artificial intelligence law today moved closer to final approval, after passing a key legislative stage. Ambassadors representing EU national governments signed off on the final version of the legislation, after weeks of doubt that it would pass muster with France and Germany. The agreement was unanimous, a spokesperson for the Belgian government said. The European Parliament will also vote on it in the coming weeks.

AI developers might avoid Australia if it adopts excessive regulation, advisory agency says
2 February 2024

Excessive regulation could make the Australian market undesirable for international artificial intelligence developers, the country’s chief economic advisory agency warned. This week, the Productivity Commission published three papers on Australia’s AI opportunity and regulation. The papers emphasize the need to reduce the size and likelihood of harm from AI to acceptable levels without imposing an “excessive regulatory burden on society.”

Keep scrolling for the stories, or start your 14-day free trial now for full access to breaking news and expert analysis on the regulatory landscape surrounding AI.

Companies deploying AI must consider regulatory, business risks of data those tools need, experts say

2 February 2024
By Mike Swift

With the US Federal Trade Commission ordering companies to delete algorithms and data because of privacy violations, companies planning to launch AI products need to carefully weigh regulatory risks against the benefits of those products, experts agreed at a key legal gathering in New York this week.

Generative artificial intelligence, and its potential to transform the legal industry as well as the scores of markets where it is being introduced, was the central theme of the LegalWeek conference.* But even as companies race to introduce AI products that could save money by automating tasks now done by people, they need to weigh the risk of enforcement by the FTC and by US states under newly passed state privacy laws, given the data those AI tools need, legal experts said.

In another order today underscoring that risk, the FTC ordered Blackbaud to delete personal data it doesn’t need to retain, settling allegations that the company had “shoddy” security and misled its users about the seriousness of a 2020 data breach.

New state privacy laws can create a business as well as a regulatory risk for companies investing in new AI tools, one expert said.

“Organizations need to be careful about how they're using personally identifiable information and giving consumers the opportunity to opt out of those things” due to those new state laws, said Garylene Javier of the firm Crowell & Moring. “That could really have such a huge business impact, if all of a sudden you're having multiple consumers coming in saying, ‘Please, I'm opting out of the use of any kind of automated decision-making.’ ”

Led by California, more than a dozen US states have now passed comprehensive privacy laws, with others likely to follow in the absence of a national US law. The California Privacy Protection Agency is developing new rules for automated decision-making.
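Honoring such opt-outs is an engineering task as much as a legal one. As a rough sketch only, with every name below hypothetical rather than drawn from any statute or draft CPPA rule, a deployer might gate automated decision-making behind a per-consumer opt-out flag:

```python
from dataclasses import dataclass

@dataclass
class ConsumerPreferences:
    """Hypothetical per-consumer privacy preference record."""
    consumer_id: str
    opted_out_of_adm: bool = False  # opted out of automated decision-making

def route_to_human_review(application: dict) -> str:
    return "pending_human_review"  # placeholder for a manual workflow

def run_model_decision(application: dict) -> str:
    return "auto_decision"  # placeholder for a real model call

def decide(application: dict, prefs: ConsumerPreferences) -> str:
    """Route a decision to a model or to human review, honoring the opt-out."""
    if prefs.opted_out_of_adm:
        # Each opt-out forces the slower, costlier manual path -- the
        # business impact Javier describes if opt-outs arrive at scale.
        return route_to_human_review(application)
    return run_model_decision(application)

print(decide({"amount": 1000}, ConsumerPreferences("c-123", opted_out_of_adm=True)))
```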

Pointing to FTC enforcement actions such as the agency’s 2022 order requiring a Weight Watchers subsidiary to delete personal data and destroy algorithms that used it, Javier said companies could face millions of dollars of lost investment if the FTC were to order a company that fed personally identifiable information into its AI training data to destroy “anything that is the fruit of the poisonous tree.”
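That disgorgement risk is one reason data provenance matters before training ever starts. The sketch below is a minimal illustration, assuming a deployer tags each record at ingestion; all names are hypothetical and nothing here reflects an FTC-mandated design. With such tags, a company can enumerate which records, and which models trained on them, a deletion order would reach:

```python
from dataclasses import dataclass, field

@dataclass
class TrainingRecord:
    """Hypothetical provenance tag attached to each record at ingestion."""
    record_id: str
    source: str          # e.g. "customer_db" or "public_corpus"
    contains_pii: bool   # flagged when the record is ingested

@dataclass
class ModelCard:
    """Hypothetical lineage record linking a model to its training data."""
    model_id: str
    training_record_ids: list[str] = field(default_factory=list)

def records_to_purge(records: list[TrainingRecord]) -> set[str]:
    """IDs of every record a PII deletion order would cover."""
    return {r.record_id for r in records if r.contains_pii}

def models_affected(models: list[ModelCard], purge_ids: set[str]) -> list[str]:
    """Models whose training data touched purged records -- the
    'fruit of the poisonous tree' a disgorgement order could reach."""
    return [m.model_id for m in models if purge_ids & set(m.training_record_ids)]
```

Without that lineage, the only safe response to a deletion order may be destroying the model outright, which is exactly the lost investment Javier warns about.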

“I do think there’s a danger in moving too fast here” on AI, agreed Ignatius Grande, a director at the Berkeley Research Group who specializes in electronic discovery and data privacy issues. He cautioned companies against wanting “to just jump in” on AI without carefully assessing privacy and security issues because that is “likely going to cause great ethical issues or other issues.”

Across the annual gathering of the legal industry, the letters “AI” were central to almost every discussion, whether it was about the pain or the gain of what most agree will be a transformative technology.

A group of current and former federal judges, in a discussion at the conference, expressed skepticism that AI would soon become central to court proceedings.

“I think it’s probably going to be a slow start because of the conservative nature of attorneys,” said US Magistrate Judge Kimberly Priest Johnson, who sits in the Eastern District of Texas.

US Magistrate Judge Sarah Cave, who sits in the Southern District of New York, said generative AI has invented cases that didn’t exist, and from now on, “everybody is on notice, every lawyer is on notice, that if you use generative AI proceed very carefully and check whatever it is you are citing to.”

One retired judge said she believes, however, that using generative AI in a court case isn’t qualitatively different from any other work product submitted to a court. “It’s a matter of whatever you submit to a court, you stand by,” said former US District Judge Shira Scheindlin. “It’s really that simple.”

In today’s order against Blackbaud, which provides software services to nonprofits, the FTC alleges the company deceived users by failing to implement the “appropriate physical, electronic and procedural safeguards to protect your personal information” that it promised, leaving people’s Social Security and bank account numbers accessible to a hacker in 2020. The company also failed to delete data it no longer needed, the complaint said.

“This action illustrates how indefinite retention of consumer data, which can lure hackers and magnify the harms stemming from a breach, is independently a prohibited unfair practice under the FTC Act,” Commissioners Lina Khan, Alvaro Bedoya and Rebecca Kelly Slaughter wrote in a statement.

With reporting by Madeline Hughes in Washington, DC.

* LegalWeek, ALM/Law.com, New York City, Jan. 29-Feb. 1, 2024.

UK must rebalance AI regulation from risk to innovation or fall behind, lawmakers warn

2 February 2024
By Jakub Krupa

Companies developing generative artificial intelligence systems need more "steer" from regulators as their technology evolves, a report by a committee of UK lawmakers said today. The government should avoid catastrophizing and focus on promoting innovation, guarding against regulatory capture, and resolving emerging competition and copyright problems, they said.

In a detailed 95-page report, Parliament's Communications and Digital Committee expressed fears that if the UK does not adjust its regulatory approach, it could miss out on the opportunities brought by AI, particularly systems built on large language models — algorithms trained on huge data sets that allow AI applications to generate original content.

The lawmakers' report said companies need better-equipped regulators and agile regulatory frameworks capable of dealing with practical problems here and now — from copyright to disinformation, cyber security, online safety and digital competition issues — without being distracted by dire warnings about things going wrong further down the line.

Balancing act

The report highlighted a particular worry that the UK government was "narrowly focused on catastrophic risks" and that it "is not striking the right balance between innovation and risk," with its attention "shifting too far towards a narrow view of high-stakes AI safety" and not enough on opportunities.

Focus on "catastrophic" outcomes — as outlined in a paper published ahead of the inaugural AI Safety Summit hosted in the UK last year — could make Britain miss more urgent issues and put it at risk of "falling behind international competitors and becoming strategically dependent on a small number of overseas tech firms," mainly in the US and China.

It was "almost certain," they said, that existential risks "will not manifest within three years and highly likely not within the next decade," so the government should focus on getting the most out of AI safely and not restricting it by red tape and overly prescriptive compliance demands.

While praising different elements of the regulatory environments in the US, the EU and China, the committee said that "wholesale replication of their regulatory approaches appeared unwise" given that the UK "lacks the distinctive features that shape their positions — such as the EU's customer base and appetite for regulatory heft; American market power; and China's political objectives."

The UK should try to pave its own way, seeking not to diverge too far from other partners but also not using fear of doing so as an excuse to delay, they said. Developments so far have been "slow," with a principles-based policy paper published last March yet to be translated into any practical regulation. An update promised in December has not yet been published.

While the committee agreed that "extensive primary legislation aimed solely at large language models is not currently appropriate," ministers should still focus on developing "an enforceable, pro-innovation" framework.

Separate worries were expressed about the "inadequate" delivery of the government's "central function" intended to support key AI regulators such as the Information Commissioner's Office, the Competition and Markets Authority and the Financial Conduct Authority.

"Relying on existing regulators to ensure good outcomes from AI will only work if they are properly resourced and empowered," the lawmakers said. They called for "standardized powers" across sectors to help regulators and a sanctions regime to "provide a credible deterrent against egregious wrongdoing."

Existing regulators had "significant variation in technical expertise," with some having no AI governance specialists or funding in place.

Regulatory capture, competition

Lawmakers further warned that the industry was showing "mounting concern" about regulatory capture as a result of lobbying and a situation where "officials lack technical know-how and come to rely on a narrow pool of private-sector expertise to inform policy and standards."

They also warned that overreliance on external AI expertise could give rise to conflicts of interest, calling for more transparency in how ministerial aides and sherpas for international negotiations are picked.

The committee called for administrative safeguards to ensure "decisions are subject to systematic challenge and review" to "mitigate the risks of inadvertent regulatory capture and groupthink."

The committee also pointed to competition concerns, given that businesses dominating the market for large language models "will have unprecedented powers to shape access to information and commercial practices across the world."

The lawmakers heard from experts that "the exploitation of first mover advantage among large developers could lead to entrenched market power," with similar effects to search engines and social media platforms.

They recommended that the government make market competition in large language models "an explicit policy objective," while "ensuring regulatory interventions do not stifle low-risk open-access model providers."

They also urged ministers to work closely with the Competition and Markets Authority, building on its first foundation models review late last year.

Copyright

Unresolved questions around copyright regulations were highlighted in the report, with government-backed talks among market participants having struggled to reach agreement on a voluntary code of practice.

The ambition is to find a way to reconcile the interests of rights holders with those of tech developers who warn it would be impossible to train models without copyrighted materials.

The committee said it was "disappointed" that the government could not proactively articulate its view on applying current laws in the AI context, and it rejected ministerial suggestions that it would be up to courts to interpret the existing provisions.

Siding with the rights holders, the lawmakers said that it was not "fair for tech firms to use rightsholder data for commercial purposes without permission or compensation." They warned that "the current legal framework is failing to ensure [fair] outcomes occur."

"The government has a duty to act. It cannot sit on its hands for the next decade until sufficient case law has emerged," they warned. Ministers should clarify how they understand the current legal limbo. If they conclude that existing safeguards are insufficient, they should "set out options for updating legislation."

With voluntary talks repeatedly failing to reach any consensus, the committee suggested a deadline of spring 2024, after which ministers "must set out options and prepare to resolve the dispute definitively" if there is no agreement.

In November, AI minister Jonathan Berry said the government would not "get into an endless talking shop about this" as he sought to turn up the heat on negotiators to move or face regulatory intervention, but it is understood the talks are not progressing.

EU’s AI Act jumps closer to final approval after passing crucial stage

2 February 2024
By Sam Clark and Matthew Newman

The EU’s landmark artificial intelligence law today moved closer to final approval, after passing a key legislative stage.

Ambassadors representing EU national governments signed off on the final version of the legislation at a meeting today, according to a spokesperson for the Belgian government, which represents the EU governments in the talks.

The agreement was “unanimously confirmed,” the spokesperson said.

The approval comes after weeks of speculation that France was planning to bring together a coalition of countries to vote against it. France, along with Germany and Italy, had raised concerns earlier in the negotiations over rules on foundation models.

However, German digital minister Volker Wissing said earlier this week that he is now willing to accept the law after changes to protect small businesses. That decision by Germany made it significantly more difficult for France to block the law at today’s meeting.

The law will be the first standalone artificial intelligence legislation, and has been hailed by EU lawmakers as an example of the bloc’s ability to quickly regulate fast-moving new technologies. It focuses mostly on safety, with separate rules for the largest and most powerful AI systems.

Lawmakers in a joint European Parliament industry and civil liberties committee meeting will vote to sign off on the legislation on Feb. 13, with a vote by the entire parliament expected in March or April. It will then be rubber-stamped by EU national government ministers.

The legislation will apply two years after its publication in the EU’s Official Journal, except for rules on prohibited uses of AI, which will apply after six months, and on general purpose AI, which will apply after one year.
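For orientation, those staggered deadlines are simple date arithmetic once the publication date is known. The sketch below, in Python, uses a placeholder publication date, since the law had not yet appeared in the Official Journal:

```python
from calendar import monthrange
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping the day if needed."""
    y, m0 = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m0 + 1
    return date(year, month, min(d.day, monthrange(year, month)[1]))

publication = date(2024, 7, 1)  # placeholder: the actual date was not yet known

print("Prohibited-use rules apply from:", add_months(publication, 6))
print("General-purpose AI rules apply from:", add_months(publication, 12))
print("Remaining provisions apply from:", add_months(publication, 24))
```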

In the meantime, the European Commission will be launching an “AI Pact”, which will bring together AI developers from Europe and around the world to voluntarily commit to key obligations of the AI Act before it comes into force.

A spokesperson for the Belgian government said today: “We are absolutely very proud of being able to conclude this important landmark piece of legislation. ‘Landmark,’ because it will establish the first ever rules on the use of AI in the world, keeping in mind the EU’s fundamental rights, while keeping in mind innovation and the importance and impact AI has and is able to have on society. It was a long but successful joint EU process — a work of almost three years.”

* Updated on Feb. 2, 2024 at 15:22 GMT: Adds statement from Belgian government

AI developers might avoid Australia if it adopts excessive regulation, advisory agency says

2 February 2024
By Saloni Sinha

Excessive regulation could make the Australian market undesirable for international artificial intelligence, or AI, developers, the country’s chief economic advisory agency warned.

While supporting a risk-based approach to AI regulation in the country, the Productivity Commission said that applying "idiosyncratic" local regulations to the design and development of AI technologies could see developers bypass Australia.

This week, the Productivity Commission published three papers on Australia’s AI opportunity and regulation. The papers emphasize the need to reduce the size and likelihood of harm from AI to acceptable levels without imposing an “excessive regulatory burden on society.”

In one of the papers, titled “The Challenges of Regulating AI,” the Productivity Commission said that regulating the design of an AI model or application inconsistently with foreign markets might steer developers away from Australia.

“[It] may simply mean developers do not sell to Australia, harming the domestic market,” the paper warned.

“Further, regulating less harshly may not make a difference as developers would need to meet the specifications of larger markets (such as the EU or the US),” it added.

The agency’s statement follows the Australian government’s January 2024 interim response to a discussion paper on AI policy settings in the country; the response said AI applications in what are viewed as high-risk settings could be targeted by new laws.

In the response, Australian Industry Minister Ed Husic said the government was now considering mandatory guardrails for AI development and deployment in specific high-risk circumstances through the creation of “new AI-specific laws.”

The Productivity Commission has argued that many of the potential harms that could be created by using AI are “old wine in new bottles” and are adequately dealt with by existing laws and regulations.

“Effective implementation of AI will require our regulatory infrastructure to recognize where harms are already covered and adopt a flexible approach so that existing rules can be applied to the new context presented by AI,” it said, adding that new technology does not imply “new rules.”

In December 2023, the Productivity Commission’s Stephen King criticized the European Union’s proposed AI Act, saying that it was better to regulate the use of a certain technology rather than the technology itself.

He said that if some of the uses are not covered under existing laws, the next step should be to check if it is possible to modify those laws.

“If the answer from that is still ‘no,’ then we can have technologically neutral regulation ... And only as a last resort do we say: ‘Let's have specific AI regulation,’” King said.

The papers also state that AI has the potential to address some of Australia’s most prominent productivity challenges, such as skill and labor gaps, and that new technology-specific laws could stifle the proper uptake of AI in the country.

“Achieving productivity gains will depend on how the technology and complementary technologies continue to develop, and how successfully these are adopted and applied. It will also depend on how government policy around AI regulation and data develops,” another paper said.

The Productivity Commission is not alone in urging the government to consider a risk-based approach to AI regulation in Australia.

In a September 2023 submission to the Safe and Responsible AI discussion paper, published in June 2023 by Australia’s Department of Industry, Science and Resources, the Tech Council of Australia, an industry association representing major technology players, also called for clarification of existing laws to help regulators provide guidance on how those laws apply to AI.

The Digital Platform Regulatory Forum, or DP-REG, a body of representative members from four Australian regulators, used its submission to call for an approach that identifies gaps in the existing regulatory framework.
