US states building on EU framework for AI regulation
29 May 2024
By Amy Miller
In the absence of federal rules for artificial intelligence, US states are stepping in to fill the void, much as they did with data breach and consumer privacy regulation. Once again, state lawmakers are turning to the EU for guidance, and EU officials say they are happy to help.
Two months after the EU passed landmark legislation regulating the use of AI across its 27 member states, Colorado became the first US state to enact a law building on the EU’s risk-based approach. It probably won’t be the last. EU officials are working with lawmakers and regulators in California and other states hoping to pass similar legislation to put guardrails around AI that could threaten fundamental human rights.
The EU’s office in San Francisco has a map of all 43 states where AI legislation has been introduced this year, said Gerard de Graaf, Senior EU Envoy for Digital to Silicon Valley, but much of the recent focus has been in Sacramento.
It is de Graaf’s job to promote EU tech policy and strengthen cooperation with Silicon Valley. Coordination is necessary, de Graaf said, because technology is a global industry, and regulators need to avoid forcing businesses to comply with different rules in different jurisdictions.
US states need to coordinate their regulation of AI across their borders, too, de Graaf said, and a first step is settling on a uniform definition of AI, and deciding which technologies should be regulated, without chilling innovation.
“I always say it's bad rules that stifle innovation,” de Graaf said. “Good rules can actually support innovation and often do.”
Colorado follows EU
Like the EU AI Act, Colorado’s AI law focuses on consumer protection and high-risk AI systems. The EU AI Act bans emotional recognition technology in schools and the workplace, prohibits social credit scores that reward or punish certain kinds of behavior, and prohibits predictive policing in certain instances. The EU AI Act also applies high risk labels to AI in health care, hiring, and issuing government benefits.
Starting in February 2026, makers and deployers of high-risk AI systems in Colorado will also have to be far more transparent with the public about how their technology operates, how it’s used and who it could hurt.
The Colorado law imposes now-familiar notice, documentation, disclosure, and impact-assessment requirements on developers and deployers of “high-risk” AI systems. Much like the EU AI Act, the Colorado law defines those as any AI system that “makes, or is a substantial factor in making, a consequential decision,” such as decisions about housing, lending and employment. Makers and deployers will also have to disclose the types of data used to train their AI.
There are obvious differences in scope and enforcement. The EU AI Act addresses how law enforcement agencies can use AI, while the Colorado AI Act does not. The Colorado attorney general’s office will be responsible for enforcement and has rule-making authority, and both developers and deployers of high-risk AI will have to demonstrate compliance with risk management requirements.
But not everyone is convinced. Colorado Governor Jared Polis, a Democrat, said he approved the legislation even though he had “reservations” it could hurt the state’s budding AI industry, particularly for small startups.
Despite that skepticism, Colorado’s groundbreaking AI law will likely be a model for other US states. It’s the most successful result, so far, from a bipartisan, multi-state AI working group seeking to coordinate AI regulations across state lines, and it builds on concepts from US government agencies as well as the EU.
State lawmakers sought the EU’s input because interoperability was a primary goal of the working group, they said during a livestreamed discussion on LinkedIn last week. The working group heard multiple presentations on AI from EU officials, as well as from privacy attorneys and scholars who followed the EU’s AI framework closely, they said.
“The goal was always, well the EU is doing it, so we can do it,” said Colorado Sen. Robert Rodriguez, sponsor of the Colorado AI Act.
Focus on California
EU officials have been particularly focused on California, the epicenter of AI technology and investment in the US, de Graaf said. In recent weeks, California lawmakers and regulators have met multiple times to discuss a wide range of AI issues with EU officials and leaders who prepared and shaped the EU AI Act.
De Graaf testified at a public hearing in Sacramento that the EU wants to set the global standard for AI regulation, much as it did for consumer privacy with the General Data Protection Regulation (GDPR). EU officials are “very keen” to work with California lawmakers on alignment, he said.
“I can tell you that our colleagues in Brussels are following very closely what you’re doing in California,” de Graaf told an Assembly privacy committee in February. “They’re fully aware of the bills that you have introduced, and they are very interested in these bills and further cooperation.”
Unlike their counterparts in Colorado, California lawmakers have introduced dozens of bills aimed at regulating various aspects of AI, from prohibiting discrimination to forcing companies to tell the public more about how the technology operates.
De Graaf said he is advising California lawmakers on three proposals that incorporate several aspects of the EU’s AI Act, including risk-based approaches to regulation, required testing and assessment of AI deemed high risk, and greater transparency requirements for AI-generated content. If enacted, the proposals would cover about 80 percent of what the EU AI Act regulates, de Graaf said.
Assemblymember Rebecca Bauer-Kahan, a San Ramon Democrat, is sponsoring AB 2930, a bill that would require businesses and state agencies to prohibit discrimination in automated decision-making technology. State Senator Scott Wiener, a Democrat from San Francisco, is sponsoring SB 1047, which would require developers of AI models to implement safeguards and policies to prevent public safety threats, and would also create a new oversight agency to regulate generative AI. Assemblymember Buffy Wicks, a Democrat from the East Bay, is sponsoring AB 3211, which would require online platforms to watermark AI-generated images and videos.
Last week all three AI bills passed out of the chambers in which they were introduced.
“It’s not just a one-way street,” de Graaf said. “It's a two-way street where we learn from California, and they learn from us and we try to exchange the best ideas between Europe and California.”