More AI legislative proposals coming from US states, but no uniform model yet

22 October 2024
By Amy Miller

Another wave of bills aimed at regulating artificial intelligence is expected from US states next year, and lawmakers are already trying to head off interoperability and harmonization problems among their proposals.

Business groups are pushing states to focus on regulating high-risk uses of AI systems next year, and legislators say that will be a top priority. But a uniform model for regulating AI that state lawmakers across the country could copy hasn’t yet emerged from states.

There’s no question that in the absence of federal action, states are stepping in to regulate AI. Last year, state lawmakers introduced 191 pieces of AI legislation, according to an analysis released today by BSA, also known as the Software Alliance, an advocacy group for the global enterprise software industry.

This year, legislators in 45 states introduced nearly 700 bills aimed at regulating various aspects of AI, with 113 signed into law, the group said. That’s just a precursor of what the group is expecting next year, said Craig Albright, a senior vice president for US Government Relations.

“I think when you look at 2024, the biggest takeaway is 2025 has already started,” Albright said.

Colorado passed the first, and so far only, comprehensive US law regulating high-risk use of AI. Now, makers and deployers of high-risk AI systems in Colorado will have to be far more transparent with the public about how their technology operates, how it’s used, and who it could hurt.

California lawmakers were more prolific. Governor Gavin Newsom signed about a dozen bills into law regulating various aspects of AI, such as banning election deepfakes and requiring watermarking for AI-generated content.

Newsom also vetoed this year’s most closely watched and controversial AI bill — SB 1047 — which would have regulated large-scale “frontier” AI models and required companies to install a “kill switch” in case their systems ran amok. Nearly a dozen AI bills focused on high-risk AI systems, like the Colorado AI Act, suffered a similar fate this year in states such as Connecticut.

Planning for next year

But disappointed state lawmakers aren’t giving up. Both Democratic and Republican state legislators across the country say they’re already preparing for next year.

“I anticipate we'll see, if anything, even more legislation introduced next session,” said Tatiana Rice, deputy director for US Legislation at the Future of Privacy Forum.

Right now, all eyes are on Colorado, where, at the governor’s request, a 25-member task force is working on potential amendments to the state’s new AI law before it takes effect in February 2026. Some requirements for reporting to the state attorney general may need clarifying or amending, along with obligations for developers and deployers, said the law’s primary sponsor, Senate Majority Leader Robert Rodriguez. Definitions may also need tweaking, he said.

“Our plans are to start digging into some policy definitions and parts of the policy that could use some tweaks or some attention,” Rodriguez said. “We’re seeing if there are some unintended consequences we've missed.”

The Colorado task force is also coordinating with a multi-state AI legislation working group led by Rodriguez and Connecticut Senator James Maroney, who sponsored a similar high-risk AI bill that died in the legislature this year.

The multi-state working group is made up of more than 200 state senators, representatives, public officials, and staff members. The goal is to help states create a uniform framework to regulate AI that won’t be a heavy, expensive compliance burden on the industry, particularly small startups.

Maroney is planning to reintroduce his AI bill, which would have required high-risk AI developers and deployers to protect consumers from any known or foreseeable risks of algorithmic discrimination.

Now he’s already working on updates after seeing what AI legislation passed this year, he said, “and again, trying to stay in alignment with what happens in Colorado.”

That’s good news for business groups such as BSA, which is urging state legislatures to focus their attention on high-risk AI systems in the absence of federal legislation. BSA has developed a risk-management framework aimed at mitigating unintended bias in high-risk AI systems through impact assessments that identify risks.

“We're also urging policymakers to have consistency in what they view as high risk,” Albright said. “This is an area that has had a lot of discussion, and there is hope that they can look at definitions of high risk similarly and try to have clarity so it's not differing vastly across jurisdictions.”

Inevitable differences

But there’s also acknowledgment among lawmakers and policy experts that differences among state proposals are inevitable. Despite efforts at harmonization, a regulatory model that other states could emulate across the country has yet to emerge.

Colorado’s law will be closely examined, but state lawmakers are expected to incorporate pieces of it into their own proposals rather than copying it verbatim.

Texas Representative Giovanni Capriglione, a Republican, for example, is widely expected to soon unveil his own version of AI legislation, addressing AI use by both the private and public sectors. It’s expected to take a more business-friendly approach than similar proposals introduced in Democratic-led state legislatures in California and Connecticut.

Some states may choose to focus on a specific area of high-risk AI, such as employment. This year Illinois enacted a landmark artificial intelligence employment law, HB 3773, which aims to prevent discrimination when companies use AI to make employment decisions.

Some state lawmakers will also prioritize other issues besides the risk of discrimination, most notably transparency. Legislators in other states are likely to propose their own versions of California's AI Transparency Act, which is the first US law requiring developers of generative AI systems to include a detection tool with their products so users can see if the content has been created or manipulated by AI. California also passed AB 2013, which will require developers of generative AI systems to publicly disclose what data they used to train the system or service.

Others may try a completely new approach. New York Assemblymember Alex Bores said he plans to introduce a bundle of bills that would require companies to label AI-generated images, videos and other media. Rather than invent a labeling scheme from scratch, he has embraced an emerging global provenance standard known as C2PA, an open technical standard that gives publishers, creators and consumers the ability to trace the origin of different types of media. The US Department of Defense is also an early adopter of the standard.

“States are chipping away at this issue,” said Grace Gedye, a policy analyst with Consumer Reports. “It’s not totally accurate to say Colorado is this lone wolf out there. I think other states are coming right along, and they're planting their flag in the ground on issues like training data, transparency, or employment in AI.”

