Controversial California AI safety bill raises concerns about future of open-source products

4 September 2024
By Amy Miller

Open-source artificial intelligence companies are worried about their future after California state lawmakers sent the first and most significant framework for regulating AI in the US to Gov. Gavin Newsom for approval last week. 

AI developers and their financial backers say the measure would impose unreasonable liability on open-source AI developers who aren't seeking to make a profit and can't control what customers might do with their technology, and they've taken their fight online to convince Newsom to veto it.

The bill’s sponsor, Sen. Scott Wiener, a Democrat from San Francisco, says the bill’s requirements won’t hurt open-source AI developers because it’s squarely aimed instead at huge “foundation” AI models that are trained on enormous amounts of human-made and synthetic data.

Newsom hasn’t indicated whether he will sign the bill, which would take effect on Jan. 1, 2026.

Despite such reassurances from Wiener and offers to work with the tech industry on amendments, the sweeping proposal has faced concerted opposition from a long list of tech companies. OpenAI, Google, Meta Platforms, and even fellow Democrats in the US Congress say it's flawed, mistargeted and would ultimately drive AI companies out of California, where most are now based.

Other companies such as Microsoft and Anthropic haven’t formally opposed or endorsed the bill. Recent changes to the bill, such as a provision providing that AI companies can only be sued by the state attorney general after their AI models cause some harm, improved the measure enough that its “benefits likely outweigh its costs,” Anthropic CEO Dario Amodei said in a letter to Newsom.

SB 1047 requires developers to predict how customers would use their AI products, and they could face steep fines if those products later cause harm, such as loss of life or cyberattacks costing more than $500 million in damages.

A court could order companies to stop operations, and covered models would need to have a “kill switch” to shut them down if they’re deemed dangerous.

Wiener drafted the proposal with help from the Center for AI Safety (CAIS), which is backed by Open Philanthropy, a group affiliated with effective altruism, a philanthropic movement concerned about the risks of AI. He's quick to note that the bill would apply only to developers that spend more than $100 million to train their AI models or more than $10 million to update an existing model.

Mozilla, maker of the open-source Firefox internet browser, argues that the latter category could sweep in many small open-source AI developers, which lack the deep pockets of a Google or a Microsoft.

At the same time, open-source AI developers often don't know what other developers or customers will do with their products, because the models can be downloaded directly onto users' personal devices. Mozilla said it has worked on the bill with Wiener, but argues it still threatens the continued use of open-source technology.

“Today, we see parallels to the early Internet in the AI ecosystem, which has also become increasingly closed and consolidated in the hands of a few large tech companies,” Mozilla said. “We are concerned that SB 1047 would further this trend, harming the open-source community and making AI less safe — not more.”

Venture capitalists and AI developers took to social media to post their critiques, hoping to drum up opposition and convince Newsom to veto the bill. Many raised concerns about how it would affect the open-source community, particularly small developers.

Big tech companies such as Meta may be able to write off the bill’s reporting requirements as “just a cost of doing business,” Danielle Fong, founder of Lightcell Energy, a startup that converts light to electricity, wrote on X.

But it’s “highly likely” the bill would apply to open-source AI and could slow or even stop open-sourced AI models from being released, or even being trained at all, she said.

Wiener said he amended the bill after soliciting feedback from AI companies, such as eliminating a new proposed agency to regulate AI, the Frontier Model Division.

He also made several changes specifically in response to concerns about open-source systems, he said. For example, a requirement was clarified so that developers will only have to shut down AI models deemed risky that are currently in their possession, Wiener said.

Developers are also not responsible if someone tweaks an open-source AI model and turns it into, effectively, a different model, he said.

Newsom has until Sept. 30 to veto SB 1047, or sign it into law.
