California bill to regulate AI advances despite tech industry resistance, with some adjustments

A California bill to regulate artificial intelligence cleared a key legislative hurdle on Thursday, advancing out of a pivotal committee hearing despite fierce opposition from technology companies.

The state Assembly's Appropriations Committee approved an amended version of the bill, SB 1047, authored by Senator Scott Wiener, a Democrat from San Francisco, on an 11-3 vote on Thursday, moving it off the "suspense file," where dozens of bills with potential budgetary implications are often decided in a rapid succession of votes without a public hearing.

Wiener's bill would regulate the "development and deployment of advanced AI models" by large companies, covering AI models that cost more than $100 million to develop. It would require safety testing, safeguards to prevent misuse of AI, and monitoring after the technology is deployed.

The bill also provides whistleblower protections for AI employees and allows the state's attorney general to take legal action against companies that cause "serious harm" or endanger public safety. It also creates a public cloud computing cluster called CalCompute.

The bill has divided Silicon Valley. Major technology companies and members of Congress from Silicon Valley have spoken out against the bill, saying it could harm innovation. Meanwhile, some leading minds in the AI field, including "godfathers of AI" Geoffrey Hinton and Yoshua Bengio, have expressed support for it.

Before the bill passed committee, several amendments requested by critics of the bill, including AI startup Anthropic, were added. Among other things, Wiener agreed to remove criminal penalties from the bill and to eliminate the creation of a new regulatory agency, the Frontier Model Division.

Wiener said the bill still contains important guardrails for the emerging technology, which is both promising and ripe for abuse, ranging from robotic weapons to automated hacking and manipulation of financial markets.

"We can drive both innovation and security; the two are not mutually exclusive," Wiener said in a statement. "I believe we have addressed the core concerns of Anthropic and many others in the industry."

However, critics from the technology industry and the legislature remain opposed.

In a letter to Governor Newsom on Thursday, Bay Area Reps. Anna Eshoo, Zoe Lofgren and Ro Khanna urged him to block the bill, citing concerns about its impact on the state's "innovation economy."

“SB 1047 creates unnecessary risks to California's economy and provides very little benefit to public safety,” the letter says. “SB 1047 is designed to address extreme abuse scenarios and hypothetical existential risks while largely ignoring demonstrable AI risks such as misinformation, disinformation, non-consensual deepfakes, environmental impacts, and job displacement.”

In early August, Jaikumar Ramaswamy, the general counsel at Andreessen Horowitz, a major AI investor, argued in a letter that the bill would cause a number of harms to developers. The regulations would "hinder innovation," stifle open-source AI models and could cause technology companies to move their operations to other states, Ramaswamy wrote.

The bill now goes to the full Assembly, where it must be passed by August 31. It would then return to the state Senate for approval and then go to Governor Newsom, who has not yet commented publicly on the measure.

The bill would be the first of its kind among dozens of AI bills introduced across the country. California lawmakers want to get ahead of possible federal regulation.

"Congress has not passed major technology regulations since computers began using floppy disks," Wiener said in a statement. "California must act to anticipate the foreseeable risks of rapidly advancing AI while encouraging innovation."
