Newsom vetoes controversial AI regulation bill

Gov. Gavin Newsom on Sunday vetoed what is believed to be the most ambitious bill to regulate artificial intelligence in the country.

A day before the deadline to sign or veto the bill, Newsom rejected SB 1047, the Frontier Artificial Intelligence Models Act, saying it would burden AI companies and criticizing the bill's scope as too broad.

“While SB 1047 is well-intentioned, it does not take into account whether an AI system is used in high-risk environments, requires critical decisions, or uses sensitive data,” Newsom said in a statement. “Instead, the bill applies strict standards for even the most basic functions – as long as they are delivered across a large system. I don’t believe this is the best approach to protecting the public from real threats from technology.”

In other words, Newsom believes the bill doesn't distinguish between AI systems used in high-risk environments and those used for basic tasks. He said the bill lacks nuance in differentiating among kinds of AI within large systems, regardless of their function.

The bill would have required AI companies to adopt safety measures to protect the public from cyberattacks, prevent AI from being used to develop weapons, and stop automated crime. It would have required companies to conduct safety tests on large AI models that cost at least $100 million to develop and to include a “kill switch” for new AI technology.

The bill's lead author, San Francisco Democrat Scott Wiener, said in a statement Sunday: “This veto is a setback for everyone who believes in oversight of large companies that make critical decisions affecting the safety and well-being of the public and the future of the planet.”

The bill moved quickly, having only been introduced in February, and it rapidly polarized Silicon Valley and Washington, D.C.

Despite strong opposition from California's robust tech sector, the bill passed last month in the state Senate by a vote of 29-9 and in the Assembly by a vote of 41-9.

Tech giants like Meta, Google and OpenAI vehemently opposed the bill, lobbying the state Legislature and arguing that it would stifle AI innovation.

Meanwhile, the bill prompted Democratic lawmakers to take the rare step of weighing in on Sacramento's business, given the national impact the bill would have had.

Former House Speaker Nancy Pelosi, San Francisco Mayor London Breed and Silicon Valley lawmakers Ro Khanna and Zoe Lofgren all spoke against the bill. While they agreed that regulations were needed, they called Wiener's AI law the wrong approach.

While the bill had fierce opponents, it also drew support from prominent AI safety advocates and major industry figures, including the Center for AI Safety, leading AI developer Anthropic and billionaire Elon Musk.

The pushback came even after several changes called for by the bill's previous critics, including Anthropic, were added.

Wiener agreed to remove criminal penalties from the bill and, among other changes, to scrap the creation of a new regulatory agency, the Frontier Model Division.

After Anthropic changed its mind and supported the bill, the normally regulation-averse Tesla CEO and X owner Elon Musk surprisingly backed it as well.

Meta, which continued to oppose the bill despite the changes, said it was pleased Newsom vetoed it.

“We are pleased that Governor Newsom vetoed SB 1047. This bill would have stifled AI innovation, hurt business growth and job creation, and broken the state’s long tradition of encouraging open source development,” a Meta spokesperson said. “We support responsible AI regulations and remain committed to working with lawmakers to promote better approaches.”

Teri Olle, director of Economic Security California Action and a co-sponsor of the bill, said the veto “ignores the overwhelming public support for Big Tech accountability.”

“Governor Newsom’s veto of SB 1047 squanders our country’s most promising opportunity to establish responsible guardrails for AI development today,” Olle said. “The failure of this bill demonstrates the continued power and influence of the deep-pocketed tech industry, driven by the need to maintain the status quo – a straightforward regulatory environment and exponential profit margins.”

Nathan Calvin, senior policy counsel at CAIS, said he was disheartened by the veto of the “urgent and common-sense security bill.”

“Experts have determined that catastrophic threats to society from AI could materialize quickly, so today’s veto represents an unnecessary and dangerous gamble on the public’s safety,” Calvin said.

While Wiener was disappointed by the governor's veto, he said the debate around the bill has elevated the conversation about the need for AI safety measures.

“At the same time, the debate over SB 1047 has dramatically advanced the issue of AI safety on the international stage. Large AI labs have been forced to be specific about the protections they can provide the public through policies and oversight,” Wiener said. “The work of this incredible coalition will continue to bear fruit as the international community considers the best ways to protect the public from the risks of AI. California will continue to lead this conversation – we are not going anywhere.”
