California Governor Gavin Newsom vetoed a contentious AI safety bill Sunday, legislation that had drawn support from advocacy groups and Hollywood as well as criticism from Silicon Valley’s tech giants.
The vetoed Senate Bill 1047, which largely targeted model development, would have enacted whistleblower protections, beefed up safety requirements and authorized the attorney general to bring civil action against non-compliant businesses.
“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom said in his veto message to members of the California State Senate. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it.”
The governor is working with stakeholders in the space to craft more effective legislation, according to the Sunday announcement.
Frontier model developers that would have been most directly impacted by the law are “probably breathing a sigh of relief,” said Jennifer Everett, partner in Alston & Bird’s technology and privacy group.
Technology companies, including leaders from OpenAI, Meta and Google, lamented the broad brush strokes used in the bill’s language.
The bill will now return to the legislature, where a two-thirds majority vote in both houses can override Newsom’s veto. However, veto overrides are rare in California politics, the Center for AI and Digital Policy said in its Monday newsletter.
“This legislation, which CAIDP strongly supported, aimed to establish crucial safeguards for large-scale AI models to prevent potential catastrophic harm,” CAIDP said, calling the veto a setback for AI safety advocates.
Stakeholders are already looking ahead to what comes next.
“Regulators need to hold Big Tech accountable, demand genuine transparency about data usage, and refuse to accept opaque systems and rigid, unchangeable solutions,” Peter Guagenti, president at Tabnine, said in an email to CFO Dive sister publication CIO Dive. “This may affect their cost of doing business, but will build trust in AI more broadly and ultimately help us build a more vibrant, more profitable ecosystem.”
Despite the veto of SB 1047, Newsom has signed 17 other bills this month touching on the development and deployment of generative AI, including measures requiring AI watermarking and combating AI-generated misinformation.
“This isn’t going to be the end of regulations coming out of California for AI,” Everett said.
Top technology leaders at financial services institutions, automotive insurance companies and consumer goods giants are moving forward on AI adoption plans that keep regulatory compliance top of mind.
“The map of AI regulations across U.S. states is going to continue to develop and evolve and be rather complex,” Everett said.
Most businesses are heading into the fourth quarter of the year with a sustained focus on AI, including deploying use cases and bringing stakeholders together.
“Companies should be careful to not put AI in a particular box,” Everett said. “These proposed laws emphasize that the use, deployment or modification of AI systems require a more comprehensive contribution by various stakeholders across the enterprise.”
Technology leaders have growing concerns about regulation, which can be attributed, in part, to lagging best practices. Most C-level tech execs admit their organization is trailing when it comes to deploying responsible AI practices at scale, according to an IBM Institute for Business Value study published in August.
Incorporating a meaningful governance structure will have a significant impact on whether enterprises can easily conform to future compliance requirements. Common themes in regulatory provisions so far include transparency, ethical practices and security, with high-risk systems treated separately from other use cases.
As CIOs continue assessing the regulatory landscape, one law already gaining attention is Assembly Bill 2013. The bill requires developers to post documentation online about the data sets used in development, expanding transparency requirements.
“AB 2013 is notable because developers are pretty broadly defined,” Everett said. “It’s not just companies that produce AI systems, but even those who may substantially modify an AI system, so that could include conversational product searches or shopping assistants or chatbot support.”
The documentation is required by January 1, 2026, but applies to systems released after January 1, 2022, a time frame that includes ChatGPT’s public release and the surge of generative AI adoption that followed.
California is also not the only state where legislators are working to set standards for AI in their jurisdictions.
Colorado passed a consumer protection law focused on AI, which was approved by the governor in May. Oregon, Montana and Tennessee have enacted AI-related legislation, and numerous states across the country have proposed provisions.
U.S. federal legislators are also working toward AI legislation, and in the European Union, the countdown is on toward enforcement of its AI Act.