On September 29, California Governor Gavin Newsom (D) vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), putting an end, for now, to a months-long effort to establish public safety standards for developers of large AI systems. SB 1047’s sweeping AI safety and security regime, which included annual third-party safety audits, shutdown capabilities, detailed safety and security protocols, and incident reporting requirements, would likely have established a de facto national safety standard for large AI models if enacted. The veto followed rare public calls from Members of California’s congressional delegation—including Speaker Emerita Nancy Pelosi (D-CA) and Representatives Ro Khanna (D-CA), Anna Eshoo (D-CA), Zoe Lofgren (D-CA), and Jay Obernolte (R-CA)—for the governor to reject the bill.
In his veto message, Governor Newsom noted that “[AI] safety protocols must be adopted” with “[p]roactive guardrails” and “severe consequences for bad actors,” but he criticized SB 1047 for regulating based on the “cost and number of computations needed to develop an AI model.” SB 1047 would have defined “covered models” as AI models trained using more than 10²⁶ floating-point operations of computing power, at a cost of more than $100 million. Newsom argued that, by relying on cost and computing thresholds rather than “the system’s actual risks,” SB 1047 “applies stringent standards to even the most basic functions–so long as a large system deploys it.” Newsom added that SB 1047 could “give the public a false sense of security about controlling this fast-moving technology” while “[s]maller, specialized models” could be “equally or even more dangerous than the models targeted by SB 1047.”
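To illustrate how the bill’s dual threshold would have operated, the following minimal Python sketch encodes the two conditions described above; the function and variable names are our own hypothetical choices, and the snippet is an illustration of the thresholds, not statutory text.

```python
# Illustrative sketch of SB 1047's "covered model" thresholds.
# Names are hypothetical and not drawn from the bill text.

COMPUTE_THRESHOLD_FLOP = 1e26      # more than 10^26 floating-point operations
COST_THRESHOLD_USD = 100_000_000   # training compute costing more than $100 million

def is_covered_model(training_flop: float, training_cost_usd: float) -> bool:
    """Return True if a model would have crossed both SB 1047 thresholds."""
    return training_flop > COMPUTE_THRESHOLD_FLOP and training_cost_usd > COST_THRESHOLD_USD

# Example: a frontier-scale training run exceeding both thresholds
print(is_covered_model(training_flop=3e26, training_cost_usd=2.5e8))  # True
```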
The veto also follows Newsom’s prior statement, on September 17, expressing concerns about the “outsized impact that [SB 1047] could have,” including “the chilling effect, particularly in the open source community” and potential effects on the competitiveness of California’s AI industry.
Echoing the risk-based approaches taken by Colorado’s SB 205—the landmark AI anti-discrimination law passed in May—and California’s AB 2930, a similar automated decision-making bill that failed to pass the state Senate in August, Newsom called for new legislation that would “take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data.”
In contrast with Colorado Gov. Jared Polis’s call for a unified federal approach to AI regulation, however, Newsom noted that a “California-only approach” to regulating AI “may well be warranted . . . especially absent federal action by Congress.” Newsom also pointed to the numerous AI bills “regulating specific, known risks” he signed into law, including laws regulating or prohibiting digital replicas, election deepfakes, and AI-generated child sexual abuse material (AB 1831).
While the Legislature can override the governor’s veto by a two-thirds vote, it has not taken that step since 1979. Instead, we expect the Legislature to revisit AI safety legislation next year.
* * *
Follow our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs for ongoing updates on key AI and other technology legislative and regulatory developments.