On August 29, California lawmakers passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), marking yet another major development in states’ efforts to regulate AI. The legislation, which draws on concepts from the White House’s 2023 AI Executive Order (“AI EO”), follows months of high-profile debate and amendments and would establish an expansive AI safety and security regime for developers of “covered models.” Governor Gavin Newsom (D) has until September 30 to sign or veto the bill.
If signed into law, SB 1047 would join Colorado’s SB 205—the landmark AI anti-discrimination law passed in May and covered here—as another de facto standard for AI legislation in the United States in the absence of congressional action. In contrast to Colorado SB 205’s focus on algorithmic discrimination risks for consumers, however, SB 1047 would address AI models that are technically capable of causing or materially enabling “critical harms” to public safety.
Covered Models. SB 1047 establishes a two-part definition of “covered models” subject to its safety and security requirements. First, prior to January 1, 2027, covered models are defined as AI models trained using a quantity of computing power that is both greater than 10²⁶ floating-point operations (“FLOP”) and valued at more than $100 million. This computing threshold mirrors the AI EO’s threshold for dual-use foundation models subject to red-team testing and reporting requirements; the financial valuation threshold is designed to exclude models developed by small companies. Similar to the Commerce Department’s discretion to adjust the AI EO’s computing threshold, California’s Government Operations Agency (“GovOps”) may adjust SB 1047’s computing threshold after January 1, 2027. By contrast, GovOps may not adjust the valuation threshold, which is indexed to inflation and must be “reasonably assessed” by the developer “using the average market prices of cloud compute at the start of training.”
SB 1047 also applies to “covered model derivatives,” defined as: (1) “fine-tuned” copies of covered models; (2) modified and unmodified copies of covered models; and (3) copies of covered models combined with other software. Prior to January 1, 2027, a fine-tuned copy qualifies as a covered model derivative only if the fine-tuning uses no more than 3 × 10²⁵ FLOP of computing power valued at $10 million or less; fine-tuning above those thresholds instead creates a new covered model.  After January 1, 2027, GovOps may adjust the computing threshold.
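For readers who want to see how the two-part test composes, the following is a minimal, illustrative Python sketch of the pre-2027 thresholds. The function names, inputs, and example figures are our own assumptions, not the statute’s; under SB 1047, compute costs must be “reasonably assessed” by the developer using average market prices of cloud compute at the start of training.

```python
# Illustrative sketch of SB 1047's pre-2027 compute/cost thresholds.
# All names and example inputs are hypothetical; the bill requires cost
# to be "reasonably assessed ... using the average market prices of
# cloud compute at the start of training."

COVERED_FLOP = 1e26        # covered-model training compute threshold
COVERED_COST_USD = 100e6   # covered-model training cost threshold
FINETUNE_FLOP = 3e25       # fine-tuning compute threshold
FINETUNE_COST_USD = 10e6   # fine-tuning cost threshold


def is_covered_model(train_flop: float, train_cost_usd: float) -> bool:
    """Both the compute and cost thresholds must be crossed."""
    return train_flop > COVERED_FLOP and train_cost_usd > COVERED_COST_USD


def classify_finetune(ft_flop: float, ft_cost_usd: float) -> str:
    """Classify a fine-tune of an existing covered model (simplified:
    cases where only one fine-tuning threshold is crossed are treated
    as derivatives here)."""
    if ft_flop >= FINETUNE_FLOP and ft_cost_usd > FINETUNE_COST_USD:
        return "new covered model"  # the fine-tuner becomes a "developer"
    return "covered model derivative"


# A model trained with 2e26 FLOP at an assessed cost of $150M is covered.
assert is_covered_model(2e26, 150e6)
# A light fine-tune (1e24 FLOP, $50,000) yields only a derivative.
assert classify_finetune(1e24, 50_000) == "covered model derivative"
```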
Critical Harms & AI Safety Incidents. SB 1047 would require AI developers to report “AI safety incidents,” or specific events that increase the risk of critical harms, to the California Attorney General within 72 hours after discovery. Critical harms are defined as mass casualties or at least $500 million in damages caused or materially enabled by a covered model that: (1) creates or uses chemical, biological, radiological, or nuclear (“CBRN”) weapons; (2) conducts, or provides instructions for, cyberattacks on critical infrastructure; or (3) acts with limited human oversight in a manner that would constitute a crime if committed by a human. Critical harms also include other grave harms to public safety and security of comparable severity.
“AI safety incidents” are defined as incidents that demonstrably increase the risk that critical harms will occur by means of the following: (1) a covered model autonomously engaging in behavior not requested by a user; (2) theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of a covered model’s model weights; (3) critical failures of technical or administrative controls; or (4) unauthorized uses of a covered model to cause or materially enable critical harms.
Pre-Training Developer Requirements. SB 1047 would also impose requirements on developers prior to the start of training a covered model, including:
- Administrative, Technical, & Physical Cybersecurity Protections. Protections must be reasonably designed to prevent unauthorized access, misuse, or unsafe modifications.
- Full Shutdown Capability. Developers must implement the capability to promptly enact a full shutdown of each covered model.
- Safety & Security Protocols. Developers must implement protocols for managing risks across each covered model’s life cycle, including procedures for avoiding critical harms, compliance requirements that can be verified by third parties, testing for unreasonable risks of critical harms, and conditions for enacting a full shutdown, among other things. Developers must designate senior personnel responsible for ensuring compliance, retain their protocols, and disclose them to the public and the California Attorney General.
Pre-Deployment Developer Requirements. SB 1047 would impose separate requirements for developers prior to using a covered model or making a covered model available for commercial or public use, including:
- Critical Harm Assessments. Developers must assess whether each covered model is reasonably capable of causing or materially enabling critical harms. The tests and results of these assessments must be retained for as long as the model is available, plus five years. If unreasonable risks of critical harm are found, developers may not use or provide the covered model.
- Critical Harm Safeguards & Attribution. Developers must take reasonable care to implement appropriate safeguards to prevent critical harms. Additionally, developers must take reasonable care to ensure that each covered model’s actions and resulting critical harms can be accurately and reliably attributed to the covered model.
Ongoing Developer Requirements. Finally, SB 1047 would require developers to annually reevaluate their policies, protections, and procedures, and impose other ongoing requirements:
- Third-Party Audits. Starting January 1, 2026, developers must annually retain third-party auditors to perform SB 1047 compliance audits. Auditors must produce certified reports with compliance assessments, instances of noncompliance, recommendations to improve compliance, and assessments of developers’ internal controls. Developers must retain these reports, publish them, and disclose them to the California Attorney General.
- Compliance Statements. Within 30 days after using or making a covered model available and annually thereafter, developers must submit statements of SB 1047 compliance to the California Attorney General, including assessments of potential critical harms and assessments of the sufficiency of safety and security protocols.
- AI Incident Reporting. As mentioned above, developers must report AI safety incidents affecting covered models to the California Attorney General within 72 hours after learning of the incident or developing a reasonable belief that an incident occurred.
- Whistleblower Protections, Notice, & Reporting. Developers are prohibited from retaliating against employees who disclose information to the California Attorney General indicating noncompliance or unreasonable risks of critical harm. Developers must notify employees of their rights and responsibilities under SB 1047 and provide internal processes through which employees can anonymously disclose information about noncompliance.
Future Regulations and Guidance. SB 1047 requires GovOps to issue, by January 1, 2027, new regulations on the computational thresholds for covered models and auditing requirements for third-party auditors, in addition to guidance for preventing unreasonable risks of critical harms. The regulations and guidance must be approved by the “Board of Frontier Models,” a nine-member group of AI and safety experts established by SB 1047.
SB 1047 is just one of over a dozen AI bills passed by the California legislature last month covering a range of AI-related topics including election deepfakes, generative AI content and training data, and digital replicas. The passage of SB 1047 also comes as Colorado lawmakers embark on a revision process for SB 205, as we have covered here.
* * *
Follow our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs for ongoing updates on key AI and other technology legislative and regulatory developments.