On August 29, California lawmakers passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), marking yet another major development in states’ efforts to regulate AI.  The legislation, which draws on concepts from the White House’s 2023 AI Executive Order (“AI EO”), follows months of high-profile debate and amendments and would establish an expansive AI safety and security regime for developers of “covered models.”  Governor Gavin Newsom (D) has until September 30 to sign or veto the bill. 

If signed into law, SB 1047 would join Colorado’s SB 205—the landmark AI anti-discrimination law passed in May and covered here—as another de facto standard for AI legislation in the United States in the absence of congressional action.  In contrast to Colorado SB 205’s focus on algorithmic discrimination risks for consumers, however, SB 1047 would address AI models that are technically capable of causing or materially enabling “critical harms” to public safety. 

Covered Models.  SB 1047 establishes a two-part definition of “covered models” subject to its safety and security requirements.  First, prior to January 1, 2027, covered models are defined as AI models trained using a quantity of computing power that is both greater than 10^26 floating-point operations (“FLOP”) and valued at more than $100 million.  This computing threshold mirrors the AI EO’s threshold for dual-use foundation models subject to red-team testing and reporting requirements; the financial valuation threshold is designed to exclude models developed by small companies.  Similar to the Commerce Department’s discretion to adjust the AI EO’s computing threshold, California’s Government Operations Agency (“GovOps”) may adjust SB 1047’s computing threshold after January 1, 2027.  By contrast, GovOps may not adjust the valuation threshold, which is indexed to inflation and must be “reasonably assessed” by the developer “using the average market prices of cloud compute at the start of training.”

SB 1047 also applies to “covered model derivatives,” defined as: (1) “fine-tuned” covered models; (2) modified and unmodified copies of covered models; and (3) copies of covered models combined with other software.  Prior to January 1, 2027, a fine-tuned model qualifies as a covered model derivative only if the fine-tuning uses at least 3 × 10^25 FLOP of computing power valued at more than $10 million.  After January 1, 2027, GovOps may adjust this computing threshold.
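
For readers who want to operationalize these definitions, both tests reduce to conjunctive checks, as in the minimal Python sketch below.  The function names and parameters are our own illustration (nothing in the bill defines them), and the thresholds are the pre-2027 figures described above, subject to later GovOps adjustment.

```python
# Illustrative only: SB 1047's pre-2027 threshold tests as conjunctive checks.
# Function names and parameters are hypothetical; thresholds are from the bill
# as passed and may be adjusted by GovOps after January 1, 2027.

COVERED_MODEL_FLOP = 1e26         # training compute: > 10^26 floating-point operations
COVERED_MODEL_COST_USD = 100e6    # training compute value: > $100 million
DERIVATIVE_FLOP = 3e25            # fine-tuning compute: at least 3 x 10^25 operations
DERIVATIVE_COST_USD = 10e6        # fine-tuning compute value: > $10 million


def is_covered_model(training_flop: float, training_cost_usd: float) -> bool:
    """Both prongs must be satisfied: compute quantity AND assessed cost."""
    return training_flop > COVERED_MODEL_FLOP and training_cost_usd > COVERED_MODEL_COST_USD


def is_fine_tuned_derivative(fine_tuning_flop: float, fine_tuning_cost_usd: float) -> bool:
    """Fine-tuning prong of the 'covered model derivative' definition (pre-2027)."""
    return fine_tuning_flop >= DERIVATIVE_FLOP and fine_tuning_cost_usd > DERIVATIVE_COST_USD


# Example: a model trained with 2 x 10^26 FLOP at an assessed cost of $150 million.
assert is_covered_model(2e26, 150e6)
```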

Critical Harms & AI Safety Incidents.  SB 1047 would require AI developers to report “AI safety incidents,” or specific events that increase the risk of critical harms, to the California Attorney General within 72 hours after discovery.  Critical harms are defined as mass casualties or at least $500 million in damages caused or materially enabled by a covered model that: (1) creates or uses chemical, biological, radiological, or nuclear (“CBRN”) weapons; (2) conducts, or provides precise instructions for, cyberattacks on critical infrastructure; or (3) engages in unsupervised conduct that would be criminal if carried out by a human.  Critical harms also include other grave harms to public safety and security of comparable severity.

“AI safety incidents” are defined as incidents that demonstrably increase the risk that critical harms will occur by means of the following: (1) a covered model autonomously engaging in behavior not requested by a user; (2) theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of a covered model’s model weights; (3) critical failures of technical or administrative controls; or (4) unauthorized uses of a covered model to cause or materially enable critical harms.
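
As a concrete illustration of the reporting clock, the 72-hour deadline is simple date arithmetic, and the four categories above map naturally onto an enumeration.  The sketch below is ours alone; the category names are shorthand, not statutory terms.

```python
from datetime import datetime, timedelta
from enum import Enum


class IncidentCategory(Enum):
    # Our shorthand for SB 1047's four "AI safety incident" categories.
    AUTONOMOUS_BEHAVIOR = 1   # unrequested autonomous behavior by a covered model
    WEIGHTS_COMPROMISE = 2    # theft, release, unauthorized access, or escape of weights
    CONTROL_FAILURE = 3       # critical failure of technical or administrative controls
    UNAUTHORIZED_USE = 4      # unauthorized use to cause or enable critical harms


def report_deadline(discovered_at: datetime) -> datetime:
    """A report is due to the California Attorney General within 72 hours of discovery."""
    return discovered_at + timedelta(hours=72)


# Example: an incident discovered at 9:00 a.m. on March 2 must be reported
# by 9:00 a.m. on March 5.
print(report_deadline(datetime(2026, 3, 2, 9, 0)))  # 2026-03-05 09:00:00
```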

Pre-Training Developer Requirements.  SB 1047 would also impose requirements on developers prior to the start of training a covered model, including:

  • Administrative, Technical, & Physical Cybersecurity Protections.  Protections must be reasonably designed to prevent unauthorized access, misuse, or unsafe modifications.
  • Full Shutdown Capability.  Developers must implement the capability to promptly enact a full shutdown of each covered model.
  • Safety & Security Protocols.  Developers must implement protocols for managing risks across each covered model’s life cycle, including procedures for avoiding critical harms, compliance requirements that can be verified by third parties, testing for unreasonable risks of critical harms, and conditions for enacting a full shutdown, among other things (a simplified sketch of these elements follows this list).  Developers must designate senior personnel responsible for ensuring compliance and retain and disclose protocols to the public and the California Attorney General.
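
For illustration only, the protocol elements above can be pictured as fields of a record that a compliance team maintains.  This is a minimal sketch under our own reading of the bill; SB 1047 prescribes the protocol’s content, not any particular format or field names.

```python
from dataclasses import dataclass, field


@dataclass
class SafetyAndSecurityProtocol:
    """Illustrative container for the protocol elements SB 1047 requires.

    Field names are our own; the bill specifies required content, not a format.
    """
    harm_avoidance_procedures: list[str]            # procedures for avoiding critical harms
    third_party_verifiable_requirements: list[str]  # compliance requirements verifiable by others
    critical_harm_testing_procedures: list[str]     # testing for unreasonable risks
    full_shutdown_conditions: list[str]             # when a full shutdown must be enacted
    responsible_senior_personnel: list[str] = field(default_factory=list)
```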

Pre-Deployment Developer Requirements.  SB 1047 would impose separate requirements for developers prior to using a covered model or making a covered model available for commercial or public use, including:

  • Critical Harm Assessments.  Developers must assess whether each covered model is reasonably capable of critical harms.  The tests and results in these assessments must be retained for as long as the model is available, plus five years.  If unreasonable risks of critical harm are found, developers may not use or provide the covered model.
  • Critical Harm Safeguards & Attribution.  Developers must take reasonable care to implement appropriate safeguards to prevent critical harms.  Additionally, developers must take reasonable care to ensure that each covered model’s actions and resulting critical harms can be accurately and reliably attributed to the covered model.

Ongoing Developer Requirements.  Finally, SB 1047 would require developers to reevaluate their policies, protections, and procedures annually, and would impose other ongoing requirements:

  • Third-Party Audits.  Starting January 1, 2026, developers must annually retain third-party auditors to perform SB 1047 compliance audits.  Auditors must produce certified reports with compliance assessments, instances of noncompliance, recommendations to improve compliance, and assessments of developers’ internal controls.  Developers must retain, publish, and disclose these reports to the California Attorney General.
  • Compliance Statements.  Within 30 days after using or making a covered model available and annually thereafter, developers must submit statements of SB 1047 compliance to the California Attorney General, including assessments of potential critical harms and assessments of the sufficiency of safety and security protocols (a simple timing sketch appears after this list).
  • AI Incident Reporting.  As mentioned above, developers must report AI safety incidents affecting covered models to the California Attorney General within 72 hours after learning of the incident or developing a reasonable belief that an incident occurred.
  • Whistleblower Protections, Notice, & Reporting.  Developers are prohibited from retaliating against employees who disclose information to the California Attorney General indicating noncompliance or unreasonable critical harm risks.  Developers must notify employees of their rights and responsibilities under SB 1047 and provide internal processes for anonymously disclosing information on noncompliance to the developer.
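
To make the compliance-statement cadence concrete, the sketch below computes illustrative due dates from the 30-day and annual figures in the bill.  The function and its naive year arithmetic are our own simplification (leap-day edge cases ignored), not a statutory formula.

```python
from datetime import date, timedelta


def compliance_statement_due_dates(first_available: date, years: int = 3) -> list[date]:
    """First statement due within 30 days of use/availability; then annually.

    Illustrative reading only; naive year increment ignores leap-day edge cases.
    """
    first_due = first_available + timedelta(days=30)
    return [first_due.replace(year=first_due.year + n) for n in range(years)]


# Example: a covered model made available on March 1, 2026.
for due in compliance_statement_due_dates(date(2026, 3, 1)):
    print(due)  # 2026-03-31, 2027-03-31, 2028-03-31
```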

Future Regulations and Guidance.  SB 1047 requires GovOps to issue, by January 1, 2027, new regulations on the computational thresholds for covered models and auditing requirements for third-party auditors, in addition to guidance for preventing unreasonable risks of critical harms.  The regulations and guidance must be approved by the “Board of Frontier Models,” a nine-member group of AI and safety experts established by SB 1047. 

SB 1047 is just one of over a dozen AI bills passed by the California legislature last month covering a range of AI-related topics, including election deepfakes, generative AI content and training data, and digital replicas.  The passage of SB 1047 also comes as Colorado lawmakers embark on a revision process for SB 205, as we have covered here.

*                      *                      *

Follow our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs for ongoing updates on key AI and other technology legislative and regulatory developments.

Matthew Shapanka

Matthew Shapanka practices at the intersection of law, policy, and politics, advising clients on important legislative, regulatory and enforcement matters before Congress, state legislatures, and government agencies that present significant legal, political, and business opportunities and risks.

Drawing on more than 15 years of experience on Capitol Hill, private practice, state government, and political campaigns, Matt develops and executes complex, multifaceted public policy initiatives for clients seeking actions by Congress, state legislatures, and federal and state government agencies. He regularly counsels businesses—especially technology companies—on matters involving intellectual property, national security, and regulation of critical and emerging technologies like artificial intelligence and autonomous vehicles.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters before the Committee, particularly federal election and campaign finance law, Federal Election Commission nominations, and oversight of the legislative branch, including U.S. Capitol security after the January 6, 2021 attack and the rules and procedures governing the Senate. Most significantly, Matt led the Committee’s staff work on the Electoral Count Reform Act, a landmark bipartisan law that updates the procedures for certifying and counting votes in presidential elections, and the Committee’s joint bipartisan investigation (with the Homeland Security Committee) into the security planning and response to the January 6th attack.

Both in Congress and at Covington, Matt has prepared dozens of corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter who has composed dozens of bills and amendments introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas. Matt also leads the firm’s state policy practice, advising clients on complex multistate legislative and regulatory policy matters and managing state advocacy efforts.

In addition to his policy work, Matt advises and represents clients on the full range of political law compliance and enforcement matters involving federal election, campaign finance, lobbying, and government ethics laws, the Securities and Exchange Commission’s “Pay-to-Play” rule, and the election and political laws of states and municipalities across the country.

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA) as a research analyst in the Massachusetts Recovery & Reinvestment Office, where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients on privacy and competition frameworks and AI regulations, with an increasing focus on U.S. state AI legislative developments and trends related to synthetic content, automated decision-making, and generative AI. He also assists clients in assessing federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.