
AI Task Force

Oct. 24, 2025

California SB 53 — Expanded Compliance Guide for Frontier AI Developers

By Mallory Acheson, CIPM, CIPP/E, FIP, Jason I. Epstein

This is part of a series from Nelson Mullins' AI Task Force. We will continue to provide additional insight on both domestic and international matters across various industries spanning both the public and private sectors.

Starting January 1, 2026, California's Transparency in Frontier Artificial Intelligence Act (“TFAIA”) introduces new requirements for developers of advanced "frontier" AI models. This guide details who needs to comply, what actions are required (and when), and how these requirements fit into the broader U.S. and global regulatory framework. Although the TFAIA is aimed primarily at catastrophic risks and critical safety incidents, its obligations apply to all frontier developers.

1. Who Must Comply – Key Definitions

a) Frontier Developer

You are a “frontier developer” if you train, or initiate the training of, a “frontier model”: a foundation model (i.e., a model that (1) is trained on a broad set of data, (2) is designed for generality of output, and (3) can be adapted to a wide range of distinctive tasks [1]) that has been trained using more than 10²⁶ integer or floating-point operations (FLOPs), counting cumulative compute across initial training and any subsequent fine-tuning, reinforcement learning, or other material modifications. [2]

Once an entity qualifies as a frontier developer for a model, it remains subject to the Act’s obligations for that model, even if subsequent compute use drops below the threshold.
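
To make the compute threshold concrete, the sketch below shows one rough way to tally cumulative training compute against the 10²⁶ FLOP line. The statute does not prescribe an estimation method; the 6 × parameters × training-tokens approximation for dense transformer training is only a common rule of thumb, and every model size and token count shown is hypothetical.

```python
# Rough, illustrative tally of cumulative training compute against the
# TFAIA 10^26 FLOP threshold. The 6 * parameters * tokens approximation is a
# common rule of thumb for dense transformer training, not a statutory
# method, and every figure below is hypothetical.

THRESHOLD_FLOPS = 1e26

def training_flops(num_parameters: float, num_tokens: float) -> float:
    """Approximate compute for one training run (forward + backward passes)."""
    return 6 * num_parameters * num_tokens

# Cumulative compute counts the initial pre-training run plus subsequent
# fine-tuning, reinforcement learning, and other material modifications.
runs = {
    "pre-training": training_flops(num_parameters=1.5e12, num_tokens=1.1e13),
    "fine-tuning": training_flops(num_parameters=1.5e12, num_tokens=5.0e11),
    "rl-stage": training_flops(num_parameters=1.5e12, num_tokens=2.0e11),
}

cumulative = sum(runs.values())
print(f"Cumulative compute: {cumulative:.2e} FLOPs")
print("Meets the frontier-model compute threshold" if cumulative > THRESHOLD_FLOPS
      else "Below the 10^26 FLOP threshold")
```

In this hypothetical, the pre-training run alone falls just below the threshold, but the cumulative total crosses it once fine-tuning and reinforcement learning are added, which is exactly why the Act’s cumulative-compute language matters.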

b) Large Frontier Developer

A “large frontier developer” is a frontier developer whose entity (and its affiliates) had annual gross revenues exceeding US $500 million in the preceding calendar year. [3] Large frontier developers are subject to additional obligations under the Act.

Key Takeaway for Your Organization

  • Step 1: Catalogue all foundation models trained, fine-tuned, or modified and estimate cumulative compute usage (FLOPs) to determine if the 10²⁶ threshold is met.
  • Step 2: Determine whether your consolidated gross revenue (including affiliates) exceeded U.S. $500 million in the prior year.
  • Step 3: 
    • If both thresholds are met → you are a large frontier developer.
    • If only compute threshold is met → you are a frontier developer. 
    • If neither threshold is met → you do not currently fall under the Act’s mandatory obligations, though some companies may explore voluntary alignment.

2. Key Requirements – What You Must Do (and When)

(i) Frontier-AI Framework (Large Frontier Developers Only)

What: A large frontier developer must write, implement, comply with, and clearly and conspicuously publish on its website a “frontier AI framework” applicable to its frontier models. [4]
 
Core Elements Include:

  1. Incorporate national standards, international standards, and industry-consensus best practices into the frontier AI framework. 
  2. Define and assess thresholds for when a frontier model may pose a catastrophic risk (which may include multi-tiered thresholds). 
  3. Apply mitigations based on those assessments before deployment or internal use. 
  4. Review assessments and adequacy of mitigations as part of the decision to deploy a frontier model or use it extensively internally.
  5. Use third-party evaluators or assessments to gauge the effectiveness of catastrophic-risk mitigations.
  6. Revisit and update the frontier AI framework, including any criteria that trigger updates and how the large frontier developer determines when its frontier models are substantially modified enough to require disclosures.
  7. Implement cybersecurity practices to secure unreleased model weights against unauthorized modification or transfer by internal or external parties.
  8. Identify and respond to critical safety incidents.
  9. Institute internal governance practices to ensure implementation of these processes.
  10. Assess and manage catastrophic risk resulting from the internal use of its frontier models, including risks resulting from a frontier model circumventing oversight mechanisms.

Additional Required Provisions:

  • The developer may publish model cards or system cards that satisfy certain disclosure obligations. [5]
  • Redactions are permitted for trade secrets, cybersecurity, public safety, and national-security reasons, provided the developer describes the nature of the redaction and retains unredacted information for five years. [6]
  • Although the statute does not expressly require an “annual review” of the framework, the law takes effect January 1, 2026, and the Department of Technology must assess the Act’s definitions annually, [7] which implies ongoing monitoring and updating.

Practical Steps:

  • Form a cross-functional “AI Safety & Compliance Working Group” (Legal, AI research, IT/Security).
  • Conduct a gap-analysis of current model development lifecycle against statutory requirements.
  • Draft a web-published, version-controlled Frontier AI Framework document; ensure board/senior-leadership awareness.
  • Integrate into corporate governance, audit, and oversight functions.

(ii) Transparency Reports [8]

What: Before deploying any new or substantially modified frontier model, you must publish a transparency report. “Deploying” means making the model available to a third party for use, modification, or copying, but excludes making it available primarily for development or evaluation. [9]

Report Content (Baseline):

Note that the statute does not specify separate deadlines for large and non-large frontier developers: both must publish transparency reports before deploying a new or substantially modified frontier model, with large frontier developers providing additional disclosures. An illustrative report skeleton follows the lists below.

  • Developer website and contact mechanism. 
  • Model release date. 
  • Supported languages and output modalities. 
  • Intended uses and restrictions/licensing conditions. 

Additional Content for Large Frontier Developers:

  • Steps taken to fulfil the Frontier AI Framework. 
  • Assessment of catastrophic risks and results. 
  • Extent of third-party evaluation involvement. 
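
As one way to operationalize these disclosures, the skeleton below collects the report content listed above into a single record. The Act does not prescribe any particular format, and every field name and sample value here is an illustrative assumption rather than statutory language.

```python
# Illustrative skeleton of a transparency report record mirroring the
# statutory content listed above. Field names, structure, and sample values
# are assumptions; the Act does not prescribe a format.

transparency_report = {
    # Baseline content (all frontier developers)
    "developer_website": "https://example.com",        # hypothetical URL
    "contact_mechanism": "ai-safety@example.com",      # hypothetical contact
    "model_release_date": "2026-03-01",
    "supported_languages": ["en", "es", "fr"],
    "output_modalities": ["text", "image"],
    "intended_uses": ["enterprise search", "coding assistance"],
    "restrictions_and_licensing": "Prohibited uses and license terms summarized here",

    # Additional content (large frontier developers only)
    "framework_steps_taken": "How deployment satisfied the published frontier AI framework",
    "catastrophic_risk_assessment": "Summary of assessments performed and their results",
    "third_party_evaluations": "Extent and role of independent evaluators",
}
```

Keeping the record in a structured form also makes it straightforward to version-control each report and publish revisions alongside the model, as suggested in the practical steps below.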

Practical Steps:

  • Prepare standardized templates for these transparency reports.
  • Embed the requirement into your model‐release workflow and product/deployment checklist.
  • Define “substantial modification” in internal policy (fine-tuning, reinforcement learning, structural changes) to trigger new report obligation.
  • Maintain a public record (website) of reports with revision dates and version history.

(iii) Catastrophic-Risk Assessments (Large Frontier Developers Only) [10]

What: A large frontier developer must transmit to the California Office of Emergency Services (OES) “a summary of any assessment of catastrophic risk resulting from internal use of its frontier models every three months or pursuant to another reasonable schedule specified by the large frontier developer and communicated in writing to OES with written updates, as appropriate.” 

Definition of “Catastrophic Risk”: 

“Catastrophic risk” is a foreseeable and material risk that a frontier developer’s development, storage, use, or deployment of a frontier model will materially contribute to the death of, or serious injury to, more than 50 people or more than one billion dollars ($1,000,000,000) in damage to, or loss of, property arising from a single incident involving a frontier model doing any of the following:

  1. Providing expert-level assistance in the creation or release of a chemical, biological, radiological, or nuclear weapon.
  2. Engaging in conduct with no meaningful human oversight, intervention, or supervision that is either a cyberattack or, if the conduct had been committed by a human, would constitute the crime of murder, assault, extortion, or theft, including theft by false pretense.
  3. Evading the control of its frontier developer or user.

“Catastrophic risk” does not include a foreseeable and material risk from any of the following:

  1. Information that a frontier model outputs if the information is otherwise publicly accessible in a substantially similar form from a source other than a foundation model.
  2. Lawful activity of the federal government.
  3. Harm caused by a frontier model in combination with other software if the frontier model did not materially contribute to the harm.

Practical Steps:

  • Establish a catastrophic-risk register and risk assessment process.
  • Document scenario modelling, likelihoods, mitigation strategies, governance sign-off.
  • Prepare submission templates for OES; establish internal schedule (quarterly or alternate).
  • Ensure board/senior leadership are briefed on these obligations and oversight.

(iv) Critical Safety Incident Reporting [11]

The Office of Emergency Services (OES) is authorized under the Act to adopt regulations and may designate federal laws or guidance as an alternative compliance mechanism for critical-safety-incident reporting. Developers subject to overlapping federal regimes should monitor OES rulemaking closely.

What: A frontier developer must report “critical safety incidents.” The statute requires the OES to establish a mechanism for submission by a frontier developer or member of the public.

Definition of “Critical Safety Incident”

“Critical safety incident” means any of the following:

  1. Unauthorized access to, modification of, or exfiltration of, the model weights of a frontier model that results in death or bodily injury.
  2. Harm resulting from the materialization of a catastrophic risk.
  3. Loss of control of a frontier model causing death or bodily injury.
  4. A frontier model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside of the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk.

Deadlines:

  • For incidents presenting an imminent risk of death or serious injury, a report must be made to law-enforcement or public-safety agencies within 24 hours of discovery. 
  • For other qualifying incidents, the report must be made to OES within 15 days of discovery. 

Practical Steps:

  • Build an AI-specific incident-response workflow integrated with security and operations.
  • Train teams on identifying trigger events, escalation protocols, 24-hour vs 15-day reporting obligations.
  • Maintain an incident log recording discovery date/time, initial assessment, mitigation steps, and external reporting details (a minimal sketch follows this list).
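
A minimal sketch of such an incident-log entry follows, assuming a simple triage flag that derives the applicable reporting clock from the deadlines above. The field names and triage logic are illustrative assumptions, not statutory language, and the triage judgment itself should rest with counsel and the safety team.

```python
# Minimal sketch of an incident-log entry that derives the applicable
# reporting deadline under the Act's two-tier structure. Field names and
# the triage flag are illustrative assumptions, not statutory language.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CriticalSafetyIncident:
    description: str
    discovered_at: datetime
    imminent_risk_of_death_or_serious_injury: bool  # triage call by counsel/safety team

    @property
    def reporting_deadline(self) -> datetime:
        # Imminent risk: notify law-enforcement/public-safety agencies within 24 hours.
        # Otherwise: report to OES within 15 days of discovery.
        window = (timedelta(hours=24)
                  if self.imminent_risk_of_death_or_serious_injury
                  else timedelta(days=15))
        return self.discovered_at + window

# Hypothetical log entry
incident = CriticalSafetyIncident(
    description="Unauthorized exfiltration of unreleased model weights",
    discovered_at=datetime(2026, 2, 10, 14, 30),
    imminent_risk_of_death_or_serious_injury=False,
)
print(incident.reporting_deadline)  # 2026-02-25 14:30:00 -> OES report due
```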

(v) Whistleblower Protections [12]

What: Employees (“covered employees”) who assess, manage or address frontier model risk are protected when they disclose:

  • (i) a specific and substantial danger to public health or safety from a catastrophic risk; or
  • (ii) a violation of the chapter. 

Large frontier developers must:

  • Provide an anonymous internal reporting process for such disclosures. 
  • Provide monthly updates to the whistleblower on the status of the disclosure.

If retaliation occurs, the burden shifts to the employer to demonstrate by clear and convincing evidence that it would have taken the same action absent the disclosure.

Practical Steps:

  • Update your whistleblower policy to cover frontier-AI risk disclosures.
  • Set up an anonymous channel (internal or third-party) with monthly update mechanism.
  • Train human resources, legal, and safety teams in handling such disclosures in line with the Act.

(vi) Prohibition on False or Misleading Statements [13]

What:

  • A frontier developer shall not make a materially false or misleading statement about catastrophic risk from its frontier models or its management of catastrophic risk. 
  • A large frontier developer shall not make a materially false or misleading statement about its implementation of, or compliance with, its frontier AI framework. 
  • Exception: A statement is exempt if it was made in good faith and was reasonable under the circumstances.

Practical Steps:

  • Review all public communications (model cards, press releases, blog posts, investor materials) for accuracy and alignment with internal risk assessments and published frameworks.
  • Establish a review process (legal/comms) for statements about model safety, risk mitigation, frameworks.
  • Maintain a record linking public disclosures to internal documentation to demonstrate consistency.

(vii) Enforcement & Penalties [14]

What:

  • A large frontier developer that fails to publish or transmit a required document, makes a prohibited false or misleading statement, fails to report an incident, or fails to comply with its own framework is subject to a civil penalty in an amount not to exceed one million dollars (U.S. $1,000,000) per violation. 
  • A civil action to recover the penalty may be brought only by the California Attorney General. 
  • Additional Provision: The loss of value of equity does not count as “damage to or loss of property.” [15]

3. The Emerging Regulatory Patchwork – U.S. and Global Context

While SB 53 is a landmark, frontier-AI regulation is evolving rapidly, and you should map your compliance posture beyond California. The following, for example, is just a subset of additional AI regulations that may overlap with your company’s AI program.

  • California AB 2013 (AI Training Data Transparency Act): Requires generative-AI systems to document training data provenance; effective January 1, 2026.
  • Colorado AI Act (SB 205): Governs “high-risk AI systems” used in employment, credit, education etc.; requires impact assessments, transparency statements, risk-management programs. Effective February 1, 2026.
  • New York RAISE Act (passed the legislature but not yet law): Would require large frontier-model developers to publish safety and security protocols and report safety incidents, closely paralleling SB 53’s approach.
  • EU AI Act: Covers high-risk and general-purpose AI systems; requires risk management, documentation, conformity assessment, human oversight; global implications for U.S. developers.
  • Industry Standards (NIST AI RMF / ISO/IEC 42001): Increasingly referenced as foundational frameworks for trustworthy AI; often contractual commitments for vendors; SB 53 explicitly references these standards.

5. Strategic Considerations for Frontier AI Developers

  • Governance & Oversight: Integrate frontier-AI risk into enterprise risk management; assign board/senior leadership accountability.
  • Public Disclosures: Your frameworks and transparency reports will be publicly available; ensure legal/comms review for trade-secret, cybersecurity, national-security or safety risks.
  • Federal Interface: Although there is no comprehensive federal AI statute yet, federal requirements (including those arising under White House executive orders and Commerce Department compute-reporting rules) may overlap with state obligations. Align federal, state, and global obligations.
  • Evolving Technology & Thresholds: Expect thresholds and definitions to evolve; build flexibility into your compliance program.

6. Conclusion

SB 53’s Transparency in Frontier Artificial Intelligence Act represents a first-in-the-nation regulatory framework targeting developers of large-scale foundation models. It effectively mandates the kind of transparency and governance found in other regulated industries: public disclosure, risk management, whistleblower protections, and incident reporting.

For additional information or assistance with TFAIA compliance, please contact a member of Nelson Mullins’ AI Task Force. 

Follow Nelson Mullins' Idea Exchange for more thought leadership from our AI Task Force, or click here to subscribe to emails from the Nelson Mullins AI Task Force blog.

This client alert is provided for informational purposes only and does not constitute legal advice. Clients should consult with counsel regarding their specific circumstances and compliance obligations under TFAIA.


[1] § 22757.11(f)

[2] § 22757.11(i)(1)

[3] § 22757.11(i)(2)

[4] § 22757.12(a)–(d)

[5] § 22757.12(c)(3)

[6] § 22757.12(f)

[7] § 22757.14

[8] § 22757.12(c)

[9] § 22757.11(e)

[10] § 22757.12(d)

[11] § 22757.13(a)–(f)

[12] Lab. Code § 1107

[13] § 22757.12(e)(1)(A)–(B)

[14] § 22757.15(a)–(b)

[15] § 22757.16