Sept. 24, 2024
This is part of a series from Nelson Mullins' AI Task Force. We will continue to provide additional insight on both domestic and international matters across various industries spanning both the public and private sectors.
Update: Governor Newsom vetoed this bill on Sept. 29, 2024, and announced a set of new AI initiatives. For more details, see the Nelson Mullins AI Task Force's updated blog.
The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (“AI Safety Bill”) was passed by the California legislature on Aug. 28. Governor Gavin Newsom has until the end of this month to sign the bill into law. The AI Safety Bill introduces significant compliance obligations for businesses developing, training, or fine-tuning artificial intelligence models that meet specific computational power and financial thresholds and that could be used to create critical harms, all as further defined below.
Affected businesses should consider several immediate steps to ensure they remain compliant:
In the detailed analysis below, we’ll explore the key provisions of the AI Safety Bill and how they shape the responsibilities of businesses working with covered models.
The AI Safety Bill establishes a Board of Frontier Models (the “Board”), a component of the Government Operations Agency, which will provide oversight and regulation over individuals or entities that develop, train, or fine-tune covered models. Under the bill, “covered models” refers to AI models that meet the following criteria:
For purposes of this article, “covered model” includes the covered model and any derivative of the covered model.
The AI Safety Bill is directed at preventing “critical harm,” which is defined as:
In addition, the AI Safety Bill defines “AI safety incident” as any incident that demonstrably increases the risk of critical harm by any of the following means:
Key Requirements for California Companies Developing Covered Models
For California companies developing, training, or fine-tuning covered models, these are some of the compliance obligations under the AI Safety Bill and where they relate to the AI lifecycle:
Enforcement and Compliance Expectations
The California Attorney General is empowered to seek a range of remedies for violations. These include civil penalties for violations causing death, bodily harm, or significant property damage, with fines of up to 10% of the cost of the computing power used to train the AI model, rising to 30% for subsequent violations. Additional penalties include fines of up to $10 million for related labor violations. The Attorney General may also pursue injunctive relief, monetary damages, punitive damages, and attorneys’ fees. Courts are authorized to provide any other relief deemed appropriate, and they may disregard corporate formalities to impose joint and several liability if affected businesses have structured their entities in a way that unreasonably limits or avoids liability.
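To make the scale of the civil-penalty ceiling concrete, the following is a minimal illustrative calculation; the training-cost figure is hypothetical and the function name is ours, not the bill's:

```python
def max_civil_penalty(training_compute_cost: float, subsequent_violation: bool) -> float:
    """Upper bound on the civil penalty described in the bill:
    up to 10% of the cost of the computing power used to train the
    covered model, rising to 30% for subsequent violations."""
    rate = 0.30 if subsequent_violation else 0.10
    return training_compute_cost * rate

# Hypothetical example: a model whose training compute cost $100 million.
first = max_civil_penalty(100_000_000, subsequent_violation=False)   # $10 million cap
repeat = max_civil_penalty(100_000_000, subsequent_violation=True)   # $30 million cap
```

Because the cap scales with training compute spend rather than a fixed dollar amount, exposure grows directly with the size of the model a business trains.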
The AI Safety Bill introduces new regulatory requirements for large-scale AI models developed or fine-tuned in California. It establishes compliance obligations and grants the Attorney General the authority to bring civil enforcement actions for violations. Affected California businesses need to understand how these regulations will impact their operations, including the need for robust safety protocols and the potential penalties for non-compliance. The interaction between these state-level rules and existing or future federal regulations remains uncertain, and companies should closely monitor how this regulatory framework evolves.
Companies must proactively prepare for compliance. This involves not only understanding and adhering to the new regulations but also enhancing internal safety protocols. By anticipating these regulatory changes and integrating robust risk management strategies, businesses can better navigate the evolving landscape of AI regulation and safeguard their operations against potential liabilities.
Follow Nelson Mullins' Idea Exchange for more thought leadership from our AI Task Force, or click here to subscribe to emails from the Nelson Mullins AI Task Force blog.
These materials have been prepared for informational purposes only and are not legal advice. This information is not intended to create, and receipt of it does not constitute, an attorney-client relationship. Internet subscribers and online readers should not act upon this information without seeking professional counsel.