Privacy & Data Security Alert

Jan. 12, 2026

New York Laws “RAISE” the Bar in Addressing AI Safety: The RAISE Act and AI Companion Models

By Jennie Cunningham, Amanda Witt, Mallory Acheson, CIPM, CIPP/E, FIP

New York was at the forefront of state artificial intelligence (AI) regulation in 2025. Among its notable 2025 activities, the New York state legislature enacted an omnibus budget law implementing safeguards for AI companions, which went into effect on November 5, 2025, and the Governor signed an amended version of the Responsible Artificial Intelligence Safety and Education (RAISE) Act on December 19, 2025. New York joins states such as Colorado, California, and Utah in regulating AI at the state level, contributing to a growing patchwork of comprehensive and targeted AI legislation. Dozens of states ultimately passed AI-related legislation in 2025. As states continue to push forward with efforts to regulate at the state level, federal activities continue to raise questions regarding preemption and the fate of state AI legislation. For example, an executive order (EO) issued in December directed the creation of an AI Litigation Task Force within the Department of Justice (DOJ) to challenge state AI laws on interstate commerce and First Amendment grounds and threatened to withhold federal funding from states with onerous AI regulations. The EO has generated debate regarding the extent of federal authority over state-level AI regulation. Although it does not carry the same preemptive weight as an act of Congress, it introduces a period of legal uncertainty that may prompt states to adjust their regulatory timelines as the courts address the resulting constitutional challenges.

Below we have provided a summary of some of the notable New York AI-related laws that passed in 2025.

The AI Companion Model law requires operators and providers of AI companions to implement safety measures and protocols to detect and address users’ expressions of suicidal ideation or self-harm and to regularly disclose to users that they are not communicating with a human.

Scope:

The AI Companion Model law applies to all operators of AI companions with users located within New York state; the term “operators” encompasses both operators and providers of AI companions. Under the law, “AI companions” are defined as systems using AI, generative AI, and/or “emotional recognition algorithms” that are designed to simulate a sustained human-like relationship with a user by (1) retaining information on prior interactions and user preferences to personalize and facilitate ongoing engagement; (2) asking unprompted or unsolicited emotion-based questions that go beyond a direct response to a user prompt; and (3) sustaining an ongoing dialogue concerning personal matters. “Emotional recognition algorithms” are defined as AI that “detects and interprets human emotional signals” in text, audio, video, or a combination thereof. Such AI can interpret text using natural language processing and sentiment analysis; audio using voice emotion AI; and video using facial movement analysis, gait analysis, or physiological signals. An “AI companion” does not include any system used by a business solely for customer service, internal purposes, or employee productivity, or any system used primarily to provide efficiency improvements, research, or technical assistance.

Key Requirements:

The AI Companion Model law requires operators of AI companions to provide a clear and conspicuous notification, either verbally or in writing, that the user is not communicating with another human being. The notification need not be provided more than once per day, but during a continuing interaction it must be repeated at least every three hours. Operators of AI companions must also ensure that their systems contain protocols to take reasonable measures to detect and address a user’s expression of suicidal ideation or self-harm. At a minimum, upon detecting such an expression, the operator must refer the user to an appropriate crisis service.

The law will be enforced by the state attorney general (“AG”), who may seek an injunction and civil penalties against operators of AI companions that the AG believes have violated or are about to violate these provisions. The AG may seek civil penalties of up to $15,000 per day for violations of the notification and safety-measure requirements.

Next Steps:

Operators covered by the law must determine: (1) how to notify users, verbally or in writing, that they are not communicating with a human; (2) how to ensure the notice is clear and conspicuous; and (3) how often to provide the notification. Operators must also determine (4) how to detect a user’s expression of suicidal ideation or self-harm and (5) how to address such expressions.

New York’s approach to “AI companions” is novel, though California passed a similar law in October that became effective on January 1, 2026. California’s SB 243 tracks the New York requirements fairly closely, with some variation specific to notifications and protections for minors. California has passed a number of other laws related to chatbots and AI. Other states have taken steps to regulate chatbots more generally (e.g., Utah and Colorado), and the Kentucky AG has reportedly filed the first state lawsuit against a companion chatbot company.

New York also tackled broader AI safety issues in passing the RAISE Act,[1] effective January 1, 2027, which imposes transparency and disclosure requirements on developers of frontier models, including making their safety and security protocols available to relevant authorities. On December 19, 2025, the Governor signed an amended version of the RAISE Act. The amended RAISE Act reportedly “builds on” California’s Transparency in Frontier AI Act (TFAIA), enacted in September 2025, and appears to track the TFAIA more closely than the version of the RAISE Act passed by the New York legislature in June did, though it still contains significant requirements for frontier model developers. Note: The Governor and the legislature reportedly agreed that the RAISE Act will be implemented after chapter amendments are made to the current text in January 2026, but the final version of the law has not been published as of the date of this publication. The summary below is based on the current text of the bill and official press releases issued by the Governor and related agencies. Discrepancies are noted as relevant.

We will continue to monitor developments and update this alert as necessary when the final text is issued. 

The RAISE Act applies to frontier models developed, deployed, or operated in New York. Frontier models are extremely large-scale AI systems (models trained using greater than 10^26 computational operations) with compute costs that exceed $100 million (and $5 million for specific “distilled” models); however, the chapter amendments reportedly may include a revenue threshold of $500 million to align with the TFAIA.

The RAISE Act’s legislative memo outlined key concerns that the legislators attempted to address, including statements from the AI industry regarding critical risk thresholds, testing that revealed models attempting self-replication and deception, risks related to biological weapon design assistance, and industry concerns about the lack of federal regulation, among others.

The RAISE Act’s key requirements for developers include:

  • Conducting annual safety reviews and independent third-party audits; updating protocols as needed.
  • Publishing information about safety protocols (with some redactions permitted). The current text of the bill also requires developers to grant access to relevant authorities, including the AG and the Division of Homeland Security and Emergency Services.
  • Reporting safety incidents (as defined) within 72 hours.
  • According to the current text, determining whether their models could cause “critical harm,” defined as the death or serious injury of 100 or more people or at least $1 billion in damage; the creation of chemical, biological, radiological, or nuclear weapons; or the model engaging in conduct that (1) occurs with limited human intervention and (2) would constitute a crime if committed by a human (involving certain levels of mens rea).
  • Creating a detailed safety and security protocol to prevent such critical harms and engaging in ongoing testing.
  • Maintaining specific records and reports.

The current text of the RAISE Act prohibits deployment of models posing an “unreasonable risk of critical harm.”

The New York Department of Financial Services will establish a new office to oversee AI development. According to official announcements, the AG will be permitted to seek civil penalties of up to $1,000,000 for a first violation and up to $3,000,000 for subsequent violations, a significant reduction from the current text of the law, which permits up to $10 million and $30 million, respectively. The RAISE Act also contains whistleblower protections for employees.

New York enacted several other AI-related bills in 2025, including laws related to the disclosure of personalized algorithmic pricing, the use of algorithmic pricing by landlords, the disclosure of synthetic performers in advertisements, and the use of digital replicas. California was also particularly active on the AI regulation front in 2025, and over a dozen other states now have AI-specific laws relevant to the private sector. We expect to see additional states propose AI-specific laws in 2026 despite the uncertainty created by the December EO.

Look for our forthcoming recommended privacy and AI compliance steps for 2026, covering laws that have recently become effective or will become effective in 2026.


[1] Note that the chapter-amended version of the RAISE Act does not appear to be publicly available on the New York State Senate legislative website as of January 8, 2026.