AI Task Force

Oct. 22, 2025

Federal and State Regulatory Update: Growing Tension Over AI Oversight

By Jake Kohn

This is part of a series from Nelson Mullins' AI Task Force. We will continue to provide additional insight on both domestic and international matters across various industries spanning both the public and private sectors.

Artificial intelligence (AI) policy in the U.S. is entering a pivotal phase. The Trump administration is moving toward a deregulation-focused, market-driven framework for AI — while states, led by California, are advancing their own safety and transparency requirements. These parallel efforts are creating growing tension over who sets the rules for AI’s development and use, particularly in highly regulated sectors like health care. Understanding this evolving landscape is critical for organizations deploying or developing AI tools.

The Department of Health and Human Services (HHS) has clarified that the Trump administration does not support the private-sector-led Coalition for Health AI (CHAI) as the primary mechanism for vetting artificial intelligence tools in medicine. Deputy HHS Secretary Jim O’Neill emphasized that CHAI “does not speak for HHS” and cautioned against market consolidation under a single industry body. The administration instead favors a more decentralized, transparent, and market-driven approach to AI oversight in health care.

Separately, the Food and Drug Administration (FDA) issued a Request for Information (RFI) seeking feedback from patients, providers, and developers on how to best measure and evaluate the performance of medical AI tools — a signal that federal agencies are continuing to gather input as they assess potential regulatory frameworks.

More broadly, the Trump administration’s AI Action Plan emphasizes deregulation, infrastructure expansion, and global technology competitiveness, focusing on export controls and investment rather than new rulemaking. It notably sidesteps contentious areas such as AI and copyright, leaving those to the courts.

On Capitol Hill, Senate Commerce Chair Ted Cruz (R-TX) and House GOP Energy and Commerce leaders are exploring a federal moratorium on state and local AI laws, seeking to establish national consistency. However, bipartisan skepticism leaves the proposal’s path forward uncertain.

Federal–State Tension: Competing Approaches to AI Governance

While Washington prioritizes a light-touch, innovation-first approach, states—especially California—are asserting their own leadership. California recently enacted a landmark AI safety law requiring frontier AI developers to conduct and disclose safety and risk assessments. The law is designed to evolve as technology advances and is viewed as a potential blueprint for other states and even Congress.

Governor Gavin Newsom’s pragmatic approach—signing bills shaped through negotiation with industry—has allowed moderate measures to advance, even as more aggressive proposals on labor or child safety stall. Some AI firms, such as Anthropic, are supporting these moderate efforts to help shape future federal standards, while others are resisting state-based frameworks they view as inconsistent with the administration’s “America First” AI goals.

This dynamic underscores an emerging regulatory divide: a federal government favoring deregulation and market competition, versus states pursuing flexible, risk-based guardrails. The outcome of this push and pull will help define how innovation, accountability, and public trust evolve in the U.S. AI ecosystem.

Why It Matters for Industry

For technology, health care, and other AI-adopting sectors, these developments highlight the need to stay closely informed and strategically engaged. As federal policy moves toward deregulation while states build their own frameworks, companies risk compliance uncertainty and fragmented standards. Active engagement—through public comments, industry coalitions, and direct government relations—will be critical to ensuring workable policies that balance innovation with safety and public confidence.

The Nelson Mullins federal advocacy team has extensive AI experience and brings clients an understanding of how legislative, administrative, regulatory, and political processes operate on Capitol Hill and in state houses, and how those processes, in turn, impact your industry and members.

Please reach out to Jake Kohn for any questions regarding this topic.

Follow Nelson Mullins' Idea Exchange for more thought leadership from our AI Task Force, or click here to subscribe to emails from the Nelson Mullins AI Task Force blog.