With federal preemption of AI regulation appearing unlikely now that U.S. senators have voted to strip the proposed preemption provision from the federal budget bill, it is a good time to take stock of U.S. state-level AI regulation.
In the second half of 2024, many observers had predicted a rapid spread of EU‑style, cross‑sector “AI Acts” across the U.S. states. Colorado had recently enacted the Colorado Artificial Intelligence Act (“CAIA”); Virginia, Texas, California, Connecticut, and others were considering similar comprehensive frameworks; and lobbyists were warning of a patchwork of dozens of different state AI laws.
But as we begin the second half of 2025, the picture looks very different. It is now doubtful that CAIA will take effect as drafted and on schedule, and every other state bill proposing comprehensive AI regulation has stalled, been vetoed, or been pared back dramatically. Instead, momentum is building behind narrower, fit-for-purpose AI laws.
In this Debevoise Data Blog post, we survey the current status of state-level AI legislation, consider why comprehensive AI bills are struggling, review some of the fit-for-purpose rules that are filling the gaps, and provide guidance for businesses that are trying to chart a course through uncertain regulatory waters.
Comprehensive State AI Laws in Retreat
- Colorado. CAIA remains the only comprehensive, cross‑sector AI framework adopted by any state. But almost a year later, few politicians in Colorado seem particularly pleased with it. The law was signed with express reservations, and efforts to amend it before the end of the Colorado 2025 legislative session all failed. There is speculation that the governor will call a special session for the legislature to forge a workable compromise on CAIA’s substantive provisions, but with a looming effective date of February 2026, a strategic delay of implementation seems likely.
- Virginia. Virginia’s High‑Risk Artificial Intelligence Developer and Deployer Act (HB-2094) cleared both chambers this spring only to be vetoed by the governor. The bill would have imposed obligations on participants at various stages of the AI value chain, including requirements for impact assessments, bias testing, and detailed documentation and disclosures. Critics argued that these requirements were overly broad, unclear in scope, and difficult to implement without further regulatory guidance—all concerns that the governor cited in his veto message.
- Connecticut. Connecticut’s attempt to pass a comprehensive risk-based AI bill preceded Colorado’s adoption of CAIA. Introduced in February 2024, the state’s SB-2 proposed a tiered approach to regulating AI systems based on risk classification, with requirements for algorithmic impact assessments, transparency measures, and governance policies for high-risk applications. The bill drew early support from privacy advocates but encountered pushback from the business community and state officials concerned about competitiveness and enforceability. The bill stalled in 2024, but it was reintroduced in the 2025 session, and it passed the state Senate in May 2025. It now sits as it did in 2024, awaiting action from the House and facing potential opposition from Connecticut Governor Ned Lamont.
- Texas. The state’s initial draft AI bill would have established risk-based requirements for both public and private sector AI use, including mandatory algorithmic impact assessments, transparency obligations, and limitations on certain applications deemed high-risk. Following sharp criticism from industry stakeholders and state agencies about potential overreach and enforceability challenges, lawmakers pared it back dramatically. The final product, the Texas Responsible Artificial Intelligence Governance Act (“TRAIGA”), passed with broad support in the state legislature and will become effective January 1, 2026. As adopted, TRAIGA is addressed almost entirely to government procurement and use of AI systems, with private sector prohibitions limited to a small number of egregious, knowing, and intentional instances of misuse. These include: (1) developing or deploying an AI system in a manner that intentionally encourages or incites a person to commit physical self-harm, to harm another person, or to engage in criminal activity; (2) developing or deploying an AI system with the sole intent of infringing, restricting, or impairing an individual’s constitutional rights; (3) developing or deploying an AI system with the intent of unlawfully discriminating against a protected class in violation of federal or state law; and (4) developing or distributing an AI system with the sole intent of producing or distributing child pornography, unlawful explicit deepfake videos or images, or sexually explicit text-based conversations in which the AI system impersonates or imitates a minor.
Why Broad State “AI Acts” Lost Momentum
There are several reasons for the loss in momentum:
- Adequacy of Existing Enforcement Tools. Experience has shown less of a gap in existing law than many assumed. Many of the potential harms from AI adoption that have drawn the greatest attention—including fraud, discrimination, and manipulation of consumers—are already illegal, and regulators like the Federal Trade Commission and the Securities and Exchange Commission have shown they can police AI-related misconduct under existing technology-neutral statutes.
- Innovation Concerns. Amid intense competition to attract AI investment and high hopes for commercial growth, governors, economic‑development offices, and industry representatives have successfully argued that broad, ambiguous, and burdensome frameworks could drive AI talent and capital to other states.
- Patchwork Fatigue. Even consumer groups have acknowledged that dozens of divergent state “AI Acts” might raise compliance costs for businesses without materially increasing protection.
- No Proven Success Model. The buyers’ remorse visible among Colorado policymakers and EU member states has made it difficult for newcomers to conclude that the benefits of comprehensive regulation are worth its costs. As with CAIA, the EU AI Act’s implementation timeline now appears to be in jeopardy: the Swedish Prime Minister recently proposed delaying the next phase of the Act, citing the failure to publish the technical standards needed to facilitate compliance.
- The Risk of Federal Preemption. The looming prospect of federal preemption may also explain the shift away from comprehensive AI regulation: it would be understandable if state legislators were reluctant to sink political capital into laws they feared could be nullified overnight.
Targeted, Domain‑Specific Rules Fill the Gap
As omnibus legislation has faltered, fit‑for‑purpose statutes and regulations have proliferated in areas where gaps were perceived in existing law.
- Insurance. States like Colorado and New York have adopted targeted insurance regulations or guidance focused on the risk of unfair discrimination that could result from the use of AI in pricing and underwriting insurance. Key examples include Colorado DOI Regulation 10‑1‑1 and New York DFS Insurance Circular Letter No. 7.
- Child Safety & Deepfakes. More than 30 states have now passed legislation prohibiting AI-generated child sexual abuse material (“CSAM”) and non-consensual sexual deepfakes.
- Political Disinformation. California has led efforts to regulate AI-generated political misinformation with measures like AB 1831, AB 2655, AB 2839, and AB 2355, which include criminal penalties, takedown obligations, and disclosure requirements for AI‑generated content distributed during election seasons.
- Consumer Chatbot Laws. Utah’s Artificial Intelligence Policy Act (“UAIPA”) took effect in May 2024 and requires transparency when generative AI is used to simulate interactions with a consumer. For regulated occupations, the disclosures are required without any prompting by the consumer; otherwise, they are required only if a consumer specifically asks whether they are interacting with generative AI. Taking a similar approach, on June 12, 2025, Maine passed LD 1727, a one-page bill that requires clear and conspicuous disclosures when an AI chatbot is used to engage in trade and commerce with a consumer in a manner that would mislead the consumer into believing they are interacting with a human being. The bill frames violations of the disclosure requirement as violations of the Maine Unfair Trade Practices Act.
- Mental-Health Chatbots. Utah’s HB 452 imposes requirements on mental-health chatbot providers, including mandatory AI disclosures and escalation protocols for users in crisis, and provides for steep fines for noncompliance.
- “AI Companion” Risks. New York’s AI Companion Law (2025-A6767) creates duties for developers of AI companions (which are AI systems designed to simulate sustained human-like relationships with users) to detect suicidal or violent statements and to provide users with periodic reminders that they are interacting with a generative AI system.
These targeted provisions frequently share two hallmarks:
- Clear Theory of Harm. Each of the statutes or regulations above is targeted to address a concrete risk of specific harm resulting from particular uses of AI (e.g., biased insurance pricing, sexual exploitation, voter deception). The focus on defined harms helps in two ways: (1) it galvanizes support from key constituencies, and (2) it makes designing mitigations or requirements easier.
- Regulator Domain Expertise. In many instances, there are existing agencies—insurance commissioners, financial regulators, attorneys general—that are well-positioned to enforce these statutes or regulations within their traditional regulatory wheelhouses.
Key Takeaways
- Opponents of comprehensive regulation shouldn’t declare victory yet. As proposals for narrow AI regulations proliferate, the sheer number and diversity of bills could lead to renewed interest in one-and-done comprehensive legislation. For now, however, multiple overlapping comprehensive state AI laws look less likely than they did even a few months ago. And that is probably a good thing.
- Sector-specific compliance expectations are growing. If your business involves retail financial services, insurance, healthcare, education, employment, or targeted advertising, assume more specialized AI rules are coming.
- Leverage existing programs as part of effective operational risk management. In the face of regulatory uncertainty, the best approach is to focus first on operational risks and compliance with existing laws, rather than attempting to guess how it will all shake out.
*****
To subscribe to the Data Blog, please click here.
The Debevoise STAAR (Suite of Tools for Assessing AI Risk) is a monthly subscription service that provides Debevoise clients with an online suite of tools to help them with their AI adoption. Please contact us at STAARinfo@debevoise.com for more information.
The cover art used in this blog post was generated by ChatGPT 4o.