Drowning In Rules: Navigating America’s AI Regulatory Patchwork
US companies are drowning in AI rules. With a labyrinth of conflicting state laws and no single federal requirement, even the most responsible innovators are struggling to stay afloat. California’s landmark Transparency in Frontier Artificial Intelligence Act proved that states can regulate AI without killing innovation, but it also underscores a hard truth: The current US patchwork of AI regulations is making accountability more complex, not less.
Three Forces Are Throwing Compliance Into Chaos
In our new report, Enza Iannopollo and I dive into the dynamics that make navigating US AI regulations increasingly challenging yet absolutely critical. To Navigate The Patchwork Of US AI Regulations, you must understand three forces:
- A flurry of state laws. With over 1,100 AI bills introduced across states in 2025, businesses face an impossible task as each state creates its own compliance regime, definitions, and penalties. Companies operating across state lines have their hands full monitoring and implementing multiple requirements simultaneously. Sadly, for many, this means diverting resources from innovation to compliance bureaucracy.
- Federal preemption. Federal action aims to preempt state-level AI laws, with the administration’s AI Litigation Task Force challenging regulations that “unconstitutionally regulate interstate commerce.” The December 2025 executive order calls for a national framework forbidding conflicting state laws, but instead of delivering clarity, its federal preemption clause has raised legal issues and simply added another regulatory layer.
- A litigation juggernaut on the horizon. As AI adoption accelerates and US federal AI regulation lags, the courtroom, not Congress, serves as the primary arena for AI concerns. Litigation continues to influence the AI conversation, and 80% of US corporate counsel now project an increase in class-action lawsuits stemming from AI use. These aren’t academic debates; they’re creating the compliance requirements of tomorrow while organizations scramble to meet the conflicting demands of today.
The GSA’s Proposed AI Clause Adds Another Layer of Uncertainty
The US General Services Administration’s proposed AI clause, GSAR 552.239-7001, released for public comment, would impose sweeping AI requirements on government contractors, including mandates for exclusive use of American-made AI systems, 72-hour incident reporting, and direct liability for vendor compliance. Comments submitted by organizations such as the US Chamber of Commerce and the Coalition for Common Sense in Government Procurement highlight multiple concerns, including vague requirements that defy clear interpretation and would create more chaos than clarity.
Don’t Expect Regulatory Clarity — Act Now
The bottom line is that there’s no single playbook coming from Washington any time soon. Companies must turn this regulatory chaos into a competitive advantage by treating regulatory complexity as a strategic capability, not a compliance headache. AI risk and compliance pros, you must now:
- Adapt your governance framework to account for agentic AI. Responsible AI in the age of agentic systems means governing autonomous decision-making in real time, not through periodic or ad hoc reviews. Adopt Forrester’s AEGIS framework to embed explainability, accountability, and trust into your AI infrastructure, in alignment with your risk appetite and values (see the first sketch after this list).
- Cross-map AI requirements across state laws and regimes. Doing dozens of separate assessments is neither practical nor effective. Use Forrester’s AI regulatory crosswalk to map controls across regulations, such as the EU AI Act, and frameworks, like the NIST AI Risk Management Framework and ISO 42001, to fast-track your readiness (see the second sketch after this list).
- Complement AI governance with AI risk management. Understanding the difference between the two is critical. While governance hands you principles and policies, it doesn’t identify threats or prevent harm. AI risk management helps you move from intent to protection. It assigns accountability, surfaces real risks, and mitigates damage as AI moves from design to deployment to scale.
- Get a handle on the AI risk of third-party providers. Treat third-party AI use as a risk vector, not a procurement issue. That means identifying AI exposure early and often. Start by asking the right questions when evaluating AI products or services. Ask for evidence, test and validate claims, bake AI clauses into contracts, and reinforce risk expectations through SLAs, monitoring, and escalation paths. And add embedded AI (new AI features and capabilities of existing vendors and suppliers) to your AI governance and risk program.
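
To make the real-time governance point concrete, here is a minimal, hypothetical sketch in Python of a policy gate that reviews each agent action before it executes and logs every decision for accountability. The names (`PolicyGate`, `AgentAction`) and the example rules are illustrative assumptions, not Forrester’s AEGIS framework itself.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentAction:
    agent_id: str
    tool: str        # e.g., "send_email", "update_record"
    payload: dict
    risk_tier: str   # e.g., "low", "medium", "high"


@dataclass
class PolicyGate:
    # Tools the agent may invoke autonomously; everything else is blocked.
    allowed_tools: set
    # Risk tiers that always require a human in the loop.
    escalate_tiers: set = field(default_factory=lambda: {"high"})
    audit_log: list = field(default_factory=list)

    def review(self, action: AgentAction) -> str:
        """Return 'allow', 'escalate', or 'block' and record the decision."""
        if action.tool not in self.allowed_tools:
            decision = "block"
        elif action.risk_tier in self.escalate_tiers:
            decision = "escalate"  # route to a human reviewer
        else:
            decision = "allow"
        # Accountability: log who tried to do what, when, and the outcome.
        self.audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent_id": action.agent_id,
            "tool": action.tool,
            "decision": decision,
        })
        return decision


gate = PolicyGate(allowed_tools={"search_kb", "draft_reply"})
print(gate.review(AgentAction("agent-7", "draft_reply", {}, "low")))     # allow
print(gate.review(AgentAction("agent-7", "wire_transfer", {}, "high")))  # block: tool not allowed
```

The point of the gate sitting in the execution path, rather than in a quarterly review, is that no autonomous action reaches production systems ungoverned or unlogged.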
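
And here is an equally minimal sketch of the crosswalk idea: define each internal control once, map it to the external requirements it evidences, and invert the map to see your coverage per regime, so one assessment does the work of many. The control IDs and mappings below are illustrative placeholders, not Forrester’s actual crosswalk content.

```python
# One internal control -> the external requirements it evidences.
CROSSWALK = {
    "CTL-01 AI inventory & classification": [
        ("EU AI Act", "risk classification of AI systems"),
        ("NIST AI RMF", "MAP function"),
        ("ISO 42001", "AI system impact assessment"),
    ],
    "CTL-02 Automated-decision disclosure": [
        ("EU AI Act", "transparency obligations"),
        ("Colorado AI Act", "consumer notice for consequential decisions"),
    ],
    "CTL-03 Incident response & reporting": [
        ("NIST AI RMF", "MANAGE function"),
        ("GSAR 552.239-7001 (proposed)", "72-hour incident reporting"),
    ],
}


def coverage_by_regime(crosswalk: dict) -> dict:
    """Invert the map to see, per regime, which controls you can reuse."""
    by_regime: dict = {}
    for control, mappings in crosswalk.items():
        for regime, requirement in mappings:
            by_regime.setdefault(regime, []).append((control, requirement))
    return by_regime


for regime, controls in coverage_by_regime(CROSSWALK).items():
    print(f"{regime}: {len(controls)} mapped control(s)")
```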
If you are a Forrester client, schedule a guidance session with us to continue this conversation and get tailored insights and guidance for your AI compliance and risk management programs.