From Veto To Victory: California’s New AI Act Revives The National (And International) Conversation On AI Regulations
Defying the odds and industry lobbying pressure, California’s SB 53, known as the Transparency in Frontier Artificial Intelligence Act (TFAIA), is now officially law and a new framework for AI policy nationwide. With Governor Newsom’s signature, California not only helps define what responsible AI governance could look like but also proves that AI oversight and accountability can win (at least some) industry support.
Unlike its predecessor (SB 1047), vetoed a year ago for being too prescriptive and stringent, TFAIA is laser-focused on transparency, accountability, and striking the delicate balance between safety and innovation. This is particularly critical considering the state is home to 32 of the top 50 AI companies worldwide.
TFAIA Finds The Elusive Middle Ground
At its core, TFAIA requires safety protocols, best practices, and key compliance policies, but it stops short of prescribing specific risk frameworks or imposing new legal liability. Here’s a closer look at what’s in the new AI law:
- Transparency. The law applies to large developers of frontier AI models with annual revenue exceeding $500 million. They must now publicly share detailed frameworks describing how their models align with national and international safety standards and industry best practices. Companies that merely deploy or use AI systems, end users of AI products, and small AI developers are not subject to these requirements.
- Public-facing disclosure. Disclosures of general safety framework(s), risk mitigation policies, and model release transparency reports must be made available on the company’s public-facing website to ensure safety practices are accessible to both regulators and the public.
- Incident reporting. The law mandates that developers report critical safety incidents “pertaining to one or more” of their models to the California Governor’s Office of Emergency Services within 15 days. Incidents that pose an imminent risk of death or physical injury must be disclosed within 24 hours of discovery to law enforcement or public safety agencies.
- Whistleblower protections. It expands whistleblower protections, prohibits retaliation, and requires companies in scope to establish anonymous reporting channels. The California attorney general will begin publishing anonymized annual reports on whistleblower activity in 2027.
- Innovation support through “CalCompute.” The law establishes CalCompute, a publicly accessible cloud compute cluster under the Government Operations Agency. Its goal is to democratize research, drive fair competition, and foster development of ethical and sustainable AI.
- Continuous improvement. The Department of Technology is tasked with annually reviewing the law and recommending updates, ensuring that California’s AI rules evolve at the speed of innovation and adapt to new international standards.
Another Blueprint For States
With no foreseeable path to a US federal AI policy, and following Meta’s announcement of a super PAC to fund state-level candidates who are sufficiently pro-AI (read: sufficiently opposed to AI regulation), the battle over regulating AI is being fought in statehouses, not Congress. With TFAIA, California sends a clear message that states now have both the responsibility and the capacity to set meaningful standards for AI, and that they can do so without sacrificing innovation, growth, or opportunity.
California Ends The “Stop The Clock” Rhetoric
California’s newly adopted AI legislation breaks the spell of the “slow down and wait” narrative. It shows that regulation and successful AI development don’t just coexist; they reinforce each other. Expect this new law to puncture the “stop the clock” rhetoric and spur more governments to get serious about their own AI rules.
Companies Will Have To Monitor And Pay Attention
The three major state AI laws passed so far vary in focus and intent. California’s TFAIA focuses on transparency; Colorado’s Artificial Intelligence Act (CAIA) targets high-risk applications and consequential decisions, especially for consumers; and Texas’ Responsible Artificial Intelligence Governance Act (TRAIGA) concentrates on prohibiting harmful uses of AI, particularly those affecting minors. Organizations operating across state lines will need to monitor these and any new laws carefully, as they must comply with each state’s unique requirements.
If you are a Forrester client, schedule a guidance session with us to continue this conversation and get tailored insights and guidance for your AI compliance and risk management programs.