July marked a defining moment for global AI regulation, as momentum shifted decisively toward responsible innovation with an emphasis on guardrails. In both the US and the EU, policymakers removed or abandoned major roadblocks that stood in the way of laws mandating transparency and regulations enshrining risk management.

The US AI Moratorium Is Defeated, For Now

On July 1, the US Senate stripped the controversial moratorium on state-level AI regulations from the US mega bill, which then became federal law without it. The moratorium would have banned enforcement of state rules on AI models and systems for 10 years. It had been added to the bill in response to AI tech companies’ frustrations with the patchwork of state-level regulations.

The removal of this state-level AI regulation moratorium from the final law reaffirms that lawmakers can’t serve two masters: their constituents and big AI companies. States like California, Colorado, and New York are now free to continue pioneering their own AI safeguards, and states that have delayed similar rules can now accelerate their efforts.

The White House’s new AI Action Plan, released on July 23, doesn’t fundamentally change the situation. The plan once again links federal funding to state AI laws perceived as “burdensome” or “restrictive to innovation.” However, it doesn’t define these terms; it is unlikely to affect states like California and New York, which are less dependent on federal funds; and even if it does influence some states’ AI regulations, it won’t wipe them away.

It’s Not “Stop The Clock” On EU AI Regulations. Rather, Set The Alarm!

Meanwhile, on the other side of the Atlantic, the European Commission formally rejected technology lobbyists’ favorite “stop the clock” doctrine, ending months of speculation. The Commission reaffirmed that the enforcement timeline of the EU AI Act will proceed as originally announced. To the further despair of those who hoped for a delay in the law’s implementation, the EU also published the Code of Practice for General-Purpose AI Models last Thursday.

  • The EU AI Act’s enforcement timeline remains intact. If you operate AI in the EU or bring AI-generated insights to the EU market, you must comply with this regulation. Some requirements, including those on AI literacy and prohibited AI use cases, are already enforced, and others become enforceable on August 2. Among the latter, focus on the rules for providers of general-purpose AI. Providers, such as those of generative AI (genAI) models, are directly responsible for meeting these rules, but the requirements will also affect the value chain and the third-party risk management practices of any company using genAI models and systems, whether purchased directly from genAI providers or embedded in other technologies.
  • The Code of Practice for General-Purpose AI Models supports compliance. This voluntary code of practice, crafted by 13 independent experts and reviewed by over 1,000 stakeholders, helps providers of genAI models prepare to meet their obligations. It covers safety and security, copyright, and transparency. These are critical priorities for every company, not just genAI providers, and the code can help any company develop its compliance practices in these areas. It can also indirectly support companies redesigning their third-party risk management practices for AI providers. Check it out alongside the Commission’s new guidance on implementing the associated requirements.

A Turning Point For Every AI Compliance Strategy

Multinationals, and any organization that operates in or has customers in other regions, must also comply with the regulations of those regions, so these twin AI regulatory developments will impact your company’s AI risk and compliance strategy. For risk and compliance pros, that means business as usual: you’ll have to make AI regulatory decentralization in the US and intensifying AI enforcement in the EU work for your organization. In doing so, you’ll need to:

  • Continue to monitor state AI laws in the US. Track state-level AI proposals; monitor the status of regulations, pending implementation dates, and compliance timeframes; and keep abreast of upcoming state AI laws like the NY State RAISE Act. Prepare for AI laws that were placed on hold to now be fast-tracked.
  • Use the EU AI Act as a means to the goal of trustworthy AI. Though imperfect, many of the EU AI Act’s requirements are useful steps toward building a trustworthy AI framework, including robust data privacy, security, data governance, and elements of effective risk management. All of these disciplines require organizations to assess the risks of their AI use cases, and the Act’s risk pyramid is a powerful way to kick-start your risk assessments and the associated mitigations.

If you are a Forrester client, schedule a guidance session with us to continue this conversation and get tailored insights and guidance for your AI compliance and risk management programs.