After rejecting the “stop the clock” lobbying efforts from the tech industry, the EU is moving forward as planned with the next phase of the EU AI Act.

If your company operates AI systems in the EU or places AI-generated output on the EU market, you need to pay close attention — especially to the rules for general-purpose AI (GPAI) providers, which include providers of generative AI (genAI) models. But the impact doesn’t stop there. Any organization using genAI — whether purchased directly or embedded in other technologies — will likely face ripple effects across its value chain and third-party risk management program.

Despite speculation about possible delays, the EU has held firm on its timeline and released a range of tools to help companies prepare. Every company, not only GPAI providers, must be familiar with:

  • EU guidelines on the scope of GPAI providers’ requirements. The EU has defined key terms — such as what qualifies as a “general-purpose AI model” — and introduced a training‑compute threshold as a practical benchmark (see the first sketch after this list). These elements help every company clarify critical concepts of the regulation, such as which significant modifications trigger provider obligations and how to interpret “general-purpose” AI. Developed through extensive consultation, the guidelines are not legally binding, but they reflect the European Commission’s enforcement interpretation and are intended to guide providers in preparing for their regulatory obligations.
  • The EU code of practice for GPAI providers. This voluntary framework is designed to help companies align with the EU AI Act’s requirements ahead of formal enforcement. The code outlines practical steps GPAI providers can take to improve transparency, safety, and accountability in their AI systems, including guidance on model documentation, risk mitigation, and responsible deployment practices. Major AI companies such as OpenAI, Mistral, and Anthropic have already signed on, signaling growing industry support for trustworthy, harmonized AI governance in the EU. For companies that use GPAI models and systems, the code of practice is a useful guide for updating third-party risk management frameworks that cover GPAI providers.
  • Template for transparency of training data for GPAI providers. This mandatory template requires all GPAI providers to publish a public summary of the major data sources used to train their models. The summary must cover training content across all stages — from pre‑training to fine‑tuning — and include data types such as public and private datasets, web‑scraped content, and user‑generated and synthetic data (see the second sketch below). Companies using GPAI can obtain these summaries via providers’ websites and distribution channels and should expect updates at least every six months when a provider trains on substantial new datasets.
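
To make the training‑compute benchmark concrete, here is a minimal Python sketch. It uses the widely cited “~6 × parameters × training tokens” rule of thumb to estimate cumulative training FLOP and compares the result against two figures: the 10^23 FLOP indicative GPAI benchmark from the Commission’s guidelines and the 10^25 FLOP systemic‑risk presumption in the act itself. The estimation formula, the model figures, and the function names are illustrative assumptions, not official methodology; consult the official texts for authoritative criteria.

```python
# Illustrative sketch only: estimates cumulative training compute and compares
# it against the thresholds referenced in the EU AI Act and the Commission's
# GPAI guidelines. The 6*N*D approximation and the example model are
# assumptions for illustration, not an official calculation method.

GPAI_INDICATIVE_THRESHOLD_FLOP = 1e23  # indicative GPAI benchmark (Commission guidelines)
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25    # presumption of systemic risk (Art. 51)

def estimate_training_flop(parameters: float, training_tokens: float) -> float:
    """Rough estimate of training compute via the common ~6 * N * D rule of thumb."""
    return 6 * parameters * training_tokens

def classify(total_flop: float) -> str:
    """Map an estimated compute figure to the relevant regulatory bucket."""
    if total_flop >= SYSTEMIC_RISK_THRESHOLD_FLOP:
        return "presumed to pose systemic risk (additional Art. 51 obligations)"
    if total_flop >= GPAI_INDICATIVE_THRESHOLD_FLOP:
        return "likely in scope as a GPAI model (provider obligations apply)"
    return "below the indicative GPAI benchmark (assess the other criteria)"

if __name__ == "__main__":
    # Hypothetical model: 70B parameters trained on 15T tokens.
    flop = estimate_training_flop(70e9, 15e12)
    print(f"Estimated training compute: {flop:.2e} FLOP -> {classify(flop)}")
```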

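The second sketch shows one way a compliance team might capture a provider’s training‑data summary in machine‑readable form for its third‑party risk records. The official template is a Commission document; every field name, value, and the provider/model names below are hypothetical assumptions, chosen only to mirror the data categories the bullet above describes.

```python
# Illustrative sketch only: a hypothetical machine-readable rendering of the
# kinds of fields a training-data transparency summary covers. The official
# template is a Commission document; all field names and values here are
# assumptions for illustration.

training_data_summary = {
    "provider": "ExampleAI GmbH",        # hypothetical provider
    "model": "example-gpai-1",           # hypothetical model name
    "training_stages": ["pre-training", "fine-tuning"],
    "data_sources": {
        "public_datasets": ["(named public corpora)"],
        "private_datasets": ["(licensed third-party data)"],
        "web_scraped_content": ["(crawler and domain categories)"],
        "user_generated_data": ["(opt-in user interactions)"],
        "synthetic_data": ["(model-generated augmentation)"],
    },
    "last_updated": "2025-08-01",  # expect refreshes at least every six months
}

# Print a quick inventory view, e.g., for a vendor-review checklist.
for source_type, entries in training_data_summary["data_sources"].items():
    print(f"{source_type}: {entries}")
```
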
The EU AI Act isn’t just a regional regulation — it’s the first comprehensive, binding framework for trustworthy AI, and its reach extends well beyond Europe. Whether you like it or not, it’s set to influence AI governance, risk management, and compliance practices around the world. And while the act isn’t perfect, it offers practical steps toward building more responsible AI systems — including stronger data governance, privacy, security, and risk oversight. At the heart of this is the act’s AI risk pyramid, which gives companies a structured way to evaluate and mitigate the risks of their AI use cases; the sketch below shows how that triage might look in practice.
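
Here is a minimal Python sketch of a use‑case triage table keyed to the act’s four risk tiers. The tier names track the act’s risk pyramid (unacceptable, high, limited, minimal); the example use cases, the default‑to‑high‑risk rule, and the triage function are hypothetical illustrations of how a team might structure its AI inventory, not legal advice.

```python
# Illustrative sketch only: a simplified mapping of the EU AI Act's risk tiers
# to example use cases. Tier names follow the act's risk pyramid; the use
# cases and triage logic are hypothetical, not legal advice.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice - do not deploy"
    HIGH = "high risk - conformity assessment, risk management, human oversight"
    LIMITED = "limited risk - transparency obligations (e.g., disclose AI use)"
    MINIMAL = "minimal risk - voluntary codes of conduct"

# Hypothetical triage table a compliance team might maintain for its AI inventory.
USE_CASE_TIERS = {
    "social scoring of individuals": RiskTier.UNACCEPTABLE,
    "CV screening for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Look up a use case; default to HIGH so unknowns get reviewed, not waved through."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

for case in [*USE_CASE_TIERS, "new, unreviewed use case"]:
    print(f"{case}: {triage(case).value}")
```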

If you have any questions about compliance readiness and best practices, what the EU AI Act means for your AI strategy, or how to use it to build trustworthy AI, schedule a guidance session with me. And keep following my latest research — new reports on software offerings designed to help companies meet the requirements of AI regulations are on the way!