EU AI Act Enters Force, Compliance Clock Starts

Europe’s landmark AI law begins phased rollout

The European Union’s Artificial Intelligence Act has entered into force, starting a phased rollout over the next two years. The law sets rules for how AI is built and used in the bloc. It follows a risk-based approach and introduces new duties for developers and users. Regulators say the goal is to build trust while supporting innovation.

“The AI Act is the first comprehensive law on AI worldwide,” the European Commission states on its official explainer. The measure was adopted after years of debate and negotiation. It arrives as businesses rush to deploy generative AI in products and workflows.

What the law does

The Act groups systems into categories by potential harm. Higher risk means tighter controls. The law also adds tailored rules for general-purpose AI (GPAI), including powerful models used across many tasks. A simple cataloging sketch follows the list below.

  • Unacceptable risk: Systems that pose a clear threat to safety or rights are banned. Examples include manipulative uses that exploit vulnerabilities and “social scoring by public authorities.”
  • High risk: Tools used in areas like hiring, education, critical infrastructure, medical devices, and law enforcement face strict requirements. These include risk management, data governance, cybersecurity, documentation, and human oversight.
  • Limited risk: Systems must meet transparency duties, such as telling users they are interacting with AI when it is not obvious.
  • Minimal risk: Most AI applications fall here and are largely unaffected beyond existing laws.
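A minimal, purely illustrative sketch of how a compliance team might record these tiers in an internal inventory. The tier names mirror the Act's categories; the record fields, example systems, and their classifications are hypothetical and would always need legal review.

    from dataclasses import dataclass
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"  # banned outright
        HIGH = "high"                  # strict requirements apply
        LIMITED = "limited"            # transparency duties
        MINIMAL = "minimal"            # largely unaffected

    @dataclass
    class AISystemRecord:
        name: str
        use_case: str
        tier: RiskTier

    # Hypothetical catalog entries; real classification depends on context.
    inventory = [
        AISystemRecord("resume-screener", "hiring", RiskTier.HIGH),
        AISystemRecord("support-chatbot", "customer service", RiskTier.LIMITED),
        AISystemRecord("spam-filter", "email triage", RiskTier.MINIMAL),
    ]

    for record in inventory:
        print(f"{record.name}: {record.tier.value}")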

The law also creates an EU-level AI Office for oversight of advanced models and coordination with national authorities. The Office and national regulators will together supervise compliance, issue guidance, and coordinate enforcement.

Timeline at a glance

  • Now: The Act is in force. EU bodies are setting up governance, including the AI Office and national market surveillance authorities.
  • +6 months: Bans on “unacceptable-risk” systems apply.
  • +9 months: Voluntary codes of practice are expected to guide early implementation, particularly for GPAI providers.
  • +12 months: Core rules for general-purpose AI models begin to apply, including technical documentation and model governance duties. Extra obligations may apply to the most capable models.
  • +24 months: The bulk of high-risk system requirements take effect, along with conformity assessments and post-market monitoring.

Companies with EU customers or operations should map products to these phases now. Timing may vary by system type and the specific obligations triggered; a brief sketch of how the offsets translate into calendar dates follows.
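As a rough planning aid, the offsets above can be turned into target dates counted from the Act's entry into force. A minimal sketch, assuming the offsets listed here and a placeholder start date; the helper function and milestone labels are illustrative, not official terminology.

    from datetime import date

    def add_months(start: date, months: int) -> date:
        """Shift a date forward by whole months (day clamped to 28 for simplicity)."""
        month_index = start.month - 1 + months
        year = start.year + month_index // 12
        month = month_index % 12 + 1
        return date(year, month, min(start.day, 28))

    # Offsets in months from entry into force, per the phased timeline above.
    MILESTONES = {
        "prohibitions apply": 6,
        "codes of practice expected": 9,
        "GPAI obligations begin": 12,
        "bulk of high-risk rules apply": 24,
    }

    entry_into_force = date(2024, 8, 1)  # placeholder; substitute the actual date

    for label, offset in MILESTONES.items():
        print(f"{add_months(entry_into_force, offset).isoformat()}: {label}")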

Who is affected

The law targets the full AI value chain. Duties differ by role.

  • Providers (developers) must build compliance into design and training. They will need technical files, risk controls, and quality management systems for high-risk systems.
  • Deployers (users in organizations) face obligations, too. These include conducting impact assessments when required, keeping logs, and ensuring competent human oversight.
  • Importers and distributors must verify CE markings and documentation before placing systems on the EU market.
  • GPAI providers must meet documentation, copyright, and model governance duties. The most capable models may face systemic risk tests, incident reporting, and cybersecurity requirements.

Open-source components receive certain exemptions. But if an open-source model is integrated into a commercial high-risk system, the finished product must comply.

Penalties and oversight

Fines can be significant. The law allows penalties up to 7% of global annual turnover for the most serious violations, such as using prohibited systems. Other breaches can draw lower, but still substantial, fines. Authorities can also order fixes or remove systems from the market.

National regulators will conduct checks and respond to complaints. The AI Office will coordinate cross-border cases and guidance. It also holds new powers over the largest models, including the ability to request information and commission evaluations.

Business response and readiness

Companies are building AI governance programs to prepare. Many are aligning with the U.S. National Institute of Standards and Technology’s AI Risk Management Framework, which recommends organizations “govern, map, measure, and manage” AI risk. The approach has become a common baseline for compliance teams in Europe and beyond.

Large firms are running gap assessments, updating data governance, and cataloging models. Vendors are refreshing product roadmaps to include logging, testing, and human-in-the-loop controls. Smaller companies are seeking clear templates from industry groups and regulators; a minimal tracking sketch follows the checklist below.

  • Data: Check the legality and quality of training and validation sets. Document sources and consent where needed.
  • Testing: Build pre-release and ongoing testing for bias, robustness, and security.
  • Transparency: Prepare user-facing notices and system cards that explain capabilities and limits.
  • Oversight: Define human review points for important decisions. Train staff and assign accountability.
  • Incident response: Set up channels to monitor, log, and report serious incidents or model malfunctions.
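For teams working through those steps system by system, even a lightweight structured record makes gaps visible at a glance. A minimal sketch; the field names are hypothetical and map one-to-one to the checklist above.

    from dataclasses import dataclass

    @dataclass
    class ReadinessRecord:
        """Hypothetical per-system readiness tracker mirroring the checklist above."""
        system_name: str
        data_sources_documented: bool = False
        bias_and_robustness_tested: bool = False
        user_notice_prepared: bool = False
        human_oversight_assigned: bool = False
        incident_channel_configured: bool = False

        def open_items(self) -> list[str]:
            """List the checklist fields that are still unfinished."""
            return [
                name for name, done in vars(self).items()
                if isinstance(done, bool) and not done
            ]

    record = ReadinessRecord("resume-screener", data_sources_documented=True)
    print(record.open_items())  # remaining gaps for this system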

Analysts say compliance costs will fall most heavily on high-risk products and advanced models. But they note potential benefits. Clear rules can reduce uncertainty and support cross-border deployment.

Civil society and academic views

Rights groups have pressed for hard limits on surveillance and discrimination. They argue that strong enforcement is key. Researchers also call for more access to data and model interfaces to study real-world risks. Many support phased rules if they come with transparency and accountability hooks.

Universities and standards bodies are preparing evaluations and benchmarks. Work is under way on testing methods for bias, generalization, and safety. The law encourages harmonized standards, which may lower costs if adopted widely.

Global context

The EU move adds momentum to a broader regulatory wave. The NIST AI RMF is voluntary but influential. The G7 has promoted the Hiroshima AI Process. The OECD AI Principles call on AI actors to respect human rights and the rule of law. At the United Nations, member states endorsed a resolution that urges safe, secure, and trustworthy AI development.

For global vendors, this means building “privacy- and safety-by-design” into products. Many plan to ship a single, compliant stack rather than maintain separate versions for each region.

What to watch next

Key milestones in the coming months will shape how the law works in practice.

  • Guidance: Clarifications from the AI Office and national authorities on documentation, testing, and GPAI thresholds.
  • Standards: European standards bodies are drafting technical norms for risk management, data governance, and transparency.
  • Codes of practice: Early playbooks for GPAI and deployers that can later become formalized.
  • Enforcement cases: Initial actions will test how regulators interpret the rules and set precedents for penalties.

For now, the message to companies is simple. Start early. Map your AI systems. Prioritize high-risk use cases. Invest in testing, documentation, and oversight. Build a governance track that can scale. As one EU summary notes, the Act aims to keep AI systems “safe and respectful of fundamental rights” while allowing innovation. The compliance clock is running.