EU AI Act Starts to Bite: The Compliance Countdown

Europe’s landmark AI law enters its enforcement phase

Europe’s new rules for artificial intelligence are moving from text to practice. The EU Artificial Intelligence Act entered into force in August 2024, setting a staged path to enforcement. The law takes a risk-based approach. It restricts or bans some uses of AI while setting obligations for others. Companies now face a firm compliance clock.

EU officials have framed the law as a targeted intervention. The European Commission says the Act “does not regulate AI as a technology”, but rather focuses on how AI is used and the risks it creates. The law is the first of its kind at this scale. It aims to protect fundamental rights while keeping innovation alive.

What the AI Act does

The Act divides AI systems into categories of risk. The obligations grow as risks increase.

  • Prohibited practices. These include AI used for social scoring of individuals. They also include systems that manipulate behavior in ways likely to cause harm, and certain types of biometric categorization involving sensitive traits. The law also sharply restricts remote biometric identification in public spaces, with limited exceptions under strict safeguards.
  • High-risk systems. These cover AI used in areas like critical infrastructure, employment, education, law enforcement decision support, and medical devices. Providers must meet requirements for data governance, documentation, risk management, human oversight, and cybersecurity.
  • General-purpose AI (GPAI) models. Powerful models that can be adapted to many tasks face transparency duties. The most capable models have extra obligations around model evaluation, security, and incident reporting.
  • Limited-risk systems. Some tools must provide basic transparency. For example, users should be told when they are interacting with an AI system such as a chatbot, and AI-generated or manipulated content must be labeled as such.

Penalties are significant. For the most serious breaches, fines can reach up to €35 million or 7% of worldwide annual turnover, whichever is higher. For a company with €1 billion in annual turnover, 7% works out to €70 million, so the percentage cap governs. Lesser violations draw lower ceilings.

Key dates and what changes when

Compliance requirements are phased in. The dates matter.

  • August 2024: The Act took effect. Institutions began setting up new governance bodies, including the Commission’s AI Office.
  • Six months after entry into force (February 2025): Bans on prohibited practices apply. Providers and deployers must have stopped placing on the market or using outlawed systems.
  • 12 months after entry into force (August 2025): Obligations for GPAI models start. Providers of large, general-purpose models must maintain technical documentation, publish a summary of the content used for training, and, for the most capable models, implement evaluation, security, and incident-reporting measures as defined in guidance.
  • 24 months after entry into force (August 2026): The bulk of high-risk system requirements apply. Conformity assessments, quality management systems, post-market monitoring, and incident reporting come online across many sectors.

National supervisory authorities will oversee enforcement. Harmonized standards and guidance will shape how companies comply. Industry will look to European standards bodies and the Commission for clarity on testing, documentation, and model evaluation.

Industry reaction and expert views

Reaction is mixed but pragmatic. Many firms welcome legal certainty after years of debate. Startups worry about paperwork and assessment costs. Open-source developers want room to experiment without heavy burdens. Civil society groups say the law is a starting point. They want strong enforcement and clear red lines on surveillance.

Policy analysts argue the risk-based lens matches how AI is deployed. In its AI Risk Management Framework, the U.S. National Institute of Standards and Technology notes that “The AI RMF is intended to be voluntary.” The EU takes a binding route but leans on similar concepts: risk identification, measurement, and governance throughout the AI lifecycle.

International coordination matters. At the 2023 AI Safety Summit in the UK, governments issued the Bletchley Declaration, saying they “affirm the need to address the risks from frontier AI.” The EU Act is now the most detailed legal instrument giving that idea force inside a major market.

Global ripple effects

The EU is large enough to shape practices beyond its borders. Many global providers will adjust products to meet EU rules. That could set de facto standards in documentation, safety testing, and transparency.

  • United States. There is no federal AI law yet, but agencies are active. The White House issued an executive order in 2023. NIST’s AI RMF 1.0 and emerging profiles guide voluntary risk management. Sector regulators, including the FTC and FDA, are scrutinizing AI claims and practices.
  • United Kingdom. The UK favors a regulator-led model. Its AI Safety Institute is testing advanced models. The government has signaled it may legislate if gaps remain.
  • G7 and OECD. The G7 Hiroshima process and updated OECD AI Principles push transparency and accountability. They align with the EU on risk management, even as legal tools differ.
  • Standards. Technical standards will underpin compliance. Industry is watching work at CEN-CENELEC and ISO/IEC, including management system standards designed for AI such as ISO/IEC 42001.

For multinational organizations, mapping overlaps between the EU Act, U.S. guidance, and UK expectations could reduce duplication. A single internal framework that covers governance, data controls, testing, and incident response can serve multiple regimes.
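
One way to picture that overlap is a simple crosswalk from internal controls to the external regimes they help satisfy. The sketch below is illustrative only: the control names and regime labels are assumptions for this example, not an official mapping.

    # Illustrative mapping from internal controls to the external regimes they
    # help satisfy. Control names and regime labels are assumptions for this sketch.
    control_map = {
        "model-risk-assessment":    ["EU AI Act (risk management)", "NIST AI RMF (Map, Measure)"],
        "training-data-governance": ["EU AI Act (data governance)", "NIST AI RMF (Map)"],
        "pre-release-testing":      ["EU AI Act (conformity assessment)", "UK evaluation expectations"],
        "incident-response":        ["EU AI Act (incident reporting)", "NIST AI RMF (Manage)"],
        "human-oversight-policy":   ["EU AI Act (human oversight)", "NIST AI RMF (Govern)"],
    }

    # Controls that already serve more than one regime are candidates for reuse.
    for control, regimes in control_map.items():
        if len(regimes) > 1:
            print(f"{control} -> {', '.join(regimes)}")

Even a rough crosswalk like this helps a compliance team see where one piece of evidence, such as a test report or an incident log, can be reused across jurisdictions.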

What companies should do now

Firms do not need to wait for every guidance document. Practical steps can begin today.

  • Inventory AI systems. Build and maintain a register. Note purpose, inputs, model types, and users; a minimal sketch of such a register follows this list.
  • Classify risk. Decide which systems could be high-risk. Document the rationale.
  • Strengthen data governance. Track data sources. Record licenses and consent where relevant. Test for bias and drift.
  • Set up human oversight. Define when and how people review, override, or stop AI decisions.
  • Prepare technical documentation. Keep up-to-date model cards, test results, and change logs. Plan for audits and conformity assessments.
  • Secure the pipeline. Apply security controls to training data, model weights, prompts, and outputs. Monitor for model and data poisoning.
  • Manage vendors. Update contracts with AI suppliers. Seek transparency about model provenance and evaluations.
  • Be transparent with users. Label AI-generated content where required. Offer clear notices and appeals for affected users.
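
To make the first two steps concrete, here is a minimal sketch of what an internal AI system register might look like in code. The field names, risk tiers, and example entries are illustrative assumptions rather than terms defined by the Act, and the classifications shown are placeholders, not legal conclusions.

    from dataclasses import dataclass
    from enum import Enum
    from typing import List, Optional


    class RiskTier(Enum):
        # Illustrative tiers mirroring the Act's risk-based structure.
        PROHIBITED = "prohibited"
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"


    @dataclass
    class AISystemRecord:
        # One entry in an internal AI inventory; field names are assumptions.
        name: str
        purpose: str
        model_type: str
        data_sources: List[str]
        users: str
        risk_tier: RiskTier
        rationale: str                       # documented reasoning behind the tier
        human_oversight: Optional[str] = None
        last_reviewed: Optional[str] = None


    # Example entries. The classifications are placeholders, not legal advice.
    register = [
        AISystemRecord(
            name="cv-screening-assistant",
            purpose="Rank incoming job applications",
            model_type="fine-tuned language model",
            data_sources=["historical hiring data", "applicant CVs"],
            users="HR recruiters; applicants are affected parties",
            risk_tier=RiskTier.HIGH,         # employment is a listed high-risk area
            rationale="Employment decision support falls in a high-risk category",
            human_oversight="Recruiter reviews every ranking before shortlisting",
        ),
        AISystemRecord(
            name="support-chatbot",
            purpose="Answer routine customer questions",
            model_type="retrieval-augmented language model",
            data_sources=["public product documentation"],
            users="External customers",
            risk_tier=RiskTier.LIMITED,      # must disclose that users talk to AI
            rationale="Chatbot interaction triggers transparency duties only",
        ),
    ]

    # Simple report: flag systems that need conformity-assessment work first.
    for record in register:
        if record.risk_tier is RiskTier.HIGH:
            print(f"{record.name}: high-risk, prioritise documentation and oversight")

Even a lightweight structure like this keeps the rationale for each classification on record, which makes later audits and conformity assessments easier to prepare.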

Open questions to watch

The next year will bring more detail. Companies and regulators face several open questions.

  • Definitions and scope. Guidance will refine what counts as a GPAI model, how to measure capability thresholds, and how open-source models fit.
  • Standards and testing. Harmonized standards will set the bar for data quality, risk testing, robustness, and reporting. The specifics will drive day-to-day compliance work.
  • Enforcement approach. National authorities may start with education and warnings. But serious violations will test the law’s penalty regime.
  • Innovation balance. Policymakers will track whether compliance costs deter small developers. Sandboxes and support programs may help.

The EU AI Act is reshaping how AI moves from lab to market. It formalizes practices that many responsible teams already use. It also raises the floor for safety and transparency. For companies, the message is clear. Start early, document decisions, and build governance into the product lifecycle. The countdown has begun.