EU’s AI Act Starts to Bite: What Changes Now

Europe moves first with binding AI rules

The European Union’s Artificial Intelligence Act has begun to take effect, starting a phased rollout that will reshape how AI is built and used in Europe and beyond. The law entered into force in August 2024. Its earliest bans and duties start to apply in 2025, with broader obligations following over the next two years.

EU officials call the measure the world’s first comprehensive AI law. In a 2024 press release, the European Commission described the AI Act as “a global first” and framed it as a way to encourage innovation while protecting rights.

The law sorts AI into risk tiers and imposes stricter rules for higher-risk uses. It also sets transparency duties for general-purpose models, including popular generative systems. The EU has created an AI Office within the European Commission to coordinate enforcement across member states.

What changes, and when

The AI Act’s obligations do not arrive all at once. They are phased in to give regulators and industry time to prepare. Key milestones include:

  • Prohibited practices: Certain uses deemed “unacceptable risk” are banned from early 2025. Examples include social scoring by public authorities and some forms of biometric categorization.
  • General-purpose AI duties: Transparency and documentation rules for general-purpose AI (GPAI), including generative models, begin in 2025. Providers must publish a sufficiently detailed summary of the content used for training and share technical documentation with downstream providers.
  • High-risk systems: Most obligations for high-risk AI, such as systems used in critical infrastructure, education, and employment, phase in over the following years. These include risk management, data quality, human oversight, and post-market monitoring.

National market surveillance authorities will oversee compliance within each country. The European Commission’s AI Office will coordinate cross-border issues and supervise the most advanced, systemic models.

What companies need to do now

Many developers and deployers are starting compliance programs. Their first steps include mapping AI systems to risk categories and shoring up documentation.

  • Inventory and classification: Identify AI systems in use. Classify them under the law’s risk tiers. Flag any systems that may fall under the bans (see the sketch after this list).
  • Data and documentation: Prepare technical files. Record training data sources. For GPAI models, publish a summary of training content. Improve data governance to reduce bias and errors.
  • Testing and evaluation: Build internal evaluation pipelines. Test for safety, robustness, and potential harms. Document limits and known failure modes.
  • Human oversight: Define who can intervene or override AI decisions. Train staff. Ensure users receive clear instructions and warnings.
  • Vendor management: Update contracts. Request conformity evidence from suppliers. Align terms with EU transparency and copyright rules.
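For the inventory step, many teams begin with a simple internal register before buying governance tooling. The Python sketch below is a minimal, illustrative example of that idea; the `RiskTier` categories loosely mirror the Act’s broad risk tiers, but the field names, the example systems, and the `flag_for_review` rule are assumptions made for illustration, not a compliance checklist.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Broad tiers loosely following the AI Act's risk-based approach."""
    PROHIBITED = "prohibited"   # banned practices, e.g. social scoring
    HIGH = "high"               # e.g. employment, education, critical infrastructure
    LIMITED = "limited"         # transparency duties, e.g. chatbots
    MINIMAL = "minimal"         # everything else


@dataclass
class AISystemRecord:
    """One entry in an internal AI inventory (illustrative fields only)."""
    name: str
    purpose: str
    deployed_in_eu: bool
    tier: RiskTier
    owner: str                  # team accountable for documentation and oversight


def flag_for_review(inventory: list[AISystemRecord]) -> list[AISystemRecord]:
    """Return systems needing immediate legal review: anything potentially
    prohibited, or high-risk and serving EU users."""
    return [
        s for s in inventory
        if s.tier is RiskTier.PROHIBITED
        or (s.tier is RiskTier.HIGH and s.deployed_in_eu)
    ]


if __name__ == "__main__":
    inventory = [
        AISystemRecord("cv-screener", "rank job applicants", True,
                       RiskTier.HIGH, "HR platform team"),
        AISystemRecord("support-chatbot", "answer customer questions", True,
                       RiskTier.LIMITED, "Support tools team"),
    ]
    for system in flag_for_review(inventory):
        print(f"Review needed: {system.name} ({system.tier.value})")
```

Even a register this small gives legal and engineering teams a shared list to work from when classifications are disputed or systems change purpose.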

Experts say early planning can reduce costs and create a paper trail for regulators, which may matter if systems are later implicated in incidents or complaints.

Supporters and critics weigh in

The AI Act aims to safeguard fundamental rights while keeping a path open for innovation. Supporters say clear rules will build trust and reduce legal uncertainty.

Some civil society groups welcomed the bans on social scoring and certain biometric uses. Others worry about enforcement gaps and carve-outs for law enforcement. Industry groups call the law ambitious and urge regulators to provide practical guidance and harmonized standards.

Sam Altman, the chief executive of OpenAI, told US lawmakers in 2023: “We think that regulatory intervention by governments will be critical.” That view has since become common among large AI developers, who seek clarity on expectations and liability.

AI pioneer Geoffrey Hinton, who left Google in 2023, warned in a BBC interview: “It is hard to see how you can prevent the bad actors from using it for bad things.” Advocates of the EU approach say guardrails are needed to reduce risks while research continues.

Why the rules matter beyond Europe

Global companies often standardize products to meet Europe’s requirements. This so-called “Brussels effect” has shaped privacy and digital markets before. The AI Act could have a similar reach.

Providers that serve EU users may choose to apply the same controls worldwide. That could include clearer labeling for AI outputs, better child safety features, and limits on biometric tracking. Some firms may ship EU-specific versions of products, but that can be costly and complex.

The law also encourages development of technical standards for AI safety and transparency. European standards bodies are working with industry and researchers to draft methods for risk management, data quality, and human oversight. These standards could influence international practice.

Enforcement and open questions

Authorities will have new tools. They can request information, order corrective actions, and levy fines. Penalties scale with the severity of the violation and the company’s worldwide turnover, with the largest fines reserved for prohibited practices.

But enforcement will be hard. Regulators must track fast-moving models and opaque supply chains. Smaller national agencies may need more staff with technical skills. The EU AI Office is expected to guide consistent decisions in complex, cross-border cases.

Several questions remain:

  • Defining high risk: Drawing the line between lower-risk and high-risk use cases will test forthcoming guidance. The same model can be low risk in one context and high risk in another.
  • Open-source models: The law treats open-source differently in some areas, but obligations can still apply if a model is integrated into risky systems.
  • Systemic models: The most capable AI models may face extra scrutiny. The criteria and tests for “systemic risk” will shape the market.

What to watch next

The next year will bring detailed guidance. The Commission plans secondary rules and codes of practice for general-purpose AI. National authorities are setting up reporting channels and sandboxes to help startups test compliant systems.

Universities and standards bodies are developing benchmarks for safety and bias. Companies are rolling out “model cards” and “system cards” that explain capabilities and limits. Expect more third-party audits and red team exercises, especially for models used in hiring, education, and health.
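Model cards are usually short structured documents rather than code. As a hedged illustration of what one might contain, the Python sketch below serializes a minimal card to JSON; the field names and example values are assumptions loosely inspired by common model-card templates, not a format the Act or any standards body prescribes.

```python
import json

# A minimal, illustrative model card as a plain dictionary.
# Field names and values are assumptions for illustration; the AI Act
# does not mandate this exact structure.
model_card = {
    "model_name": "example-text-classifier",
    "version": "1.2.0",
    "intended_use": "Routing customer support tickets by topic.",
    "out_of_scope_uses": ["employment decisions", "credit scoring"],
    "training_data_summary": "Public support-forum posts; see data statement.",
    "evaluation": {
        "benchmark": "held-out ticket sample",
        "known_failure_modes": ["short or multilingual tickets", "sarcasm"],
    },
    "human_oversight": "Low-confidence predictions are queued for manual review.",
    "contact": "ml-governance@example.com",
}

# Write the card alongside the model artifacts and print it for review.
with open("model_card.json", "w", encoding="utf-8") as f:
    json.dump(model_card, f, indent=2)

print(json.dumps(model_card, indent=2))
```

Keeping the card in a machine-readable format makes it easier to version alongside the model and to feed into audits or conformity documentation later.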

The EU’s move is not happening in isolation. The United States has issued an executive order on AI safety and rights. The United Kingdom hosted a global AI safety summit in 2023 with a joint pledge to pursue safe development. Other countries are drafting rules on transparency, data use, and accountability.

The bottom line

The AI Act is now a reality. Some bans and duties already apply, and more arrive in 2025 and beyond. Companies that plan early will adapt more smoothly. Regulators face a learning curve, but expectations are clearer than a year ago. The coming months will test whether Europe’s bet on early, risk-based regulation can make AI safer without slowing useful innovation.