EU AI Act Becomes Law: What Changes Now

The European Union’s Artificial Intelligence Act has entered into force after its adoption in 2024, making the bloc the first major jurisdiction to set comprehensive, horizontal rules for AI. Supporters call it a landmark for digital rights and market certainty. Critics warn of compliance burdens and unanswered questions for fast-moving technologies. Either way, the law sets a template likely to influence AI governance far beyond Europe.
What the law does
The AI Act takes a risk-based approach. It bans certain uses outright, imposes strict duties on high‑risk systems, and sets transparency and safety obligations for powerful general‑purpose models. The law also establishes an EU-level oversight structure and significant penalties for violations.
- Risk tiers: The Act defines four tiers, from “unacceptable risk” (banned) through “high risk” and “limited risk” to “minimal risk” (largely unregulated). Consumer chatbots and image tools typically fall into the limited or minimal risk categories, while AI used in critical sectors can be deemed high risk.
- Banned practices: Prohibitions include social scoring, manipulative AI that exploits vulnerabilities, and some forms of untargeted facial scraping. Real‑time remote biometric identification in public spaces by law enforcement is heavily restricted, with narrow exceptions.
- High‑risk AI obligations: Systems used in areas such as medical devices, critical infrastructure, employment screening, education, and essential services must undergo conformity assessments, maintain technical documentation, ensure human oversight, manage data quality, and implement risk management and incident reporting.
- General‑purpose AI (GPAI) and foundation models: Providers must supply technical documentation, follow copyright rules, and disclose training data summaries. Models posing systemic risk face additional duties, including model evaluations, cybersecurity measures, and reporting on incidents and serious malfunctions.
- Enforcement and fines: National authorities and a new EU AI Office will supervise compliance. Penalties can reach up to 7% of global annual turnover (or €35 million, whichever is higher) for the most serious violations, placing AI enforcement among the EU’s toughest tech regimes.
When the rules apply
The law staggers obligations to give industry time to adapt. Counting from entry into force on 1 August 2024, the sequence is broadly as follows:
- About six months after entry into force: Bans on unacceptable-risk uses apply first.
- About one year after entry into force: Key transparency duties for general‑purpose AI roll out, alongside initial governance arrangements.
- About two years after entry into force: Most high‑risk obligations become mandatory, including conformity assessments and post‑market monitoring.
Some obligations arrive later still, notably for high‑risk AI embedded in products already covered by EU product legislation, and the Commission can update technical annexes as standards evolve. Companies placing AI systems on the EU market will need to track the staggered schedule closely.
Why it matters globally
European rules often set de facto global standards because multinationals prefer one compliance playbook. That dynamic, seen with the GDPR privacy law, could repeat for AI. The Act’s emphasis on documentation, testing, and transparency may therefore ripple through product development worldwide.
Other governments are moving too, but with different tools. The United States issued a sweeping executive order in 2023 directing agencies to develop guidance on safety testing, watermarking, and critical infrastructure risks, and it tasked the National Institute of Standards and Technology with building out evaluation frameworks. The United Kingdom has taken a regulator‑led, sectoral approach, asking existing watchdogs to apply flexible principles rather than passing a single AI law. The G7’s “Hiroshima Process” and the OECD AI Principles (adopted in 2019) provide additional international reference points.
Industry leaders have long acknowledged the need for guardrails. “AI is too important not to regulate — and too important not to regulate well,” Google CEO Sundar Pichai wrote in 2020, calling for balanced rules that encourage innovation while managing harms. And in U.S. Senate testimony in 2023, OpenAI’s Sam Altman said, “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.”
Support and scrutiny
EU officials have framed the Act as both pro‑innovation and rights‑protective. “The EU becomes the first continent to set clear rules for AI,” Internal Market Commissioner Thierry Breton said when negotiators reached political agreement in late 2023. Lawmakers argue the clarity will unlock investment while guarding against surveillance abuse, discrimination, and unsafe deployments.
Civil society groups have welcomed bans on social scoring and some biometric uses but argue that exceptions for law enforcement and the breadth of “high‑risk” categories need careful oversight in practice. Startups worry about documentation and audit burdens. Open‑source communities have sought assurances that contributions to freely available models will not be chilled; the final text provides lighter duties for open‑source developers unless their models reach systemic‑risk thresholds or are integrated into commercial products.
What companies should do now
Firms building or deploying AI in Europe — and any company that sells AI-enabled products into the EU — will need to map their systems against the law’s requirements. Practical steps include:
- Inventory and classification: Catalogue AI systems, intended uses, and user groups. Determine whether any system is high risk or involves general‑purpose models; a minimal inventory sketch follows this list.
- Governance basics: Assign accountability, set up risk management, and define human oversight. Prepare incident response and post‑market monitoring processes.
- Data and documentation: Ensure training and testing data meet quality standards. Build the technical file, model cards, and usage instructions needed for conformity assessments.
- Model evaluation: For powerful foundation models, plan security, robustness, and bias testing. Document safety mitigations and consider third‑party evaluations where appropriate.
- Supplier and customer contracts: Update terms to reflect shared responsibilities — including transparency, update rights, and downstream risk controls.
- Watch the standards: Track harmonized European standards and guidance from the EU AI Office and national authorities as they emerge.
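For the inventory and classification step, a lightweight internal record is often the starting point. The sketch below (Python, purely illustrative) shows one way to capture systems, owners, and a first‑pass risk tier for later legal review; the class names, the use‑case mapping, and the conservative default for unmapped cases are assumptions for the example, not anything prescribed by the Act.

```python
"""Illustrative sketch only: a minimal inventory of AI systems with a rough
first-pass risk triage. Tier names mirror the Act's categories, but the
mapping below is a hypothetical example, not legal guidance."""

from __future__ import annotations

from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned practices
    HIGH = "high"                  # conformity assessment, documentation, oversight
    LIMITED = "limited"            # transparency duties (e.g. disclose chatbot use)
    MINIMAL = "minimal"            # largely unregulated


# Hypothetical mapping from internal use-case labels to likely tiers.
# A real mapping would follow the Act's annexes and legal review.
USE_CASE_TIERS = {
    "employment_screening": RiskTier.HIGH,
    "medical_device_component": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "internal_search": RiskTier.MINIMAL,
}


@dataclass
class AISystem:
    name: str
    use_case: str
    uses_gpai_model: bool = False          # built on a general-purpose model?
    owner: str = "unassigned"              # accountable team or person
    open_questions: list[str] = field(default_factory=list)

    def triage(self) -> RiskTier:
        """First-pass tier; anything unmapped is flagged for manual review."""
        tier = USE_CASE_TIERS.get(self.use_case)
        if tier is None:
            self.open_questions.append(f"Unmapped use case: {self.use_case}")
            return RiskTier.HIGH  # conservative default pending review
        return tier


if __name__ == "__main__":
    inventory = [
        AISystem("cv-screener", "employment_screening", uses_gpai_model=True, owner="HR tech"),
        AISystem("support-bot", "customer_chatbot", uses_gpai_model=True, owner="Support"),
        AISystem("doc-finder", "internal_search", owner="IT"),
    ]
    for system in inventory:
        tier = system.triage()
        print(f"{system.name:12s} {tier.value:12s} GPAI={system.uses_gpai_model} owner={system.owner}")
```

Even a simple record like this makes the later steps (governance, documentation, supplier contracts) easier to scope, because each obligation can be attached to a named system and an accountable owner.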
The road ahead
Implementing the AI Act will be a multi‑year effort involving technical standards, new oversight bodies, and test‑and‑learn compliance. The law leaves room for future updates as technology advances. Its impact will depend on how consistently authorities enforce rules, how workable the standards are for SMEs as well as tech giants, and whether global coordination narrows gaps between jurisdictions.
For now, the EU’s bet is clear: tighter rules can build trust, boost uptake, and steer AI toward beneficial uses. Whether that formula becomes the global norm, or remains a distinctly European path, will be one of the defining technology policy stories of the decade.