EU’s AI Act Enters Into Force: What Changes Now
The European Union’s landmark Artificial Intelligence Act has officially entered into force, making it the world’s first comprehensive legal framework for AI. The law introduces a risk-based approach that bans some uses outright, tightens oversight of high-risk systems, and sets new obligations for developers of general-purpose models. Its staged rollout begins within months, and the full requirements will phase in over the following two to three years.
What the law does
The AI Act classifies systems by the risk they pose to safety, fundamental rights, and democratic values. Certain applications are prohibited, while others face strict compliance rules.
Prohibited practices include:
- Social scoring of individuals by public or private actors.
- Biometric categorization based on sensitive attributes (such as political opinions, religious beliefs, or sexual orientation).
- Untargeted scraping of facial images to build recognition databases.
- Manipulative or exploitative techniques that significantly distort a person’s behavior and cause harm.
- Predictive policing tools that assess or predict an individual’s risk of committing a crime based solely on profiling or assessment of personality traits.
High-risk systems—such as those used in critical infrastructure, education, employment, access to essential services, law enforcement, migration, and justice—must meet stringent requirements. These include robust risk management, high-quality datasets, human oversight, transparency, cybersecurity, and post-market monitoring.
General-purpose AI (GPAI) and foundation models are covered, too. Providers of models that can be adapted for many tasks must document capabilities and limitations, share technical information with downstream deployers, respect EU copyright rules (including text-and-data-mining opt-outs), and publish summaries of the content used for training. Models posing systemic risk face additional obligations, including more rigorous evaluations, adversarial testing, and incident reporting.
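To make these documentation duties concrete, here is a minimal sketch in Python of the kind of record a GPAI provider might assemble. The type and field names are illustrative assumptions, not an official template from the Act:

```python
from dataclasses import dataclass, field

# Hypothetical documentation record; the fields mirror the obligations
# described above but are not an official template.
@dataclass
class GPAIModelDocumentation:
    model_name: str
    capabilities: list[str]        # what the model is designed to do
    limitations: list[str]         # known failure modes and constraints
    training_content_summary: str  # summary of content used for training
    tdm_opt_out_policy: str        # how text-and-data-mining opt-outs are honored
    downstream_technical_info: dict = field(default_factory=dict)  # shared with deployers

doc = GPAIModelDocumentation(
    model_name="example-model-v1",
    capabilities=["text generation", "summarization"],
    limitations=["can produce inaccurate output", "training data is English-centric"],
    training_content_summary="Public web text and licensed corpora (high-level summary).",
    tdm_opt_out_policy="Respects machine-readable opt-out signals such as robots.txt.",
)
print(doc.model_name, "-", len(doc.capabilities), "documented capabilities")
```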
How enforcement will work
Enforcement is shared between national authorities and EU-level coordination. The European Commission has set up an AI Office to supervise rules for general-purpose models and support consistent application across the bloc. National market surveillance authorities will oversee compliance in their jurisdictions. A new AI Board of national regulators will advise on implementation and share best practices, while formal standards bodies work on technical norms to support compliance.
Penalties and timelines
Sanctions scale with severity and company size. Fines for prohibited uses can reach €35 million or 7% of global annual turnover, whichever is higher. Other violations can trigger fines of up to €15 million or 3%, while supplying incorrect information to authorities can draw penalties of up to €7.5 million or 1.5%. For smaller firms and startups, the applicable cap in each tier is the lower of the two amounts.
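As a worked example of how these caps combine, a short sketch using the tiers above: for large undertakings the applicable maximum is the higher of the fixed amount and the turnover percentage, while for SMEs and startups it is the lower of the two.

```python
# Fine caps per tier, as described above: (fixed cap in EUR, share of global turnover).
TIERS = {
    "prohibited_use": (35_000_000, 0.07),
    "other_violation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(tier: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum fine cap: the higher of the two amounts for large undertakings,
    the lower of the two for SMEs and startups."""
    fixed_cap, turnover_share = TIERS[tier]
    turnover_cap = turnover_share * annual_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A firm with EUR 1 billion global turnover: 7% is EUR 70M, which exceeds EUR 35M.
print(max_fine("prohibited_use", 1_000_000_000))        # 70000000.0
print(max_fine("prohibited_use", 1_000_000_000, True))  # 35000000.0
```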
The law phases in as follows:
- Six months after entry into force: bans on prohibited practices start to apply.
- Twelve months: obligations for general-purpose AI providers begin.
- Twenty-four months: most high-risk requirements take effect for providers and deployers.
- Thirty-six months: obligations apply for high-risk systems embedded in products already covered by EU safety legislation, such as medical devices.
What changes for companies and public bodies
For developers of high-risk systems, the most immediate work involves establishing risk management processes, documenting datasets, and ensuring human oversight that is meaningful and accountable. Providers will need to create technical documentation, conduct conformity assessments, and register high-risk systems in an EU database before placing them on the market.
Organizations deploying AI in hiring, credit, healthcare, or public services should prepare to:
- Map AI systems and classify risk under the Act (a minimal inventory sketch follows this list).
- Upgrade governance, including clear roles for accountability and incident reporting.
- Assess data quality, bias, and robustness with repeatable testing.
- Train staff to monitor and intervene, and to explain AI-assisted decisions to affected people.
- Maintain logs and post-deployment monitoring to capture real-world performance and harms.
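As a starting point for the mapping and logging steps above, here is a minimal sketch of an internal AI system inventory with decision logging. The risk tiers are simplified and the field names are assumptions; a real classification must follow the Act’s annexes and official guidance:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class RiskTier(Enum):
    # Simplified tiers for illustration only.
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"   # transparency obligations, e.g., chatbots, deepfake labeling
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    purpose: str           # intended use, e.g., "CV screening for hiring"
    risk_tier: RiskTier
    owner: str             # accountable role for oversight and incident reporting
    human_oversight: str   # how a human can monitor, intervene, or override

def log_decision(system: AISystemRecord, outcome: str) -> dict:
    """Append-style log entry supporting post-deployment monitoring and review."""
    return {
        "system": system.name,
        "outcome": outcome,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

hiring_tool = AISystemRecord(
    name="cv-screener",
    purpose="CV screening for hiring",
    risk_tier=RiskTier.HIGH,   # employment use cases are listed as high-risk
    owner="HR compliance lead",
    human_oversight="Recruiter reviews every rejection before it is sent",
)
print(log_decision(hiring_tool, "flagged for human review"))
```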
For general-purpose AI providers, the focus shifts to model transparency, security, and responsible deployment. That includes disclosure of capabilities, safety testing, protecting intellectual property, and supporting content labeling and detection tools for synthetic media.
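To illustrate the content-labeling idea, here is a minimal sketch that embeds an AI-generated disclosure into PNG metadata using the Pillow library. The tag keys are hypothetical; production systems would more likely adopt an industry provenance standard such as C2PA:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_generated_image(path_in: str, path_out: str, model_name: str) -> None:
    """Embed a simple AI-generated disclosure into PNG metadata."""
    image = Image.open(path_in)
    metadata = PngInfo()
    # Hypothetical tag keys; real deployments would follow an agreed standard.
    metadata.add_text("ai-generated", "true")
    metadata.add_text("generator", model_name)
    image.save(path_out, pnginfo=metadata)

# Create a stand-in "generated" image so the example is self-contained.
Image.new("RGB", (64, 64), "gray").save("output.png")
label_generated_image("output.png", "output_labeled.png", "example-model-v1")
```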
Reactions from Brussels and beyond
EU officials cast the Act as a watershed. “Europe is the first continent with clear AI rules,” Internal Market Commissioner Thierry Breton said after the Parliament’s approval in March 2024, framing the law as a blueprint for trusted innovation.
Digital rights advocates offered qualified support but warned of gaps. Rights group Access Now argued the law contains “serious loopholes,” highlighting concerns over the scope of biometric surveillance and exceptions for law enforcement. Industry groups have generally welcomed legal clarity, while urging regulators to keep guidance pragmatic as technical standards mature.
Global ripple effects
The EU’s move adds momentum to a patchwork of AI rules emerging worldwide. The United States has focused on executive action and guidance, directing federal agencies to develop safety testing, watermarking standards, and sector-specific oversight, while Congress debates comprehensive legislation. The United Kingdom is taking a regulator-led, principles-based approach, tasking existing bodies to supervise AI in their sectors rather than passing a single law. China has issued targeted rules for recommendation algorithms and generative AI, centered on content controls, security reviews, and provider responsibilities.
Multilateral efforts are also accelerating. Standards bodies and forums, including the OECD, ISO/IEC, and NIST, are crafting technical and governance guidance to help companies operationalize risk management. Many organizations are already aligning internal controls to frameworks such as the NIST AI Risk Management Framework and ISO standards for AI management systems, which dovetail with the EU’s emphasis on documentation, testing, and human oversight.
What it means for users and citizens
People interacting with AI in the EU should see more transparency and avenues for redress. Deepfakes and other AI-generated media must be clearly labeled in most contexts. People subject to high-risk systems, such as students, job applicants, or patients, should receive understandable information about AI involvement and have channels to challenge, or seek human review of, consequential decisions. National authorities will be empowered to investigate complaints and order corrective actions.
Key open questions
Implementation will determine the law’s impact. Regulators still need to finalize guidance and harmonized standards. Developers and deployers must translate high-level principles into checklists, controls, and audit trails that work across complex supply chains. Important open questions include:
- How to measure and report systemic risk for advanced foundation models.
- What “state of the art” means for robustness, interpretability, and content provenance tools.
- How to handle cross-border enforcement and third-country providers serving EU users.
- How to balance security and transparency when disclosure could aid misuse.
The bottom line
The AI Act gives Europe a head start in setting the guardrails for rapidly advancing technology. It raises the bar on safety and accountability, while leaving room for innovation through standards and guidance. For companies, the message is clear: start building governance and testing capabilities now. For citizens, the promise is more transparency and protection where AI decisions matter most. With global regulators watching, the EU’s rules are likely to shape AI practices far beyond its borders.