EU AI Act’s First Major Rules Take Effect

Europe’s landmark Artificial Intelligence Act reached a new milestone this week, as the first major obligations for providers of general-purpose AI models came into force. The phased rollout, which began in 2024, marks a shift from political agreement to daily compliance work for developers, vendors, and users across the European Union and beyond.
What changed this week
The AI Act’s rules for general-purpose AI (GPAI) now apply. Providers of large models and tools that can be adapted across many uses must maintain technical documentation, manage systemic risks, and set up channels to report serious incidents. Bans on certain prohibited practices took effect earlier in the rollout, months after the law entered into force. Broader requirements for “high-risk” AI, such as in hiring, health, and critical infrastructure, will apply later.
EU officials call this the start of a long compliance arc. National authorities, coordinated by the new European AI Office in Brussels, are expected to prioritize guidance in the coming months while building enforcement capacity.
What the law does
The AI Act follows a risk-based approach. Obligations scale with the potential impact of the system:
- Prohibited AI: Certain uses are banned, including social scoring by public authorities and manipulative systems that exploit vulnerabilities and cause harm. Some biometric categorization that infers sensitive traits is also prohibited.
- Strictly regulated uses: Remote biometric identification in public spaces for law enforcement is severely restricted and subject to tight authorization and safeguards.
- High-risk AI: Systems used in areas like employment, education, and essential services must meet rules on risk management, data governance, logging, human oversight, robustness, and conformity assessment.
- General-purpose AI: Providers must document model capabilities and limits, assess systemic risks, test and monitor safety, and cooperate with national regulators. Some transparency duties also apply to those who integrate GPAI into downstream products.
- Enforcement and penalties: Non-compliance can lead to steep fines, with the most serious breaches, such as use of prohibited practices, subject to penalties of up to 7 percent of global annual turnover or 35 million euros, whichever is higher.
The regulation states its central goal plainly: “This Regulation lays down harmonised rules on artificial intelligence.” EU lawmakers argue that consistent standards will give businesses clarity while protecting consumers.
Why it matters
Europe is a large market. Its rules have an extraterritorial effect, reaching non-EU firms that offer AI systems in the bloc. This could influence global product design and documentation. It may also pressure firms to adopt common compliance baselines across regions to avoid maintaining separate versions.
Industry is split on the impact. Large providers already publish model cards, testing reports, and safety notes. Smaller companies worry about paperwork and the cost of audits as high-risk categories come into scope. Civil society groups welcome bans on some uses while pressing for stronger guardrails on surveillance and workplace AI.
Thierry Breton, then the EU’s industry chief, said when the political agreement was reached in 2023: “Europe is now the first continent to set clear rules for AI.” Supporters see the law as a blueprint. Critics warn that fast-moving research and open-source collaboration could be stifled if compliance becomes too burdensome.
Background and global context
The AI Act was proposed in 2021, with a political agreement in late 2023 and formal adoption in 2024. It arrives amid rapid advances in generative AI. That pace has raised policy alarms and inspired new oversight efforts around the world.
- United States: The White House issued an executive order on AI safety in 2023. Federal agencies published risk management guidance. Congress has held hearings but has not passed a comprehensive AI law.
- United Kingdom: The UK favors a “pro-innovation” approach, asking existing regulators to apply AI principles. It hosted the 2023 AI Safety Summit and launched research on frontier risks.
- G7 and UN: The G7 developed a voluntary code of conduct for advanced model developers. The UN General Assembly endorsed a non-binding AI resolution in 2024 calling for safe, trustworthy, and human rights–based AI.
Expert opinion remains mixed. In a 2023 Senate hearing, OpenAI chief executive Sam Altman warned: “If this technology goes wrong, it can go quite wrong.” A separate 2023 statement from the Center for AI Safety argued: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Others stress near-term harms like bias, fraud, and misinformation, which the EU law seeks to address through testing, transparency, and redress.
How companies are responding
Providers are updating documentation and safety processes. Many are expanding red-teaming and evaluation protocols. Others are publishing more details about training data sources and model limits. Downstream users, such as banks and hospitals, are mapping their systems to see which ones may fall into high-risk categories when those rules apply. Common preparatory steps, illustrated in the sketch after this list, include:
- Building internal risk registers and incident reporting channels.
- Tracking model changes and keeping logs for audits.
- Adding human-in-the-loop controls for sensitive decisions.
- Reviewing vendor contracts to assign compliance duties.
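None of this requires specialized tooling. The following is a rough, hypothetical Python sketch of the kind of record-keeping the list describes; the class names, fields, and file format are invented for illustration and are not prescribed by the AI Act or any regulator.

```python
# Hypothetical sketch of basic record-keeping: a risk-register entry,
# an append-only model change log, and a human-in-the-loop gate.
# Field names and structure are illustrative only, not mandated by the Act.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class RiskRegisterEntry:
    system_name: str                # internal name of the AI system
    intended_purpose: str           # what the system is used for
    risk_category: str              # e.g. "high-risk" or "minimal" (self-assessed)
    known_limitations: list[str] = field(default_factory=list)
    human_oversight: bool = True    # is a human reviewer in the loop?


class ModelChangeLog:
    """Append-only log of model changes, kept so an audit can reconstruct history."""

    def __init__(self, path: str):
        self.path = path

    def record(self, model_version: str, change: str, author: str) -> None:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "change": change,
            "author": author,
        }
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")  # one JSON line per change


def decide_with_oversight(entry: RiskRegisterEntry, automated_decision: str) -> str:
    """Route sensitive decisions to a human reviewer instead of acting automatically."""
    if entry.human_oversight:
        return f"PENDING_HUMAN_REVIEW: {automated_decision}"
    return automated_decision


if __name__ == "__main__":
    register = RiskRegisterEntry(
        system_name="cv-screening-tool",
        intended_purpose="rank job applications",
        risk_category="high-risk",
        known_limitations=["not validated outside EU labour markets"],
    )
    log = ModelChangeLog("model_changes.jsonl")
    log.record("1.4.0", "retrained on 2024 applicant data", "ml-team")
    print(json.dumps(asdict(register), indent=2))
    print(decide_with_oversight(register, "reject application #123"))
```

The point of the sketch is the shape of the records rather than the tooling: what the obligations described above reward is that changes, known limitations, and oversight decisions are written down somewhere an auditor can later inspect.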
Open-source developers receive certain exemptions, especially for research and non-commercial releases. But if a model reaches the threshold of systemic risk, obligations tighten regardless of licensing. That balance aims to keep innovation flowing while maintaining accountability for the most impactful systems.
What happens next
Key dates lie ahead. The bans on prohibited practices are already in force, and the GPAI duties start now. The high-risk regime will follow in the next phase of the rollout, expected in 2026. In parallel, the European standards bodies CEN and CENELEC are drafting harmonized standards that companies can follow to demonstrate compliance. Member states are also setting up regulatory sandboxes to support testing under supervision, especially for startups and small firms.
Enforcement will ramp up. National authorities need staff and tools. The AI Office will issue guidance, coordinate cross-border cases, and engage with technical experts through advisory forums. Early cases are likely to focus on clear-cut violations and on transparency obligations that can be checked quickly.
The bottom line
Europe’s AI Act is no longer just a headline. It is a working rulebook. Companies that deploy AI in the EU must now show their work: what a system can do, how it was tested, who oversees it, and how they will fix problems. Supporters say clear rules will increase trust and cut the risk of harm. Skeptics warn that compliance costs could weigh on smaller players.
What is certain is momentum. As one line in the regulation makes clear, the bloc seeks to “lay down harmonised rules.” The coming year will test how those words translate into practice, model by model and use case by use case.