EU AI Act: What Changes for AI in 2025

Europe’s landmark AI law enters the real world
Europe’s sweeping Artificial Intelligence law is moving from text to practice. The EU AI Act, adopted in 2024, begins a staged rollout through 2025 and 2026. The law aims to make AI systems safe, transparent, and accountable without stalling innovation. It is the first comprehensive AI rulebook by a major regulator. Companies that develop or deploy AI in the EU now face clear duties. So do providers of general-purpose and generative models. Consumers should see more disclosures and stronger safeguards.
How the law works: a risk-based model
The AI Act uses a risk-based approach: obligations depend on how an application is used, not just on the technology itself. The four tiers are summarized below, followed by a brief illustrative sketch.
- Unacceptable risk: Certain practices are banned. Examples include social scoring by public authorities and AI that manipulates vulnerable users. Some uses of real-time remote biometric identification in public spaces face strict limits.
- High risk: Systems used in sensitive areas—like hiring, credit, education, law enforcement, or critical infrastructure—must meet strong requirements. These include risk management, high-quality training data, human oversight, logging, and cybersecurity. Many high-risk systems require a formal conformity assessment before market entry.
- Limited risk: Tools such as chatbots must inform users they are interacting with AI. AI that generates or manipulates images, audio, or video must disclose that content is AI-created in many contexts.
- Minimal risk: Most AI uses, including many consumer applications, remain largely unregulated beyond general EU law.
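To make the tiers concrete, here is a minimal Python sketch of how a team might record its own risk classification. The use-case names, tier assignments, and the default of flagging unknown cases for review are illustrative assumptions, not a reading of the Act's annexes or legal advice.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned practices
    HIGH = "high"                  # strict requirements and conformity assessment
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal"            # no AI-Act-specific duties beyond general law

# Illustrative mapping only: real classification turns on the Act's annexes
# and official guidance, not on a keyword lookup like this.
USE_CASE_TIERS = {
    "social_scoring_by_public_authority": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the assumed tier for a named use case; unknown cases are
    treated as HIGH so that a human reviews them."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

if __name__ == "__main__":
    for case in ("cv_screening_for_hiring", "customer_service_chatbot"):
        print(case, "->", classify(case).value)
```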
The law also introduces duties for general-purpose AI (GPAI). Providers of foundation and generative models must prepare technical documentation, share information with downstream developers, and put policies in place to respect EU copyright law. Models judged to pose systemic risk face stricter obligations, including model evaluations, adversarial testing, incident reporting, and cybersecurity safeguards.
Key dates and enforcement
The Act entered into force in August 2024, and its rules take effect in phases. Bans on the most harmful AI practices apply first, six months after entry into force, in early 2025. Obligations for general-purpose models follow in mid-2025. Core requirements for high-risk systems arrive later, with most of those rules becoming enforceable in 2026. Companies should plan now to meet the later deadlines.
Enforcement is shared. National regulators in each EU member state will supervise most applications. The European Commission's new AI Office coordinates the rollout and oversees general-purpose models and cross-border questions. Penalties can be severe, rising to 35 million euros or 7 percent of global annual turnover, whichever is higher, for the most serious violations, such as using banned AI practices.
What changes for developers and buyers
For businesses and public agencies, the first task is due diligence. Teams must classify their AI systems by risk and document how they work. Many will need to add or formalize testing, monitoring, and human oversight.
- Developers: Expect more model and data documentation (a minimal sketch follows this list). High-risk deployments require robust evaluations, bias and robustness testing, and post-market monitoring. Providers of general-purpose models will need to publish information to help downstream users comply.
- SMEs and startups: The law includes support measures and regulatory sandboxes. But compliance will still take time and resources. Early planning can reduce costs.
- Public sector buyers: Procurement teams must ensure vendors comply. Contracts should include audit, data governance, and transparency requirements.
- Consumers: Users should see clearer labels and disclosures. Complaint channels and redress mechanisms are expected to improve under the Act and existing EU rights law.
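As a rough illustration of what "more model and data documentation" can look like in practice, the sketch below keeps a per-model record that downstream deployers or auditors could consume. The field names, metrics, and values are invented for illustration and do not mirror any official template.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelRecord:
    """Minimal documentation record a provider might keep per system.
    The schema is an illustrative assumption, not the Act's required format."""
    name: str
    version: str
    intended_purpose: str
    risk_tier: str
    data_sources: list = field(default_factory=list)
    evaluations: dict = field(default_factory=dict)  # metric name -> result
    human_oversight: str = ""
    known_limitations: list = field(default_factory=list)

record = ModelRecord(
    name="resume-ranker",
    version="2.1.0",
    intended_purpose="Pre-screening job applications for recruiter review",
    risk_tier="high",
    data_sources=["internal applicant-tracking exports, 2019-2024, consented"],
    evaluations={"accuracy": 0.87, "demographic_parity_gap": 0.04},
    human_oversight="A recruiter reviews every rejection before it is sent",
    known_limitations=["Not validated for non-EU labour markets"],
)

print(json.dumps(asdict(record), indent=2))
```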
Support and criticism
Backers say the law offers legal certainty and curbs risky deployments. They argue that clear rules will build trust and help the market grow. Many consumer and civil society groups support bans on manipulative systems and limits on biometric surveillance.
Critics warn about compliance costs and red tape. Some argue that fast-moving AI research could outrun static rules. Others worry about unintended effects on open-source models or about extraterritorial reach. Industry groups have asked for practical guidance to avoid over-compliance and delays.
Global context: many paths, shared concerns
Europe is not alone. The United States, the United Kingdom, and the G7 have advanced their own approaches. A 2023 U.S. executive order called for the “safe, secure, and trustworthy development and use of artificial intelligence.” The UK hosted a global summit on AI safety in late 2023, which produced a shared statement, the Bletchley Declaration, on the need to manage risks from advanced systems. The G7’s Hiroshima AI Process promoted baseline principles and a code of conduct for developers.
Despite different legal systems, many priorities overlap. Governments emphasize transparency, testing, and incident reporting. They are pushing for watermarks on AI-generated media, clearer labeling, and stronger cybersecurity. Standard-setting bodies such as NIST in the U.S. and CEN-CENELEC in Europe are drafting technical guidance to align practice with policy.
Some leaders in the field say the stakes are high. “I think if this technology goes wrong, it can go quite wrong,” OpenAI’s Sam Altman told U.S. senators in 2023. That view underpins many calls for independent evaluation and red-teaming of advanced models.
What companies should do now
Legal experts advise starting compliance programs early. Steps include:
- Map your AI portfolio. Identify systems in or headed to the EU. Classify each by risk and use case.
- Harden governance. Set up an AI risk committee. Assign accountability. Document decisions.
- Evaluate and monitor. Institute pre-deployment testing for accuracy, bias, robustness, and security. Plan for post-market monitoring and incident response (see the sketch after this list).
- Improve data practices. Track sources, consent, and quality. Apply privacy and IP safeguards.
- Clarify user disclosures. Label AI interactions and AI-generated media where required. Offer instructions for human oversight.
- Engage with standards. Follow emerging EU harmonized standards and relevant international frameworks to streamline conformity assessments.
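As a sketch of the "evaluate and monitor" step above, the snippet below gates a release on pre-deployment test results and appends incidents to a log file for post-market monitoring. The metric names, thresholds, and file format are assumptions chosen for illustration, not requirements drawn from the Act or any standard.

```python
import json
from datetime import datetime, timezone

# Illustrative thresholds; real acceptance criteria come from your own risk
# analysis and the applicable harmonized standards, not from this file.
THRESHOLDS = {"accuracy": 0.85, "demographic_parity_gap": 0.05, "robustness_drop": 0.10}

def gate_release(results: dict) -> list:
    """Return the list of failed checks; an empty list means the gate passes."""
    failures = []
    if results.get("accuracy", 0.0) < THRESHOLDS["accuracy"]:
        failures.append("accuracy below threshold")
    if results.get("demographic_parity_gap", 1.0) > THRESHOLDS["demographic_parity_gap"]:
        failures.append("bias gap above threshold")
    if results.get("robustness_drop", 1.0) > THRESHOLDS["robustness_drop"]:
        failures.append("robustness degradation above threshold")
    return failures

def log_incident(system: str, description: str, path: str = "incidents.jsonl") -> None:
    """Append a timestamped incident entry for post-market monitoring."""
    entry = {"ts": datetime.now(timezone.utc).isoformat(),
             "system": system, "description": description}
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    results = {"accuracy": 0.88, "demographic_parity_gap": 0.03, "robustness_drop": 0.07}
    failures = gate_release(results)
    print("release blocked:" if failures else "release gate passed", failures)
```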
What to watch next
Three developments will shape the rollout. First, the Commission and national regulators will publish guidance on how to classify systems and meet requirements. Second, technical standards will mature, giving companies a path to demonstrate conformity. Third, the first enforcement actions will set precedents on penalties and acceptable practice.
The EU AI Act will not answer every question about the future of AI. But it moves the debate onto practical ground: what to build, what to ban, and how to prove safety. For developers, buyers, and users, 2025 will be a year of adjustment. By 2026, the guardrails should be clearer. Whether Europe’s model becomes a template for others or a regional experiment will depend on how well it balances protection with progress.