EU Starts Enforcing Key Rules of the AI Act

Foundation model duties now in force across the EU
Europe has moved into a new phase of artificial intelligence regulation. As of August 2025, the European Union has begun enforcing key parts of its landmark Artificial Intelligence Act that apply to general-purpose AI (GPAI), often called foundation models. The change affects companies that build the large models powering chatbots, code assistants, image tools, and other generative services used by millions of people and thousands of businesses.
The AI Act, formally adopted in 2024 and billed by Brussels as a risk-based framework, introduces obligations over several years. The first bans on certain practices took effect earlier in 2025. The rules for GPAI are now active. Requirements for the most sensitive “high-risk AI systems”—such as tools used in hiring, credit scoring, or critical infrastructure—will phase in through 2026 and 2027.
What the law asks of AI model makers
Under the new provisions, developers of powerful models must meet transparency and safety standards. The European Commission says the law follows a “risk-based approach”, focusing oversight where potential harm is greatest. For GPAI, initial obligations center on documentation and responsible deployment.
- Training data transparency: Providers must publish a “summary of the content used for training”. The goal is to give users and regulators visibility into broad sources, without forcing disclosure of full datasets.
- AI safety practices: Companies are expected to implement testing and mitigation to reduce foreseeable risks, including harmful or illegal outputs. They must disclose known limitations.
- Copyright safeguards: Providers need a policy to comply with EU copyright law, including identifying and honoring rights holders’ opt-outs from text and data mining where they apply.
- Technical documentation: Firms must prepare documentation that helps authorities assess compliance. That includes system capabilities, evaluation methods, and usage guidance.
The Act also introduces extra duties for models deemed to present systemic risk, a category tied to scale and impact; the law presumes systemic risk for models trained with very large amounts of compute, above a threshold of 10^25 floating-point operations. These models face stricter evaluation, incident reporting, and cybersecurity expectations. The Commission’s new European AI Office will coordinate this area and issue further guidance.
What is already banned
Earlier this year, the EU began enforcing bans on a set of practices lawmakers consider incompatible with fundamental rights. These prohibitions aim to draw clear red lines while the rest of the regime ramps up.
- “Social scoring”: AI systems that score or rank people based on their social behavior or personal characteristics are prohibited where the resulting treatment is unjustified, disproportionate, or unrelated to the context in which the data was collected. The ban applies to private actors as well as public authorities.
- Manipulative systems: AI that uses “subliminal techniques” or exploits a person’s vulnerabilities to materially distort behavior in ways that cause, or are likely to cause, significant harm is prohibited.
- Biometric restrictions: Certain uses of biometric categorization—especially those that infer sensitive attributes such as political views or sexual orientation—and untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases are banned.
Member States’ market surveillance authorities can investigate suspected violations. The Act foresees significant penalties for non-compliance: fines for prohibited practices can reach 35 million euros or 7% of a company’s worldwide annual turnover, whichever is higher, with lower tiers for less severe breaches.
Why it matters for companies and users
The new obligations will ripple across the AI value chain. Foundation model providers will need to document training sources more clearly, harden safety processes, and respond to questions from regulators. Enterprise customers may see updated model cards, clearer usage guidance, and new controls for filtering risky outputs. Smaller startups using third-party models could gain from improved transparency but may face due diligence requests from clients who must verify compliance down the line.
Advocates of the law say these steps bring accountability to tools that are increasingly embedded in products and public services. Consumer groups argue that simple measures—like better documentation and meaningful warnings about limitations—can reduce real-world harms. Business groups broadly back legal clarity but want predictable timelines and alignment with global standards to avoid duplication.
Voices from policy and practice
In its official materials, the European Commission describes the Act as a “risk-based approach” that concentrates obligations where risks are highest and keeps requirements proportionate for lower-risk uses. The law’s own terminology underscores a focus on “high-risk AI systems” and “general-purpose AI” to distinguish duties by context and capability.
Regulators stress that the goal is not to slow innovation but to channel it. As one Commission explainer puts it, the aim is to support “safe and trustworthy” AI while protecting fundamental rights. Industry compliance teams, meanwhile, say the main challenge is operational: turning broad principles into repeatable processes for model evaluation, documentation, and incident handling.
How enforcement will work
The EU is relying on a networked model of enforcement:
- National authorities: Each Member State has designated market surveillance bodies to handle investigations, request information, and order corrective measures.
- European AI Office: A new body within the Commission will coordinate on GPAI, publish guidance, and help assess systemic risks.
- Cooperation and guidance: The Commission is expected to issue implementing and delegated acts that refine technical details over time.
Penalties can escalate for repeated or serious violations. The Act also foresees lighter treatment for startups and SMEs, recognizing their limited resources, and encourages regulatory sandboxes so companies can test compliant solutions with supervisory support.
Background: how we got here
The AI Act was first proposed in 2021. Co-legislators hammered out the final text in late 2023 and early 2024 after intense debate over policing tools, biometric technologies, and rules for rapidly advancing generative systems. The law entered into force in August 2024, with obligations taking effect in stages to give companies time to adapt.
The EU chose to regulate outputs and uses rather than specific algorithms. It set baseline duties for providers and deployers, tightened oversight for sensitive contexts, and drew bright-line bans on practices seen as unacceptable in a democratic society. The approach differs from the United States, which leans on sectoral rules and voluntary frameworks, and from China, which has issued detailed content and licensing controls. Global firms will need to navigate these differences in parallel.
What comes next
Over the next year, the Commission is expected to release more guidance on how to classify models, measure capability thresholds, and conduct risk assessments. Companies will look for clarity on acceptable ways to summarize training data, how to demonstrate copyright safeguards, and what counts as sufficient red-teaming.
Meanwhile, civil society groups will watch whether enforcement keeps pace with the law’s ambitions. Researchers want more access to data about model behavior and incidents. Public-sector agencies deploying AI in areas like healthcare and transport will refine procurement terms to reflect the new law.
For European users, little will change overnight. Apps and services will continue to evolve, but with more disclosures and, potentially, fewer surprises. The test for the AI Act will be whether its early guardrails reduce harm without dulling useful innovation. On that, Brussels, industry, and rights advocates share at least one goal: AI that is, in the Commission’s words, “safe and trustworthy”—and demonstrably so.