EU’s AI Act Kicks In: What Changes Now

Europe has entered a new phase of artificial intelligence oversight. The European Union’s AI Act began to bite this year, moving from legislative text to real rules for companies and public bodies. The law, described by the European Commission as the “first comprehensive AI law in the world,” aims to set a global benchmark. It brings immediate bans on some risky uses, clearer duties for general-purpose AI models, and a path toward stricter controls on high-risk systems over the next two years.

A new phase for AI governance in 2025

The AI Act was adopted in 2024 and entered into force later that year. Its measures roll out in stages. The bans on certain practices apply first. New transparency duties for general-purpose AI (GPAI) follow. The stricter, high-risk regime comes later, once technical standards and guidance are in place.

The European Commission has set up an AI Office to coordinate enforcement, especially for powerful foundation models. Each EU country must designate national authorities to supervise and enforce the rules. A new European AI Board will help align decisions, so companies face the same rules across borders.

What the law requires now

Several obligations are already in effect or will apply in the coming months:

  • Prohibited practices: The law bans AI uses that clearly violate fundamental rights. That includes social scoring, biometric categorization systems that infer sensitive traits such as political views or sexual orientation, and the untargeted scraping of facial images from the internet or CCTV footage to build recognition databases.
  • General-purpose AI transparency: Providers of general-purpose models must document and disclose technical information, including how the model behaves, its limitations, and the steps taken to mitigate risks. They also need to publish a sufficiently detailed summary of the content used for training, including how material protected by copyright is handled. The aim is to improve accountability without forcing disclosure of trade secrets.
  • Very capable models: The Act sets extra duties for models deemed to pose systemic risk, such as independent evaluations, incident reporting, and cybersecurity safeguards. Criteria include capability benchmarks and the compute used to train the model; the Act presumes systemic risk above a cumulative training compute of 10^25 floating-point operations.

High-risk AI systems, such as those used in hiring, education, credit scoring, or critical infrastructure, will face stricter rules later. These include risk management, data governance, human oversight, and quality management systems. Most of those obligations take effect after technical standards are finalized and a grace period ends.

Why the EU says the rules are needed

EU officials frame the Act as a rights-first approach. The European Commission says the rules aim to ensure AI systems are “safe and respect fundamental rights.” Regulators argue that clear obligations will build trust. They also say common standards will help businesses scale across the bloc.

Consumer groups welcome the early bans and transparency steps, which they say will curb harmful surveillance and misleading outputs. Business groups support the single-market approach but warn about compliance costs for small firms. Many companies say they are waiting for detailed standards from European standardization bodies to guide implementation.

How enforcement will work

Enforcement will be decentralized but coordinated. National authorities will investigate complaints, audit providers, and issue penalties. The AI Office will oversee powerful model providers and coordinate cross-border cases.

Penalties are significant. For the most serious violations, the law allows fines of up to 7% of global annual turnover or €35 million, whichever is higher. Lesser breaches draw smaller fines, but regulators say they will expect concrete risk mitigation plans and documentation.
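For orientation only, the "whichever is higher" rule works out to a simple maximum of two figures. The sketch below treats the 7% rate and the €35 million fixed amount as parameters of the top penalty tier; it is an illustrative calculation, not legal advice, and actual fines depend on the violation and the regulator's assessment.

```python
def max_fine(annual_turnover_eur: float,
             turnover_rate: float = 0.07,        # 7% of global annual turnover (top tier)
             fixed_amount_eur: float = 35_000_000  # fixed amount for the top tier (illustrative)
             ) -> float:
    """Ceiling of a fine under a 'whichever is higher' rule (sketch, not legal advice)."""
    return max(turnover_rate * annual_turnover_eur, fixed_amount_eur)

# Example: a firm with EUR 2 billion turnover faces a ceiling of EUR 140 million,
# because 7% of turnover exceeds the fixed amount. A small firm with EUR 100 million
# turnover would instead be capped by the fixed amount.
print(max_fine(2_000_000_000))  # 140000000.0
print(max_fine(100_000_000))    # 35000000
```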

To help developers comply, the Act promotes regulatory sandboxes. These are supervised test environments where companies and public bodies can trial systems, get feedback, and adjust before full deployment.

Standards and guidance still to come

The EU relies on technical standards to make many rules workable. European bodies are drafting norms on data quality, model evaluation, human oversight, and documentation. These standards will translate broad legal concepts into checklists and measurable tests.

Industry is watching for clarity on the training data summary requirement. Providers want to know how detailed it must be, and how to handle mixed datasets that include copyrighted text, images, and audio. Copyright holders, including news publishers and image libraries, are pressing for robust disclosure and licensing where needed.

Global context: other governments move too

The EU is not alone. The United States issued an AI executive order in 2023. It directs agencies to develop safety testing, protect consumers, and support innovation. The White House also announced voluntary commitments by major AI companies to “develop robust technical mechanisms to ensure that users know when content is AI-generated, such as watermarking systems.” The U.S. National Institute of Standards and Technology introduced its AI Risk Management Framework to help organizations “govern, map, measure, and manage” AI risks.

The United Kingdom convened an AI Safety Summit in 2023, which produced the Bletchley Declaration. Governments there agreed to monitor frontier models and share research on risks. Other countries, including Canada, Japan, and Brazil, are advancing their own laws or guidance.

What changes for companies and users

For developers and deployers in the EU, the near-term impact is about documentation and controls. Providers of general-purpose models must publish more information. Firms that embed those models need to check licenses and user disclosures. Public bodies must review any biometric or surveillance uses for compliance with the bans and upcoming safeguards.

  • Short term: Map systems to the Act’s categories (a minimal inventory sketch follows this list). Remove or replace prohibited uses. Prepare technical documentation and risk mitigation plans. Start internal model evaluations.
  • Medium term: Align with standards as they are published. Build or buy tools for data governance, model monitoring, and human oversight. Set up incident reporting channels.
  • Long term: For high-risk systems, implement quality management systems and third-party conformity assessments where required.
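As a minimal sketch of that mapping step, assuming a simple four-tier inventory modeled on the Act's risk categories: the class names, fields, and example systems below are hypothetical illustrations, not a template from the Act or any regulator.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"        # banned practices (e.g. untargeted facial-image scraping)
    HIGH_RISK = "high_risk"          # e.g. hiring, education, credit scoring, critical infrastructure
    LIMITED_RISK = "limited_risk"    # transparency duties (e.g. chatbots, AI-generated content labels)
    MINIMAL_RISK = "minimal_risk"    # no specific obligations beyond existing law

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    category: RiskCategory
    technical_docs_ready: bool = False
    risk_mitigation_plan: bool = False

# Hypothetical inventory entries for illustration.
inventory = [
    AISystemRecord("cv-screener", "ranks job applicants", RiskCategory.HIGH_RISK),
    AISystemRecord("support-chatbot", "answers customer questions", RiskCategory.LIMITED_RISK),
]

# Anything mapped to the prohibited tier must be removed or replaced first;
# high-risk systems go on the list for documentation and risk mitigation work.
to_remove = [s for s in inventory if s.category is RiskCategory.PROHIBITED]
needs_docs = [s for s in inventory
              if s.category is RiskCategory.HIGH_RISK and not s.technical_docs_ready]
```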

For users, the changes will be gradual. Expect clearer notices when interacting with AI systems. Over time, there should be more consistent safeguards around accuracy, bias, and appeals, particularly in areas like hiring and access to services.

Open questions

Some issues are not settled. How regulators will define a model with systemic risk is still evolving. The balance between transparency and trade secrets will be tested. Cross-border cases will probe how the AI Office and national agencies share duties. And courts will shape how the Act interacts with other laws, including data protection and copyright.

What is clear is that Europe is betting on rules to steer AI toward public trust. As the Commission puts it, the goal is to ensure systems are “safe and respect fundamental rights” while allowing innovation. Companies now face the task of turning those words into practice.