Europe’s AI Act Starts to Bite in 2025

Europe’s landmark Artificial Intelligence Act reaches its first enforcement milestone this year. Bans on the most harmful uses of AI begin to apply, and companies are moving to meet new duties. The EU says it is setting the pace for the world. Industry says the clock is ticking and guidance is needed fast.
What the law does
The EU AI Act is the bloc’s comprehensive rulebook for AI. It classifies systems by risk and sets obligations based on that risk. The European Commission calls it “the first comprehensive law on artificial intelligence worldwide.” The legal text is blunt about its goal: “This Regulation lays down harmonised rules on artificial intelligence.”
The law takes a phased approach. Bans on prohibited practices apply from February 2025. Rules for general-purpose AI, including large language models, follow in August 2025. Full requirements for high-risk systems, such as in hiring or critical infrastructure, take effect from August 2026. A new AI Office inside the European Commission will coordinate enforcement. National regulators will oversee markets and can fine violators.
What changes now
From this year, several practices are outlawed in the EU. These include:
- Social scoring, by public or private actors, which ranks people based on behavior or personal traits and leads to detrimental treatment.
- Untargeted scraping of facial images from the internet or CCTV feeds to build facial recognition databases.
- Manipulative or exploitative techniques that cause significant harm, including those that take advantage of people’s vulnerabilities.
Real-time remote biometric identification in publicly accessible spaces by law enforcement is largely prohibited. Narrow exceptions remain, such as searching for victims of certain serious crimes, and each requires strict safeguards and prior authorization.
For high-risk AI used in areas like employment, education, essential services, and law enforcement, the law sets detailed duties. Providers must run risk management, document data and models, ensure human oversight, and log and monitor performance. Systems will need conformity assessments and ongoing checks after they hit the market.
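The Act does not prescribe a logging format, so what follows is only a minimal sketch, in Python, of what one logged decision from a high-risk system might look like. Every field name here is an assumption chosen to illustrate the oversight and monitoring duties above, not a schema from the Regulation.

```python
# Illustrative only: the AI Act does not prescribe a log schema.
# All field names below are assumptions about what an auditor might ask for.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionLogEntry:
    """One logged decision from a hypothetical high-risk AI system."""
    system_id: str        # internal identifier of the AI system
    model_version: str    # version of the deployed model
    input_ref: str        # pointer to the input record, not the raw data
    output: str           # the system's decision or score
    confidence: float     # model confidence, where available
    human_reviewed: bool  # whether a person checked the output
    override: bool        # whether the reviewer changed the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

entry = DecisionLogEntry(
    system_id="hiring-screener-v2",
    model_version="2.3.1",
    input_ref="applications/2025/00042",
    output="shortlist",
    confidence=0.87,
    human_reviewed=True,
    override=False,
)
print(json.dumps(asdict(entry), indent=2))  # append to a tamper-evident audit log
```

Logging a reference to the input rather than the raw data is one way to keep an audit trail without duplicating personal data inside it.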
General-purpose AI providers also face obligations. They must disclose their models’ capabilities and limits, provide technical documentation, and comply with EU copyright law, including publishing a sufficiently detailed summary of the content used for training. Models that pose systemic risk face extra testing, incident-reporting, and cybersecurity duties.
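To make the documentation duty concrete, here is a hedged sketch of the kind of summary a provider might assemble. The structure is an assumption, the model and provider are hypothetical, and the Act specifies required information rather than a file format.

```python
# Hypothetical example: "example-gpt" and "Example AI GmbH" are invented.
# The keys are illustrative; the Act lists required information,
# not a concrete schema.
import json

model_documentation = {
    "model_name": "example-gpt",
    "provider": "Example AI GmbH",
    "capabilities": ["text generation", "summarization"],
    "known_limitations": [
        "may produce factually incorrect output",
        "not evaluated for medical or legal advice",
    ],
    "training_data_summary": (
        "Publicly available web text and licensed corpora; a detailed "
        "summary of the content used for training is published separately."
    ),
    "copyright_policy": (
        "Rights reservations under the EU text-and-data-mining rules "
        "are honored during data collection."
    ),
}

print(json.dumps(model_documentation, indent=2))
```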
Who is affected
The law has a long reach. It applies to providers and deployers in the EU. It also covers companies outside the EU if they place systems on the EU market or if their systems’ output is used in the EU. Fines can reach €35 million or 7% of global annual turnover, whichever is higher, for the most serious breaches.
Sectors likely to feel the impact first include tech platforms, recruiting services, banks, healthcare, and public bodies. Start-ups will have access to regulatory sandboxes to test systems with supervisors. The Commission and national authorities say they will offer guidance to small firms.
Supporters and critics
Backers say the law will raise standards and rebuild trust. They argue that clear rules will help responsible companies compete. Civil society groups welcome bans on the worst practices. Many had urged a stronger stance on face recognition and predictive policing.
Some industry groups warn of compliance burdens and uncertainty. They want clarity on how to classify systems and how to measure risk. They also seek harmonized guidance across member states. The law aims to avoid fragmentation, but national capacities vary.
Academic voices urge realism. Geoffrey Hinton, a pioneer of deep learning, has counseled caution as AI advances. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” he told the New York Times in 2023. His remark reflects broader concerns about dual-use technology and safety incentives.
Global ripple effects
The EU is not alone. In 2023, the White House issued a sweeping U.S. executive order that calls for the “safe, secure, and trustworthy development and use of AI.” Technical standards bodies have also stepped in. The U.S. National Institute of Standards and Technology released a voluntary AI Risk Management Framework in 2023 to help organizations manage risks across the AI lifecycle.
The United Nations General Assembly adopted a resolution on AI in 2024. It urged countries to protect human rights and share benefits. Many countries are now drafting or updating AI strategies. Regulators are watching each other. Companies expect a “Brussels effect,” in which EU rules shape global practices.
What companies should do now
Lawyers and compliance teams say early preparation is essential. Practical steps include:
- Map your AI portfolio: Identify systems in use, their purposes, and where they operate.
- Classify by risk: Assess whether a system is prohibited, high-risk, limited-risk, or minimal-risk under the EU scheme (a minimal triage sketch follows this list).
- Tighten data governance: Document datasets, sources, and cleaning methods. Track copyright-sensitive content.
- Build human oversight: Define when and how people can intervene. Train staff on escalation paths.
- Log and monitor: Set up event logging, performance metrics, and incident reporting.
- Prepare documentation: Create technical files, user instructions, and transparency notes.
- Engage suppliers: Update contracts to require necessary disclosures from model providers.
- Join a sandbox: Where available, test high-risk systems with regulators to reduce uncertainty.
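As a starting point for the mapping and classification steps, here is a minimal triage sketch over a hypothetical inventory. The keyword rules are illustrative stand-ins for real legal analysis, which turns on the Act’s annexes and the context of use.

```python
# Simplified illustration, not legal advice: real classification under
# the AI Act depends on the annexes and the context of use.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

@dataclass
class AISystem:
    name: str
    purpose: str               # free-text description of the use case
    domain: str                # e.g. "employment", "education"
    interacts_with_users: bool

# Hypothetical keyword rules standing in for a proper legal analysis.
PROHIBITED_PURPOSES = {"social scoring", "untargeted face scraping"}
HIGH_RISK_DOMAINS = {"employment", "education", "essential services",
                     "law enforcement", "critical infrastructure"}

def triage(system: AISystem) -> RiskTier:
    if system.purpose in PROHIBITED_PURPOSES:
        return RiskTier.PROHIBITED
    if system.domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if system.interacts_with_users:
        return RiskTier.LIMITED   # e.g. chatbot transparency duties
    return RiskTier.MINIMAL

inventory = [
    AISystem("cv-screener", "rank job applicants", "employment", False),
    AISystem("support-bot", "answer customer questions", "support", True),
]
for s in inventory:
    print(f"{s.name}: {triage(s).value}")
```

In practice, output like this would feed a review queue for counsel, not a final determination.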
Open questions
Key details still need to be worked out. The EU must finalize guidance on model classification and testing. Authorities need resources for audits and enforcement. Firms want clarity on what counts as a general-purpose model with systemic risk. They also seek practical templates for technical documentation and incident reporting.
Rights advocates are watching for how exceptions are used. They worry that narrow carve-outs could expand over time. Industry urges regulators to avoid conflicting national rules and to keep requirements proportionate for small developers.
The road ahead
The EU AI Act is a big bet. It tries to steer a fast-moving technology with rules that protect people and allow innovation. Whether it works will depend on implementation. It will also depend on cooperation among governments, companies, and researchers.
For now, the message is clear. High-risk uses must meet high standards. Harmful practices will not be allowed. The rest of the world is watching how Europe enforces these promises in 2025 and beyond.