EU AI Act Sets Global Pace as Rules Kick In
Key takeaways
- The European Union's AI Act has entered into force, with staggered deadlines running through 2026.
- The law sets a risk-based rulebook and introduces obligations for general-purpose AI models.
- Companies face steep penalties for non-compliance and a new layer of governance across the EU.
A new rulebook for powerful systems
The European Union has launched what the European Commission calls "the first-ever comprehensive legal framework for AI." The AI Act, approved in 2024 after years of negotiation, is now moving from text to enforcement. It introduces a risk-based system of obligations that aims to protect consumers while supporting innovation.
At its core, the law divides AI into categories by risk. Prohibited practices include social scoring by public authorities and the creation of facial recognition databases through indiscriminate scraping of images from the internet or CCTV. Real-time remote biometric identification in public spaces by law enforcement is also banned, with narrow exceptions related to serious crimes, the search for missing persons, or imminent threats.
High-risk systems face the most stringent requirements. This category covers AI used in areas such as critical infrastructure, education, employment, access to essential services, migration and border control, and law enforcement. Providers must implement risk management, high-quality data governance, technical documentation, human oversight, logging, robustness, and cybersecurity. Most high-risk systems must undergo conformity assessments and carry CE marking, similar to other regulated products in the EU.
The Act also addresses general-purpose AI (GPAI), including large language models. All GPAI models face baseline transparency rules, such as documentation and disclosures. Models designated as posing systemic risk, for example because of their scale or impact, will need to meet tougher obligations, including rigorous safety evaluations, incident reporting, and enhanced cybersecurity measures.
Deadlines: what changes and when
- February 2025 (6 months after entry into force): Bans on prohibited practices begin to apply.
- Mid-2025 (around 12 months): Obligations for general-purpose AI begin, including transparency and documentation. Stricter duties for systemic-risk models will phase in as guidance is issued by the EU's new AI Office.
- 2026 (around 24 months): Most requirements for high-risk systems take effect, including conformity assessments and post-market monitoring.
Enforcement will be led by national market surveillance authorities, coordinated by the European Commission's AI Office. Penalties scale with the type of violation. For banned practices, fines can reach up to 7% of global annual turnover or 35 million euros, whichever is higher. Lesser infringements carry lower caps.
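The "whichever is higher" rule above can be illustrated with a simple calculation. This is a back-of-the-envelope sketch of the cap for banned-practice fines only, not a model of how authorities actually set fines; the function name and sample figures are illustrative.

```python
def prohibited_practice_fine_cap(global_annual_turnover_eur: float) -> float:
    """Upper bound of a fine for a banned practice under the AI Act:
    the higher of 7% of global annual turnover or EUR 35 million."""
    return max(0.07 * global_annual_turnover_eur, 35_000_000)

# A firm with EUR 1 billion in turnover faces a cap of EUR 70 million,
# while a firm with EUR 100 million in turnover still faces the
# EUR 35 million floor, since 7% of its turnover is only EUR 7 million.
print(prohibited_practice_fine_cap(1_000_000_000))  # 70000000.0
print(prohibited_practice_fine_cap(100_000_000))    # 35000000.0
```

In other words, the 35 million euro figure acts as a floor on the cap, so smaller companies are not automatically exposed to lower maximum penalties.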
How it fits into the global patchwork
The EU's move lands in a fast-evolving international landscape. In the United States, the White House issued an executive order in 2023 that directs agencies to promote the "safe, secure, and trustworthy" development and use of AI. The National Institute of Standards and Technology (NIST) published a voluntary AI Risk Management Framework the same year, which many companies use to map, measure, and manage risks. Congress has held multiple hearings but has not passed comprehensive federal AI legislation.
In the United Kingdom, the government has taken a more "pro-innovation" approach by asking existing regulators to apply AI principles in their sectors rather than creating a single AI law. The G7's Hiroshima AI process has pushed for common standards on safety testing, transparency, and incident reporting for advanced models.
Some industry leaders have publicly encouraged guardrails. OpenAI's chief executive, Sam Altman, told U.S. senators in 2023 that "regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models." Supporters of the EU's approach say it puts those words into practice with enforceable rules, while critics warn it could burden startups with costs and delay deployment of beneficial tools.
Industry and civil society reaction
Business groups say clarity is welcome but warn of complexity. Trade associations for technology and manufacturing have asked for detailed guidance, especially on when a tool counts as "high risk" and how to demonstrate compliance efficiently. Cloud providers and model developers are preparing new documentation, disclosures, and evaluations to meet GPAI and systemic-risk obligations.
Consumer advocates and digital rights organizations have praised the bans on social scoring and on indiscriminate scraping of facial images. They argue that the risk-based approach mirrors established product safety regimes and should reduce harmful deployments in sensitive areas. Still, campaigners want strong oversight of remote biometric identification and clear redress pathways when systems fail.
What companies should do now
- Map use cases and models: Inventory where AI appears in products and internal tools. Classify each use case by risk under the AI Act.
- Stand up governance: Assign accountable owners. Establish risk management, human oversight procedures, and incident response specific to AI.
- Harden data pipelines: Document training and evaluation datasets. Improve data quality, bias testing, and lineage tracking.
- Prepare technical files: Build documentation to support conformity assessments, including model cards, evaluation reports, and cybersecurity controls.
- Label and disclose: For GPAI and generative systems, implement content labeling where required and provide clear user notices.
- Align with global frameworks: Use NIST's AI Risk Management Framework to harmonize controls across jurisdictions and reduce duplication.
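The first two steps above, inventorying use cases and assigning accountable owners, can be sketched as a simple data structure. This is a minimal, hypothetical illustration: the tier names loosely mirror the Act's categories, but classifying any real system is a legal judgment, and all names and examples here are invented.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Simplified tiers loosely mirroring the AI Act's risk categories."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"    # transparency duties only
    MINIMAL = "minimal"

@dataclass
class AIUseCase:
    name: str
    owner: str             # accountable owner, per the governance step
    tier: RiskTier
    uses_gpai_model: bool  # GPAI duties apply separately from the tier

def high_risk_backlog(inventory: list[AIUseCase]) -> list[str]:
    """Use cases that will need conformity assessment work by 2026."""
    return [uc.name for uc in inventory if uc.tier is RiskTier.HIGH]

# Hypothetical inventory entries for illustration only.
inventory = [
    AIUseCase("resume screening", "HR lead", RiskTier.HIGH, uses_gpai_model=True),
    AIUseCase("marketing copy assistant", "CMO", RiskTier.MINIMAL, uses_gpai_model=True),
]
print(high_risk_backlog(inventory))  # ['resume screening']
```

Even a lightweight register like this gives compliance teams a single place to track which deadlines apply to which systems as the phase-in dates arrive.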
Open questions and next steps
Much will depend on secondary rules and guidance. The AI Office in Brussels will issue templates, codes of practice, and reference evaluation methods, especially for systemic-risk models. National authorities will clarify conformity procedures and post-market monitoring. Technical standards bodies in Europe and internationally are drafting norms on testing, robustness, and transparency that companies can adopt.
Interoperability also looms large. Many firms will face overlapping obligations from privacy, product safety, and anti-discrimination laws. The AI Act does not replace the EU's General Data Protection Regulation. Instead, it adds a layer focused on the design, deployment, and oversight of AI systems. Coordinating these regimes will be a practical challenge for compliance teams.
The economic stakes are significant. Europe's goal is to channel investment into trustworthy AI, reduce harms, and create a single market with clear rules. Whether the Act boosts confidence without dampening innovation will be tested as deadlines arrive and enforcement begins. Some startups may need support to navigate assessments and documentation. Larger vendors may gain an advantage if they can offer "compliance-ready" services and tooling.
The bottom line
The AI Act is reshaping how advanced systems are built and used in one of the world's largest markets. Its influence will reach beyond Europe as global companies seek harmonized practices and governments consider similar rules. For now, the message to developers and deployers is clear: treat AI like any other high-impact technology. Test it, document it, oversee it, and be ready to explain it. The compliance clock has started.