EU AI Act Sets Global Pace as Safety Rules Tighten

A new rulebook for a fast-moving technology

Europe has approved the AI Act, the world’s first comprehensive law to regulate artificial intelligence. The law, adopted in 2024, takes a risk-based approach and applies not only to European companies but to any provider offering AI services in the EU. Supporters call it a template for global governance. Critics warn it could raise costs and slow innovation. Both agree it will shape the market.

“The EU becomes the very first continent to set clear rules for the use of AI,” European Commissioner Thierry Breton said in December 2023, after negotiators reached a political deal. His message reflected a larger goal: set standards early, then let the rules radiate outward.

What the law covers

The AI Act classifies systems by risk and tailors duties accordingly. It prohibits some practices outright, imposes strict obligations on high-risk uses, and sets transparency rules for general-purpose and consumer tools; a shorthand sketch of the tiers follows this overview.

  • Unacceptable risk: Certain uses are banned. These include social scoring by public authorities, systems that manipulate vulnerable users, and the creation of untargeted facial recognition databases from mass scraping. The law also places tight limits on real-time remote biometric identification in public spaces, with narrow exceptions for law enforcement. Some forms of emotion recognition face restrictions in sensitive settings.
  • High risk: AI used in areas like critical infrastructure, medical devices, education, employment screening, law enforcement, and essential services faces strict requirements. Providers must implement risk management, data governance, cybersecurity, human oversight, and quality management. They must keep technical documentation and logs, test models, and register certain systems in an EU database.
  • Limited risk: Systems must meet transparency duties. For example, AI-generated content should be labeled, and users should be told when they interact with a chatbot.
  • General-purpose AI (GPAI): Providers of large, general models must disclose capabilities and limits, share technical summaries, and comply with copyright safeguards. The most capable models face additional systemic risk obligations, including evaluations, incident reporting, and cybersecurity controls.

The law includes carve-outs for research and national security. It also attempts to support startups through regulatory sandboxes and lighter rules for small firms.
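
For readers who think in code, the sketch below restates the tiers described above as a simple lookup. It is illustrative only: the tier names, the separate general-purpose bucket, and the duty lists paraphrase this article's summary rather than the Regulation's text, and it is not a compliance tool.

```python
# Illustrative sketch of the AI Act's risk tiers as summarized in this article.
# Tier names and duty lists are paraphrases, not legal language.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # banned outright
    HIGH = "high"                   # strict obligations and documentation
    LIMITED = "limited"             # transparency duties
    GPAI = "general-purpose"        # disclosure, copyright, systemic-risk duties


EXAMPLE_DUTIES: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["prohibited: do not place on the EU market"],
    RiskTier.HIGH: [
        "risk management and data governance",
        "technical documentation and logging",
        "human oversight and cybersecurity",
        "registration of certain systems in the EU database",
    ],
    RiskTier.LIMITED: [
        "label AI-generated content",
        "tell users when they interact with a chatbot",
    ],
    RiskTier.GPAI: [
        "disclose capabilities and limits",
        "respect copyright safeguards",
        "extra evaluations and incident reporting for systemic-risk models",
    ],
}

if __name__ == "__main__":
    for tier, duties in EXAMPLE_DUTIES.items():
        print(f"{tier.value}: {'; '.join(duties)}")
```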

How enforcement will work

Implementation will be phased. Bans on unacceptable uses take effect first, about six months after the law enters into force; obligations for general-purpose models follow at roughly one year, and most high-risk requirements at two to three years. Full application will take several years.

Enforcement will be shared. National market surveillance authorities will oversee most systems. The European Commission has set up an AI Office to coordinate, supervise general-purpose models, and issue guidance. European standards bodies, led by CEN and CENELEC, are drafting harmonized technical standards to turn broad principles into checklists engineers can use.

Penalties can be significant: fines for prohibited practices can reach €35 million or 7 percent of global annual turnover, whichever is higher, with lower tiers for other violations. But much will depend on how regulators interpret terms like “systemic risk” and how quickly standards mature.

Global ripple effects

Europe is not alone. Governments around the world have moved to manage AI risks while keeping benefits in reach.

  • United States: The White House issued an Executive Order on AI in October 2023. The National Institute of Standards and Technology (NIST) published a voluntary AI Risk Management Framework to help organizations identify and mitigate harms. In February 2024, NIST launched the AI Safety Institute Consortium to coordinate testing and evaluation. NIST said the effort aims “to support the development and deployment of safe and trustworthy AI.”
  • United Kingdom: The UK hosted the AI Safety Summit in 2023 and created a national AI Safety Institute to study advanced model risks, publish evaluations, and advise regulators. The UK approach emphasizes sector regulators applying existing laws with AI-specific guidance.
  • G7 and OECD: The G7’s Hiroshima process backed voluntary codes of conduct for advanced models. The OECD updated its AI Principles to address generative AI, continuing a focus on safety, transparency, accountability, and human rights.

Industry leaders have also called for guardrails. In U.S. Senate testimony in May 2023, Sam Altman, CEO of OpenAI, told lawmakers: “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.” That view helped push governments to define testing norms and reporting expectations, even before binding rules arrive.

What companies should do now

For businesses building or using AI in or for the EU, preparation starts with mapping systems and risks; a minimal inventory sketch follows the checklist below. The same steps will also ease alignment with U.S. and UK guidance.

  • Inventory your AI: Identify models, uses, and data sources across the organization. Note where systems touch customers or critical functions.
  • Classify risk: Determine if any system could be high-risk under EU definitions, and whether consumer-facing tools trigger transparency duties.
  • Strengthen governance: Set up cross-functional oversight. Define roles for model owners, risk officers, legal, and security. Track updates and incidents.
  • Document and test: Keep technical documentation, training data summaries, and evaluation results. Establish human oversight and fallback procedures.
  • Engage with standards: Follow emerging EU standards and NIST guidance. Align with recognized practices on robustness, bias testing, privacy, and red-teaming.
  • Plan for suppliers: Update contracts to require risk information from model and data vendors. Verify claims about training data and IP compliance.
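
To make the first two steps concrete, here is a minimal sketch of what an inventory record and a first-pass triage might look like. Everything in it is an assumption for illustration: the AISystem fields, the HIGH_RISK_AREAS set, and the preliminary_tier labels are invented shorthand for the categories discussed in this article, not the Act's legal tests, and any real classification needs legal review.

```python
# Hypothetical starting point for the "inventory" and "classify" steps above.
# Field names, keyword lists, and tier labels are illustrative assumptions.
from dataclasses import dataclass, field

# Example high-risk areas named in this article; a real inventory would map
# systems to the Act's actual annex categories with counsel.
HIGH_RISK_AREAS = {
    "critical infrastructure", "medical devices", "education",
    "employment screening", "law enforcement", "essential services",
}


@dataclass
class AISystem:
    name: str
    purpose: str
    area: str                      # business or deployment area
    user_facing: bool              # interacts directly with people?
    generates_content: bool        # produces synthetic text, images, or audio?
    data_sources: list[str] = field(default_factory=list)


def preliminary_tier(system: AISystem) -> str:
    """Rough first-pass triage to flag systems for closer, legal review."""
    if system.area.lower() in HIGH_RISK_AREAS:
        return "review as potentially high-risk"
    if system.user_facing or system.generates_content:
        return "check transparency duties (labeling, chatbot disclosure)"
    return "likely minimal obligations; monitor guidance"


if __name__ == "__main__":
    hiring_tool = AISystem(
        name="cv-screener",
        purpose="rank job applications",
        area="employment screening",
        user_facing=False,
        generates_content=False,
        data_sources=["applicant CVs"],
    )
    print(hiring_tool.name, "->", preliminary_tier(hiring_tool))
```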

Supporters and skeptics

Backers say the AI Act will provide certainty and protect rights. They argue clear rules will open the market for trustworthy tools. Civil society groups welcome bans on the most intrusive surveillance and stronger guardrails for high-stakes decisions.

Industry groups worry about compliance costs and unclear thresholds for “systemic risk” in general-purpose models. Open-source advocates warn that sweeping duties could chill research if obligations apply to model weights released for public use. EU lawmakers added exemptions for research and for free and open-source components released outside commercial products, but debates over scope will continue during implementation.

What to watch

  • Technical standards: How European standards bodies and international groups translate principles into test methods and metrics.
  • Interoperability: Whether EU requirements align with NIST’s testing guidance and UK evaluation practices, reducing fragmentation.
  • Enforcement posture: Early cases by national authorities and how the AI Office defines systemic-risk obligations for the largest models.
  • Impact on SMEs: Whether sandboxes and guidance are accessible to startups and small suppliers.
  • Open-source treatment: Clarity on when publishing model weights triggers duties and what documentation is expected.

The EU has set a marker. Other jurisdictions are building their own tools. Together, they point toward a common direction: more testing, more transparency, and more accountability for systems that affect people’s lives. The details will matter. But the message is clear. The era of voluntary guardrails is giving way to enforceable rules.