EU’s AI Act Takes Effect: What Changes Now

Europe’s landmark Artificial Intelligence Act has entered into force, moving the world’s first comprehensive AI rulebook from negotiation to implementation. The law introduces a risk-based framework for AI, sets limits on high-risk uses, and imposes new transparency duties on powerful general-purpose models. Companies are now preparing for phased obligations and heightened scrutiny, while regulators set up new oversight structures across the bloc.
What the law does
The AI Act is designed to govern how AI systems are developed, marketed, and used within the European Union. Its purpose is stated plainly in the text: “This Regulation lays down harmonised rules for the placing on the market, the putting into service and the use of artificial intelligence systems in the Union.” (EU AI Act, Article 1). Lawmakers have emphasized the dual goal: protecting people while enabling innovation.
In a statement summarizing the intent of the regulation, the European Parliament said the rules aim to “ensure that AI systems used in the EU are safe and respect fundamental rights and EU values.” That approach mirrors principles endorsed globally. The OECD’s 2019 recommendation notes that “AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.”
Risk tiers and prohibited uses
The AI Act sorts systems into risk categories, with obligations increasing as the risk rises. At the top of the scale are practices the EU considers unacceptable, which will be banned once the relevant provisions apply. These include:
- Social scoring, whether by public authorities or private actors, that leads to unjustified or disproportionate disadvantage.
- Biometric categorization based on sensitive traits such as political beliefs, religion, or sexual orientation.
- Emotion recognition in workplaces and schools.
- Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases.
- Certain forms of real-time remote biometric identification in public spaces for law enforcement, subject to narrow and strictly regulated exceptions.
Below this tier is a large class of “high-risk” systems, such as AI used in critical infrastructure, education and exams, employment and worker management, essential services like credit scoring, border control, and certain law enforcement contexts. Providers of these systems must meet extensive requirements on data governance, documentation, human oversight, accuracy, robustness, and cybersecurity.
General-purpose AI and deepfakes
The law also addresses general-purpose AI (GPAI), including frontier models that can be adapted to many tasks. Providers face transparency obligations such as publishing sufficiently detailed summaries of the content used for training, putting policies in place to comply with EU copyright law, supplying technical documentation to downstream developers, and ensuring that AI-generated content can be detected. For models deemed to pose systemic risks, the bar is higher: risk assessments, adversarial testing, incident reporting, and other safeguards coordinated at EU level.
On the content side, the Act introduces duties to label or signal synthetic media. That includes deepfakes and AI-generated images, audio, and video, so that people can understand when content is machine-produced. The aim is to reduce misinformation risks without restricting legitimate research and creative uses.
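The Act does not prescribe a single technical format for these signals; industry efforts such as C2PA content credentials and watermarking are candidate mechanisms. Purely as an illustration, the short Python sketch below (hypothetical field names, not a format defined by the Act) shows one way a provider might attach a machine-readable disclosure alongside a generated media file.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def write_disclosure(output_path: str, model_name: str) -> Path:
    """Write a machine-readable 'AI-generated' disclosure next to a media file.

    Hypothetical sidecar format for illustration only; real deployments would
    more likely use an interoperable standard such as embedded provenance metadata.
    """
    disclosure = {
        "ai_generated": True,
        "generator": model_name,
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "subject_file": Path(output_path).name,
    }
    p = Path(output_path)
    sidecar = p.with_name(p.stem + ".disclosure.json")
    sidecar.write_text(json.dumps(disclosure, indent=2))
    return sidecar

# Usage: write_disclosure("campaign_image.png", "example-image-model")
```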
Who is affected
The rules apply across the lifecycle of AI systems offered on the EU market, regardless of where providers are based, and cover:
- Providers that develop or place AI systems on the market.
- Deployers (users) that operate AI systems in their activities.
- Distributors and importers that make AI systems available in the EU.
- General-purpose model providers offering foundational capabilities.
Public bodies and private companies alike will face obligations aligned with their role and the system’s risk tier. Small and medium-sized enterprises can expect guidance, sandboxes, and support from national authorities, reflecting the law’s effort to avoid stifling smaller innovators.
Enforcement, timelines, and penalties
The regulation applies in phases. Prohibitions on unacceptable practices apply six months after entry into force, obligations for general-purpose AI at twelve months, and most of the high-risk regime after two years, with some categories extending to three. The European Commission has established a new AI Office to coordinate enforcement, especially for GPAI and systemic-risk models, while national authorities will supervise high-risk uses within their jurisdictions.
Penalties are significant. Breaches of the prohibited-practice rules can draw fines of up to 7% of global annual turnover or 35 million euros, whichever is higher, with lower bands for other violations. The prospect of substantial penalties is intended to encourage early compliance and robust internal controls.
What companies are doing now
Firms operating in the EU are taking practical steps to prepare. According to the U.S. National Institute of Standards and Technology, the AI Risk Management Framework is “intended to help organizations manage risks to individuals, organizations, and society associated with AI.” Companies are using frameworks like NIST’s to structure compliance programs while they await detailed guidance and standards.
- Map your AI portfolio: Inventory systems, classify risk levels, and identify high-risk uses under the Act (a minimal sketch of such an inventory follows this list).
- Strengthen data governance: Document sources, ensure quality and representativeness, and address copyright for training data.
- Build documentation pipelines: Technical files, model cards, and user instructions should be complete and current.
- Set up human oversight: Define when and how humans can intervene, override, or review automated decisions.
- Test and monitor: Pre-deployment evaluations and ongoing monitoring for bias, security, and performance drift.
- Label synthetic media: Plan for detection signals and disclosures for AI-generated content.
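As a concrete illustration of the first step above, here is a minimal Python sketch of how a compliance team might record an internal AI inventory and flag systems that need further work. The tier names, domain keywords, and action items are hypothetical simplifications; the Act defines risk categories in legal language, and real classification requires case-by-case legal review rather than keyword matching.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # banned practices (e.g., social scoring)
    HIGH = "high"               # Annex III-style uses needing full controls
    LIMITED = "limited"         # transparency duties (e.g., labeling synthetic media)
    MINIMAL = "minimal"         # no specific obligations beyond general law

# Illustrative keywords only; real scoping follows the Act's annexes and guidance.
HIGH_RISK_DOMAINS = {"employment", "education", "credit_scoring",
                     "critical_infrastructure", "border_control", "law_enforcement"}

@dataclass
class AISystemRecord:
    name: str
    domain: str                       # business area where the system is used
    is_general_purpose: bool = False
    generates_synthetic_media: bool = False
    tier: RiskTier = RiskTier.MINIMAL
    open_actions: list[str] = field(default_factory=list)

def classify(record: AISystemRecord) -> AISystemRecord:
    """Assign a provisional risk tier and the compliance actions it implies."""
    if record.domain in HIGH_RISK_DOMAINS:
        record.tier = RiskTier.HIGH
        record.open_actions += ["data governance review", "technical documentation",
                                "human oversight plan", "conformity assessment"]
    elif record.generates_synthetic_media:
        record.tier = RiskTier.LIMITED
        record.open_actions.append("label outputs as AI-generated")
    if record.is_general_purpose:
        record.open_actions.append("training-content summary and copyright policy")
    return record

# Usage example
cv_screener = classify(AISystemRecord(name="CV screening model", domain="employment"))
print(cv_screener.tier, cv_screener.open_actions)
```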
Supporters and skeptics
Rights groups have welcomed the bans on intrusive surveillance practices and the emphasis on fundamental rights. Many researchers also see benefits in clearer standards and documentation. Industry groups generally support harmonized rules that avoid a patchwork of national requirements, but warn against overly prescriptive obligations that could slow deployment or push startups away.
Academic experts note a practical challenge: translating legal mandates into technical standards and repeatable audits. That work will involve European and international standards bodies and will shape how burdensome—or predictable—the regime becomes in practice.
Global ripple effects
The EU’s move is part of a broader wave of governance activity. The United States has advanced voluntary commitments with major AI developers and published federal guidance, while Japan, the U.K., and others emphasize innovation-friendly, non-binding approaches. Despite different tactics, there is a shared trajectory toward trustworthy AI principles: safety, transparency, accountability, and respect for rights.
Policymakers and companies are watching for convergence. If standards and assurance methods align across regions, compliance could get simpler and costs lower. If not, providers may face multiple audits and document sets for different markets.
What to watch next
- Delegated acts and guidance: Clarifications on high-risk categories, GPAI obligations, and testing methods.
- Standards and audits: How conformity assessments and post-market monitoring will work in practice.
- National supervision: The capacity of Member States’ authorities to enforce consistently.
- Innovation pathways: Regulatory sandboxes and support measures for SMEs and research labs.
The AI Act marks a turning point in how advanced software will be built and used in Europe. Its success will hinge on precise rules, workable standards, and steady enforcement. For now, one message is clear: the era of ungoverned AI in the EU is over, and the rest of the world is taking note.