EU AI Act Takes Effect: What Companies Must Do Now

Europe’s landmark Artificial Intelligence Act is now in force, setting a global benchmark for how governments will shape the technology. The law entered into force in August 2024 after formal approval earlier that year. It uses a risk-based model to define obligations. The stated aim, in the law’s opening articles, is to "lay down harmonised rules on artificial intelligence" and to ensure that AI placed on the EU market is safe and respects fundamental rights. Companies building or deploying AI in Europe now face a clear timeline to comply.

What the law does

The EU AI Act is the world’s first comprehensive AI statute. It categorizes AI systems by risk. Low-risk tools face light duties. Higher-risk tools face strict requirements. Some uses are banned outright. The structure is meant to align innovation with safety. The European Commission sums up the approach simply: the higher the risk, the heavier the rules.

  • Prohibited practices: The Act bans certain uses seen as incompatible with EU values. These include social scoring by public authorities, AI that manipulates people in harmful ways, and biometric categorization based on sensitive characteristics. Real-time remote biometric identification in publicly accessible spaces is tightly restricted, with narrow exceptions for law enforcement.
  • High-risk systems: AI used in areas such as critical infrastructure, education, employment, essential services, law enforcement, migration, and justice falls into a high-risk tier. Providers must meet requirements on risk management, data governance, documentation, human oversight, robustness, and security.
  • General-purpose AI (GPAI): Foundation models and other general-purpose systems must disclose technical information and follow transparency duties. Providers need to document model capabilities and limitations. They must also respect EU copyright rules and support downstream developers with guidance.

The final text also acknowledges open-source development. Developers who share models freely and do not place them on the market have lighter obligations, except where prohibited uses or high-risk deployments are involved. Regulators say this is intended to protect research while keeping guardrails for real-world use.

Key dates and timelines

Not all obligations apply at once. The law staggers duties over months and years; a rough mapping of these offsets to calendar dates follows the list below.

  • Entry into force: August 2024.
  • Bans on prohibited uses: Six months after entry into force.
  • General-purpose AI transparency duties: Twelve months after entry into force.
  • High-risk system requirements: Twenty-four months after entry into force for most high-risk systems, extending to thirty-six months for AI embedded in products already covered by EU safety legislation.
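For planning purposes, those offsets translate into approximate calendar dates. The short sketch below is illustrative only: it assumes the widely reported entry-into-force date of 1 August 2024 and rounds to whole months, so the exact application dates in the Act’s text may differ slightly.

```python
# Illustrative only: map the staggered offsets onto rough calendar dates,
# assuming entry into force on 1 August 2024. The application dates in the
# Act's own text are authoritative, not this sketch.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (the day of month stays the same)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

ENTRY_INTO_FORCE = date(2024, 8, 1)  # assumption for illustration

milestones = {
    "Bans on prohibited uses": 6,
    "General-purpose AI transparency duties": 12,
    "Most high-risk system requirements": 24,
    "High-risk AI embedded in regulated products": 36,
}

for label, offset_months in milestones.items():
    print(f"{label}: ~{add_months(ENTRY_INTO_FORCE, offset_months).isoformat()}")
```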

The European Commission has also created an AI Office to coordinate implementation. National supervisory authorities in each member state will oversee enforcement. Companies should expect guidance and delegated acts to clarify technical points before major deadlines arrive.

What companies should do now

For many businesses, the first task is to map where AI sits in products and operations. Legal classification depends on purpose and context of use. Documentation will be essential.

  • Inventory your AI: List models, datasets, use cases, and suppliers. Identify whether any use is prohibited or likely high-risk; a minimal sketch of such an inventory follows this list.
  • Assign accountability: Set up governance. Name a responsible lead for AI compliance. Train product teams on the Act.
  • Build documentation: Prepare technical files, data lineage, evaluation reports, and human oversight plans, especially for high-risk candidates.
  • Adopt standards: Use emerging norms to speed compliance. ISO/IEC 42001 (AI management systems) and the NIST AI Risk Management Framework can help structure processes.
  • Support transparency: Add user-facing notices where required. Provide clear instructions for safe use. Label synthetic content where appropriate.
  • Review contracts: Update vendor and customer terms to cover data rights, testing, incident reporting, and model changes.

Global context and convergence

Europe’s move does not stand alone. The United States is building a patchwork of rules through agencies, standards bodies, and state laws. In October 2023, the White House said its executive order "establishes new standards for AI safety and security" and directed agencies to develop testing and reporting practices. NIST is leading much of that work. Its AI Risk Management Framework says it aims "to help foster the development and use of AI systems that are trustworthy." Industry groups expect more cross-Atlantic cooperation on model evaluations, security testing, and content provenance.

Standards are likely to be a bridge. European regulators will point to harmonized standards developed with CEN-CENELEC. Internationally, ISO/IEC 42001 and related technical specifications offer a compliance toolkit for organizations. Content provenance, supported by projects like C2PA, is gaining ground as a way to signal when images, audio, or text were machine-generated.

Enforcement and penalties

Enforcement will be shared. The Commission’s AI Office will coordinate complex cases, including those involving general-purpose models. National authorities will handle most oversight and sanctions. The law allows fines of up to 35 million euros or 7 percent of worldwide annual turnover, whichever is higher, for the most serious violations, with lower caps for lesser breaches. Regulators say the goal is deterrence, not revenue. Companies can also face orders to pull systems from the market or to fix defects under tight deadlines.
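As a rough illustration of how a turnover-linked cap works, the sketch below applies the top-tier figures cited above. It is arithmetic, not legal guidance, and lower caps apply to other categories of breach.

```python
# Illustrative arithmetic only: the top-tier cap is the higher of
# EUR 35 million or 7% of worldwide annual turnover.
def max_fine_eur(annual_turnover_eur: float,
                 pct_cap: float = 0.07,
                 flat_cap_eur: float = 35_000_000) -> float:
    """Return the ceiling of a turnover-linked fine."""
    return max(flat_cap_eur, pct_cap * annual_turnover_eur)

# A firm with EUR 2 billion in turnover faces a ceiling of EUR 140 million;
# a firm below EUR 500 million in turnover hits the flat cap instead.
print(f"{max_fine_eur(2_000_000_000):,.0f}")   # 140,000,000
print(f"{max_fine_eur(100_000_000):,.0f}")     # 35,000,000
```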

Concerns and criticism

Business groups warn about compliance costs, especially for small and mid-sized firms. They say documentation, testing, and legal reviews can slow product cycles. Civil society groups welcome the risk-based approach but want clearer rules on biometric surveillance and emotion recognition. They also seek strong remedies for people harmed by AI decisions. Open-source communities have pressed for clear protections so that research and model sharing remain viable. Policymakers argue the final text tries to balance these interests by focusing the heaviest duties on providers and deployers of high-risk systems and on developers of the most capable general-purpose models.

What to watch next

The next year will be about guidance and standards. Expect drafts on technical documentation, model evaluations, and incident reporting. Providers of general-purpose models will publish system cards and training data summaries. High-risk deployers will pilot governance and human oversight playbooks. Auditors and notified bodies will gear up. The market is likely to reward tools that make compliance easier, such as automated documentation and red-teaming services.

For now, the message from Brussels is simple. The law calls for safe, fair, and transparent AI. It says it will "ensure that AI systems placed on the Union market are safe." That target sets a clear direction for the industry. The path to get there will be defined by the details that arrive in the months ahead.