AI Rules Tighten: What the EU and US Do Next

Regulators sharpen their focus on AI

Governments on both sides of the Atlantic are moving from debate to enforcement on artificial intelligence. The European Union has finalized its landmark AI Act, the first comprehensive law of its kind. In the United States, the White House has directed agencies to set safety and transparency standards. Companies that build or deploy AI now face a new reality: compliance will be as strategic as code.

Officials say the goal is not to halt innovation. It is to make sure powerful systems are built and used responsibly. The EU calls its approach “risk-based”. The Biden administration framed its 2023 executive order around “safe, secure, and trustworthy” AI. The message is clear. AI can bring benefits, but it must meet basic tests of safety, fairness, and accountability.

EU AI Act: what the law includes

The EU AI Act was proposed in 2021 and formally adopted in 2024 after lengthy negotiations. It divides AI uses into tiers and imposes stricter rules as risks rise. The law will phase in over the next two years.

  • Bans on “unacceptable risk” systems: The Act prohibits certain applications outright. These include social scoring by public authorities and AI that manipulates behavior in harmful ways. It also places tight limits on real-time biometric identification in public spaces.
  • Obligations for “high-risk” systems: Tools used in critical areas, such as hiring, education, medical devices, and essential services, must meet detailed requirements. These include risk management, high-quality datasets, logging, human oversight, and post-market monitoring. Providers must undergo conformity assessments before placing these systems on the EU market.
  • Rules for “limited risk” and transparency: Some systems must disclose that users are interacting with AI. This applies to chatbots and synthetic content. Labels and documentation are expected to help users understand capabilities and limits.
  • General-purpose AI (GPAI): Developers of broad models face transparency duties. They must publish technical documentation and summaries about training data. For very capable GPAI models that pose “systemic risk” under a compute-based threshold, the law adds extra safeguards. These include robust evaluation, adversarial testing, incident reporting, and measures to address content authenticity, such as provenance and watermarking.

Enforcement will be shared. A new EU-level office will coordinate rules for general-purpose models. National authorities will police high-risk uses in their sectors. Penalties can be significant, including fines linked to global turnover for serious violations.

United States: standards first, then oversight

The U.S. does not have a comprehensive AI law. Instead, the federal government is building a framework through executive action and agency rules. In October 2023, the White House issued an executive order on the “Development and Use of Artificial Intelligence” focused on safety, security, and civil rights.

  • Testing and reporting: Developers training very large models must share safety test results with the government. The order highlights independent “red-teaming” to probe models for dangerous capabilities.
  • Standards and guidance: The National Institute of Standards and Technology is expanding its AI Risk Management Framework and releasing evaluation guidance for advanced systems. The Department of Commerce was asked to issue guidance on content provenance and watermarking.
  • Use in government: Agencies are adopting common rules to manage AI in public services. This includes inventories of AI use, impact assessments, and limits on sensitive applications.
  • Security and bio/chem risks: The order directs new safeguards where AI could increase chemical, biological, cyber, or critical infrastructure risks.

States are also active. Several have passed or proposed laws targeting deepfakes, automated hiring tools, and consumer privacy. This patchwork is still evolving. Businesses that operate nationally must track both federal guidance and state rules.

What companies need to do now

For many organizations, the most urgent work is mapping where and how they use AI. Legal and technical teams are reviewing systems against new criteria and documenting controls. Experts say the winners will treat compliance as operational discipline, not an afterthought.

  • Inventory and classify systems: Identify AI across the organization. Determine which uses may be “high-risk” under EU rules or sensitive under U.S. guidance.
  • Build a risk program: Establish processes for data governance, model evaluation, human oversight, and incident response. Align with frameworks such as NIST’s AI Risk Management Framework.
  • Document and disclose: Maintain technical documentation, testing records, and user-facing explanations. Expect to provide “model cards”, training data summaries, and clear user notices.
  • Prepare for audits: For high-risk EU uses, plan for conformity assessments and post-market monitoring. For advanced models, prepare to share evaluation methods and results with regulators.
  • Mark synthetic media: Implement content provenance or watermarking where required to help users spot AI-generated material.

Startups face particular challenges. Compliance can be costly and complex. But investors increasingly ask about AI governance. A basic program—policies, checklists, and documented testing—can reduce both regulatory and reputational risk.

Open models, research, and the transparency puzzle

One of the most sensitive debates centers on openness. Some argue that open-weight models foster accountability and broaden access. Others warn that powerful open models could lower barriers to misuse. The EU AI Act attempts a middle path. It recognizes “general-purpose AI” while adding extra duties for models that pose systemic risks. The U.S. order pushes for rigorous evaluation and sharing of safety results with the government, while leaving room for open research.

Researchers say transparency remains hard. Training data for large models often mixes public and proprietary sources. Copyright and privacy questions are active in courts. Evaluation is also imperfect. Benchmarks may not capture real-world misuse. That is why policymakers emphasize iterative testing and post-deployment monitoring. As one NIST document notes, the aim is to manage risks to people, organizations, and society over the AI system’s life cycle.

Impact on consumers and workers

For consumers, the rules promise clearer labeling and recourse. Users should know when they are interacting with AI and how to report problems. Deepfakes will be harder to pass off as genuine in regulated contexts. For workers, the focus is on transparency and fairness in automated decisions. Systems used for hiring and evaluation will carry stronger oversight and data quality requirements in Europe, and closer scrutiny in the U.S.

Civil society groups support limits on surveillance and discrimination. Industry groups warn that rigid rules could slow innovation or entrench incumbents. Regulators say they will adjust as evidence builds. The EU’s phased approach and the U.S. reliance on standards both aim to balance speed with caution.

What to watch next

  • Timelines: EU bans on “unacceptable risk” uses come first, followed by transparency duties and high-risk obligations. General-purpose AI rules, including for “systemic risk” models, will arrive as guidance is finalized.
  • Technical standards: Expect more detail on testing, content provenance, and model reporting. Standards bodies and regulators will define what counts as sufficient “red-teaming” and robust evaluation.
  • Global alignment: The G7 and other forums are pursuing voluntary codes and interoperability. Companies will push for common definitions to reduce duplication.
  • Enforcement cases: Early actions by EU authorities or U.S. regulators will set the tone. These cases will clarify where lines are drawn and how penalties are applied.

The legal environment for AI is no longer hypothetical. It is arriving in policy memos, standards, and soon, inspections. Builders and buyers of AI systems should prepare for scrutiny. The core expectations are not exotic. Know what your system does. Test it. Document it. Give users clear information. Fix problems fast. Those who meet these simple rules will be better positioned as AI moves from promise to practice under the law.