AI Rules Tighten: From Pledges to Enforcement

A global shift from principles to practice

Governments are moving from broad promises to concrete rules for artificial intelligence. In Europe, lawmakers have approved the AI Act, a risk-based law that aims to set clear obligations for developers and deployers. In the United States, agencies are rolling out guidance under a 2023 executive order, while the National Institute of Standards and Technology (NIST) expands testing and evaluation tools. At the United Nations, member states endorsed a resolution promoting “safe, secure and trustworthy” AI for development. The result is a new phase for the industry: compliance, documentation, and measurable safety claims.

Europe sets the pace: the AI Act

The European Union’s AI Act is designed around risk tiers. It bans certain uses judged too harmful, such as social scoring by public authorities. It imposes strict duties on high-risk systems, including many AI tools used in critical infrastructure, employment, education, medical devices, and law enforcement. Providers in these categories must establish risk management processes, ensure high-quality datasets, keep detailed logs, provide transparency to users, and guarantee meaningful human oversight.

General-purpose AI models and generative systems also face obligations. Providers must disclose capabilities and limits, address systemic risks at scale, and document training data practices in a way regulators can assess. The EU plans a phased implementation over the next few years, alongside new and existing oversight bodies. The European Commission has called the AI Act “the first comprehensive law on AI in the world”, and many global companies are preparing to align with it, regardless of where they are based.

U.S. and global steps gain speed

In the United States, the 2023 AI executive order directed agencies to set standards for safety testing, watermarking of synthetic media, and federal procurement. NIST released the AI Risk Management Framework (AI RMF) in 2023 and launched the U.S. AI Safety Institute in 2024 to develop evaluation methods for advanced models. NIST describes the framework as a tool “to better manage risks to individuals, organizations, and society”.

International coordination is accelerating. In March 2024, the UN General Assembly adopted its first AI resolution, urging countries to promote “safe, secure and trustworthy AI systems” that support sustainable development. The UK convened the AI Safety Summit in late 2023, producing the Bletchley Declaration, where signatories recognized risks from powerful models and the need for shared research on safety evaluation. While the documents are non-binding, they are shaping national agendas and standards efforts.

What companies should do now

Lawyers and compliance officers say the new regime rewards preparation. Many organizations are treating AI like other regulated technologies, with risk controls embedded in everyday workflows. Common early moves include:

  • Inventory and classification: Map all AI systems in use or in development. Label them by risk category and business criticality.
  • Risk management: Adopt a process to identify, measure, and mitigate model risks, including bias, privacy, robustness, and misuse.
  • Data governance: Track training and fine-tuning datasets. Document sources, licensing, consent where applicable, and filtering of sensitive attributes.
  • Human oversight: Define clear points where people can review, approve, or override model outputs. Train staff on limitations and failure modes.
  • Transparency artifacts: Publish user-facing notices and create technical documentation, such as model cards, system cards, and change logs.
  • Evaluation and red-teaming: Test models for safety, security, and performance under realistic threats. Record methods and results.
  • Incident response: Establish a process to detect, report, and fix AI incidents, including harmful outputs or data leaks.
  • Supplier diligence: Update contracts to require disclosures, testing results, and update cadences from third-party AI providers.

The hard questions: rights, safety, and infrastructure

Copyright and data use remain contested. News organizations, authors, and image libraries have sued AI companies over training data. Plaintiffs say unauthorized use of their works violates the law and harms their markets. AI developers argue that learning from public content is lawful and socially beneficial. OpenAI has said, “We believe that training AI models using publicly available internet materials is fair use.” Courts in the U.S. and Europe are now working through these claims. The outcomes will shape how models are built and priced.

Another pressure point is safety evaluation for advanced systems. Policymakers want rigorous, repeatable tests for capabilities like code execution, biosecurity-relevant knowledge, and autonomy. The U.S. AI Safety Institute and its international counterparts are building benchmarks and testbeds. Providers are also scaling internal red teams and external audits. The aim is to move beyond marketing claims and toward verifiable evidence of safety.

AI’s physical footprint is part of the policy debate. Data centers demand electricity and water for cooling. The International Energy Agency warned in 2024 that “electricity consumption from data centres, AI and the cryptocurrency sector could double by 2026”. Utilities and regulators are watching load growth in major hubs. The industry says improvements in chip efficiency, smarter workload scheduling, and procurement of renewables can help, but timelines for grid upgrades are long. Companies are experimenting with heat reuse, liquid cooling, and siting near abundant clean power to limit emissions.

Standards will do heavy lifting

Standards bodies are turning broad principles into checks and controls. ISO/IEC 42001, published in 2023, outlines a management system for AI. It borrows from familiar quality and security frameworks, making it easier for businesses to adapt. The European Committee for Standardization and global partners are producing technical norms that regulators can reference. These documents do not replace law, but they provide a common language for audits and supplier reviews.

Watermarking and content provenance are another active area. Tech and media firms are adopting credentials that attach tamper-evident metadata to images and videos, helping users see when content was created by a model and how it was edited. Adoption is uneven, and metadata can be stripped, but it gives platforms and publishers a baseline signal. Regulators may require such disclosures for certain use cases, including political ads.

What to watch next

Over the next year, expect more guidance from regulators on how to classify AI systems, how to calculate risk, and how to prove compliance. EU authorities plan to publish implementing acts and codes of practice for general-purpose models. U.S. agencies will test AI in high-stakes settings such as health, finance, and critical infrastructure. Courts will issue early rulings on copyright and on consumer protection claims involving AI-generated content.

The broader question is whether rules can keep pace without stifling useful innovation. Advocates say guardrails will build trust and open new markets. Skeptics warn that complex requirements could entrench incumbents. Both sides agree that clarity matters. As one NIST document puts it, the goal is “to better manage risks to individuals, organizations, and society” while enabling progress. That balance will define AI’s next chapter.

For businesses, the message is clear. The era of voluntary pledges is giving way to audits, evidence, and accountability. Those that invest early in governance, documentation, and robust testing will be better placed to comply—and to compete.