Global AI Rulebook Takes Shape
Governments move from principles to practice
Artificial intelligence is entering a new phase: the rulemaking era. After years of voluntary guidelines and industry pledges, governments are translating principles into binding requirements. The European Union has adopted the AI Act, the first comprehensive law of its kind among major economies. In the United States, the federal government is leaning on standards, procurement, and enforcement, guided by a sweeping 2023 executive order. The United Nations has urged all countries to develop trustworthy AI, while standards bodies have published management frameworks. Together, these steps are setting the contours of a global AI rulebook.
For companies that build or use AI, the message is plain. Compliance is no longer optional, and documentation will be as important as code. Investors, customers, and regulators are asking the same question: can the system be trusted?
What the EU AI Act actually does
The EU AI Act takes a risk-based approach. It distinguishes between prohibited uses, high-risk systems, and lower-risk applications that carry transparency obligations. The law aims to protect fundamental rights while allowing innovation to continue. It was agreed in 2024 after intense negotiations among EU institutions, with phased obligations to follow.
- Unacceptable risk: Certain practices are banned in the EU, including public authorities social scoring and manipulative systems that can cause harm, according to the final political agreement.
- High risk: AI used in areas such as employment, education, critical infrastructure, medical devices, and law enforcement faces strict requirements. Providers must implement risk management, data governance, human oversight, robustness, and cybersecurity measures, and register systems in an EU database.
- Transparency: Users must be informed when interacting with AI chatbots, and synthetic content (deepfakes) must be labeled in most cases.
- General-purpose AI: Developers of broadly capable models are subject to documentation and safety duties, including technical papers and content provenance measures. Models deemed capable of creating systemic risks face enhanced obligations such as adversarial testing and incident reporting.
Important details, such as guidance on technical documentation and conformity assessments, will arrive through implementing acts and standards. That means 2025 and beyond will be about how companies meet the law, not whether it applies.
How firms are preparing
Large technology providers and regulated-sector companies are building internal AI governance structures. These efforts resemble the maturing of cybersecurity a decade ago: policies, inventories, controls, and audits.
- AI inventories: Cataloging models and use cases, with contacts, owner accountability, and intended purposes.
- Risk assessments: Evaluating inputs, outputs, data provenance, fairness metrics, and failure modes. Many organizations are adapting their existing model risk management from finance to AI across departments.
- Red-teaming and testing: Structured stress tests for prompt injection, jailbreaks, and safety issues before deployment and on a recurring schedule.
- Documentation: Expanding model cards and datasheets to capture training data sources, limitations, and recommended uses.
- Human oversight: Clear procedures for when people must review or can override AI outputs, especially in hiring, credit, and health.
- Incident response: Channels to report AI-related harms or model failures, with playbooks for mitigation and notification.
Smaller firms face the same direction of travel with fewer resources. They are turning to cloud providers for built-in guardrails, and to industry groups for shared checklists. Several insurers are piloting policies that require minimum AI controls, signaling that risk transfer will hinge on governance maturity.
The U.S. approach: standards, enforcement, and procurement
Rather than a single federal AI law, the United States has stitched together a policy framework. The White Houses 2023 executive order directs agencies to advance safety testing, watermarking research, civil rights enforcement, and federal use standards. The National Institute of Standards and Technology (NIST) published its AI Risk Management Framework (RMF) in January 2023, offering a common vocabulary for trustworthy AI. Agencies from the Federal Trade Commission to the Department of Justice have signaled they will use existing laws against unfair or deceptive AI practices.
The result is a softer, but still consequential, regime. Federal contractors can expect their AI to be examined against RMF-aligned controls. Consumer protection rules apply whether or not the software is AI. And public sector buyers are starting to ask for evidence of testing, bias mitigation, and transparency.
The international layer: standards and diplomacy
Global standards are emerging in parallel. ISO/IEC 42001, published in 2023, sets out a management system for AI, similar to familiar ISO security and quality standards. It gives organizations a certifiable way to demonstrate process discipline. The UN General Assembly in 2024 adopted a resolution encouraging countries to develop safe, secure, and trustworthy AI in line with human rights. While nonbinding, it reflects a growing consensus on core principles and offers a forum for cooperation.
Technical work continues on content provenance and watermarking, as researchers and media companies test tools to signal synthetic content. None is foolproof. But adoption across platforms and newsrooms could make it harder for AI-generated media to masquerade as real.
What experts are saying
Industry leaders and researchers continue to emphasize both potential and peril. At a 2023 U.S. Senate hearing, OpenAI chief executive Sam Altman warned, “I think if this technology goes wrong, it can go quite wrong,” arguing for licensing of advanced models and independent audits. Nvidia chief executive Jensen Huang, speaking at the companys GTC conference, said the computer industry is undergoing “two simultaneous transitions—accelerated computing and generative AI,” a shift that is redrawing supply chains and data center designs.
Policy officials, for their part, have framed the new rules as enablers. By setting clear lines, they argue, the law can channel investment into safer, higher-quality systems. Civil society groups welcome bans on some intrusive uses but warn of enforcement gaps. They are watching how exceptions are interpreted and whether high-risk classifications cover enough real-world harms.
Key implications for business
- Governance is now table stakes. Expect board-level oversight, dedicated AI risk teams, and regular reporting.
- Documentation will decide speed to market. Products with clear datasets, testing evidence, and user safeguards will move faster through reviews.
- Vendors become partners in compliance. Contracts will include attestations on training data, IP rights, testing, and incident handling.
- Open-source is not exempt. Even freely available models, when used in high-risk contexts, can trigger obligations on the deployer.
- Talent gap widens. Demand is rising for people who speak both machine learning and audit.
What to watch next
In Europe, look for guidance on how to classify systems and how to prove conformity. In the United States, watch procurement rules, sector-specific guidance, and enforcement actions under existing laws. Internationally, standards adoption and cross-border cooperation will signal whether the emerging rulebook converges or fragments.
Most of all, watch how governance affects outcomes. Do hiring systems get fairer? Do medical AI tools become clearer about uncertainty? Do content provenance signals help news consumers separate fact from fabrication? Regulation is not a destination. It is a mechanism to shape incentives and behavior, and it will evolve with the technology.
For now, the direction is clear. The world is moving from AI principles to AI practice. Organizations that treat compliance as a design constraint—and a market differentiator—are likely to find they can move faster by moving deliberately. In AI, as in aviation and medicine, trust is built before the takeoff.