The Global AI Rulebook Takes Shape

A patchwork of AI rules moves from talk to action
Governments are racing to keep up with artificial intelligence, and the pace of change is forcing policy from white papers into law. The European Union’s landmark AI Act is now entering into force in stages. The United States is leaning on an executive order and standards bodies. China has binding rules for generative systems. Others are lining up behind voluntary codes. The result is a global rulebook forming in pieces: ambitious, uneven, and already reshaping how AI is built and deployed.
Policymakers say the goal is not to slow progress, but to reduce risk. As the OECD put it in 2019, “AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being” (OECD AI Principles). The question in 2025 is how to get there without stalling innovation or entrenching the largest firms.
Why now: a mix of promise and concern
Powerful models moved from labs to the public in less than two years. That unlocked new productivity for coding, search, and visual design. It also created new hazards. Misinformation spreads faster. Bias in training data can become bias at scale. The compute and energy demands of frontier models raise cost and sustainability questions. The UN Secretary-General captured the mood in 2023: “The alarms about the latest form of artificial intelligence are deafening and they are loudest from the developers who designed it” (Antonio Guterres, UN remarks).
Countries are reacting with different tools, but many share a core aim: more testing, more transparency, and clear accountability.
Europe’s hard law arrives
The EU AI Act is the first comprehensive AI law from a major economy. It sets common obligations across the bloc. Early prohibitions, such as bans on social scoring and some uses of biometric surveillance, take effect first. Requirements for high-risk systems phase in over the next two years. The law also covers general-purpose AI, including the largest “frontier” models.
The text is explicit about its scope. As the regulation states, “This Regulation lays down harmonised rules on artificial intelligence” (EU AI Act). Those rules follow a risk-based approach:
- Unacceptable risk: Practices banned outright, including AI for social scoring by public authorities and manipulative systems that cause significant harm.
- High risk: Systems used in areas like employment, education, credit, and critical infrastructure. These require risk management, high-quality data, human oversight, and logging.
- Limited risk: Obligations like transparency labels for chatbots and deepfakes.
- Minimal risk: Most applications; no specific duties.
Enforcement is serious. Violations of banned practices can draw penalties up to the higher of 35 million euros or 7% of global annual turnover. Other breaches face lower tiers. National supervisors will coordinate through a new EU AI Office. For industry, the immediate task is mapping use cases to risk tiers and getting governance in place before audits begin.
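As a rough illustration of how that “higher of” penalty cap works, here is a minimal sketch in Python; the turnover figure and the function name are hypothetical, used only to show the arithmetic, not any actual enforcement formula.

```python
# Illustrative sketch of the EU AI Act's "higher of" fine cap for banned practices.
# The fixed amount and percentage are the figures cited above; the turnover value
# below is a hypothetical example, not any real company's accounts.

FIXED_CAP_EUR = 35_000_000   # 35 million euros
TURNOVER_SHARE = 0.07        # 7% of global annual turnover

def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine under the 'higher of' rule."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * global_annual_turnover_eur)

# Example: a firm with 10 billion euros in annual turnover faces a cap of
# 700 million euros, since 7% of turnover exceeds the 35 million euro floor.
print(f"{max_penalty_eur(10_000_000_000):,.0f}")  # 700,000,000
```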
Washington’s executive push and standards path
In the United States, Congress has not passed a comprehensive AI law. The White House filled the gap with an Executive Order in October 2023. It leans on existing authorities, including the Defense Production Act, to require reporting and red-team test results for certain high-risk, dual-use models.
The order frames the approach simply: “It is the policy of my Administration to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI)” (President Biden’s Executive Order on AI). Agencies were tasked to set rules for safety, civil rights, and federal procurement. The National Institute of Standards and Technology launched an AI Safety Institute to develop tests, benchmarks, and guidance. The Federal Trade Commission has warned that existing consumer protection and antitrust laws apply to AI claims and conduct.
At the state level, laws on deepfakes in elections and synthetic child sexual abuse material are proliferating. Data privacy rules in California, Colorado, and other states are also shaping how AI products handle personal data.
China’s tight reins on generative AI
China adopted binding rules for recommendation algorithms in 2022 and for deep synthesis and generative AI in 2023. Providers must register services, conduct security assessments, and maintain content controls, and systems must label synthetic media. The measures emphasize traceability and alignment with local law and values. China’s approach is fast and centralized: it sets clear compliance gates for domestic launches, and it raises hard questions for foreign firms about market access and data handling.
International coordination is growing, but stays voluntary
Many governments say cooperation is essential. The G7’s Hiroshima AI Process produced a voluntary code of conduct for developers of advanced systems in 2023. The UK-hosted AI Safety Summit at Bletchley Park secured a declaration on frontier risk and commitments to share evaluation results with trusted partners. Standards groups, including ISO/IEC and the IEEE, are translating principles into technical norms for risk management, transparency, and post-market monitoring.
These efforts are not binding in most places. But they influence procurement, audits, and investor expectations. They also give firms a way to align global practices when statutes differ.
What it means for companies and consumers
For companies, the short-term impact is governance and documentation. Boards want inventories of AI use, clear model cards, and escalation paths when systems fail. Compliance teams are embedding pre-release testing, bias checks, and human-in-the-loop controls into development. Providers of general-purpose models face new reporting and evaluation duties in the EU. In the U.S., federal contractors will likely see stricter clauses on data integrity, watermarking, and safety testing.
For consumers, more labels and disclosures are coming. Expect clearer notices when content is AI-generated, ways to contest automated decisions in sensitive contexts, and channels for reporting harm. Over time, better benchmarks should make performance claims more comparable across products.
- Near term: Transparency labels, opt-outs for training data where required, clearer terms of service.
- Medium term: Audits for high-risk systems, accessible impact assessments, and stronger redress mechanisms.
- Long term: Convergence on international testing standards and cross-border enforcement cooperation.
The open questions
Three debates will define the next phase:
- Scope: How far should rules for general-purpose AI reach into downstream applications? Europe’s approach is broad; the U.S. favors targeted measures.
- Security and openness: Should the highest-risk models be open or restricted? Policymakers must balance research benefits against misuse.
- Compute and concentration: Safety testing and compliance are expensive. Without support for startups and academia, rules could favor incumbents.
There are also practical gaps. Watermarking of synthetic media is improving, but it is not foolproof. Evaluating long-horizon risks remains hard. Measuring and mitigating environmental impact is now part of the conversation as data centers grow.
Outlook: From principles to practice
The world is not converging on a single AI law. It is converging on a shared set of goals: safety, transparency, fairness, and accountability. The tools to get there differ by region. That will challenge global deployment. Yet common testing methods and audit practices can bridge some of the divide.
For now, one thing is clear: the era of building first and asking questions later is ending. The next wave of AI progress will be judged not only by what models can do, but by how reliably, safely, and fairly they do it.