The Global Scramble to Govern AI

Governments around the world are moving fast to set new rules for artificial intelligence. The push follows a surge in generative AI tools and growing public concern about bias, misinformation, and safety. Europe has adopted the most far-reaching law to date, while the United States and others are relying on standards, executive actions, and sector rules. The result is a patchwork that could shape how AI is built and used for years.
Why it matters
AI systems are no longer confined to research labs. They draft emails, analyze medical images, summarize long documents, and generate realistic images and voices. These tools bring clear benefits. They also create new risks, from inaccurate outputs to privacy breaches and convincing deepfakes.
In early 2024, voters in New Hampshire received robocalls that mimicked the voice of the U.S. president and told them not to vote. The incident fueled calls for guardrails on synthetic media. Major platforms have since introduced labeling and provenance features. Standards groups are also at work. The Coalition for Content Provenance and Authenticity (C2PA) is promoting methods to attach tamper-evident metadata to digital media.
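The core idea behind tamper-evident metadata can be illustrated with a minimal sketch. This is not the actual C2PA mechanism (which uses signed manifests and certificate chains); it only shows, under that simplifying assumption, how a keyed hash makes any change to the media or its attached metadata detectable. The function names are hypothetical.

```python
import hashlib
import hmac
import json

def sign_metadata(media_bytes: bytes, metadata: dict, key: bytes) -> dict:
    """Attach a tamper-evident tag: altering the media or metadata
    invalidates the stored MAC. Illustrative only, not C2PA."""
    payload = media_bytes + json.dumps(metadata, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {**metadata, "mac": tag}

def verify_metadata(media_bytes: bytes, signed: dict, key: bytes) -> bool:
    """Recompute the MAC over the media plus metadata and compare."""
    metadata = {k: v for k, v in signed.items() if k != "mac"}
    payload = media_bytes + json.dumps(metadata, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["mac"])
```

In practice, provenance schemes use public-key signatures so anyone can verify without holding a secret; the symmetric key here is only to keep the sketch short.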
Experts say guardrails and transparency are essential. As the OECD’s AI Principles, adopted in 2019 by many countries, put it: “AI systems should be robust, secure and safe throughout their entire life cycle.” Policymakers are now codifying that idea into binding rules.
Europe moves first
The European Union’s landmark AI Act became law in 2024. It is the first comprehensive attempt to regulate AI across sectors. Thierry Breton, the European Commissioner for the Internal Market, hailed the moment, saying, “Europe is now the first continent to set clear rules for AI.”
The AI Act follows a risk-based approach. It imposes the strictest requirements on systems that pose the highest risks to people’s safety or rights. It also sets rules for general-purpose AI, including large models that power many popular tools.
- Prohibited practices: Some uses, such as social scoring by public authorities, are banned.
- High-risk systems: Products used in areas like medical devices, critical infrastructure, and employment face strict obligations on risk management, data quality, human oversight, and documentation.
- Transparency duties: Systems that interact with humans, generate content, or detect emotions must disclose that they are AI and label synthetic outputs in some cases.
- General-purpose AI: Developers of large models must provide technical documentation and meet safety, cybersecurity, and reporting duties. Models with “systemic risk” face additional obligations.
Enforcement will roll out in phases over the next two to three years. Penalties can be severe. The law allows fines up to 7% of a company’s global annual turnover or €35 million for the most serious violations, whichever is higher. Many firms are now preparing compliance programs, data governance, and model evaluation pipelines.
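The "whichever is higher" penalty ceiling described above is simple arithmetic, sketched below with hypothetical turnover figures:

```python
def max_fine_eur(global_turnover_eur: int) -> float:
    """Upper bound on fines for the most serious AI Act violations:
    the higher of 7% of global annual turnover or EUR 35 million."""
    return max(global_turnover_eur * 7 / 100, 35_000_000)

# Hypothetical examples:
# a 2 billion EUR firm: 7% = 140 million, above the 35 million floor
# a 100 million EUR firm: 7% = 7 million, so the 35 million floor applies
large_firm = max_fine_eur(2_000_000_000)
small_firm = max_fine_eur(100_000_000)
```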
The U.S. bets on standards and oversight
The United States does not have a single, sweeping AI law. Instead, the federal government is leaning on existing authorities and technical guidance. In October 2023, the White House issued an Executive Order on Safe, Secure, and Trustworthy AI. It directs agencies to develop testing standards, address national security risks, and protect consumers and workers.
The National Institute of Standards and Technology (NIST) released its AI Risk Management Framework in 2023. The framework is voluntary, but many companies use it to structure internal controls. It emphasizes trustworthiness, covering safety, security, transparency, accountability, privacy, and fairness.
Congress has debated new rules, including proposals on deepfakes, kids' online safety, and transparency. States are also active. Some have passed laws governing AI in hiring and consumer protection. Regulators, such as the Federal Trade Commission, say they will use existing powers to target deceptive or discriminatory uses of AI.

Industry leaders have called for clear guardrails. “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” OpenAI CEO Sam Altman told U.S. senators in 2023. Companies have also signed voluntary commitments to test models, share risk information, and invest in watermarking research.
UK, G7, and the push for global alignment
The United Kingdom has positioned itself as a convener on AI safety. It hosted the AI Safety Summit at Bletchley Park in late 2023, where countries and companies endorsed the Bletchley Declaration. The UK also set up an AI Safety Institute to study frontier models. A follow-up summit in Seoul in 2024 continued the focus on scientific evaluation and incident reporting.
The G7’s Hiroshima AI Process produced nonbinding principles and a code of conduct for developers of advanced AI systems. Meanwhile, the United Nations and UNESCO have urged human rights-based approaches. UNESCO’s 2021 Recommendation on the Ethics of AI seeks to “protect and promote human rights and human dignity,” language many governments now reference in their AI strategies.
What changes for companies and users
For large developers, the new landscape means more documentation, testing, and public disclosures. For deployers—schools, hospitals, banks, and small businesses—it means asking tougher questions about the tools they buy. Many are building internal AI policies and risk registers.
- Model and data transparency: Buyers will seek clear descriptions of training data, known limitations, and intended use cases.
- Evaluation and red-teaming: Structured testing for safety, bias, robustness, and cybersecurity will become routine.
- Content provenance: Labels and metadata on synthetic media will be increasingly expected, especially in elections and public communications.
- Human oversight: Organizations will define when a person must review or approve AI-driven decisions.
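The human-oversight practice above often comes down to a routing rule: decide when an AI-driven decision can proceed automatically and when a person must review it. A minimal sketch, assuming a hypothetical policy that flags sensitive use cases and low-confidence outputs:

```python
# Hypothetical oversight policy: these names and thresholds are
# illustrative, not drawn from any specific law or framework.
SENSITIVE_USES = {"employment", "medical", "credit"}

def needs_human_review(use_case: str, confidence: float,
                       threshold: float = 0.9) -> bool:
    """Route a decision to human review if the use case is sensitive
    or the model's confidence falls below the threshold."""
    return use_case in SENSITIVE_USES or confidence < threshold
```

Real deployments layer on logging, appeal paths, and audit trails, but the routing decision itself is usually this kind of explicit, reviewable rule.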
Users may see more on-screen notices that say when a system is AI-powered. They may also find options to report harmful outputs or request human review. In sensitive areas, such as medical or employment decisions, expect more documentation and appeals processes.
Open questions and early tests
The rules are new and complex. Several issues could shape how they work in practice.
- Interoperability: Can technical standards and assurance methods satisfy requirements across jurisdictions, so companies avoid duplicative work?
- Scope and definitions: How regulators classify “high-risk” uses and “systemic risk” models will affect compliance burdens and innovation incentives.
- Enforcement capacity: Authorities will need technical expertise to audit complex systems and investigate incidents.
- SME impact: Smaller firms worry about costs. Policymakers are exploring sandboxes, guidance, and phased timelines to help them adapt.
Independent evaluation will be another early proving ground. Labs, universities, and standards bodies are designing benchmarks for safety and misuse. Transparency about test methods and known gaps will be key to building trust. As one NIST document puts it, frameworks are most effective when they are "living" and updated as systems evolve.
The bottom line
AI regulation is no longer theoretical. Europe has a binding law. The United States, the UK, and others are steering through standards, guidance, and targeted rules. The common thread is risk management: know what a system can and cannot do, test it, monitor it, and explain it. That approach is likely to endure as models grow more capable.
The next year will bring detailed rules, guidance, and early enforcement. Companies that invest now in governance, documentation, and safety engineering will be better placed to comply—and to earn user trust. The scramble to govern AI is on. The winners will be those who make powerful systems safe, transparent, and accountable without slowing useful innovation.