AI Gets Rules: Inside the New Global Playbook
Governments move from promises to policies
Artificial intelligence is no longer operating in a regulatory vacuum. Over the past year, lawmakers and international bodies have taken concrete steps to shape how powerful AI systems are built and used. The European Union has approved the AI Act, the United Nations has backed a global resolution on safe and trustworthy AI, and the United States has issued a sweeping executive order to guide development and use. Together, these initiatives mark a shift from voluntary pledges to formal expectations and oversight.
The pace of action reflects both opportunity and risk. Sam Altman, chief executive of OpenAI, put the stakes plainly in 2023 testimony to the U.S. Senate: ‘I think if this technology goes wrong, it can go quite wrong.’ The new rules aim to realize the benefits while reducing the chance of harm.
What the EU AI Act actually does
The EU AI Act, approved by the European Parliament in March 2024, is the most comprehensive framework to date. It uses a tiered approach based on risk, with stricter obligations for systems that can affect people’s rights, safety, or access to essential services.
- Risk tiers: The law sorts systems into minimal, limited, high, and unacceptable risk categories. Minimal-risk tools face few obligations. High-risk systems, such as those used in employment or access to public services, must meet rigorous requirements around data quality, documentation, transparency, human oversight, and testing.
- Bans on certain uses: Practices like social scoring by public authorities are prohibited. Real-time remote biometric identification in public spaces is tightly restricted to narrow, legally defined circumstances.
- General-purpose AI: Developers of general-purpose or foundation models face new transparency and safety duties, scaled to the models’ capabilities and the risks they pose.
- Enforcement: Violations carry significant penalties, rising to as much as 7 percent of a company’s global annual turnover for prohibited practices. National authorities and a new EU-level AI Office will oversee compliance across the bloc.
Thierry Breton, the EU’s internal market commissioner, celebrated the political deal that unlocked the law, saying Europe is ‘the first continent to set clear rules for the use of AI.’ Supporters argue the Act offers predictability and protects fundamental rights. Critics warn that broad definitions and compliance burdens could slow innovation, especially for smaller firms and open-source projects.
A wider global push on AI safety
The EU is not acting alone. In March 2024, the United Nations General Assembly adopted a consensus resolution urging the safe, secure, and trustworthy development of AI. While nonbinding, the resolution signals growing international alignment on principles such as human rights protections, transparency, and risk management.
In the United States, an October 2023 executive order directed federal agencies to set guardrails for advanced systems. It called for standardized safety testing, guidance on watermarking and content provenance, and assessments of AI’s impact on critical infrastructure and the workforce. The order built on earlier voluntary commitments from major AI companies to conduct red-teaming, invest in cybersecurity, and label AI-generated content.
Globally, the U.K. convened the 2023 AI Safety Summit, where governments and companies endorsed the Bletchley Declaration acknowledging risks from frontier AI and the need for international cooperation. Standards bodies, including NIST in the U.S. and international groups such as ISO/IEC, are developing technical benchmarks to translate principles into practice.
Industry reaction: relief, caution, and open questions
Tech companies have asked for clarity, and they are getting it. But questions remain about how the rules will be interpreted and enforced. Developers emphasize the difficulty of measuring risk in complex, adaptable systems, particularly general-purpose models that are used in many contexts. Open-source advocates worry about obligations falling on upstream model developers rather than on downstream deployers who control real-world applications.
Geoffrey Hinton, a pioneer of neural networks who left Google in 2023, has warned that rapid progress raises novel risks. Reflecting on his role in the field’s rise, he told the New York Times, ‘I console myself with the normal excuse: If I hadn’t done it, somebody else would have.’ His comments underscore the tension between scientific advance and societal safeguards.
For industry leaders, the long view is striking. Google’s Sundar Pichai has called AI ‘more profound than fire or electricity,’ a reminder of the scale of anticipated change. That promise heightens the stakes for getting governance right. Companies want consistent rules across markets and a level playing field for both proprietary and open approaches.
What changes for businesses and users
- More documentation and testing: Firms deploying high-risk AI in areas like hiring, credit, education, or public services will need robust documentation, data governance, and human oversight. Expect more model cards, risk assessments, and audit trails.
- Clearer labels and provenance: Government guidance and industry standards are pushing toward content labeling and watermarking, making it easier to identify AI-generated text, images, and audio. Media and social platforms are likely to expand provenance tools; a simple sketch of how verifiable provenance can work appears after this list.
- Procurement pressure: Public-sector buyers will increasingly require compliance with safety and transparency standards, influencing the broader market.
- User rights: People affected by high-risk systems should see more explanations and avenues to contest decisions, especially in the EU.
- Security by design: Expect more emphasis on adversarial testing, red-teaming, and protections against prompt injection, data poisoning, and model theft.
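To make the provenance point concrete, the sketch below shows, in rough outline, how a verifiable label can work: the producer of a piece of AI-generated content hashes it, records who generated it and when, and signs the record so a platform can later check that the label has not been altered. The function names, the shared signing key, and the ‘example-image-model’ label are illustrative assumptions for this sketch only; real provenance standards such as C2PA rely on public-key signatures embedded in file metadata rather than a shared secret.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Illustrative secret held by the content producer. Real provenance schemes
# use public-key signatures so anyone can verify without sharing a secret.
SIGNING_KEY = b"example-key-not-for-production"


def attach_provenance(content: bytes, generator: str) -> dict:
    """Build a simple signed manifest for a piece of AI-generated content."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Check that the content matches the manifest and the signature is intact."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and hashlib.sha256(content).hexdigest() == claimed["content_sha256"]
    )


if __name__ == "__main__":
    image_bytes = b"...synthetic image bytes..."
    record = attach_provenance(image_bytes, generator="example-image-model")
    print(verify_provenance(image_bytes, record))        # True: label is intact
    print(verify_provenance(b"tampered bytes", record))  # False: content changed
```

The design point the sketch illustrates is that a useful label is one a third party can verify, not merely one the producer asserts; that distinction is exactly what the emerging guidance on watermarking and provenance is trying to pin down.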
Why it matters now
AI capabilities have advanced quickly, moving from experimental demos to products used by hundreds of millions of people. The risks are concrete: bias in automated decision-making, misinformation at scale, privacy incursions, and new cyber threats. Regulators are responding with a mix of principles, technical standards, and enforcement. The goal is to steer innovation toward safe and beneficial uses without freezing progress.
The early evidence suggests regulation and innovation can coexist. Cloud providers, startups, and open-source communities continue to release new models while incorporating testing, safety layers, and provenance tools. Still, the costs of compliance will be uneven, and smaller firms may need support to meet new obligations.
The road ahead
The next phase is implementation. Agencies must translate legal texts into guidance, audits, and day-to-day oversight. Companies must build compliance into product lifecycles. Standards bodies will refine metrics for robustness, fairness, privacy, and transparency. Internationally, governments will need to coordinate on cross-border issues, from AI in online content to safety testing for frontier models.
Key milestones to watch:
- EU guidance on high-risk classifications and obligations for general-purpose models.
- U.S. standards for red-teaming, watermarking, and critical infrastructure risk management.
- New industry benchmarks for model evaluation and content provenance.
- Public-sector procurement rules that set de facto market standards.
The rulebook for AI is taking shape. It will evolve as the technology does. The challenge for policymakers and developers alike is to keep learning, measuring, and adjusting. As Altman’s warning suggests, the cost of failure could be high. But with clear rules, rigorous testing, and transparency, the gains from AI can be shared more widely and more safely.