Governments Race to Write the Rules for AI

Policymakers move to catch up with rapid AI advances
Governments are accelerating efforts to regulate artificial intelligence as powerful systems spread across workplaces, classrooms, and social media feeds. The European Union has adopted the AI Act, a sweeping law that creates obligations based on risk. The United States has issued a presidential executive order directing agencies to set safety standards for advanced models. China has enforced rules for generative tools since 2023. The goal is the same: capture the benefits of AI while reducing harm.
The push reflects a simple reality. AI adoption has arrived faster than most expected. Generative tools can write code, draft legal memos, and create photorealistic images. They can also produce convincing falsehoods at scale. Policymakers see both opportunity and risk in critical areas such as healthcare, finance, and elections. Many are building new oversight structures while relying on standards bodies to turn broad principles into concrete guidance.
What the new rules say
The EU AI Act, finalized in 2024, is the world’s most comprehensive AI law to date. It bans certain uses, such as social scoring by public authorities, and imposes strict controls on systems deemed high risk. It also introduces requirements for general-purpose AI models, including large foundation models that can be adapted for many tasks.
- Risk tiers: High-risk systems in areas like medical devices, employment, and critical infrastructure must meet obligations on data quality, transparency, human oversight, and robustness.
- General-purpose AI (GPAI): Providers must prepare technical documentation, disclose capabilities and limits, respect EU copyright rules, and share information with downstream developers. Systems with systemic risks face enhanced testing and reporting duties.
- Penalties: Non-compliance can trigger fines that scale with global annual turnover, with the highest tier reaching up to 7% of worldwide turnover for prohibited practices, according to the final compromise text.
The U.S. Executive Order on AI, issued in 2023, uses existing authorities to steer safety practices while Congress debates legislation. It directs the Commerce Department and the National Institute of Standards and Technology (NIST) to develop evaluation standards and red-teaming guidelines. It also requires developers of the most capable systems to report safety test results to the government under the Defense Production Act. The order promotes work on content authenticity, worker protections, and privacy-enhancing technologies.
In China, the 2023 Interim Measures for generative AI require security assessments, real-name registration, and content moderation aligned with national rules. Providers must address bias and protect personal data. Enforcement has included takedowns and fines for violating content standards.
Why now: speed, scale, and social impact
Since late 2022, generative AI systems have moved from labs to everyday use. They can summarize complex documents, automate routine tasks, and accelerate software development. But misuse has also grown. Deepfake scams target consumers and businesses. Synthetic media blurs lines in public discourse. In healthcare and finance, biased or inaccurate outputs can have serious consequences.
Regulators are responding to these dual realities. They want safe deployment without stifling innovation. The tension is clear. Companies seek clarity and a level playing field across markets. Startups worry about compliance costs. Civil society groups warn that marginalized communities may bear the brunt of failures if systems are released without sufficient guardrails.
Industry reacts with pledges and tools
In the United States, major AI firms have made voluntary commitments to the federal government, including security testing, reporting on capabilities and limitations, and work on watermarking. Companies are also testing content provenance tools, such as the “content credentials” promoted by the Coalition for Content Provenance and Authenticity (C2PA). These tools attach tamper-evident metadata to images, audio, and video to show how files were made and edited.
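The underlying mechanism is simple to sketch, even though the real C2PA specification is far richer: it embeds cryptographically signed manifests directly in media files, with signatures tied to certificates rather than a shared secret. The toy example below, with illustrative names like make_manifest and a stand-in signing key, shows only the core idea: bind a record of how an asset was made to a hash of its bytes, so any later edit is detectable.

```python
# Minimal sketch of tamper-evident provenance metadata. This is NOT the
# C2PA format; all names and the shared-secret signing scheme are
# illustrative stand-ins for the real certificate-based design.
import hashlib, hmac, json

SIGNING_KEY = b"demo-key-not-for-production"  # real systems use certificates, not a shared secret

def make_manifest(asset_bytes: bytes, tool: str, actions: list[str]) -> dict:
    """Build a provenance record bound to the asset's current hash."""
    record = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "generator": tool,
        "actions": actions,  # e.g. ["created", "color-corrected"]
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Check that the asset is unmodified and the record itself was not altered."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claimed["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()
    )

image = b"...raw image bytes..."
manifest = make_manifest(image, tool="example-image-generator", actions=["created"])
print(verify_manifest(image, manifest))            # True: hash and signature match
print(verify_manifest(image + b"edit", manifest))  # False: any edit breaks the binding
```

The point of the design is that the claim about an asset's history travels with the asset and fails loudly if either the asset or the claim is changed; who is allowed to sign, and how viewers verify those signatures, is where the real standards work lies.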
Sam Altman, the CEO of OpenAI, told U.S. senators in 2023, “We think that regulatory intervention by governments will be critical.” His testimony captured a growing view in parts of the industry that baseline rules can support trust and reduce systemic risks while allowing competition on features and performance.
Standards bodies are central to turning principles into practice. NIST describes its AI Risk Management Framework as “a living document” that organizations can adapt to their contexts. That flexible approach aims to keep pace with rapid technical change while promoting common terminology and processes for identifying, measuring, and managing risk.
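As a loose illustration of what adapting the framework can look like in practice, the sketch below records one AI use case against the AI RMF's four functions (Govern, Map, Measure, Manage). The framework is a process document, not a schema, so every field name here is an assumption rather than NIST guidance.

```python
# A minimal, assumed risk-register entry organized around the four
# functions of NIST's AI Risk Management Framework. Field names and the
# example content are illustrative only.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    use_case: str       # what the system does and for whom
    govern: str         # accountable owner and the policy in force
    map: list[str]      # identified harms and affected groups
    measure: list[str]  # metrics and tests used to quantify each risk
    manage: list[str]   # mitigations, monitoring, escalation paths
    status: str = "open"

register = [
    RiskEntry(
        use_case="Resume screening assistant",
        govern="HR systems owner; internal AI use policy",
        map=["disparate impact on protected groups", "over-reliance by recruiters"],
        measure=["selection-rate parity by group", "human-override rate"],
        manage=["human review of all rejections", "quarterly bias audit"],
    ),
]

for entry in register:
    print(f"{entry.use_case}: {len(entry.map)} mapped risks, status={entry.status}")
```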
What it means for businesses and users
Companies developing or deploying AI face new duties that vary by jurisdiction. For many, the near-term work is practical: inventory AI systems, classify their risks, and document the controls around them.
- Governance: Map AI use cases, appoint accountable leaders, and set escalation paths for incidents. Align with standards such as NIST’s AI RMF and ISO/IEC guidance where relevant.
- Data and evaluation: Track datasets and training pipelines. Test for bias and performance drift; a minimal drift-check sketch follows this list. Maintain reproducible evaluation methods and red-teaming protocols.
- Transparency and user control: Label synthetic content where required. Provide users with clear instructions and opt-outs. Keep human oversight in decision loops for high-risk contexts.
- Vendor management: Update contracts with model providers. Request technical documentation, safety test summaries, and support commitments.
- Incident response: Monitor for misuse and security breaches. Report significant events to authorities when rules require it. Fix failures quickly and document lessons learned.
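For the data-and-evaluation item, a drift check can be as small as comparing a current accuracy measurement against the baseline recorded at release. The sketch below is deliberately minimal, and its details are assumptions: the tolerance, the accuracy metric, and the alerting step are placeholders that a real deployment would set per use case.

```python
# A minimal recurring performance-drift check. Thresholds, metric choice,
# and the alerting hook are assumptions, not requirements from any
# specific regulation or standard.
from statistics import mean

def accuracy(predictions: list[int], labels: list[int]) -> float:
    """Fraction of predictions that match the ground-truth labels."""
    return mean(int(p == y) for p, y in zip(predictions, labels))

def check_drift(baseline_acc: float, current_acc: float, tolerance: float = 0.05) -> bool:
    """Flag drift when accuracy drops more than `tolerance` below the baseline."""
    return (baseline_acc - current_acc) > tolerance

# Example: baseline measured at release, current measured on a recent labeled sample.
baseline = 0.91
current = accuracy(predictions=[1, 0, 1, 1, 0, 0, 1, 0], labels=[1, 0, 0, 1, 1, 0, 1, 1])
if check_drift(baseline, current):
    print(f"Drift alert: accuracy fell from {baseline:.2f} to {current:.2f}; open an incident.")
else:
    print(f"Within tolerance: {current:.2f} vs baseline {baseline:.2f}.")
```

In practice the same pattern runs on a schedule against freshly labeled samples, and an alert feeds the incident-response process described above rather than a print statement.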
For consumers, the most visible change may be more labels and disclosures. Users may see notices when content is AI-generated or when chatbots handle sensitive topics. In workplaces, employees can expect guidance on when and how to use AI tools, including rules for confidential data.
Open questions and risks
Key debates remain unresolved. Open-source developers worry that broad obligations for general-purpose models could slow research or favor large firms with compliance teams. Small businesses fear that unclear thresholds will create uncertainty. Privacy advocates argue that model training must respect consent and data minimization rules. Security experts see growing attack surfaces as AI is embedded into critical systems.
Cross-border coordination is another challenge. AI systems operate globally, but rules differ. The G7’s Hiroshima AI process and other forums seek common ground on testing, reporting, and incident disclosure. Companies will need to tailor compliance strategies across markets while pushing for interoperable standards.
The road ahead: from law to practice
The next phase is implementation. The EU is standing up an AI Office to supervise general-purpose models and coordinate national authorities. Member states will build enforcement capacity and issue guidance. In the United States, agencies will translate the executive order’s directives into measurable requirements, from safety test protocols to content authentication pilots. China is continuing audits and platform-level checks.
Success will depend on translating principles into checklists that engineers and product teams can use. That means specific tests, documentation formats, and reporting channels. It also means continuous learning. As models evolve, oversight will have to adapt. Public trust will hinge on clear communication about capabilities and limits, timely correction of mistakes, and independent evaluations.
AI remains a general-purpose technology with far-reaching promise. The regulatory wave is not about stopping progress. It is about setting guardrails so that progress is safer and more reliable. Policymakers, companies, and researchers now share a common task: make the rules workable, keep them updated, and measure what matters.