Governments Race to Set the Rules for AI

Policymakers move as AI adoption surges
Governments around the world are writing new rules for artificial intelligence. The push follows the rapid rise of generative AI tools since late 2022: investors poured billions into startups, cloud budgets swelled, and chips grew scarce. Now regulators are trying to catch up, aiming to protect consumers without stifling innovation.
The European Union has taken the most sweeping step so far. In 2024, lawmakers approved the EU AI Act, the world’s first broad framework for AI. In the United States, the White House issued an executive order in 2023 to promote safe, secure, and trustworthy AI. The United Kingdom hosted a global summit on AI risk. Standards bodies released guidance. Companies are hiring compliance staff and building internal guardrails.
What the EU AI Act does
The EU law takes a risk-based approach. It places the strictest obligations on high-risk systems used in sensitive areas such as health, employment, and public services. Some practices are banned outright, including social scoring by governments and certain uses of biometric data. General-purpose AI models also face requirements around transparency and safety.
Enforcement will be phased in over time. Bans on prohibited practices come first; obligations for high-risk systems roll out later. Fines can be steep, reaching a percentage of a company’s global annual turnover for the most serious violations. Businesses selling AI in the EU will need to inventory their systems, assess risks, keep logs, and provide clear information to users.
Industry groups welcome clarity but worry about cost. Civil society groups say the law is a start but want stronger limits on biometric surveillance. The EU will publish standards and guidance to help companies comply. National authorities will supervise the rules, with a new European AI Office coordinating cross-border cases.
The US leans on guidance and oversight
The United States has not passed a comprehensive AI law. Instead, it has relied on a mix of executive action, sector-specific rules, and voluntary frameworks. In October 2023, the White House issued a sweeping executive order on AI. It calls for safety testing, reporting, and safeguards for advanced models. Agencies must update privacy protections and civil rights guidance. The order also directs investment in research, cybersecurity, and the AI workforce.
Earlier, in January 2023, the National Institute of Standards and Technology released its AI Risk Management Framework. It is voluntary but influential, and many companies use it to structure governance. Organized around four functions (govern, map, measure, and manage), it urges organizations to address risks across the AI lifecycle, from data collection through deployment and monitoring.
Lawmakers are debating next steps. At a Senate hearing in 2023, OpenAI chief executive Sam Altman said, ‘We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.’ Industry leaders want clear, flexible rules. Consumer advocates warn of bias and misinformation. Both sides agree that transparency and accountability are needed.
Global approaches diverge
The UK has set out a ‘pro-innovation’ approach, asking existing regulators to apply AI principles within their sectors rather than creating a single new law. In November 2023, the government hosted the AI Safety Summit at Bletchley Park, where countries agreed to deepen cooperation on frontier risks.
Canada has proposed the Artificial Intelligence and Data Act. It would regulate high-impact systems and establish enforcement powers, though debate continues over definitions and scope. In Asia, several countries have issued targeted rules. China’s interim measures on generative AI require security assessments and content controls. Singapore offers voluntary model-governance guidance.
These differences matter for global companies. AI models are trained and deployed across borders. Compliance teams must map use cases to each jurisdiction. Firms are also watching standard-setting bodies. Technical standards will shape how concepts like transparency and robustness are measured.
Why this is happening now
AI capabilities have improved quickly. Large language models can draft text, write code, and pass exams. Image and audio tools can create realistic content. With that power comes risk. Policymakers cite fraud, bias, privacy, and safety as top concerns.
In 2023, a one-sentence statement from the Center for AI Safety warned, ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’ The line drew headlines and split opinion. Some researchers called it a necessary alarm. Others warned it could distract from present harms like discrimination and labor impacts.
The US executive order framed the government’s goal as safe, secure, and trustworthy AI. The phrase reflects an emerging consensus in policy circles. Systems should work as claimed. They should protect data and rights. They should include human oversight, clear documentation, and recourse when something goes wrong.
What companies are doing
Enterprises are moving from pilots to production while building guardrails. Many are appointing AI leads. Compliance, security, and product teams are working together. The aim is to ship value and manage risk at the same time.
- AI inventories: Firms are cataloging models and use cases across business units. Shadow projects are being pulled into official oversight.
- Risk assessments: Teams score use cases by impact and context, and high-risk deployments face extra checks, including bias testing and human review (a simple tiering sketch follows this list).
- Data governance: Legal and engineering groups are tightening data sourcing, consent, and retention. Synthetic data is being explored where appropriate.
- Technical safeguards: Companies are adding rate limits, content filters, watermarking, and model monitoring. Incident response plans are being updated for model failures.
- Transparency: Model cards, user disclosures, and clear opt-outs are becoming standard. Procurement contracts now include AI clauses.
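The inventory and tiering steps above often start as nothing more elaborate than a shared register. The sketch below is a hypothetical Python example, not drawn from the EU AI Act, the NIST framework, or any vendor tool; the fields, tiers, and thresholds are illustrative assumptions about how a team might record use cases and flag the ones that need extra review.

```python
# Hypothetical sketch of an internal AI use-case register.
# Field names, tiers, and thresholds are illustrative only; they are not
# taken from the EU AI Act, NIST AI RMF, or any other framework.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    business_unit: str
    uses_personal_data: bool           # does the system process personal data?
    affects_individual_outcomes: bool  # hiring, credit, benefits, care, etc.
    human_review_in_loop: bool         # does a person review the outputs?

def risk_tier(case: AIUseCase) -> str:
    """Assign a simple tier so high-impact deployments get extra checks."""
    if case.affects_individual_outcomes and not case.human_review_in_loop:
        return "high"       # e.g. bias testing, documentation, sign-off
    if case.uses_personal_data or case.affects_individual_outcomes:
        return "limited"    # e.g. logging and periodic review
    return "minimal"        # standard monitoring only

# Catalog a few deployments and print the tier each one falls into.
register = [
    AIUseCase("resume screener", "HR", True, True, False),
    AIUseCase("marketing copy drafts", "Marketing", False, False, True),
    AIUseCase("support chat summaries", "Customer Service", True, False, True),
]

for case in register:
    print(f"{case.name}: {risk_tier(case)}")
```

In practice the tiers would map to whatever obligations apply in each jurisdiction; the value of a register like this is simply that shadow projects become visible and high-impact deployments get routed to extra review.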
Vendors are also adapting. Cloud providers offer tools for red-teaming and evaluation. Chipmakers are expanding capacity. Startups are specializing in compliance, testing, and alignment.
What to watch next
The next two years will be about implementation. The EU will issue guidance and technical standards. US agencies will publish rules under the executive order. The UK will test regulator coordination. More countries will fill gaps. Courts will weigh in on copyright and data scraping.
Experts say companies should prepare now. Legal uncertainty will persist, but the direction is clear. Systems will need to be explainable, resilient, and monitored. Documentation will matter. Boards will ask for dashboards and assurance. Investors will ask about supply chains for compute and data.
There are open questions. How will rules handle open-source models? What thresholds will trigger the strictest controls? How will governments test and audit large models? Policymakers say they want to support innovation. Developers say they need clear, predictable requirements.
The stakes are high. AI is moving into healthcare, education, finance, and public services. It can speed discovery and improve access. It can also scale errors. The challenge for governments is to set guardrails that protect people and encourage useful progress. That balance will shape how and where AI gets built and used.