Governments Race to Rein In AI: What Changes Now

From Washington to Brussels, governments are moving fast to set ground rules for artificial intelligence. New policies in the United States and Europe signal a shift from voluntary guidelines to enforceable standards. Officials say the goal is to capture the benefits of AI while reducing risks to safety, rights, and markets. Companies now face clearer expectations. So do public agencies. The next year will test whether these rules work in practice.

A regulatory sprint gathers pace

AI systems moved from labs to daily life at historic speed. Chatbots, coding assistants, and image generators are now part of work and school routines. This surge has renewed calls for oversight. As AI pioneer Andrew Ng once said, “AI is the new electricity.” Many policymakers agree, but they also warn that power needs safeguards.

In the United States, the Office of Management and Budget (OMB) issued a government-wide policy in 2024 to manage AI risks across federal agencies. The memo, known as M-24-10, requires new governance, testing, and transparency steps for systems that affect rights or safety. It states plainly: “Each agency shall designate a Chief Artificial Intelligence Officer (CAIO).” That role must oversee inventories of AI use, risk assessments, and compliance.

Europe moved in parallel. The European Union’s AI Act, approved in 2024, is the world’s first comprehensive AI law. It introduces a risk-based framework with bans on a few practices, strict duties for high-risk uses, and transparency rules for general-purpose models. Fines for the most serious violations can reach €35 million or 7% of global annual turnover, whichever is higher. The law phases in over the following years, with the bans on prohibited practices taking effect first.

International efforts continue as well. The United Kingdom hosted the 2023 AI Safety Summit, where countries signed the Bletchley Declaration on AI safety. The G7 launched the Hiroshima AI Process to develop shared principles. Standards bodies, including ISO and IEC, are shaping technical norms. The U.S. National Institute of Standards and Technology (NIST) has established an AI Safety Institute and has published a voluntary AI Risk Management Framework to guide industry.

What the new rules require

  • EU AI Act risk tiers: The law bans certain “unacceptable risk” practices, such as social scoring and systems that exploit the vulnerabilities of children or other at-risk groups. It classifies other applications as high-risk if they affect critical sectors like health, education, employment, or public services. High-risk providers must implement risk management, rigorous data governance, technical documentation, logging, human oversight, and robust accuracy and cybersecurity controls.
  • General-purpose AI (GPAI): Developers of large models face transparency duties. They must disclose technical capabilities and limitations, and provide information to downstream deployers so that those deployers can manage risk. Expectations include safety testing and mitigation of systemic risks for the most capable models. Details will evolve through standards and codes of practice.
  • U.S. federal agencies (OMB M-24-10): Agencies must appoint a CAIO. They must maintain public inventories of AI use cases. For “rights-impacting” or “safety-impacting” systems, they need independent testing, impact assessments, and safeguards such as human fallback. Agencies should provide notices to the public when AI is in use and, where feasible, opt-outs or alternatives. The policy also covers procurement, requiring vendors to supply documentation and risk information.
  • NIST guidance and testing: The NIST AI Risk Management Framework offers a voluntary structure to map, measure, manage, and govern risks. NIST is developing evaluation methods, red-teaming guidance, and benchmarks with the AI Safety Institute and an industry-academia consortium.

Taken together, these measures push providers and deployers toward evidence-based claims about safety and performance. They also encourage more disclosure to users about how systems work and what limitations they have.

Industry reaction and civil society concerns

Technology firms have largely welcomed clearer rules, saying they can invest with more certainty. At the same time, many warn of compliance costs and the risk of conflicting standards across borders. Smaller companies fear that documentation and testing requirements could favor the largest platforms unless regulators tailor obligations.

Rights groups see progress but argue that loopholes remain. They have raised alarms over biometric surveillance and law enforcement exemptions in Europe. They also want stronger protections against algorithmic discrimination and fraud. In the U.S., civil society organizations say transparent agency inventories and public notices are helpful, but enforcement and oversight will matter most.

Some AI leaders stress both potential and peril. OpenAI chief executive Sam Altman told U.S. senators in 2023, “If this technology goes wrong, it can go quite wrong.” Geoffrey Hinton, a pioneer of deep learning, said when he left Google the same year, “I left so that I could talk about the dangers of AI.” These warnings underpin the push for independent testing and audits before systems are deployed at scale.

What it means for consumers and developers

For consumers, the new rules promise clearer labeling and more avenues for recourse. Users should start to see notices where AI filters job applications, screens loan applicants, or supports public decisions. High-risk systems must enable human oversight, and providers will need to document limitations and known failure modes.

  • Transparency: Expect more disclosures about data sources, training methods, and intended uses, especially for high-risk applications.
  • Safety and quality: Pre-deployment testing and ongoing monitoring are set to become standard. Agencies and companies will have to prove that systems meet accuracy and robustness thresholds.
  • User rights: In some settings, people may get explanations, appeal options, or human review. Public agencies will be pushed to provide alternatives where AI could affect rights.

For developers, compliance will be a technical and organizational task. Teams will need to deepen model evaluations and ensure traceability across the supply chain.

  • Documentation: Create and maintain model cards, data sheets, and change logs. Be ready to share summaries with regulators and buyers.
  • Testing and red-teaming: Build repeatable safety tests for bias, robustness, and misuse. Use external evaluators when required.
  • Data governance: Track provenance and quality of training data. Mitigate harmful content and address representational gaps.
  • Human oversight: Design clear intervention points and fallbacks. Measure how often human reviews occur and their effect on outcomes.
  • Vendor management: When using third-party models or services, demand security attestations and risk information. Align contracts with regulatory duties.

What to watch next

Implementation will decide the impact. In Europe, guidance, standards, and new supervisory bodies will clarify how to classify risk and measure compliance. National regulators will coordinate with the new EU AI Office, which will supervise general-purpose models. In the United States, the OMB policy sets expectations for federal agencies, while Congress continues to debate broader legislation. NIST’s test methods and benchmarks will shape how companies prove safety claims.

Coordination across borders will be crucial. Companies that operate globally will push for interoperable rules. Policymakers say they want innovation as well as safety. The coming year will show whether these frameworks can deliver both. The stakes are high, but so is momentum. As one policy memo put it, agencies must move from principles to practice. Now the hard work begins.