AI Rules Take Shape: Who Sets the Standard?

A global push to govern fast-moving AI

Governments are moving quickly to set rules for artificial intelligence. The aim is to capture economic gains while reducing risks. The effort has gathered speed since the surge of large language models in 2023. Policymakers now face a core question: who sets the standard for safe and trustworthy AI?

The European Union has approved the AI Act, the first comprehensive law focused on AI systems. The United States is leaning on existing laws, a White House executive order, and federal agencies. The United Kingdom and others are building testing hubs and safety institutes. Industry is adapting, but it is also warning about costs and compliance burdens.

Why regulation is arriving now

AI is spreading across workplaces, schools, hospitals, and media. It is writing code, drafting reports, answering questions, and generating video. Along with benefits, it raises concerns.

  • Safety and reliability: Models can be confidently wrong and hard to predict.
  • Bias and discrimination: Systems trained on skewed data can produce unfair outcomes.
  • Security and misuse: Tools can help craft scams, malware, or deepfakes.
  • Transparency: People often cannot see how systems reach a result.
  • Copyright and provenance: Creators question how training data is collected and used.

Public agencies and researchers have urged caution. In a 2019 paper whose title became a rallying cry, Duke University professor Cynthia Rudin argued, “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead.” The message is clear: when critical rights are at stake, opacity is a risk in itself.

Europe’s AI Act sets a new bar

The EU’s AI Act, approved in 2024, takes a risk-based approach. It classifies AI uses from minimal to unacceptable risk. Some practices are banned outright, such as social scoring by public authorities and untargeted scraping of facial images to build recognition databases. Real-time remote biometric identification in public spaces faces tight limits. High-risk systems, like those used in hiring or essential services, must meet strict requirements for data quality, testing, human oversight, and documentation.

Generative models are addressed with transparency duties. Providers must disclose AI-generated content and share summaries of copyright-protected training data, among other steps. National authorities and a new EU AI Office will coordinate enforcement. Non-compliance can trigger hefty fines.

Lawyers say the law will take effect in phases. That gives companies time to audit systems and adjust development pipelines. It also gives regulators time to issue guidance and technical standards. The Act’s influence could extend beyond Europe as global firms align products to one rulebook. This is the so-called Brussels effect.

The U.S. leans on agencies and standards

In the United States, Congress has debated bills but not passed a sweeping AI law. The White House issued an Executive Order in 2023 on safe, secure, and trustworthy AI. It directed agencies to set testing, reporting, and privacy protections. The National Institute of Standards and Technology (NIST) released a voluntary AI Risk Management Framework in 2023 that many companies now use.

NIST’s framework defines characteristics of trustworthy AI. They include: “valid and reliable,” “safe,” “secure and resilient,” “explainable and interpretable,” “privacy-enhanced,” and “fair—with harmful bias managed.” Federal regulators have also warned that existing laws still apply to AI products. As the U.S. Federal Trade Commission put it in a 2023 post, “There is no AI exemption to the laws on the books.”

Some states have moved on privacy and automated decision rules. Enforcement actions have targeted deceptive claims, biased outcomes, and weak data safeguards. Industry groups say a national law could unify a growing patchwork.

Company leaders have also asked for guardrails. OpenAI chief executive Sam Altman told U.S. senators in 2023, “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.” His view is widely shared among large model developers, even as they disagree on how strict the rules should be.

Safety institutes and international pledges

The UK hosted a landmark AI Safety Summit in 2023. Countries signed the Bletchley Declaration to pursue shared testing and oversight for advanced models. The UK set up an AI Safety Institute to evaluate model capabilities and risks. The United States has followed a compatible path, standing up its own AI Safety Institute at NIST and coordinating with international partners.

International bodies have added ethical guardrails. UNESCO’s 2021 Recommendation on the Ethics of AI calls on members to “protect and promote human rights and human dignity.” The World Health Organization has urged careful validation of AI in clinics and warned against bias. The G7 and OECD have updated AI principles to reflect generative models.

What companies are doing now

Major developers are building internal safety teams, toolkits, and reporting practices. Many publish system cards, risk assessments, and usage policies. Teams are expanding red-teaming and incident response. Watermarking research and content provenance standards, such as those from the Coalition for Content Provenance and Authenticity (C2PA), are spreading in media workflows.

Enterprises deploying AI are mapping use cases to risk levels. They are setting approval gates for high-impact applications. Typical steps include:

  • Inventory: Track models, data sources, and vendors.
  • Testing: Evaluate performance, bias, and robustness before launch.
  • Guardrails: Apply content filters and monitoring for misuse.
  • Human oversight: Keep people in the loop for critical decisions.
  • Documentation: Record purpose, limits, and user instructions.
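
How these steps fit together can be shown with a small internal “register and gate” sketch. The Python below is a minimal illustration under assumed names: the risk tiers, fields, and the example hiring use case are hypothetical, not taken from the EU AI Act’s categories, the NIST framework, or any company’s actual process.

```python
# Minimal sketch of an internal AI use-case register with an approval gate.
# Tier names, required checks, and the example use case are illustrative
# assumptions, not drawn from any law, standard, or real compliance program.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    PROHIBITED = 4


@dataclass
class AIUseCase:
    name: str
    purpose: str
    model: str                   # model or vendor identifier
    data_sources: list[str]      # inventory: what data feeds the system
    tier: RiskTier
    bias_tested: bool = False    # testing: performance, bias, robustness
    human_oversight: bool = False
    documentation_url: str = ""  # documentation: purpose, limits, instructions


def approval_gate(use_case: AIUseCase) -> tuple[bool, list[str]]:
    """Return (approved, reasons). High-impact uses need testing, oversight, docs."""
    reasons: list[str] = []
    if use_case.tier is RiskTier.PROHIBITED:
        reasons.append("use case falls in a prohibited category")
    elif use_case.tier is RiskTier.HIGH:
        if not use_case.bias_tested:
            reasons.append("no recorded bias and robustness testing")
        if not use_case.human_oversight:
            reasons.append("no human in the loop for a critical decision")
        if not use_case.documentation_url:
            reasons.append("missing documentation of purpose and limits")
    return (not reasons, reasons)


if __name__ == "__main__":
    hiring_screen = AIUseCase(
        name="resume-screening-assistant",
        purpose="rank applications for recruiter review",
        model="vendor-llm-v1",
        data_sources=["applicant resumes"],
        tier=RiskTier.HIGH,
        bias_tested=True,
        human_oversight=True,
        documentation_url="https://intranet.example/ai/resume-screening",
    )
    approved, reasons = approval_gate(hiring_screen)
    print("approved" if approved else f"blocked: {reasons}")
```

In practice the register would live in a database or governance tool, and the gate would run inside review and release workflows; the point of the sketch is simply that each step in the list above becomes a recorded field a reviewer can check.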

Smaller firms say costs are a concern. Compliance requires staff, tooling, and audits. Advocates counter that clear rules lower long-term risk and enable responsible scaling. Insurers and investors are starting to ask for proof of controls.

Impacts for consumers and workers

For consumers, clearer labels and disclosures could reduce confusion. People should see when they are interacting with an AI and what data it collects. Complaint channels and appeals will matter when automated decisions have real effects.

For workers, AI promises productivity and new roles. But it also raises questions about surveillance, deskilling, and job quality. Regulators are watching workplace monitoring tools and automated hiring systems closely. Labor groups want transparency, audit rights, and training support.

What to watch next

Over the next two years, expect detailed rulemaking and standards to fill in the frameworks. The EU will draft guidance and designate high-risk categories. U.S. agencies will continue enforcement and update testing protocols. More countries are likely to adopt rules aligned with one of the major models.

The central debate remains balance. Too little oversight could erode trust and cause harm. Too much could slow innovation and limit competition. The likely path is iterative: test, measure, adjust. As one regulator noted, laws already on the books still apply. And as one industry leader told Congress, public rules are part of the solution. The race now is not only to build the most capable systems, but to build them safely, openly, and for the public good.