Governments Sharpen AI Rules: What Businesses Must Do

New guardrails arrive for a fast-moving technology

Governments are moving to put boundaries around artificial intelligence. The European Union has adopted a comprehensive law governing AI. U.S. agencies have released guidance intended to steer industry practice. Companies now have a clearer picture of what safe, responsible AI should look like. They also face new work to comply.

The EU AI Act sets binding rules across the bloc. It takes a risk-based approach. Uses that pose an unacceptable risk will be banned. High-risk systems will face strict obligations before they can be sold or deployed. The European Commission says the goal is to “ensure that AI systems placed on the Union market are safe and respect fundamental rights and EU values”.

In the United States, the National Institute of Standards and Technology (NIST) has published a voluntary framework. It defines what trustworthy AI should be and how to get there. The document states: “Trustworthy AI is valid and reliable, safe, secure, and resilient”. It also stresses accountability, transparency, privacy, and fairness.

Why now: rapid adoption, rising risk

AI has moved from labs into daily life. Generative tools draft emails, summarize documents, and create images. Banks use models to score credit. Hospitals test AI scribes and decision support. The benefits are real. So are the risks. Models can make up facts. Data can encode bias. Attackers can manipulate inputs to trigger errors.

Geoffrey Hinton, a pioneer of the field, warned in 2023: “It is hard to see how you can prevent the bad actors from using it for bad things”. Policymakers are trying to reduce those harms without blocking progress.

What the EU AI Act requires

The law applies to providers and deployers of AI systems in the EU. It creates several layers of obligations:

  • Bans on certain practices. These include certain forms of social scoring, biometric categorization that uses sensitive traits, and the untargeted scraping of facial images to build facial recognition databases.
  • High-risk systems face rigorous controls. These systems include AI used in critical infrastructure, education, employment, law enforcement, and other sensitive areas. Providers must implement risk management, quality data governance, technical documentation, logging, human oversight, robustness, and cybersecurity. Many systems will need a conformity assessment before entering the market.
  • Transparency duties. Some AI tools that interact with people must disclose that they are AI. Systems that generate or manipulate content may need to indicate that content is AI-generated.
  • General-purpose models (including large foundation models) carry specific obligations. These may include technical documentation, model evaluation, and information-sharing with downstream developers to support safe deployment.
  • Enforcement and penalties. National regulators will oversee compliance. The law provides for significant fines for serious violations, scaled to a company’s global annual turnover.

The rules will phase in over time, starting with bans and transparency duties, and later the full high-risk regime. The Commission and national authorities plan to offer guidance and regulatory sandboxes to help firms test new systems under supervision.

The U.S. approach: a playbook, not a mandate

The U.S. has opted for a toolbox rather than a single law. NIST’s AI Risk Management Framework (AI RMF) provides a common language for AI risk. It is organized around four core functions applied across the AI lifecycle: map, measure, manage, and govern. It describes the attributes of trustworthy systems and suggests practical steps for organizations; a brief illustration follows the list below.

  • Map: Understand context, impacts, and stakeholders. Identify who could be harmed and how.
  • Measure: Test for accuracy, robustness, bias, privacy, and security. Use quantitative and qualitative methods.
  • Manage: Mitigate risks, monitor performance, and respond to incidents.
  • Govern: Assign roles, document decisions, and ensure accountability across teams.
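
How a company turns those functions into day-to-day decisions is left to the company itself. The sketch below is one rough illustration of how a team might encode the "measure" and "manage" functions as a pre-deployment gate; the metric names, thresholds, and helper functions are hypothetical examples, not values or terms drawn from the NIST framework.

    # Hypothetical illustration of a pre-deployment gate. The metrics and
    # thresholds below are placeholders a review board might set in advance,
    # not values taken from the NIST AI RMF.
    from dataclasses import dataclass

    @dataclass
    class EvaluationResult:
        accuracy: float          # share of correct predictions on a held-out test set
        max_group_gap: float     # largest accuracy gap between demographic groups
        robustness_drop: float   # accuracy lost under perturbed or adversarial inputs

    THRESHOLDS = {
        "accuracy_min": 0.90,
        "group_gap_max": 0.05,
        "robustness_drop_max": 0.10,
    }

    def deployment_gate(result: EvaluationResult) -> tuple[bool, list[str]]:
        """Return (approved, reasons); block deployment if any metric misses its threshold."""
        failures = []
        if result.accuracy < THRESHOLDS["accuracy_min"]:
            failures.append(f"accuracy {result.accuracy:.2f} is below {THRESHOLDS['accuracy_min']}")
        if result.max_group_gap > THRESHOLDS["group_gap_max"]:
            failures.append(f"group gap {result.max_group_gap:.2f} exceeds {THRESHOLDS['group_gap_max']}")
        if result.robustness_drop > THRESHOLDS["robustness_drop_max"]:
            failures.append(f"robustness drop {result.robustness_drop:.2f} exceeds {THRESHOLDS['robustness_drop_max']}")
        return len(failures) == 0, failures

    # Example: accurate overall, but the fairness check fails, so deployment is blocked.
    approved, reasons = deployment_gate(
        EvaluationResult(accuracy=0.93, max_group_gap=0.08, robustness_drop=0.04)
    )
    print(approved, reasons)   # False ['group gap 0.08 exceeds 0.05']

The point is not the particular numbers, which each organization would set for itself, but that the decision to ship or stop is made against thresholds agreed in advance and recorded for later review.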

Federal agencies have also issued sector guidance. Financial, health, and consumer protection regulators are signaling that existing laws on discrimination, safety, and advertising still apply to AI. Industry standards bodies are working on documentation, audits, and benchmarks.

What this means for businesses

Firms that develop or deploy AI will need to tighten their processes. The changes are less about one-off fixes and more about how teams build and run systems over time.

  • Inventory systems. Map where AI is used, who owns it, and what risks it poses. Include vendor models and the generative tools employees use (a sketch of one possible inventory entry follows this list).
  • Adopt a risk program. Align with NIST’s lifecycle. Set thresholds for accuracy, robustness, and bias. Decide when to stop a deployment if metrics fall short.
  • Document thoroughly. Record training data sources, intended use, limitations, and known failure modes. Keep logs for traceability.
  • Test and monitor. Run pre-deployment and ongoing evaluations. Include red-teaming for safety and security. Monitor drift and model updates.
  • Enable human oversight. Define when a person must be in the loop. Train staff on how to interpret and challenge AI outputs.
  • Engage legal and compliance early. The EU AI Act and sector rules may trigger conformity assessments, notices, or impact assessments.
  • Be transparent with users. Label AI-generated content where required. Offer clear, plain-language explanations of what the system does and its limits.
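
To make the first item more concrete, here is a minimal sketch of what a single entry in an AI system inventory might look like; the fields, names, and values are hypothetical, chosen to show the kind of information a register could hold rather than any required schema.

    # Hypothetical illustration of an AI system inventory entry. Field names
    # and values are assumptions, not a regulatory schema.
    from dataclasses import dataclass, field

    @dataclass
    class AISystemRecord:
        name: str                    # internal system name
        owner: str                   # accountable team or person
        vendor: str | None           # third-party provider, if any
        intended_use: str            # what the system is meant to do
        risk_level: str              # e.g. "minimal", "limited", or "high" under internal policy
        training_data_sources: list[str] = field(default_factory=list)
        known_limitations: list[str] = field(default_factory=list)
        human_oversight: str = ""    # when and how a person reviews outputs

    inventory = [
        AISystemRecord(
            name="resume-screening-assistant",
            owner="HR analytics team",
            vendor="ExampleVendor Inc.",   # placeholder vendor name
            intended_use="Rank incoming applications for recruiter review",
            risk_level="high",             # employment is a high-risk area under the EU AI Act
            training_data_sources=["historical hiring records, 2018-2023"],
            known_limitations=["not validated for roles outside the original market"],
            human_oversight="A recruiter reviews every shortlist before any outreach",
        ),
    ]

    # A compliance team can then query the register, for example to list high-risk systems.
    high_risk = [record.name for record in inventory if record.risk_level == "high"]
    print(high_risk)   # ['resume-screening-assistant']

A register like this also gives legal and compliance teams a single place to check which systems may trigger conformity assessments, notices, or impact assessments.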

Supporters and critics both see high stakes

Supporters say clear rules build trust and reduce the chance of harmful mistakes. They argue that safe, reliable systems will scale faster. Consumer groups welcome bans on intrusive uses and stronger oversight in high-risk areas.

Industry has warned about costs and uncertainty. Startups fear heavy documentation and assessment burdens. Companies building general-purpose models want clarity on tests, thresholds, and how to share technical details without exposing trade secrets. Regulators say they will consult and publish guidance. They also point to sandboxes to reduce compliance friction for small firms.

What remains unclear

Several issues are still evolving:

  • Technical standards. Many obligations will rely on harmonized standards for testing robustness, bias, and transparency. These are in development.
  • Model evaluations. Independent benchmarks for general-purpose models are emerging, but they remain in flux. Agreement on safety thresholds will take time.
  • Global interoperability. Firms operate across borders. Aligning EU rules, U.S. guidance, and other regimes will be a challenge.
  • Liability. When AI causes harm, who is responsible? Providers, deployers, and integrators may share duties under different laws.

The bottom line

AI governance is entering a more mature phase. Europe is setting binding rules. The United States is offering a playbook many companies already follow. The direction is consistent: build systems that are safe, fair, transparent, and accountable. Organizations that invest now in risk management will move faster later. They will be better placed to answer a simple, critical question from regulators and customers: Does your AI work as intended, and what happens when it does not?

The new guardrails will not stop innovation. They will put more responsibility on those who build and deploy AI. That is the trade-off policymakers are making. As Hinton warned, bad actors will try to misuse the technology. The task for everyone else is to build it well, test it hard, and tell the truth about what it can and cannot do.