AI Rules Take Shape: What Changes for Business

A new era of AI oversight arrives

Governments are moving fast to set rules for artificial intelligence. The European Union adopted the AI Act in 2024. The United States issued a sweeping executive order in late 2023. The United Kingdom, China, and others have also outlined approaches. Policymakers say they want to protect the public without stalling innovation. Companies now face a clearer message: prepare to prove your AI is safe, fair, and accountable.

Europe’s AI Act sets a global marker

The EU AI Act is the most comprehensive law of its kind to date. It uses a risk-based framework. The law bans certain applications, places strict duties on high-risk systems, and calls for transparency from general-purpose AI models. The rules phase in gradually, with bans taking effect first and most obligations for high-risk systems applying roughly two years after the law entered into force.

The Act prohibits some practices outright. These include social scoring by public authorities and several forms of biometric surveillance. For high-risk uses—such as AI in medical devices, employment, education, and critical infrastructure—providers must meet tight requirements. These include documented risk management, data governance, human oversight, and robust testing. General-purpose and generative AI providers face transparency and safety obligations, scaled by model capabilities and the risks they pose.

Supporters say the EU is setting a baseline others will follow. Critics warn the compliance load could weigh on small developers. Many international firms expect the law to shape their global product design, much as the EU’s privacy law, the GDPR, did after it took effect in 2018.

Washington’s mix: an executive order and standards

The United States does not have a single AI law. Instead, the Biden administration issued an executive order in October 2023. It directs agencies to develop safety, security, and civil rights guidance. It also calls for stronger privacy protections and steps to promote competition.

According to a White House fact sheet, the order “establishes new standards for AI safety and security” and “promotes innovation and competition.” It instructs the National Institute of Standards and Technology to advance testing and evaluation. It tells federal agencies to appoint chief AI officers and inventory AI use cases. It encourages content provenance and labeling for synthetic media, especially around elections and consumer protection.

Congress is debating broader legislation, but timelines are uncertain. In the meantime, sector regulators are active. The Federal Trade Commission has warned against deceptive AI marketing and unfair data practices. Financial and health regulators have reminded firms that existing safety and discrimination rules still apply to AI systems.

Other models: UK, China, and global forums

The United Kingdom favors a “pro-innovation” approach. Instead of a single AI law, it asks existing regulators to apply AI principles in their domains. In 2023, the UK hosted a global AI Safety Summit and brokered the Bletchley Declaration, a nonbinding pledge for international cooperation on frontier risks.

China issued rules for generative AI in 2023. They require providers to conduct security assessments, label AI-generated content, and respond to user complaints. Several jurisdictions in Asia and the Middle East have also published guidance or voluntary codes. The result is a patchwork. But common themes are emerging: transparency, testing, accountability, and protections for consumers and workers.

Why this matters to companies now

  • Documentation is no longer optional. Regulators want clear records of data sources, model design choices, testing methods, and post-deployment monitoring; a minimal sketch of such a record follows this list.
  • Human oversight must be defined. Firms need to specify who can intervene, when, and how. That includes fail-safes for high-risk decisions.
  • Data quality and bias controls are central. Expect scrutiny of training data, labeling, representativeness, and bias mitigation steps.
  • Transparency obligations are widening. Regulators may require, and users increasingly expect, explanations of a system’s limits and performance. Generative outputs may need labels or provenance signals.
  • Supply chain accountability is rising. Contracts with model providers and data vendors should address rights, safety assurances, and incident response.
  • Security is part of compliance. Adversarial testing, model hardening, and abuse monitoring are becoming baseline expectations.
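
Most of the points above are organizational, but the documentation and oversight items can be made machine-checkable. Below is a minimal sketch, in Python, of how a team might represent one AI system as a structured record and flag gaps before sign-off. The field names, risk tiers, and gating rules are illustrative assumptions, not terms taken from the AI Act or any agency guidance.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative risk tiers; real classifications come from the applicable law or regulator.
RISK_TIERS = ("minimal", "limited", "high")

@dataclass
class AISystemRecord:
    """Hypothetical internal record for one deployed AI system (fields are assumptions)."""
    name: str
    purpose: str
    risk_tier: str                      # one of RISK_TIERS (assumed internal convention)
    data_sources: List[str] = field(default_factory=list)
    testing_methods: List[str] = field(default_factory=list)
    human_oversight_owner: str = ""     # who can intervene or roll back decisions
    monitoring_plan: str = ""           # post-deployment monitoring summary

def compliance_gaps(record: AISystemRecord) -> List[str]:
    """Return the missing items that would block sign-off under this sketch's rules."""
    gaps = []
    if record.risk_tier not in RISK_TIERS:
        gaps.append(f"unknown risk tier: {record.risk_tier!r}")
    if not record.data_sources:
        gaps.append("no documented data sources")
    if not record.testing_methods:
        gaps.append("no documented testing methods")
    if record.risk_tier == "high" and not record.human_oversight_owner:
        gaps.append("high-risk system has no named human oversight owner")
    if record.risk_tier == "high" and not record.monitoring_plan:
        gaps.append("high-risk system has no post-deployment monitoring plan")
    return gaps

if __name__ == "__main__":
    resume_screener = AISystemRecord(
        name="resume-screener",
        purpose="rank job applications for recruiter review",
        risk_tier="high",
        data_sources=["historical hiring data (2019-2023)"],
        testing_methods=["bias audit across protected groups", "holdout accuracy test"],
    )
    for gap in compliance_gaps(resume_screener):
        print("GAP:", gap)
```

In practice the fields and checks would map to whatever the applicable law and internal policy actually require; the point is that the record is explicit, versionable, and auditable rather than scattered across emails and slide decks.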

Supporters, skeptics, and the open-source debate

Companies building AI systems say they want clarity. Many have welcomed risk-based rules that focus on uses rather than banning tools outright. At a 2023 U.S. Senate hearing, OpenAI CEO Sam Altman said, “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.”

Startup founders and open-source developers worry about costs. They argue that heavy requirements could entrench the largest firms. Policymakers have responded by tailoring duties based on risk and scale. The EU’s law, for example, includes lighter obligations for lower-risk uses and aims to preserve open-source research while placing tighter rules on very capable general-purpose systems deployed at scale.

Civil society groups push for stronger safeguards. They want tighter limits on opaque surveillance and clearer redress for people harmed by automated decisions. Industry groups warn that rigid rules may freeze current methods in place, reducing flexibility as the science evolves. The debate will continue as regulators write technical standards and guidance.

What to watch next

  • EU implementation. The AI Act will roll out in stages. Expect guidance on high-risk classifications, notified bodies for conformity assessment, and rules for general-purpose models.
  • U.S. agency actions. Watch NIST testing frameworks, FTC enforcement on deceptive AI claims, and sector rules in finance, health, and employment.
  • Labeling and provenance. Tech companies are adopting content credentials standards. Labels help, but experts note that watermarks can be removed and detection remains imperfect.
  • State and local rules. U.S. states have moved on deepfakes, hiring tools, and consumer notices. Multistate compliance will remain complex.
  • International coordination. Forums like the G7 and OECD are pushing shared principles. Companies operating globally should expect converging expectations, even without identical laws.

How to prepare

  • Map your AI portfolio. Inventory systems, purposes, data sources, and users. Identify high-risk uses and public-facing generative features.
  • Build a risk management playbook. Define testing, approval gates, red-teaming, and incident response. Assign accountable owners.
  • Invest in data governance. Track provenance, consent, and licenses. Document cleaning, labeling, and bias mitigation steps.
  • Strengthen transparency. Provide plain-language model cards, limitations, and user guidance. Add content credentials where appropriate. A simple model card sketch follows this list.
  • Review contracts. Update vendor and customer agreements to cover AI safety assurances, IP, and security obligations.
  • Train teams. Educate product, legal, security, and support staff on new duties and escalation paths.
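
To make the transparency item concrete, here is a small sketch that renders a plain-language model card from a handful of structured fields. The layout, field names, and contact address are assumptions for illustration; no regulation or standard prescribes this exact format.

```python
# Minimal sketch: render a plain-language model card from structured fields.
# All field names and the card layout are illustrative assumptions.

def render_model_card(info: dict) -> str:
    """Produce a short, plain-language model card as text."""
    lines = [
        f"Model card: {info['name']}",
        "",
        f"What it does: {info['purpose']}",
        f"Who it is for: {info['intended_users']}",
        f"Training data (summary): {info['data_summary']}",
        "Known limitations:",
    ]
    lines += [f"  - {limit}" for limit in info["limitations"]]
    lines += [
        "How to get help or contest a decision: " + info["contact"],
        f"Last reviewed: {info['last_reviewed']}",
    ]
    return "\n".join(lines)

if __name__ == "__main__":
    card = render_model_card({
        "name": "support-ticket triage assistant",
        "purpose": "suggests a category and priority for incoming support tickets",
        "intended_users": "internal support staff; suggestions are reviewed by a person",
        "data_summary": "three years of anonymized internal tickets; no customer payment data",
        "limitations": [
            "less accurate on tickets written in languages other than English",
            "may misclassify novel product issues not seen in training data",
        ],
        "contact": "ai-governance@example.com",
        "last_reviewed": "2024-06-01",
    })
    print(card)
```

Generating the card from the same structured fields the compliance record uses keeps public disclosures and internal documentation from drifting apart.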

The bottom line

AI oversight is shifting from principles to practice. The EU has codified a broad rulebook. The U.S. is moving through agency actions and standards. Other governments are adding their own models. The trend is clear. If a system can shape someone’s health, job, finances, or rights, regulators will ask for proof it works as intended and can be controlled. For most companies, that means turning voluntary AI policies into auditable processes—before the rules make it mandatory.