AI Rules Are Coming: What Changes Now
Regulators move from promises to enforcement
After years of pledges about responsible artificial intelligence, governments are now writing rules with teeth. The European Union’s landmark AI Act is entering phased implementation. The United States has issued an executive order on AI safety. China and other major markets are tightening requirements on generative systems. Companies that built fast now face a new reality: document, test, disclose, and explain.
The momentum reflects a simple tension. AI is powering new products and productivity gains. It is also raising risks to privacy, safety, and fairness. As Google chief executive Sundar Pichai said in 2018, “AI is one of the most important things humanity is working on. It is more profound than electricity or fire.” That promise has a shadow. OpenAI’s Sam Altman told the U.S. Senate in 2023, “I think if this technology goes wrong, it can go quite wrong.”
The path to today’s rules
Early efforts focused on guidance. In 2019, the OECD adopted principles that many governments now cite. One of them states, “AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.” UNESCO followed with a global recommendation on AI ethics. The G7 launched the Hiroshima AI Process in 2023, encouraging voluntary codes of conduct. The United Kingdom convened the AI Safety Summit at Bletchley Park the same year, where countries endorsed cooperation on frontier risks.
Policy moved faster after the rise of large language models. In October 2023, the White House issued an executive order titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” It called for safety testing, watermarking guidance, and federal procurement standards. China adopted rules for generative AI services that require security assessments and content moderation. The EU pressed ahead with a horizontal law that spans sectors and systems.
What the EU AI Act requires
The EU AI Act uses a risk-based approach:
- Prohibited practices: Certain uses, such as social scoring by governments, are banned.
- High-risk systems: AI used in areas like critical infrastructure, medical devices, employment, education, and essential services must meet strict obligations.
- Limited-risk systems: Tools like chatbots face transparency rules that inform people they are interacting with AI.
- General-purpose models: Providers of powerful foundation models face transparency, documentation, and, for the most capable models, extra safety and cybersecurity measures.
High-risk providers will need to implement risk management, ensure quality data, provide technical documentation, enable human oversight, and maintain post-market monitoring. Many systems will require conformity assessments and CE marking before entering the EU market.
Penalties are significant. For banned uses, fines can reach up to €35 million or 7% of global annual turnover, whichever is higher. Other violations can draw up to €15 million or 3%. Supplying incorrect information to regulators can bring up to €7.5 million or 1%. The law phases in over time, with bans taking effect before full high-risk obligations.
What companies are doing now
Firms deploying AI in the EU are taking stock. Many are mapping their systems to risk categories and building governance programs.
- Model documentation: Teams are producing system cards that describe capabilities, limits, and training data practices.
- Testing and red teaming: Companies are stress-testing models for prompt injection, jailbreaks, unfair bias, and safety failures.
- Data governance: Legal and data teams are adding provenance checks, consent records, de-identification, and copyright compliance, including honoring opt-outs for training data where required.
- Human oversight: Interfaces are adding review steps so humans can correct or override AI decisions in sensitive settings.
- Incident response: New playbooks define how to detect, report, and remediate harmful outputs and model drifts.
Suppliers of general-purpose models face a transparency push. They are being asked to share information with downstream developers so customers can meet their own duties. That includes details on capabilities, known risks, and how to integrate safeguards.
What the U.S., U.K., and China are emphasizing
Policy is diverse, but themes rhyme.
- United States: The 2023 executive order directs agencies to set rules for safety testing, secure model development, and AI in critical uses like health and finance. It leans on standards bodies, including NIST’s AI Risk Management Framework, and uses federal procurement to spread best practices.
- United Kingdom: The U.K. has favored a context-specific approach, asking sector regulators to apply AI principles rather than creating a single AI statute. Its safety summits focus on frontier model risks and international evaluations.
- China: Rules on generative AI require providers to conduct security assessments, label content, and align outputs with existing content laws. Providers must file with authorities before offering public services.
The stakes for sectors
Healthcare, finance, and hiring are likely to see the most immediate effects. High-risk classification and sector rules overlap in these areas. That means tighter validation, clinical or statistical evidence, and clear accountability chains.
Small and medium-sized enterprises face a resource challenge. Compliance demands documentation, testing, and legal review. Support may come from open standards, shared toolkits, and industry groups. Policymakers say they aim to avoid chilling innovation while raising the floor on safety.
Supporters and critics
Advocates argue that common rules will build trust and unlock adoption. Shared baselines can reduce the risk of a few high-profile failures souring public opinion. Consumer groups welcome requirements for transparency and the right to contest important automated decisions.
Industry voices worry about fragmentation across borders. A patchwork of laws can raise costs and slow deployment. Open-source developers warn that sweeping obligations could burden research. Civil society groups warn that narrow definitions could leave gaps, especially in areas like surveillance and workplace monitoring.
How to prepare
- Inventory systems: List AI tools in use and map them to risk categories where applicable.
- Adopt standards: Use widely accepted frameworks for risk management, security, and transparency.
- Strengthen data practices: Track data sources, licenses, and opt-outs. Improve quality controls.
- Build “human in the loop”: Ensure meaningful oversight for impactful decisions.
- Test and monitor: Red-team models before launch and monitor after deployment.
- Align contracts: Update agreements with vendors to include documentation, support, and audit rights.
What to watch next
- EU guidance: More guidance from Brussels on high-risk categories, general-purpose models, and conformity assessments.
- Standards race: Technical standards for watermarking, provenance, and evaluations that could become de facto global norms.
- Cross-border cooperation: Moves by the G7, OECD, and other forums to align safety testing and reporting.
- Enforcement cases: Early actions by regulators that will set precedents for fines and remediation.
The arc is clear. The AI era is moving from promise to proof. Policymakers want systems that are safe, fair, and explainable. Developers want room to experiment. Both aims can coexist if rules are clear and tools are practical. The challenge now is execution—turning principles into engineering, procurement, and everyday practice.