AI Rules Tighten: EU Act and US Order Explained
Governments move from principles to enforcement
Regulation of artificial intelligence is entering a new phase. The European Union’s AI Act took legal effect in 2024, setting the first comprehensive framework for AI in a major market. In the United States, the White House issued an executive order calling for “safe, secure, and trustworthy” AI, and tasked agencies with concrete steps. Other governments are following with their own rules. Companies now face stricter expectations, more audits, and higher penalties if they get AI wrong.
The changes reflect a broad shift. Policymakers are moving beyond voluntary pledges to binding requirements. The focus is risk, transparency, and accountability. The pace is quickening as AI spreads across sectors, from finance and health to media and public services.
What the EU AI Act does
The EU AI Act is built on a risk-based approach. It classifies AI systems by the potential harm they can cause and tailors obligations to the risk level. The law applies to providers placing AI systems on the EU market and to deployers using them in the region, even if they are based elsewhere.
- Prohibited practices: Certain uses are banned, such as social scoring that leads to detrimental treatment and AI that exploits vulnerabilities of specific groups. The law also restricts real-time remote biometric identification in publicly accessible spaces to narrow, largely law-enforcement exceptions.
- High-risk systems: AI used in critical areas—like medical devices, employment, credit scoring, transportation, and essential services—must meet strict requirements. These include risk management, high-quality training data, technical documentation, human oversight, and post-market monitoring.
- General-purpose AI (GPAI): Developers of large, general models face added duties, including transparency, technical documentation, and model evaluation. Models deemed to pose systemic risk are subject to extra safeguards and reporting obligations.
- Penalties: Fines for the most serious breaches, such as deploying prohibited practices, can reach €35 million or 7% of global annual turnover, whichever is higher. Lesser violations still draw substantial penalties.
- Timeline: Obligations are phased in over several waves. Bans on the most harmful practices apply first in early 2025, with general-purpose model duties following in mid-2025 and most high-risk system requirements landing in 2026.
Industry groups have asked for clarity on definitions and testing methods. Civil society organizations have pushed for strong enforcement and remedies for individuals. The European Commission has set up structures to support implementation, including a European AI Office to coordinate oversight.
What the US executive order changes
The US approach relies on agency action, standards, and procurement. The Biden administration’s 2023 executive order directs federal agencies to advance AI safety, security, privacy, and civil rights. It invokes the Defense Production Act to impose certain reporting obligations on developers of the most powerful models.
- Safety testing: Developers of powerful models are directed to conduct red-team testing and share results with the government when their models exceed compute thresholds set in the order.
- Standards and guidance: The National Institute of Standards and Technology is expanding its AI Risk Management Framework with guidance on red-teaming, evaluations, and secure development.
- Critical infrastructure and cybersecurity: The Department of Homeland Security and sector regulators are adapting AI guidelines for high-stakes use.
- Privacy and data: The order encourages privacy-preserving techniques and calls for protections against algorithmic discrimination in housing, hiring, and credit.
- Government use: Federal procurement policy will require vendors to meet safety, security, and transparency standards when supplying AI to agencies.
The White House framing emphasizes “safe, secure, and trustworthy” AI as a national priority. The shift gives regulators more tools to demand evidence of safety and to hold public-sector deployments to account.
The global picture: converging on risk and transparency
Governments are coordinating across borders. The G7’s Hiroshima process produced a voluntary code of conduct for advanced AI firms. The United Kingdom convened the AI Safety Summit in 2023, with countries endorsing the Bletchley Declaration to work together on frontier risks. The Organisation for Economic Co-operation and Development updated its AI Principles, highlighting “human-centered values and fairness” and accountability. China’s rules for recommendation algorithms and generative AI set disclosure and security requirements, especially for public-facing services.
While legal details differ, the themes align: identify risks, test systems, keep humans in the loop, and document how models work and are used. That is reshaping product roadmaps and corporate governance in technology and beyond.
Expert voices on the stakes
Researchers have warned about both current harms and speculative risks. In a 2023 public statement led by the Center for AI Safety, prominent scientists said, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Others emphasize immediate issues like bias, privacy, and misuse in scams and disinformation.
Standards bodies also stress practical risk management. NIST organizes its AI Risk Management Framework around four functions, govern, map, measure, and manage, to help organizations identify risks, assess impacts, and control them through testing and monitoring. The message is consistent: document decisions, test systematically, and learn from incidents.
What companies should do now
Compliance is becoming a core part of AI strategy. Firms that prepare early will have an advantage when audits, tenders, or cross-border sales require evidence.
- Inventory and classify: Create an AI use inventory. Map each system to a risk level. Identify which uses may be high-risk under the EU Act or subject to sector rules (a minimal sketch of such a mapping follows this list).
- Data governance: Track data lineage, consent basis, and quality controls. Document how you address bias and representativeness in training data.
- Model evaluations: Adopt standard testing for robustness, fairness, privacy, and security. Record test plans, results, and mitigation steps.
- Human oversight: Define who can intervene, with clear escalation paths. Train staff on limitations and failure modes.
- Vendor and open-source diligence: Request model cards, system cards, and security attestations. Track licenses and usage constraints for third-party models and datasets.
- Incident response: Set up logging, monitoring, and a process to report serious incidents to regulators where required.
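To make the inventory-and-classify step concrete, here is a minimal Python sketch of an AI system register that assigns provisional risk tiers and flags items for review. The tier labels, domain names, and the classify helper are illustrative assumptions rather than terms drawn from the Act; a real classification exercise would rest on the law's annexes and legal advice.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Simplified tiers loosely mirroring the EU AI Act's risk-based approach (illustrative)."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED = "limited"    # e.g. transparency duties only
    MINIMAL = "minimal"


# Illustrative domain list only: real classification depends on the Act's annexes and legal review.
HIGH_RISK_DOMAINS = {"employment", "credit-scoring", "medical-device", "essential-services"}


@dataclass
class AISystem:
    name: str
    domain: str                  # business domain the system operates in
    user_facing: bool = False
    notes: list[str] = field(default_factory=list)


def classify(system: AISystem) -> RiskTier:
    """Assign a provisional risk tier and record follow-up actions for human review."""
    if system.domain in HIGH_RISK_DOMAINS:
        system.notes.append("Review high-risk obligations: risk management, data governance, "
                            "documentation, human oversight, post-market monitoring.")
        return RiskTier.HIGH_RISK
    if system.user_facing:
        system.notes.append("Check transparency duties, e.g. disclosing AI-generated interactions.")
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


if __name__ == "__main__":
    inventory = [
        AISystem("resume-screener", domain="employment"),
        AISystem("support-chatbot", domain="customer-service", user_facing=True),
        AISystem("warehouse-forecaster", domain="logistics"),
    ]
    for s in inventory:
        tier = classify(s)
        print(f"{s.name:22s} -> {tier.value:10s} {'; '.join(s.notes)}")
```

Keeping the register in code or configuration has a side benefit: the mapping is versioned and auditable, so classification changes can be reviewed as the Act's obligations phase in.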
For startups, the administrative burden is real. But many of the required controls are good engineering and responsible product development, scaled to the risk. Building them in early tends to be cheaper than retrofitting later.
Unresolved questions
Important issues remain. Regulators must build technical capacity to audit complex systems. Thresholds for what counts as a systemic-risk model will evolve as compute and techniques advance. Debate continues over how rules apply to open-source models and research release practices. Cross-border data flows and differing national standards also complicate compliance for global products.
Industry leaders warn that miscalibrated rules could stifle innovation. Advocacy groups warn that gaps could leave communities exposed to discrimination or surveillance. Lawmakers say the goal is balance. The EU Act and the US order are attempts to set guardrails without freezing progress. The next tests will be in enforcement, courts, and real-world outcomes.
Why this matters for the public
AI is becoming part of everyday life, often behind the scenes. Better rules can reduce errors in health care, make credit scoring fairer, and curb deceptive content. They can also bring more transparency to automated decisions. That makes it easier to challenge mistakes.
The stakes are high, and the pace is fast. As countries align on standards and obligations, the era of loose promises is ending. What comes next is rigorous testing, documented risks, and clearer accountability. That is how policymakers hope to turn AI’s promise into safe, reliable systems that people can trust.