New AI Rules Are Coming: What to Expect
Governments are racing to set rules for artificial intelligence. The European Union has adopted the first broad law for AI. The United States has moved with an executive order and new federal guidance. The G7 and the U.K. have pushed voluntary codes and safety pledges. The result is a fast-rising bar for how AI is built, tested, and used. Companies, researchers, and the public will feel the change.
A fast-moving rulebook
The European Union’s Artificial Intelligence Act entered into force in 2024. It creates a risk-based system. Uses deemed unacceptable are banned. High-risk systems face strict checks and documentation. General-purpose models must meet transparency and safety duties. Enforcement will ramp up in stages over the next few years.
In the United States, the White House issued an Executive Order on AI in late 2023. It directs agencies to set standards for safety, security, and privacy. It also uses the Defense Production Act to require developers of the most powerful models to report safety test results to the government. The National Institute of Standards and Technology (NIST) is shaping technical guidance. Its AI Risk Management Framework urges organizations to make systems more secure, reliable, and fair. In March 2024, the Office of Management and Budget told federal agencies to appoint Chief AI Officers, inventory AI use cases, and apply safeguards.
International efforts are adding pressure. The G7’s Hiroshima Process produced a voluntary code of conduct for advanced AI developers. The U.K. convened the AI Safety Summit in 2023 and launched a safety institute. Industry groups have pushed content provenance standards so people can see when images or audio are AI-generated.
Timelines and scope
Many rules do not apply overnight. But deadlines are coming.
- EU bans on the most harmful uses: These take effect first, months after the law’s entry into force. Examples include certain forms of social scoring and some uses of real-time remote biometric identification.
- General-purpose AI duties: Disclosure, technical documentation, and model testing obligations follow. The law sets extra requirements for models that pose systemic risks.
- High-risk systems: Most compliance duties for high-risk applications start later, generally in the two- to three-year window. Providers must perform conformity assessments and ensure human oversight, quality data, and robust logging.
- U.S. federal rules: Agencies are phasing in risk controls, impact assessments, and transparency for government AI uses. The Executive Order’s reporting and safety testing provisions apply to companies training very large models.
These timelines mean organizations have months, not years, to prepare. Small tweaks will not be enough for many products. Documentation, governance, and testing will need to improve.
What companies need to do now
- Map your AI: Create a full inventory of models, data sources, and use cases. Identify where your systems fall on EU risk tiers and under U.S. agency guidance.
- Harden your pipelines: Tighten data governance. Track data lineage. Filter sensitive attributes. Document synthetic data use. Log model changes and deployments.
- Test and red-team: Build regular adversarial testing into the lifecycle. Probe for safety, security, and bias issues. Record findings and fixes.
- Explain and disclose: Prepare plain-language summaries of capabilities, limits, and intended use. Add user notices where required. Label AI-generated media and adopt content provenance tools.
- Enforce human oversight: Define when a human must review or override AI outputs. Train staff. Set escalation paths for harmful or high-impact outcomes.
- Monitor in the wild: Track performance after deployment. Capture feedback and incidents. Update models responsibly. Keep audit trails.
- Plan for audits: Assemble technical documentation and risk assessments. Align with the NIST AI Risk Management Framework. Be ready to show regulators how you manage risk.
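The inventory and audit steps above can be sketched in code. The following is a minimal illustration in Python; the class names, risk tiers, and fields are hypothetical, loosely modeled on the EU AI Act's risk categories, and not any official or regulator-mandated schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers loosely modeled on the EU AI Act's categories."""
    UNACCEPTABLE = "unacceptable"   # banned uses
    HIGH = "high"                   # strict checks and documentation
    LIMITED = "limited"             # transparency duties
    MINIMAL = "minimal"

@dataclass
class AIUseCase:
    """One entry in an organization-wide AI inventory."""
    name: str
    model: str
    data_sources: list[str]
    risk_tier: RiskTier
    human_oversight: bool           # is a human reviewer in the loop?
    audit_log: list[dict] = field(default_factory=list)

    def record_event(self, event: str, detail: str) -> None:
        """Append a timestamped entry so an audit trail accumulates."""
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "detail": detail,
        })

# Example: register a hiring screener, a use the EU treats as high-risk.
screener = AIUseCase(
    name="resume-screener",
    model="internal-ranker-v2",
    data_sources=["applicant-tracking-system"],
    risk_tier=RiskTier.HIGH,
    human_oversight=True,
)
screener.record_event("red-team", "bias probe on resumes; findings filed")
```

Even a simple registry like this makes the later steps easier: red-team findings, deployment changes, and oversight decisions land in one place that can be shown to an auditor.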
The open questions
Important debates are unresolved. Copyright law is still being tested. News outlets, authors, and artists have sued over the use of their works in training. AI companies argue that training on public data can be fair use. Courts will decide. Competition is another fault line. U.S. antitrust enforcers have signaled interest in AI chip and cloud market power. The EU is watching how large platforms integrate generative AI into core services.
Transparency is also contested. Watermarking and content provenance are improving. But watermarking can be removed. Provenance metadata can be stripped. Standards, such as those backed by the Coalition for Content Provenance and Authenticity, are spreading. They are not yet universal.
Energy and infrastructure are constraints. Advanced AI needs massive compute. Analysts say data center electricity demand is rising fast, with AI as a driver. That raises climate and cost questions. It also affects where and how companies build models.
Why it matters for the public
People will see changes in daily life. Some systems will be labeled more clearly. Apps will explain when they use AI and what it can and cannot do. High-risk tools in health care, hiring, credit, and public services will face tighter checks. That could reduce unfair bias and errors. Political and scam deepfakes are a growing threat. Regulators are moving against AI voice cloning in robocalls and deceptive election content. Enforcement will be tested in the next election cycles.
Privacy remains a core concern. Training and fine-tuning rely on vast data. New rules push for data minimization, stronger security, and user control. But trade-offs exist. Stricter limits can improve rights but also slow innovation. Policymakers are trying to balance both.
Expert voices
U.S. President Joe Biden framed the moment in plain terms when he signed the 2023 Executive Order. “To realize the promise of AI and avoid the risk, we need to govern this technology.”
OpenAI chief executive Sam Altman offered a similar warning in a U.S. Senate hearing that year. “If this technology goes wrong, it can go quite wrong.”
Supporters of new rules say they bring clarity. A single, predictable framework can help honest actors. Critics worry about compliance costs, especially for startups and open-source projects. They fear rules could favor incumbents with deep pockets and legal teams.
The bottom line
AI is moving from a free-for-all to a governed space. The EU AI Act sets a global marker. The U.S. is building a web of standards, reporting, and agency oversight. Other countries are crafting their own paths. For companies, the message is simple. Treat AI risk like financial or cybersecurity risk. Build controls now. For citizens, oversight should bring more transparency and safety. It will not eliminate harm. But it can make that harm rarer, easier to spot, and easier to fix.
The next 12 to 24 months will be critical. Technical standards will harden. Laws will bite. Court rulings will draw lines on copyright and data. The winners will be those who can innovate and comply at the same time.