AI Rules Go Global: 2024’s Big Governance Push
Regulators race to keep up with AI’s breakneck rise
Governments across the world moved fast in 2024 to set new rules for artificial intelligence. The European Union advanced the first comprehensive AI law. The United States turned a sweeping presidential order into agency action. The United Nations adopted a global resolution. Together, these steps mark a shift from debate to implementation. They also raise practical questions for developers, businesses, and users.
Officials stress a common goal. As a White House fact sheet put it, the aim is “seizing the promise and managing the risks of AI.” The message is similar in other capitals. Leaders want innovation. They also want safeguards for safety, privacy, and rights.
Europe sets a marker with the AI Act
The EU’s Artificial Intelligence Act is the most visible change. Lawmakers endorsed the package in 2024 after years of negotiation. The law uses a risk-based approach. It sets strict duties for high-risk systems and bans a narrow set of practices.
- Prohibited uses: Social scoring by public authorities. Untargeted scraping of facial images to build facial recognition databases. Biometric categorization based on sensitive traits. Most real-time remote biometric identification in public spaces, with narrow law enforcement exceptions.
- High-risk obligations: Providers must assess and manage risks, use high-quality data, keep logs, and ensure human oversight. They must register high-risk systems in an EU database and allow scrutiny.
- General-purpose AI (GPAI): Providers of general-purpose models face transparency duties, including technical documentation and summaries of training content. The most capable models, deemed to pose systemic risk, face additional testing, evaluation, and incident reporting.
- Enforcement and fines: National authorities will supervise. Non-compliance can trigger penalties of up to €35 million or 7 percent of global annual turnover for the most serious violations, whichever is higher.
Rollout will be staged. Bans on prohibited practices apply first, within months of the law entering into force. Obligations for general-purpose models follow about a year later, and most high-risk requirements phase in over the following two to three years. The European Commission will issue guidance and harmonized standards to clarify obligations.
Industry groups call the law a landmark. Some startups warn that compliance could be costly. Civil society groups welcome the bans but worry about carve-outs, especially for law enforcement. EU officials argue the law balances innovation and rights. They say it gives companies clarity while setting clear red lines.
U.S. policy shifts from principles to practice
The United States does not have a single AI law. Instead, it is using existing powers and a 2023 executive order to push common standards. Federal agencies spent 2024 turning those directives into rules, testing programs, and guidance.
- Testing and safety: The National Institute of Standards and Technology (NIST) launched the U.S. AI Safety Institute and a large public-private consortium. Its work builds on NIST’s AI Risk Management Framework, which defines traits of trustworthy AI such as “valid and reliable” and “secure and resilient.”
- Security: The Department of Homeland Security directed critical infrastructure sectors to assess AI-related risks. The Commerce Department advanced reporting rules for powerful models under the executive order.
- Government use: The Office of Management and Budget set rules for federal AI use, including impact assessments, human oversight, and public inventories.
U.S. officials say the approach is pragmatic. It uses existing tools while Congress debates new laws. The White House says the goal is AI that is “safe, secure, and trustworthy,” a phrase echoed in many agency notices.
A global chorus: UN and the Bletchley Declaration
In March 2024, the UN General Assembly adopted its first global resolution on artificial intelligence. It is not legally binding, but it signals broad agreement on core ideals. The text urges the “safe, secure and trustworthy” development of AI in support of human rights and sustainable development. More than 120 countries co-sponsored the resolution, which was adopted by consensus, according to UN records.
Months earlier, in November 2023, the UK hosted the AI Safety Summit at Bletchley Park, where 28 countries and the European Union, including the U.S. and China, signed the Bletchley Declaration. The statement calls for “international cooperation to address the risks of frontier AI” and for deeper shared understanding of those risks. It also set the stage for follow-on summits and technical work on evaluations and incident reporting.
What changes for companies and developers
The policy wave carries practical consequences. Some are already here. Others will phase in over the next two years.
- Due diligence becomes routine: Expect more model and system testing, from bias checks to cybersecurity reviews. EU rules require extensive documentation. U.S. guidance points to structured risk assessment.
- Supply chain pressure: Large buyers, including governments and cloud platforms, are adding AI clauses to contracts. Vendors will need to show evidence of safety and data governance.
- Transparency by default: The EU and several national regulators want clear labeling for AI-generated content in some contexts. Model developers face disclosure duties on training data practices and capabilities.
- Monitoring after deployment: The EU Act and U.S. guidance emphasize post-market surveillance. Logs, incident reports, and update policies are becoming standard.
Developers of general-purpose models face extra scrutiny. They may need to share technical summaries, cooperate with audits, and manage downstream risks. Application builders should map their use cases against the EU’s risk tiers and sector rules in their home markets.
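For teams doing that mapping, the exercise often begins as a simple inventory of use cases and the duties each one attracts. The sketch below is purely illustrative: the tier names loosely follow the AI Act’s broad risk categories, but the example use cases, the obligations checklist, and the helper function are hypothetical and are not legal guidance.

```python
# Illustrative sketch only: one way a team might track how its AI use cases
# map onto the EU AI Act's broad risk tiers. Examples and checklists here are
# hypothetical placeholders, not a statement of legal requirements.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"      # e.g. social scoring by public authorities
    HIGH_RISK = "high-risk"        # e.g. hiring, credit scoring, medical uses
    LIMITED_RISK = "limited-risk"  # transparency duties, e.g. chatbots
    MINIMAL_RISK = "minimal-risk"  # most other applications


@dataclass
class UseCase:
    name: str
    tier: RiskTier
    notes: str = ""


def obligations(case: UseCase) -> list[str]:
    """Return a rough, illustrative checklist of duties for a given tier."""
    if case.tier is RiskTier.PROHIBITED:
        return ["do not deploy in the EU"]
    if case.tier is RiskTier.HIGH_RISK:
        return ["risk management", "data governance", "logging",
                "human oversight", "registration in the EU database"]
    if case.tier is RiskTier.LIMITED_RISK:
        return ["disclose AI interaction / label AI-generated content"]
    return ["voluntary codes of conduct"]


if __name__ == "__main__":
    inventory = [
        UseCase("resume screening assistant", RiskTier.HIGH_RISK),
        UseCase("customer support chatbot", RiskTier.LIMITED_RISK),
    ]
    for case in inventory:
        print(f"{case.name} -> {obligations(case)}")
```

In practice, an inventory like this would sit alongside legal review and the Commission’s forthcoming guidance, and would be updated as standards and sector rules firm up.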
Supporters, critics, and open questions
Supporters argue the new rules provide clarity and protect the public. They say guardrails will build trust and unlock investment. Consumer advocates welcome bans on harmful uses. They also call for stronger enforcement and accessible complaint channels.
Critics warn of burdens on smaller firms. They say compliance costs could entrench large players and push research to looser jurisdictions. Some researchers fear that strict liability could chill open-source work. Others counter that the toughest obligations fall on the largest models and high-risk uses, not on basic research.
There are unresolved issues. Copyright remains a flashpoint, as courts weigh whether training models on copyrighted material gathered from the web is compatible with intellectual property law. Cross-border enforcement will be complex. So will the task of measuring model risks, where evaluation methods are still evolving.
How citizens may notice the changes
- More notices and labels: Apps may add clearer signals when content is AI-generated. Some services will offer ways to opt out of certain AI features.
- Stronger oversight in sensitive uses: Healthcare, hiring, and education systems will face tighter scrutiny. Expect more human review in high-stakes decisions.
- Channels to report harm: New complaint portals and hotlines will open as regulators set up enforcement teams.
What to watch next
- Standards and tests: NIST, the EU, and international bodies will publish benchmarks for safety, robustness, and disclosure. These will shape audits and procurement.
- Enforcement cases: The first fines and corrective orders under the EU AI Act will set the tone. U.S. regulators may also bring cases under existing consumer protection and civil rights laws.
- Elections and misinformation: Several countries vote in 2025. Platforms and campaigns face pressure to label synthetic media and curb deepfakes.
- Global coordination: Follow-up summits to the Bletchley meeting will test whether countries can agree on shared evaluation protocols and incident reporting.
The bottom line
AI governance is moving from principles to practice. The EU’s law, U.S. agency actions, and international statements point in the same direction: promote innovation while managing risk. As one UN resolution puts it, the goal is “safe, secure and trustworthy” AI that serves people. The real test starts now, as rules meet real systems and real markets.