AI Rules Tighten: What Companies Need to Know
Regulation catches up with AI’s fast growth
Artificial intelligence is moving from experimental to essential. Regulators are moving with it. Over the past year, governments and standards bodies have started to turn broad principles into detailed rules. The goal is to make AI safer and more reliable, without freezing innovation. The result is a new compliance landscape that many businesses are still mapping.
The European Union’s AI Act is the most comprehensive example. In the United States, the White House issued an executive order on AI in 2023. It set a whole-of-government approach and asked agencies to produce technical guidance. International groups, including the OECD and the G7, have also weighed in. The common themes are clear: risk management, transparency, and accountability.
The EU AI Act: staged obligations and high stakes
The EU AI Act entered into force in 2024 with a phased rollout. Prohibited uses apply first. High-risk obligations arrive later. The law bans practices such as government “social scoring” and places strict limits on “real-time remote biometric identification” in public spaces, with narrow exceptions. It also introduces duties for general-purpose AI models.
Under the Act, most requirements depend on risk. High-risk AI includes systems used in areas like critical infrastructure, employment, education, and essential services. Providers must put in place quality management, data governance, documentation, and human oversight. Conformity assessments and post-market monitoring are part of the package. General-purpose models must offer technical documentation and share information with downstream deployers. Transparency duties also apply in areas like content labeling.
Deadlines are staggered. The shortest timelines apply to prohibited systems. General-purpose model rules come next. Most high-risk obligations take effect later. Companies operating in or supplying to the EU should plan for this sequence. Legal exposure is significant, and enforcement will involve national authorities working with a new EU-level office for AI supervision.
U.S. approach: standards first, enforcement through agencies
In the U.S., the federal strategy relies on existing laws and sector regulators. The 2023 executive order set out the aim of “safe, secure, and trustworthy AI.” It directed the National Institute of Standards and Technology (NIST) to advance testing and evaluation. It also asked agencies to issue guidance in health, finance, and employment.
NIST’s AI Risk Management Framework offers a practical model. It organizes AI governance around four functions: “Govern, Map, Measure, and Manage.” The framework is voluntary, but it is becoming a common reference. Agencies and auditors have started to align guidance with it. The approach emphasizes socio-technical risk, not just raw model performance. It encourages documentation, human oversight, and continuous monitoring.
Other federal moves include reporting requirements for certain high-compute training runs, security reviews for models with dual-use potential, and steps to address deepfakes. The Federal Trade Commission has warned that existing rules against unfair or deceptive practices apply to AI products. State laws are also emerging, especially on biometric privacy and automated decision notices.
Global principles are converging
Internationally, the policy conversation is aligning around a few core ideas. The OECD AI Principles, endorsed by dozens of countries, call for “human-centered values and fairness,” “transparency and explainability,” “robustness, security and safety,” and “accountability.” The G7’s Hiroshima process and the UK-led AI Safety Summit in 2023 highlighted shared concerns about “frontier AI” risks. Standards bodies such as ISO and IEC are developing technical norms that map to these themes.
This convergence matters for companies that operate across borders. While the legal details differ, the controls are similar. Documentation, evaluations, incident handling, and clear user communication form a common baseline. Over time, mutual recognition of assessments may reduce friction, but firms should not assume one country’s compliance will automatically satisfy another’s.
What industry is saying
Developers welcome clarity but warn about costs and ambiguity. Smaller firms worry that compliance burdens could favor larger rivals. Open-source communities have asked for rules that recognize transparent development models. Consumer and rights groups argue that strong guardrails are overdue. They point to risks in areas like employment screening, tenant selection, and law enforcement. They also highlight the spread of synthetic media in politics and scams.
Security teams are focused on model misuse and data leakage. Alignment teams are expanding “red-teaming” to test systems for harmful outputs and deceptive behavior. Product managers are adding disclosures and opt-outs where automated decisions affect people. Many organizations have created AI governance committees that include legal, security, and domain experts.
Five steps organizations can take now
- Inventory your AI systems. Keep an up-to-date list of models, datasets, and uses. Note where systems touch customers or make decisions about people. Classify use cases by risk and geography.
- Adopt a risk framework. Align policies with NIST’s “Govern, Map, Measure, Manage” functions or an equivalent. Write down roles and escalation paths. Make sure business owners, not only data scientists, are accountable.
- Strengthen data governance. Document data sources, consent, and license terms. Track synthetic data and model-generated content. Build processes to correct or remove problematic data.
- Test before and after release. Use structured evaluations, including safety and bias tests. Red-team models for jailbreaks and misuse. Monitor in production with clear incident response playbooks.
- Be transparent with users. Provide plain-language notices when AI meaningfully influences outcomes. Offer explanations that match the context. Give people a channel to appeal or request human review.
What to watch next
Enforcement is the key unknown. In the EU, regulators are setting up the structures needed to oversee complex supply chains. Guidance on general-purpose models and high-risk classification will shape how compliance works in practice. In the U.S., agency actions and court cases will determine how far existing laws reach into AI. Internationally, technical standards and safety testing protocols will influence what “state of the art” looks like.
Another area to watch is content authenticity. News publishers, platforms, and AI developers are experimenting with labels and watermarking. The goal is to help people understand when content is generated or altered by AI. This ties into election integrity and fraud prevention. It also affects creative industries, where credit and compensation remain contested.
The bottom line is that AI governance is becoming part of normal operations. The policy direction is consistent across regions, even if the details differ. Organizations that build risk management into their development processes will be better positioned. Those that wait may find deadlines arrive sooner than expected.
As one international principle neatly puts it, AI should advance “human-centered values and fairness” while being “robust, secure and safe.” That is now more than a slogan. It is the test that many regulators—and customers—will use.