AI Rules Tighten as Industry Races Ahead

Governments move from debate to enforcement

Policymakers are shifting from broad promises to detailed rules for artificial intelligence. The European Union has approved the first comprehensive AI law, the EU AI Act, with phased obligations expected to roll out over the next two years. In the United States, a 2023 executive order set new expectations for transparency and safety testing, building on the National Institute of Standards and Technology’s AI Risk Management Framework. The United Kingdom has positioned itself as a convening force after hosting an AI Safety Summit in 2023. The G7 and OECD continue to coordinate on common principles.

For companies deploying generative AI, the message is clear: compliance is no longer optional. Regulators now demand evidence of risk controls, documented data practices, and clear accountability. This shift marks a new phase in the technology’s rapid rise.

Why the stakes are high

Artificial intelligence is powering search, code generation, customer support, and scientific discovery. The scale and speed of adoption are unusual. McKinsey reported in 2023 that more than a third of surveyed organizations used generative AI in at least one function. Chipmakers and cloud providers have invested heavily to meet demand for training and inference.

Industry leaders have framed the technology’s importance in bold terms. Google CEO Sundar Pichai said in 2018 that AI is “more profound than electricity or fire,” underscoring the belief that the technology will touch every sector. AI pioneer Andrew Ng has called AI “the new electricity,” highlighting its general-purpose nature.

The optimism is tempered by risk. At a 2023 U.S. Senate hearing, OpenAI chief executive Sam Altman warned, “If this technology goes wrong, it can go quite wrong.” That view helped push safety, testing, and accountability to the center of the policy agenda.

What the new rules require

While approaches differ, common threads are emerging across jurisdictions. Regulators are asking for proof, not promises. They want to see how risks are identified and mitigated throughout the AI lifecycle.

  • Risk-based controls: The EU AI Act classifies systems by risk and imposes stricter obligations on high-risk uses, such as requirements for data quality, human oversight, and incident reporting.
  • Transparency and disclosures: Many regimes call for clear labeling of AI-generated content and for users to be informed when they interact with AI systems.
  • Testing and evaluation: Governments are embracing independent testing, “red teaming,” and documentation of model limitations.
  • Security and safety: The U.S. executive order encourages sharing of safety test results for powerful models and directs standards bodies to develop guidance on secure development and deployment.
  • Governance and accountability: Organizations are expected to assign roles, keep audit trails, and maintain processes for responding to incidents and user complaints.

As NIST put it in its 2023 framework, “Managing risks to individuals, organizations, and society that stem from AI systems is essential to achieve trustworthy AI.” That language guides many of the checklists and templates now circulating in boardrooms.

Business impact: costs and clarity

Compliance will add costs, but it may also bring clarity. Companies have asked for rules that are predictable and technology-neutral. Many executives say they prefer consistent requirements across markets, rather than a patchwork of regional mandates.

For smaller firms, the burden can be significant. Documentation, evaluations, and legal reviews require time and expertise. For larger firms, early investment in governance may protect market access and reduce legal risk. Insurers and auditors are beginning to ask detailed questions about model oversight, data provenance, and incident response.

Still, the market incentives remain strong. AI can reduce manual work, unlock new products, and speed research. Clear standards could increase trust among customers and regulators, supporting wider adoption.

Technology is evolving faster than policy

The pace of AI research remains rapid. Models are becoming more capable, more multimodal, and easier to integrate through application programming interfaces and agent frameworks. That creates a moving target for policymakers and risk managers. Methods that worked for last year’s systems may not apply to next year’s models.

Standards bodies are trying to keep up. ISO/IEC 42001, published in 2023, sets out requirements for an AI management system, similar to ISO standards for information security and quality management. The goal is to provide a repeatable way to show that an organization meets best practices, even as technical tools change.

Key concerns raised by civil society

Outside industry, researchers and advocates warn about harms that do not always show up in laboratory tests. They call for stronger protections around privacy, labor, and fairness, and for enforcement with real consequences.

  • Bias and discrimination: AI systems can reflect historical inequities present in training data. Advocates want impact assessments and remedies for affected groups.
  • Privacy: Generative tools can memorize or reveal sensitive information. Regulators are watching how data is collected, labeled, and retained.
  • Disinformation: Synthetic media can mislead at scale. Several jurisdictions now encourage labeling of AI-generated images, audio, and text.
  • Workforce impacts: Automation may displace tasks, even as it creates new roles. Workers want retraining and transparency about how AI is used in hiring and evaluation.

Proponents argue that responsible deployment can address many of these risks, and that strong processes will separate trustworthy providers from the rest.

What organizations can do now

Whether building or buying AI, organizations can reduce risk and improve outcomes by taking a structured approach.

  • Map use cases: Create an inventory of AI systems, including vendors, data sources, and business owners (a minimal sketch of such an inventory appears after this list).
  • Adopt a framework: Align policies with recognized guidance such as NIST’s AI RMF or ISO/IEC 42001 to standardize controls.
  • Test and monitor: Red team high-impact systems, document limitations, and monitor for drift and incidents after deployment.
  • Clarify accountability: Define roles for product, security, legal, and ethics teams. Ensure escalation paths for issues.
  • Engage stakeholders: Communicate with customers, employees, and regulators. Publish model cards or system factsheets where appropriate.
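
For teams starting that inventory, even a lightweight record can structure the first pass and make review gaps visible. The sketch below is illustrative only; the field names (business_owner, risk_tier, last_evaluation, and so on) are assumptions chosen to mirror the bullets above, not terms drawn from the EU AI Act, NIST's AI RMF, or ISO/IEC 42001.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional


# Hypothetical inventory record; field names are illustrative, not taken
# from any regulation or standard.
@dataclass
class AISystemRecord:
    name: str                    # internal name of the system or use case
    business_owner: str          # accountable person or team
    vendor: Optional[str]        # None if built in-house
    data_sources: List[str]      # datasets or feeds used for training/inference
    risk_tier: str               # e.g. "high", "limited", "minimal" (your own scale)
    last_evaluation: Optional[date] = None  # most recent red-team exercise or review
    incidents: List[str] = field(default_factory=list)  # links to incident tickets

    def needs_review(self, max_age_days: int = 180) -> bool:
        """Flag systems whose last documented evaluation is missing or stale."""
        if self.last_evaluation is None:
            return True
        return (date.today() - self.last_evaluation).days > max_age_days


# Example: a two-entry inventory and a simple staleness check.
inventory = [
    AISystemRecord(
        name="support-chat-assistant",
        business_owner="Customer Operations",
        vendor="ExampleVendor",  # hypothetical vendor name
        data_sources=["support tickets", "product documentation"],
        risk_tier="limited",
        last_evaluation=date(2024, 1, 15),
    ),
    AISystemRecord(
        name="resume-screening-pilot",
        business_owner="HR Technology",
        vendor=None,
        data_sources=["applicant resumes"],
        risk_tier="high",
    ),
]

for record in inventory:
    if record.needs_review():
        print(f"{record.name}: schedule an evaluation")
```

A record like this can later feed the documentation, audit trails, and system factsheets that regulators, insurers, and auditors are beginning to request.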

The road ahead

The next phase will test how rules work in practice. Regulators must staff up and issue detailed guidance. Companies will need to translate legal concepts into engineering tasks. Auditors and insurers will refine the evidence they request. International coordination will be critical to reduce friction for cross-border products.

Despite the challenges, the direction is set. Governments want trustworthy AI, businesses want clarity, and users want useful tools that are safe by design. The technology will keep advancing. The policy and practice around it must advance too. As Pichai and others have noted, the stakes are large because the opportunity is large. The test now is whether society can capture that opportunity while keeping risks in check.