Regulators Draw Lines in Open vs Closed AI

Governments move as AI debate sharpens

Policymakers around the world are setting new rules for artificial intelligence. A central debate has emerged: should the most capable models be open or closed? The answer will shape how quickly AI spreads, who controls it, and how society manages risk. Regulators say they want innovation. They also want safety and accountability. Companies and researchers are split on how to get there.

The European Union finalized the AI Act in 2024, the first broad law of its kind. It takes a risk-based approach. It bans some uses, sets strict duties for high-risk systems, and adds transparency rules for many others. The United States issued an Executive Order in late 2023 that directs agencies to build standards, testing, and reporting for powerful systems. The United Kingdom launched a public institute to evaluate advanced models. The Group of Seven agreed on voluntary principles. The direction is clear: scrutiny is rising.

What open and closed mean in practice

Open models share weights or code under licenses that allow use, study, and adaptation. Closed models keep weights private and are offered through APIs or hosted services. Supporters of openness say it spreads knowledge and enables faster fixes. Critics warn it can lower the barrier for misuse. The difference matters most for so-called frontier models, which are trained with huge data and compute budgets and can be adapted for many tasks.

Andrew Ng, an AI pioneer, once said, “AI is the new electricity.” He argued that access would fuel growth across the economy. Others emphasize the risks. Entrepreneur Elon Musk warned in 2014, “With artificial intelligence we are summoning the demon.” These views shape the policy arguments today, even as the technology has changed.

What the new rules require

The EU AI Act classifies systems by risk. Some practices are banned, including certain forms of social scoring and abusive biometric surveillance. High-risk systems must meet requirements on risk management, data quality, documentation, transparency, human oversight, accuracy, and cybersecurity. General-purpose AI models, including those used to build many applications, face transparency and safety duties, with tougher rules for the most capable systems.

In the United States, the 2023 Executive Order on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” directs agencies to set testing and reporting regimes. It tasks the National Institute of Standards and Technology (NIST) with developing evaluation guidelines and red-teaming methods. It also uses existing legal authorities to seek reports from developers training very large models. The U.K. established the AI Safety Institute to independently evaluate advanced systems.

NIST’s AI Risk Management Framework lists the traits of trustworthy AI as “valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair with harmful bias managed.” That checklist underpins many of the new audits and benchmarks now in development.

The core dispute: innovation versus control

Open-source advocates argue that openness improves security through transparency. Bugs are easier to spot. Small firms and public-interest groups can test and adapt systems. They say this builds competition and reduces dependence on a few large vendors. It also helps education and research. Companies like Meta and Mistral have released models with varying degrees of openness. Google has offered smaller open-weight models for research and development.

Those favoring closed releases say the risks are different at the frontier. They argue that open weights can be fine for small models but are too dangerous when capabilities scale. They cite threats like automated malware generation, targeted deception, or the acceleration of biological misuse. Some researchers note that safety techniques, including content filters, are easier to maintain when the model weights are not widely available.

  • Pro-open case: faster innovation, wider oversight, more competition, lower costs for startups and researchers.
  • Pro-closed case: stronger control of misuse, more reliable safety filters, easier compliance with strict rules.

Signals from Brussels, Washington, and London

The EU AI Act does not ban open-source AI. Lawmakers say obligations should reflect risk and actual deployment context. Still, general-purpose models above certain capability thresholds face extra duties, including reporting on training data and safety testing. Providers must share technical information with downstream developers so that those developers can meet their own obligations under the law.

In the U.S., the Executive Order centers on evaluations and information sharing for very capable models and critical applications. It also promotes watermarking research for AI-generated media and guidance for the use of AI in sectors like health, finance, and education. The approach favors standards over a single federal AI law, at least for now.

The U.K. chose a sector-led model with a central safety institute. It has run structured tests of leading models in closed settings. Early reports emphasize both impressive capability and uneven reliability, underscoring the need for guardrails.

Industry adapts, cautiously

Big AI labs say they support reasonable rules. They are investing in red-team exercises, content provenance tools, and model cards. They are also lobbying for clarity on liability and for predictable methods to demonstrate compliance. Smaller firms worry about compliance costs. Open-source communities fear that broad rules could make it hard to publish or host research models.

Many firms now publish system cards that describe capabilities, limits, and known risks. Some are testing watermarking for images and audio. Others are building provenance signals into file formats. None of these steps are foolproof. They reduce but do not remove risk. Audits and independent evaluations are becoming a standard part of high-stakes deployments.
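
One way teams keep such documentation current is to maintain the system card as structured data alongside the model itself, so it can be versioned and published with each release. The sketch below is a minimal, hypothetical example in Python; the field names and values are illustrative assumptions, not a standard schema.

    from dataclasses import dataclass, field, asdict
    import json

    @dataclass
    class SystemCard:
        """Minimal record of a deployed AI system's capabilities and known risks."""
        model_name: str
        version: str
        intended_uses: list[str]
        out_of_scope_uses: list[str]
        known_limitations: list[str]
        evaluation_results: dict[str, float] = field(default_factory=dict)
        safety_mitigations: list[str] = field(default_factory=list)

        def to_json(self) -> str:
            # Serialize so the card can be published next to the model artifacts.
            return json.dumps(asdict(self), indent=2)

    # Hypothetical internal assistant used only to illustrate the structure.
    card = SystemCard(
        model_name="support-assistant",
        version="2.1.0",
        intended_uses=["drafting replies to customer emails"],
        out_of_scope_uses=["legal or medical advice"],
        known_limitations=["may produce plausible but incorrect answers"],
        evaluation_results={"toxicity_rate": 0.012, "refusal_accuracy": 0.94},
        safety_mitigations=["content filter", "human review before sending"],
    )
    print(card.to_json())

Keeping the card in code rather than a static document makes it easier to update evaluation results and mitigations as the model changes, which is the point regulators are pressing on.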

What organizations should do now

  • Map your models: inventory all AI systems in use, including third-party APIs and open-weight models (a minimal inventory sketch follows this list).
  • Assess risk: rate use cases by impact on people, safety, finance, and compliance obligations.
  • Implement controls: use human oversight, logging, and access limits. Test for prompt injection and data leakage.
  • Document: keep model cards, data sourcing notes, and evaluation results up to date.
  • Monitor: track performance drift, incident reports, and security advisories. Update safeguards as models and threats change.
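
A machine-readable inventory is a practical starting point for the mapping and risk-rating steps above. The Python sketch below shows one possible shape; the risk tiers, field names, and the scoring heuristic are illustrative assumptions, not drawn from any regulation, and should be replaced with your own legal analysis.

    from dataclasses import dataclass
    from enum import Enum

    class RiskTier(Enum):
        # Illustrative tiers only; map them to your own regulatory obligations.
        MINIMAL = "minimal"
        LIMITED = "limited"
        HIGH = "high"

    @dataclass
    class AISystem:
        name: str
        provider: str             # e.g. in-house, third-party API, open-weight model
        use_case: str
        affects_people: bool      # does the output influence decisions about individuals?
        handles_personal_data: bool

        def risk_tier(self) -> RiskTier:
            # Crude heuristic: escalate when people or personal data are involved.
            if self.affects_people and self.handles_personal_data:
                return RiskTier.HIGH
            if self.affects_people or self.handles_personal_data:
                return RiskTier.LIMITED
            return RiskTier.MINIMAL

    # Hypothetical entries used only to show the structure.
    inventory = [
        AISystem("resume-screener", "third-party API", "shortlist job applicants", True, True),
        AISystem("doc-summarizer", "open-weight model", "summarize public reports", False, False),
    ]
    for system in inventory:
        print(f"{system.name}: {system.risk_tier().value}")

Even a rough register like this makes the later steps, controls, documentation, and monitoring, easier to assign and audit, because every system has an owner, a use case, and a provisional risk rating.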

What to watch next

Three questions will define the next phase. First, how regulators will set thresholds for “systemic risk” in general-purpose models; the details will decide which developers must meet the toughest rules. Second, whether standardized evaluations can keep up with new capabilities; benchmarks are improving, but attackers adapt. Third, how courts will interpret liability when AI contributes to harm.

The direction is toward more testing, more transparency about limits, and more accountability at deployment. The open-versus-closed divide will not disappear. It will likely narrow into a spectrum, with hybrid releases and controlled access for researchers and auditors. The policy task is to preserve the benefits of openness while managing the risks of scale. As NIST’s framework suggests, trust comes from proven practices, not promises.

The stakes are high. AI tools now influence how people learn, work, and vote. The choices made by lawmakers and labs in the coming years will help decide who benefits, who bears risk, and how fast the technology advances. The next chapter will be written not only in code, but also in standards, audits, and law.