The Regulatory Void: Why Current Laws Can’t Contain AI
The Mismatch
When Anthropic released Claude Cowork on January 31, 2026, it took exactly three days for global markets to lose $285 billion. Three days from announcement to economic catastrophe.
When the U.S. Nuclear Regulatory Commission wants to approve a new nuclear reactor design, the process takes years, sometimes over a decade. When the FDA reviews a new pharmaceutical drug, clinical trials alone span multiple years before approval. When financial regulators implement new banking rules after a crisis, the rulemaking process unfolds over years of study, comment periods, and phased implementation.
But when an AI company wants to release autonomous agents that can navigate computer systems, access files, and perform professional work at scale, work that could displace millions of jobs and reshape entire industries, there’s no approval process. No safety review board. No mandatory testing period. No requirement to demonstrate safety before deployment.
Just a blog post announcement and immediate availability.
This is the regulatory void. And it’s not an oversight; it’s a fundamental mismatch between 20th-century governance structures and 21st-century technology speeds.
The Speed Problem
AI moves at software speed. Regulation moves at government speed. The gap is unbridgeable under current systems.
Consider the timeline for Claude Cowork:
- Concept to development: Months
- Internal testing to deployment decision: Weeks
- Announcement to public availability: Hours
- Market recognition of implications: Days
- Economic impact: Immediate
Now consider the timeline for regulatory response:
- Recognition of problem: Months (after damage is done)
- Internal agency discussion: Months
- Draft regulation development: 6-12 months
- Public comment period: 60-90 days minimum
- Final rule promulgation: 6-12 months
- Legal challenges: Years
- Full implementation: Years
By the time regulators can respond to an AI capability, that capability is already deployed, entrenched, and generating billions in revenue for companies that will fight any restrictions.
This isn’t a problem of lazy regulators or slow bureaucracy. It’s a structural mismatch. The U.S. Administrative Procedure Act requires notice-and-comment rulemaking. The process is designed to be deliberative, inclusive, and careful, all admirable qualities when regulating slow-moving industries.
But AI development happens faster than notice-and-comment can function. OpenAI released GPT-4 in March 2023. By March 2026, we’re looking at GPT-5.2 and Claude Opus 4.6. Multiple generations of transformative capabilities have been released before regulators even finished studying the previous generation.
The EU AI Act, the most comprehensive AI regulation attempted anywhere, took from April 2021 (proposal) to June 2024 (final approval) to become law. Over three years. In those same three years:
- ChatGPT launched and gained 100 million users
- AI image generation went from primitive to photorealistic
- AI coding assistants became ubiquitous
- Multimodal AI (text, image, voice, video) became standard
- Autonomous AI agents emerged
- AI-generated content flooded the internet
It’s not even fully implemented yet. The phased implementation is expected to continue until August 2027. Do you see where I’m going? The legislation would arguably be obsolete before taking full effect.
The Knowledge Gap
Even if regulators could move at AI speed, they face a more fundamental problem: they don’t understand what they’re trying to regulate.
When the FDA regulates pharmaceuticals, agency scientists have deep expertise in biochemistry, pharmacology, and clinical medicine. When the NRC regulates nuclear power, staff include nuclear engineers and physicists who understand reactor dynamics. When financial regulators oversee banks, they have economists and financial experts who grasp systemic risk.
But AI is different. The technology is:
- Rapidly evolving – What regulators learn this year is outdated next year
- Highly technical – Understanding requires advanced mathematics, computer science, and specialized AI knowledge
- Black box – Even AI creators often can’t fully explain how their models work or predict every possible output
- Multidisciplinary – Spans computer science, statistics, neuroscience, philosophy, economics, sociology
Regulatory agencies need to be staffed with a broad range of experts, from lawyers and policy specialists to AI researchers and machine learning engineers. That’s a tall order when top AI labs are paying millions of dollars for the same talent.
Legislators who don’t know what a neural network is cannot realistically be expected to write laws about transformer architectures and reinforcement learning.
This knowledge gap creates information asymmetry that AI companies can ruthlessly exploit.
When Sam Altman testifies before Congress, he can say “AGI is still decades away” or “current safety measures are robust” and lawmakers lack the expertise to challenge him. When Anthropic publishes a “Responsible Scaling Policy,” regulators can’t assess whether it’s actually effective or just marketing.
AI companies hold all the cards:
- They have the technical expertise
- They have the actual models and training data
- They have the testing results, which they can share selectively
- They have vast resources for lobbying and shaping the narrative
Regulators bring… good intentions and subpoena power they’re afraid to use.
The knowledge gap also enables regulatory capture. When agencies need AI expertise, where do they turn? To the very companies they’re supposed to regulate. OpenAI executives advise the White House. Anthropic researchers join government AI initiatives. Google AI scientists consult for regulatory agencies.
This isn’t necessarily corruption; it’s necessity. The expertise exists primarily in industry. But it creates obvious conflicts of interest. The people teaching regulators about AI safety are the same people whose companies profit from less safety regulation.
The Jurisdiction Problem
AI is global. Regulations are national. This creates a massive coordination problem.
The Arbitrage Problem:
If the U.S. imposes strict AI safety requirements, companies can:
- Incorporate in jurisdictions with looser rules
- Deploy first in permissive markets
- Route services through favorable legal regimes
- Use foreign subsidiaries to circumvent U.S. law
We’ve seen this movie before with:
- Tax havens for tech companies
- Data privacy jurisdiction shopping
- Social media companies choosing incorporation locations strategically
- Cryptocurrency exchanges registering in permissive jurisdictions
Singapore, the UAE, and the UK are all positioning themselves as “AI-friendly” regulatory environments. Countries seeking economic growth will offer light-touch regulation to attract AI companies.
The result: A race to the bottom. The most permissive jurisdiction sets the effective global standard because companies deploy there first, build user bases, and then expand to stricter jurisdictions with political and economic leverage.
The Extraterritoriality Challenge:
Even if the U.S. creates strong AI regulations, they only bind:
- U.S. companies
- Foreign companies operating in U.S. markets
- Activities occurring in U.S. territory
They don’t bind:
- Chinese AI labs
- European startups (until EU AI Act enforcement)
- Startups in regulatory havens
- Open-source AI development globally
If Chinese AI companies deploy capabilities the U.S. has restricted, American companies face a competitive disadvantage. This creates political pressure to loosen restrictions: the “but China” argument against any safety regulation.
The Coordination Failure:
Effective AI governance requires international coordination. But we’re heading in the opposite direction:
- U.S.-China AI competition intensifying
- Brexit complicating UK-EU coordination
- Different regulatory philosophies (EU principles vs. U.S. innovation focus)
- No global AI safety treaty
- Nationalism and technological sovereignty concerns
The International Council for Harmonisation (ICH) took decades to achieve pharmaceutical coordination. We don’t have decades for AI safety. But without coordination, we get fragmentation: a patchwork of state laws, multiple national regimes, and no global standards.
Current Legal Frameworks Are Inadequate
Let’s examine why existing law fails to address AI risks:
Copyright Law:
Problem: AI training uses massive amounts of copyrighted material without permission or compensation. Claude, GPT, and every other large language model were trained on copyrighted books, articles, code, and creative works.
Why Current Law Fails:
- Copyright law was written for direct copying, not pattern learning
- Fair use doctrine ambiguous for AI training
- Digital Millennium Copyright Act didn’t anticipate machine learning
- International copyright frameworks even less equipped
- Enforcement is effectively impossible; copyrighted material can’t be “unlearned” from trained models
Result: Massive wealth transfer from creators to AI companies, with no legal remedy.
Privacy Law:
Problem: AI systems process personal data at unprecedented scale, make inferences about individuals, and can de-anonymize supposedly private information.
Why Current Law Fails:
- U.S. has no comprehensive federal privacy law
- GDPR in Europe has “legitimate interest” loopholes AI companies exploit
- Privacy laws assume identifiable “data controllers”; AI training is decentralized
- Consent frameworks break down with AI; users can’t consent to all possible uses
- Right to deletion is meaningless when data is trained into model weights
Result: Surveillance capitalism on steroids, with AI able to infer intimate details from minimal data.
Employment Law:
Problem: AI is displacing workers at unprecedented scale and speed, from entry-level to professional roles.
Why Current Law Fails:
- No protection against technological unemployment
- The WARN Act requires notice for mass layoffs but doesn’t cover gradual, AI-driven “efficiency” reductions
- Unemployment insurance designed for temporary displacement, not structural obsolescence
- No requirement to assess job impact before AI deployment
- No obligation to retrain or support displaced workers
Result: Mass unemployment with no safety net or transition support.
Product Liability Law:
Problem: When AI systems cause harm, through biased hiring, medical misdiagnosis, or autonomous vehicle crashes, who’s liable?
Why Current Law Fails:
- Traditional products have clear manufacturers; AI involves multiple parties (model creator, fine-tuner, deployer)
- Product liability requires proving defect – but AI operating “as designed” can still cause harm
- AI systems constantly learn and change – hard to establish what version caused harm
- Proving causation is difficult with probabilistic AI decisions
- Companies disclaim liability in the terms of service – unclear if enforceable
Result: Harms with no accountability or compensation.
Antitrust Law:
Problem: A handful of companies (OpenAI, Anthropic, Google, Microsoft) control AI development, with network effects and data advantages creating barriers to entry.
Why Current Law Fails:
- Current antitrust focuses on consumer prices – many AI services are “free”
- Network effects and data moats don’t fit traditional monopoly frameworks
- Acquisition of AI startups by big tech flies under merger review thresholds
- “Potential competition” doctrine weak in fast-moving tech
- Global nature makes U.S. antitrust insufficient
Result: AI oligopoly with limited competition and innovation.
Financial Regulation:
Problem: AI trading algorithms, risk models, and financial decision-making operate at speeds and scales that can trigger market chaos.
Why Current Law Fails:
- Financial regulations designed for human decision-makers
- AI can execute millions of trades per second – oversight impossible
- “Flash crashes” can occur faster than circuit breakers trigger
- AI-AI interactions create emergent risks no single actor intends
- Global markets but national regulators
Result: Systemic financial risks from AI no one fully understands or can control.
National Security Law:
Problem: AI enables new forms of warfare, surveillance, disinformation, and cyberattacks.
Why Current Law Fails:
- Export controls designed for physical goods; AI is software, easily copied
- The offense-defense balance with AI heavily favors offense
- No international norms or treaties for AI weapons
- Attribution is difficult for AI-enabled attacks
- Dual-use nature – same AI for civilian and military applications
Result: AI arms race with no governance framework.
The Black Box Problem
Perhaps the most fundamental legal challenge: We can’t regulate what we can’t understand.
Traditional regulation assumes you can:
- Understand how a system works
- Define safe operating parameters
- Test whether those parameters are met
- Verify ongoing compliance
But modern AI systems, particularly large language models, are black boxes. Even their creators often can’t explain:
- Why a model gives a particular output
- What features it’s relying on
- What biases are encoded in it
- How it will behave in novel situations
- What failure modes exist
You can’t write “the AI must not exhibit bias” into law when there’s no reliable method to measure or verify bias. You can’t mandate “safe” AI when “safe” can’t be precisely defined or tested. You can’t hold companies liable for “defects” when the model is working as designed but producing harmful outcomes from emergent properties.
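To make the measurement problem concrete, here is a minimal sketch in Python, using a made-up hiring dataset and hypothetical numbers, of two widely cited fairness metrics applied to the same model’s recommendations. The point isn’t that bias can never be quantified; it’s that different reasonable definitions can return different verdicts on identical outputs, so a legal mandate to “measure bias” still has to choose a definition, and the choice decides the outcome.

```python
# Minimal sketch (illustrative only): two common "bias" metrics can disagree
# on the same model's outputs. All names and numbers here are hypothetical.

def demographic_parity_diff(preds, groups):
    # Difference in positive-recommendation rates between group A and group B.
    rate = lambda g: sum(p for p, grp in zip(preds, groups) if grp == g) / groups.count(g)
    return rate("A") - rate("B")

def equal_opportunity_diff(preds, labels, groups):
    # Difference in true-positive rates (among qualified candidates) between groups.
    def tpr(g):
        qualified = [p for p, l, grp in zip(preds, labels, groups) if grp == g and l == 1]
        return sum(qualified) / len(qualified)
    return tpr("A") - tpr("B")

# Hypothetical hiring-model outputs: 1 = recommend hire, 0 = reject.
groups = ["A"] * 10 + ["B"] * 10
labels = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0] + [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]  # "truly qualified"
preds  = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0] + [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]  # model recommendations

print(f"demographic parity gap: {demographic_parity_diff(preds, groups):.2f}")        # 0.30 -> looks "biased"
print(f"equal opportunity gap:  {equal_opportunity_diff(preds, labels, groups):.2f}")  # 0.00 -> looks "fair"
```

In this toy example the demographic parity gap flags the model while the equal opportunity gap clears it. Real models, real protected attributes, and real base rates only make the disagreement harder to adjudicate in statute.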
The EU AI Act tries to address this with “explainability” requirements for high-risk AI. But this assumes explanations are possible. For frontier models, they often aren’t. Not because companies are hiding information, but because the information doesn’t exist in interpretable form.
Regulation requires:
- Clear standards
- Measurable metrics
- Verifiable compliance
- Predictable behavior
AI offers:
- Unclear mechanisms
- Unmeasurable properties
- Unverifiable claims
- Unpredictable emergent behavior
The gap is fundamental.
Self-Regulation Has Failed
Faced with these challenges, policymakers have largely taken a “wait and see” approach, letting industry “self-regulate” through voluntary commitments.
How’s that working out?
OpenAI’s Journey:
- Founded as nonprofit to ensure AGI benefits “all of humanity”
- Converted to capped-profit structure to raise capital
- Released GPT models with minimal safety delays
- Disbanded Superalignment team when it got in the way
- Removed mission alignment team
- Fired or lost multiple safety researchers
- Now pursuing IPO at $200B+ valuation
From “humanity first” to “profit first” in five years.
Anthropic’s Journey:
- Founded explicitly as “safety-first” alternative to OpenAI
- Promised responsible scaling policies
- Released Claude Cowork without adequate job impact assessment
- Lost its head of Safeguards Research, who cited the difficulty of letting “values govern actions”
- Pursuing $350B valuation with commercial product blitz
From “we’ll do it right” to “same pressures, same problems” in three years.
The White House Voluntary Commitments (July 2023): Seven leading AI companies signed on initially, with eight more joining that September. They committed to:
- Internal and external security testing
- Information sharing about managing AI risks
- Investment in cybersecurity and insider threat safeguards
- Facilitation of third-party discovery and reporting of vulnerabilities
- Development of robust technical mechanisms like watermarking
What Actually Happened:
- No enforcement mechanism
- No verification of compliance
- No penalties for non-compliance
- Companies define what “testing” means
- No transparency about what risks were found
- Watermarking standards still not implemented
- Third-party auditing still not standard
It was performative.
The pattern is clear:
When voluntary commitments conflict with:
- Competitive pressure
- Investor expectations
- Revenue growth
- Product timelines
- Regulatory uncertainty
Voluntary commitments lose. Every time.
Mrinank Sharma’s warning was precisely about this: “I’ve repeatedly seen how hard it is to truly let our values govern our actions” when facing “pressures to set aside what matters most.”
Self-regulation requires:
- Intrinsic motivation to prioritize safety over profit
- Ability to resist competitive pressure
- Long-term thinking despite short-term incentives
- Transparency that may reveal weaknesses
- Willingness to slow down when rivals are accelerating
Market incentives provide:
- Pressure to prioritize speed and profit
- Punishment for unilateral safety investments
- Short-term focus demanded by investors
- Opacity to hide safety shortcuts
- Rewards for moving fast and breaking things
These forces are incompatible. Companies cannot self-regulate in races where every safety measure is a competitive disadvantage.
Why the Void Persists
If current legal frameworks are so inadequate, why hasn’t Congress acted?
Reason 1: Captured by Industry Narrative
AI companies spend hundreds of millions on lobbying and narrative shaping:
- “Regulation will help China win AI race”
- “Innovation requires regulatory freedom”
- “We’re the good guys, trust us”
- “Current rules are sufficient”
- “Wait for technology to mature before regulating”
These talking points dominate policy discussions because AI companies have resources to make them dominant.
Reason 2: Legitimate Uncertainty
Some caution is warranted. Regulators genuinely don’t know:
- What harms are most critical to address
- What regulations would be effective vs. counterproductive
- How to write rules flexible enough for rapidly evolving technology
- What metrics to use for safety verification
This uncertainty creates paralysis: the fear of regulating badly is used to justify not regulating at all.
Reason 3: Political Polarization
AI regulation has become politically fraught:
- Right: Sees regulation as government overreach, stifling innovation
- Left: Worried about AI’s impact on workers and inequality
- Libertarians: Want no regulation at all
- Progressives: Want comprehensive restrictions
Result: Gridlock. No coalition can pass legislation.
Reason 4: Catastrophic Risk Seems Distant
Existential AI risks feel like science fiction to most policymakers. Job displacement feels more pressing but politically difficult. So nothing happens.
Reason 5: Federal Preemption Uncertainty
States are trying to regulate AI (California’s SB 1047 was vetoed, but other bills are pending). But federal preemption is unclear: if the federal government eventually acts, will state laws survive? Uncertainty freezes state action.
Reason 6: International Coordination Challenges
The U.S. is reluctant to regulate unilaterally if it means China gains an advantage, but it can’t coordinate with China amid geopolitical tensions. European rules are viewed as too restrictive. So the U.S. does nothing.
The Result: Accelerating Into the Unknown
Without effective regulation:
AI capabilities are accelerating: From GPT-3 to GPT-4 to GPT-5, capabilities roughly double every 12-18 months. Autonomous agents are emerging. Multimodal AI is approaching human-level performance in specific domains.
Safety measures are not keeping pace: Companies appear to be shipping faster, testing less, and cutting safety corners under competitive pressure.
Safety researchers are leaving: The brain drain documented in Article Series 3 continues. Expertise is flowing away from companies, not into them.
Economic disruption is intensifying: Claude Cowork crashed software stocks. The next capability release could crash more. Job displacement is accelerating.
Misinformation and manipulation are proliferating: AI-generated content is flooding the internet, deepfakes are becoming indistinguishable from reality, and AI-powered disinformation campaigns are targeting elections.
Existential risks are growing: As AI approaches AGI, risks of misalignment, loss of control, and catastrophic failure increase.
And we have no regulatory framework to manage any of it.
This is the void. A space where world-altering technology is developed and deployed with less oversight than a new medical device or nuclear power plant.
The question isn’t whether this is sustainable. It obviously isn’t.
The question is whether we regulate proactively, while we still can, or whether we wait for catastrophe to force our hand.
This content is for information and entertainment purposes only. It reflects personal opinions and does not constitute legal advice.