The AI Policy Vacuum

Big Tech spent $50 million in nine months trying to kill state AI regulations. Steve Bannon is calling AI “the most dangerous technology in the history of mankind.” David Sacks says it’s the new space race. They’re both Republicans. As lobbying dollars flow, release timelines accelerate, and uncertainty grows, Democrats are surprisingly absent from the debate.

This newsletter typically covers how AI changes the practice of persuasion. But this week, we’re stepping back to look at the regulatory push and national political mood around AI.

Thanks for reading The Influence Model! Subscribe for free to receive new posts and support my work.

Dean W. Ball, a Senior Fellow at the Foundation for American Innovation, remarked on X that a Democratic elected official told him, “The problem [the Democratic Party] has is nobody knows where we stand on AI. I replied that the problem is that nobody cares where they stand.”

This week’s signals show what the fight to fill that vacuum actually looks like.


1. Big Tech’s AI Lobbying Hits $150 Million as State Preemption Battle Intensifies

AI companies and their backers have spent $150 million on lobbying efforts centered on federal preemption of state AI laws. The scale of spending reveals how quickly AI has moved to the center of American politics.

OpenAI increased its lobbying spend to $7 million. Meta deployed $24.6 million. The money accelerated through the year—earlier analysis from Issue One showed $50 million spent just in the first nine months of 2025, meaning the final quarter saw spending surge.

Two opposing coalitions have formed, each with structured networks of Super PACs and advocacy groups. Leading the Future launched in August with $100 million from Silicon Valley backers including Marc Andreessen, OpenAI co-founder Greg Brockman, and Perplexity. Their advocacy arm, Build American AI, started a $10 million national campaign calling for federal preemption of state regulations. Their first target: New York Assemblymember Alex Bores, who sponsored the RAISE Act requiring AI labs to have safety plans.

On the other side, Public First—a bipartisan initiative led by former Representatives Chris Stewart (R-Utah) and Brad Carson (D-Okla.)—opposes federal preemption without meaningful national safeguards. They’ve launched two affiliated Super PACs and expect to raise at least $50 million for the 2026 cycle. The group backs stronger export controls on advanced chips, transparency requirements for AI labs, and state-level regulations addressing risks to children and workers.

The split runs through the Republican Party itself. Stewart holds dual roles: leading Public First while also heading the AI team at the America First Policy Institute (AFPI), which favors rapid innovation and federal preemption. This reflects the deeper fracture between national security conservatives who support AI safeguards and pro-business conservatives who want acceleration.

The fight centers on whether Congress includes preemption language in the National Defense Authorization Act. Over 1,080 AI-related bills were introduced across state legislatures this year. A 10-year moratorium on state AI regulation appeared in the reconciliation bill this summer before the Senate stripped it out 99-1.

Now the White House is preparing an executive order titled “Eliminating State Law Obstruction of National AI Policy” that would withhold federal funding from states with “overly punitive” AI regulations. More than 200 state legislators signed a letter opposing federal preemption, arguing “state experimentation and varied approaches to AI governance help build a stronger national foundation for sound policymaking.”

Something to think about: Big Tech’s strategy is federal preemption—pass a weak federal law that kills 1,080 state bills simultaneously. The executive order threat puts states in a bind: regulate AI and lose federal funding, or stand down and hope Congress eventually passes something.

(Read Forbes analysis) | (Read Issue One report)


2. Science Study Proves Attitudes Are More Malleable Than We Think

Researchers published findings in Science showing that feed algorithms causally shift partisan attitudes through content ranking.

In a 10-day field experiment with 1,256 participants on X during the 2024 campaign, they used large language models to rerank posts expressing “antidemocratic attitudes and partisan animosity” (AAPA).

The results were stark: decreasing AAPA exposure warmed feelings toward the opposing party by about 2 points on a 100-point thermometer, while increasing exposure cooled them by about 2 points. That’s equivalent to roughly three years of historical polarization drift, compressed into 10 days.

The breakthrough is methodological. Researchers built a browser extension that intercepted feeds in real time, bypassing platform cooperation entirely. They’ve proven that algorithmic ranking directly shapes political attitudes at measurable scale.
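To make the mechanism concrete, here is a minimal sketch of the reranking idea: score each post for antidemocratic attitudes and partisan animosity (AAPA), then resort the feed so high-AAPA posts sink (or rise, in the opposite experimental arm). This is illustrative only; the study used an LLM classifier inside a browser extension, and the `aapa_score` keyword check below is a hypothetical stand-in, not the researchers’ model.

```python
def aapa_score(post: str) -> float:
    """Hypothetical stand-in for the study's LLM classifier.

    Returns a 0-1 AAPA score. The real experiment scored posts with a
    large language model; this keyword check just makes the sketch runnable.
    """
    hostile_markers = ("traitor", "enemy of the people", "destroy the")
    return 1.0 if any(m in post.lower() for m in hostile_markers) else 0.0

def rerank_feed(posts: list[str], demote: bool = True) -> list[str]:
    """Rerank a feed by AAPA score.

    demote=True mirrors the 'decrease exposure' arm (hostile posts sink
    to the bottom); demote=False mirrors the 'increase exposure' arm
    (hostile posts rise to the top). Python's sort is stable, so posts
    with equal scores keep their original order.
    """
    key = aapa_score if demote else (lambda p: -aapa_score(p))
    return sorted(posts, key=key)

feed = [
    "They are traitors who want to destroy the country",
    "New poll on housing policy released today",
    "Local team wins the championship",
]
print(rerank_feed(feed, demote=True))
```

Because the sort is stable, reranking only reorders by score: nothing is removed from the feed, which is what distinguishes this intervention from content moderation.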

Something to think about: If 10 days of algorithmic reranking produces three years’ worth of polarization shift, feed architecture matters at least as much as message quality. Messages that algorithms suppress might as well not exist. But finding what resonates still requires trial and error, which means flooding the zone. (Read more)


3. UK Government Releases AI Playbook for Public Services

The UK’s Government Digital Service published the AI Playbook, providing departments and public sector organizations with technical guidance on safe AI adoption.

The guidance identifies three primary areas where government agencies can deploy AI:

Citizen-facing services: Improving access to information and service delivery
Internal operations: Increasing efficiency and reducing administrative burden
Feedback analysis: Using AI to analyze citizen and user feedback loops at scale

The Playbook expands earlier guidance beyond generative AI to cover machine learning, deep learning, natural language processing, computer vision, and speech recognition. It includes an appendix with case studies from teams across UK government and practical checklists for AI project development.

The release coincides with broader UK public sector AI initiatives, including AI “exemplars” for health diagnostics, teacher lesson planning, and planning record digitization. The government is establishing an AI adoption unit specifically to build and deploy AI into public services.

Something to think about: For vendors and consultants, this roadmap shows which use cases survive pilot-to-scale transitions. Build for the three categories that work (service delivery, operations, feedback) rather than chasing novel applications. (Read more)


4. MAGA Base Fractures Over AI as Bannon Declares War on “Tech Bros”

Steve Bannon is mobilizing the MAGA base against AI acceleration, calling it “probably the greatest crisis we face as a species” and “the most dangerous technology in the history of mankind.”

Bannon’s War Room podcast has made AI opposition its main focus, with dedicated segments from “transhumanist editor” Joe Allen warning of job apocalypse and spiritual threats.

The fight pits Bannon and populist MAGA voices against Trump administration officials like AI czar David Sacks, who frames AI development as essential to beating China. Popular conservative influencers Matt Walsh and Tucker Carlson have joined Bannon’s side, warning AI will eliminate jobs and reshape society.

Polling shows the ground shifting. Pew Research found that Republicans and Democrats are now equally concerned about AI in daily life (50% vs 51%). But this represents a notable shift: Republican concern has actually decreased 9 points since 2023, while Democratic concern has grown from 31% in 2021 to 51% today. The parties differ sharply on who should regulate AI, with 54% of Republicans trusting the U.S. government to oversee the technology versus just 36% of Democrats.

By Bannon’s estimate, “an overwhelming majority among rank-and-file Trump supporters has grown to loathe the push behind AI.” He argues there’s “a deeper loathing in MAGA for these tech bros than there is for the radical left, because they realize that radical left is not that powerful.”

The movement scored a major win killing the 10-year moratorium on state AI regulation from Trump’s legislative package. Bannon is now building ground-up opposition, sending Allen to churches and MAGA gatherings nationwide to spread the warning.

Something to think about: Internal Republican fractures over AI create exploitable asymmetries. Democrats lack a coherent position. Republicans are fighting themselves. Operators who can speak credibly to both populist concerns (job displacement, corporate power) and accelerationist arguments (China competition, economic growth) can position themselves as neutral experts while everyone else picks sides. The faction that figures out how to bridge this divide first controls the narrative. (Read more)


5. The Left’s AI Positioning Challenge Creates Strategic Opening

Joshua Achiam, Head of Mission Alignment at OpenAI, kicked off an X thread last week: “The left has completely abdicated their role in the discussion [about the impacts of super intelligent AI]. A decade from now this will be understood on the left to have been a generational mistake; perhaps even more than merely generational.”

Dean W. Ball offers a compelling set of reasons why the left struggles to engage on AI. While Republicans fight themselves over acceleration versus resistance, Democrats face a different problem: finding entry points into a conversation that doesn’t map neatly to traditional progressive frameworks.

Ball identifies several structural barriers:

Academic hesitation: Progressive policy development often relies on expert consensus before moving forward. Concepts like AGI remain contested in academia, creating uncertainty about which experts to follow.

Interest group structure: Democratic coalition politics are built around established constituencies. AI as a civilization-scale question doesn’t have natural champions within labor unions, advocacy groups, or other traditional Democratic power centers.

Tech skepticism: Years of fighting Big Tech over privacy, antitrust, and content moderation have created reflexive skepticism about Silicon Valley’s next big thing, making it harder to engage with AI development on its own terms.

Policy clarity: Outside national security (where there’s bipartisan alignment), the left hasn’t articulated what transformative AI would require it to do differently beyond its existing agenda.

Something to think about: When a major party needs coherent AI messaging, they’ll need validators, narratives, and talking points. The groups building that infrastructure now will define how Democrats talk about AI for years.

(Read Ball’s thread) | (Read Achiam’s thread)


What This Means

It is early days and the debates are far from settled. Will AI labs achieve AGI? If so, when? How will it disrupt our job market and democracy? Should AI policy live at the state or federal level? How do we avoid repeating the policy mistakes we made with social media?

Whether you’re working on AI policy or just using AI tools, it’s important to know where your audience lands in the split.

The vacuum won’t last. Someone will fill it.


Thanks for reading this edition. Hit reply and let me know what you’re seeing.

And if someone on your team should be tracking this, forward it their way.

Best,
Ben

