A network of 150 accounts hijacked keywords around Keir Starmer during the UK election, flooding social platforms with AI-generated videos until they’d racked up billions of views. The Guardian documented the operation this week: coordinated timing, repetitive messaging, volume designed to trigger algorithmic amplification before platforms could respond.

Narrative dominance has become an ignition problem. Light the right fires in the right places and the system does the rest.


For years, “flooding the zone” worked on both offense and defense. Influence campaigns could overwhelm opponents with volume to dominate share of voice. Targets of investigation could bury journalists in documents, buying time while reporters sorted through the pile. LLMs have rewritten both sides of that equation. On offense, a small coordinated network can now ignite at scale. On defense, the New York Times reports that 23,000 Epstein files were analyzed in hours, not months. Volume no longer creates confusion. It just hands your opponents more material, faster.

The signals this week show what that shift looks like in practice.


The Signals This Week

Seven signals of the future of opinion shaping and what they mean for people inside the Beltway.

1. 150 Accounts Drove Billions of Views in UK Election Manipulation Campaign

Investigation reveals coordinated network used AI video and keyword hijacking to set narrative before enforcement could respond

The Guardian documented a coordinated anti-Labour campaign that generated billions of views across social platforms during the UK election. The operation ran on roughly 150 accounts using AI-generated video content and aggressive keyword hijacking, producing around 15,000 mentions of Keir Starmer through repetition and timing rather than organic support.

Platforms attempted enforcement, but the speed differential favored the attackers. By the time content moderation caught up, the narratives had already taken root.

Something to think about: Agenda-setting has become an ignition problem, not a persuasion-at-scale problem. You don’t need millions of influencers to dominate the narrative layer. You need enough coordinated activity to trigger algorithms that reward volume and engagement. The same infrastructure that enabled this campaign will come to legitimate advocacy operations.
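
To make the ignition dynamic concrete, here is a toy sketch in Python. The velocity threshold and all numbers are invented for illustration; real ranking systems are opaque and far more complex.

```python
# Toy model of algorithmic ignition: a recommender that boosts posts
# whose early engagement velocity crosses a threshold. The threshold
# and numbers are invented; real ranking systems are opaque.

BOOST_THRESHOLD = 100  # engagements per hour needed to trigger amplification

def ignites(engagements_first_hour: int) -> bool:
    """Does a post cross the velocity bar for algorithmic boost?"""
    return engagements_first_hour >= BOOST_THRESHOLD

# 150 coordinated accounts, each engaging once, all within the first hour:
coordinated = 150 * 1

# 5,000 organic followers, ~1% of whom engage, spread over a day (~2/hour):
organic = int(5_000 * 0.01 / 24)

print(ignites(coordinated))  # True  -- synchronized volume trips the boost
print(ignites(organic))      # False -- larger audience, but no ignition
```

The point of the toy: timing, not audience size, is what the threshold rewards.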

Read more: The Guardian


2. Foreign Actors Weaponize AI to Seed False Flag Narratives After Bondi Beach Attack

AI-generated image of wounded influencer spreads to nearly a million views as disinformation operators exploit confirmation bias

In the wake of Sunday’s Bondi Beach antisemitic terrorist attack that killed at least 16 people at a Hanukkah celebration, adversarial actors moved quickly to shape the information space. The account AdameMedia, which we covered in the Network Contagion Research Institute’s “False Flags and Fake MAGA” report as an England-based account likely commandeered by Russian operators, began seeding narratives that the attack was a false flag operation by Israel and Mossad.

The account used AI to generate a fake behind-the-scenes image of Arsen Ostrovsky, an influencer who suffered a grazing head wound in the attack. The image has drawn nearly a million views on X from this account alone. Zooming in on the shirt reveals garbled characters, a telltale sign of AI generation.

Something to think about: When a fake confirms what people already suspected, they’re willing to look past obvious signs of fabrication. AI becomes a narrative accelerant for confirmation bias. Build your rapid response infrastructure for defense, but understand the offense too. When a crisis hits, the information space will be contested within hours.

Read more: AdameMedia post on X | Ostrovsky’s original post | False Flags and Fake MAGA report


3. Shopify Builds AI Agents That Critique Your Store Before Humans Ever See It

SimGym trains AI agents to behave like real shoppers and evaluate content before launch

Shopify announced a suite of agent-native features this week, and one stands out for us: SimGym. The platform trains AI agents to behave like real shoppers, visiting stores and evaluating layouts, messaging, and flows before any live traffic sees them.

The system answers questions like: Is the layout compelling? Are the topics resonant? How could you improve before launch? Merchants can now run synthetic audiences through their content and iterate based on simulated reactions.

Something to think about: Ecommerce is first, but watch for this to come to political messaging by 2026. What if you could send AI agents representing your target audience through your landing pages, your ads, your talking points before they go live? The infrastructure for synthetic focus groups is being built right now in consumer tech. It will migrate to campaigns.
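
A minimal sketch of what a synthetic focus group could look like, assuming an OpenAI-style chat API. The personas, model choice, and prompts are illustrative inventions, not SimGym’s actual design.

```python
# Hypothetical synthetic focus group: LLM agents role-play audience
# segments and critique a draft message before it goes live.
# Assumes the openai package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

PERSONAS = [
    "a suburban swing voter who distrusts both parties",
    "a policy-focused Hill staffer skimming on a phone",
    "a retiree who gets most news from cable TV",
]

DRAFT = "Our plan cuts energy bills by rewarding utilities that modernize the grid."

def critique(persona: str, draft: str) -> str:
    """Ask one synthetic audience member to react to the draft."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": f"You are {persona}. React honestly to the message "
                        "you are shown: what lands, what confuses you, what "
                        "you would say to a friend about it."},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

for persona in PERSONAS:
    print(f"--- {persona} ---")
    print(critique(persona, DRAFT))
```

Swap the personas for your actual audience segments and you have a crude pre-launch read on talking points before a single real voter sees them.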

Read more: Tobi Lütke on X

4. 24,000 Posts Tied Taylor Swift Album to Far-Right Imagery Using Outrage as Amplifier

Coordinated campaign used accounts posing as left-wing critics to increase salience through opposition engagement

The Guardian reported on a coordinated online attack seeking to align Taylor Swift and her latest album with Nazi and far-right imagery. Accounts feigning leftist critique designed content specifically to encourage outrage. Gudea, an AI-driven behavioral intelligence platform, analyzed more than 24,000 posts and 18,000 accounts across 14 platforms during the album’s release window.

The mechanism is instructive: even accounts commenting against the ideas increased their salience. Algorithms reward engagement, whether that engagement is supportive or furious.
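
A toy scoring function shows why: if the ranking signal counts interactions without reading their sentiment, hostile engagement is indistinguishable from enthusiasm. The weights below are invented for illustration, not any platform’s real values.

```python
# Toy illustration of valence-blind ranking: the score counts
# engagement events without asking whether they agree or object.

def engagement_score(likes: int, replies: int, quotes: int) -> float:
    """Score a post purely on interaction volume."""
    return 1.0 * likes + 2.0 * replies + 3.0 * quotes

# A post that draws mostly angry quote-posts and rebuttals...
outrage_bait = engagement_score(likes=40, replies=300, quotes=500)

# ...outranks a post with far more approval but little friction.
earnest_post = engagement_score(likes=900, replies=30, quotes=10)

print(outrage_bait)  # 2140.0 -- boosted by hostile engagement
print(earnest_post)  # 990.0  -- approval alone ranks lower
```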

Something to think about: Opposition engagement can be the distribution strategy. When your critics amplify your content while trying to debunk it, you’ve created a self-sustaining loop. The play is to engineer shareability through predictable outraged reactions. Sometimes the best way to spread a narrative is to make it something people can’t resist attacking.

Read more: The Guardian


5. US Requires AI Vendors to Measure Political Bias for Federal Contracts

Government formalizes bias evaluation as condition of procurement, with exemptions for national security

The US government announced that AI vendors must evaluate and disclose political bias in their systems to qualify for federal contracts. National security applications are exempt. The requirement formalizes bias measurement as part of AI procurement.

Early in the current AI era, guardrails over-corrected. Google’s Gemini image generator famously produced images of the founding fathers as Black and brown men and women, and the company was ridiculed for it. The administration’s response is to require vendors to demonstrate objectivity before selling to government agencies.

Something to think about: Whoever defines and measures “bias” shapes how machine-mediated reality is constructed. Chatbots aren’t just sitting between users and information; they’re shaping that information. There’s also a fascinating angle here about the unitary executive: if you replace people in government with AI agents, what the president says could be executed literally, because agents can be given instruction sets that human bureaucrats would interpret or resist.

Read more: Reuters


6. Newsom’s AI Clapback Outperforms White House Meme Post

Governor responds to administration meme with AI-generated video in under 48 hours

After the White House posted a “cuffing szn” video of ICE arrests set to SZA audio on Monday, Governor Gavin Newsom responded on Wednesday with an AI-generated video depicting Trump, Hegseth, and Stephen Miller in handcuffs. I should pause here and say: welcome to late 2025. The rapid turnaround demonstrated what compressed production cycles enable.

Traditional video production takes weeks of scripting, shooting, and approval. AI video generation compresses that to hours. Newsom’s team moved from “opponent post” to “reply in native meme grammar” in under 48 hours.

Something to think about: Response velocity is critical. If you want to compete in this environment, you need a safe pipeline: pre-approved templates, clear approval rules, and rapid deployment capability. Speed without guardrails creates brand and legal exposure. But guardrails without speed means you’re always responding to yesterday’s narrative.
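
One way to picture that pipeline, as a hypothetical sketch with invented template names and approval rules:

```python
# Hypothetical rapid-response gate: a reply ships only if it uses a
# pre-approved template and carries the required sign-offs. Names and
# rules are invented to illustrate the tradeoff, not a real system.
from dataclasses import dataclass, field

APPROVED_TEMPLATES = {"meme_reply", "fact_check", "statement"}
REQUIRED_SIGNOFFS = {"comms_director", "counsel"}

@dataclass
class Draft:
    template: str
    signoffs: set = field(default_factory=set)

def can_ship(draft: Draft) -> bool:
    """Speed with guardrails: template pre-cleared, approvals present."""
    return (draft.template in APPROVED_TEMPLATES
            and REQUIRED_SIGNOFFS <= draft.signoffs)

fast_reply = Draft("meme_reply", {"comms_director", "counsel"})
rogue_post = Draft("freestyle", {"intern"})

print(can_ship(fast_reply))  # True  -- pre-cleared, fully signed off
print(can_ship(rogue_post))  # False -- blocked before it becomes exposure
```

The pre-clearance is what buys the speed: the lawyers argue over templates once, not over every post.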

Read more: Cuffing SZN (White House) | Newsom on X | The Independent


7. “Flooding the Zone” Is No Longer an Effective Media Strategy

New York Times AI editor argues document dumps now favor investigators, not leakers

Rubina Madan Fillion, associate editorial director of AI Initiatives at The New York Times, wrote for Nieman Lab that AI has inverted the traditional power dynamic of strategic leaks. A few years ago, a drop of 23,000 documents would have been considered “flooding the zone” because it would have taken journalists months to identify what was newsworthy.

AI changed that equation. Now news organizations can rapidly extract text, analyze content, and search for relevant topics and people. The Times has used AI for dozens of reporting projects, supplementing rather than replacing journalists’ expertise at every stage.
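
A minimal sketch of the kind of triage this enables, assuming a folder of already-extracted plain-text files and off-the-shelf TF-IDF relevance scoring; real newsroom pipelines layer OCR, entity extraction, and human review on top.

```python
# Minimal document-dump triage: rank extracted text files against a
# query so reporters read the likeliest hits first. Assumes the files
# are already plain text; folder name and query are hypothetical.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

DUMP_DIR = Path("document_dump")  # hypothetical folder of .txt files
QUERY = "payments wire transfer offshore account"

paths = sorted(DUMP_DIR.glob("*.txt"))
texts = [p.read_text(errors="ignore") for p in paths]

# Vectorize the corpus and the query into the same TF-IDF space.
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(texts)
query_vec = vectorizer.transform([QUERY])

# Rank documents by similarity to the query and surface the top ten.
scores = cosine_similarity(query_vec, doc_matrix).ravel()
ranked = sorted(zip(scores, paths), reverse=True)

for score, path in ranked[:10]:
    print(f"{score:.3f}  {path.name}")
```

A few dozen lines like these turn a months-long sorting job into an afternoon of reading the top of a ranked list.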

Something to think about: If you’re relying on volume to create confusion, that defense is weakening. Investigators can now process at scale. But if you’re the investigator, or if you need to monitor regulatory filings, congressional testimony, or competitor communications, the same tools give you ears everywhere. The asymmetry is collapsing. What matters now is who can analyze faster and act on what they find.

Read more: Nieman Lab


Thanks for reading this edition of The Influence Model.

Best,

Ben
