The Chipmunk Problem

I’d like to introduce you to something I call the Chipmunk problem.

Associates at government affairs firms spend entire days watching congressional hearings, briefings, and other video content at 2x speed, where every member sounds like a chipmunk, just to pull out the sections that matter.


Zack Dareshori came back to government affairs after years building machine learning systems at the Department of Defense and expected to find firms using modern tools. Instead, he found people doing the work exactly the way they did in 2015.

Before writing a single line of code, he talked to potential customers about the problem, and one firm told him directly: “If you built this, we would pay for it.” That was rocket fuel. Zack spent every free hour building Hill Genius, and he couldn’t have done it without AI—not just the product itself but forming the LLC, figuring out bookkeeping, recording every sales call and feeding it to AI to understand workflows. AI became what he calls a co-founder.

But before we dive deeper into that story, let’s look at this week’s signals. Fox News is building a digital twin of its newsroom, and Google’s new image model is breaking the internet. Russia, meanwhile, is actually attempting to break the internet with a network of propaganda outlets meant to poison AI training, and a controversial deepfake finds its way into the Georgia Senate race. In welcome news, the ADL finds that AI can make people less antisemitic, and the lessons from its study apply to other extreme beliefs.


The Signals


1. Fox News Builds an AI Mirror of Its Newsroom

Fox News partnered with Palantir to create a “digital twin” of the network’s operations. The system mirrors the workflows, data, and tools the company uses to produce digital journalism. They built three main tools: “topic radar” generates custom briefings so reporters can get up to speed on stories in minutes, “text editor” evaluates copy for style and checks for broken links, and “article insights” analyzes performance to suggest optimizations.

Unlike text-based newsrooms that struck licensing deals with AI firms, Fox had the leverage of its broadcast business: it could pay for enterprise tools without giving up its content.

Something to think about: This is the first mainstream example of a newsroom turning its entire operation into an AI-mirrored system. That model transfers directly to Washington: any comms shop, association, or GR team can build an internal “issue twin” that briefs staff on new developments, pre-writes statements, and flags story angles instantly.

Read more: Fox News hires Palantir to build AI newsroom tools


2. DebunkBot Reduces Antisemitic Conspiracy Beliefs by 16%

Researchers affiliated with the ADL’s Center for Antisemitism Research trained an AI chatbot to counter antisemitic conspiracy theories. They invited 1,224 U.S. adults who endorsed at least one of six antisemitic conspiracies to interact with the system.

Users who chatted with “DebunkBot” reduced their belief in antisemitic conspiracy theories by 16 percent. Favorability toward Jews increased by 25 percent among initially unfavorable participants. The effect persisted even when users believed they were speaking with a human expert rather than AI. Effects remained strong one month later without further engagement.

Interestingly, the AI didn’t rely on emotional appeals or empathy-building exercises. It provided accurate information and evidence-based counterarguments. Factual debunking works even for conspiracy theories with deep historical roots and strong connections to identity and prejudice.

Something to think about: AI can actually reduce extreme views. The future of persuasion is in the conversation, not the broadcasting of information. DebunkBot demonstrates this shift applied to one specific issue—antisemitism—but the model works for any topic where you need to soften entrenched positions. Organizations can deploy AI rebuttal agents trained on climate, vaccines, immigration, tax policy, union rights, or whatever issue you’re working on. The persuasion happens in guided dialogue before a human organizer enters the conversation.

Read more: AI has a reputation for amplifying hate. A new study finds it can weaken antisemitism, too.


3. Russia-Linked Sites Poison AI Training Data

A NewsGuard investigation uncovered hundreds of English-language websites linked to a Russia-aligned “Pravda” network. The operation published 3.6 million articles in 2024, seeding politicized narratives designed to contaminate open-web training data. The content surfaces in search results and gets picked up by the web crawlers that feed AI model training.

When NewsGuard tested ten leading AI chatbots—including ChatGPT, Gemini, and Claude—the models repeated Pravda’s false narratives 33 percent of the time when prompted with relevant questions. Seven chatbots cited specific Pravda articles as sources. The network operates in 49 countries, publishing content in multiple languages to increase credibility and reach.

Most Pravda sites receive fewer than 1,000 monthly unique visitors. Their power lies in gaming AI training pipelines and search algorithms rather than reaching human audiences directly.

Something to think about: Change what the model reads, change what it says. This is narrative poisoning at scale. The Russians figured out something fundamental: if you can own your niche of an issue through sheer footprint, you control the narrative. They flooded the web with 3.6 million articles to dominate the training data. White-hat operators should learn the lesson. Build your footprint in your issue space. Publish consistently. Get cited by credible sources. The organization that owns the most reliable, most-cited content on its topic will shape what AI says about it. This matters more than most people realize.

Read more: Russia flooding training data to influence chatbots like ChatGPT, Claude


4. Georgia Senate Race Deploys AI Deepfake Attack Ad

Rep. Mike Collins’ campaign released an AI-generated deepfake video showing Sen. Jon Ossoff saying things he never said—marking one of the first major uses of synthetic impersonation in a top-tier U.S. Senate race. The video shows an AI-generated Ossoff claiming to support the government shutdown and saying he’s “only seen a farm on Instagram.”

The ad includes a small on-screen disclaimer that it’s AI-generated, avoiding violations of Georgia and federal law. Collins defended the approach: “Our team is doing it just like the White House. You’re not going to stop technology. Just embrace it and go with it.”

Ossoff’s campaign responded that “the only reason a candidate would need to use a deepfake to make up an opponent’s words is if they didn’t think they could win on their own.” The ad tests the limits of Georgia’s new anti-deepfake legislation, which is still working its way through the legislature.

Something to think about: Legislation alone will not stop AI-generated content—the technology is too accessible, the enforcement too difficult, and the use too normalized. Here’s an interesting question: are you prepared with a deepfake of your opponent, ready to deploy on offense if needed? Mutually assured destruction might be the only deterrent that works. Some campaigns will build these assets and keep them in reserve, never using them unless forced to respond. Others will go first and test the boundaries. Either way, the defensive infrastructure needs to exist before you get hit.

Read more: Georgia Rep. Mike Collins’ campaign uses AI-generated deepfake of Senator Jon Ossoff


5. Google Launches Nano Banana Pro with 4K Generation

Google released Nano Banana Pro, an upgraded image generation model built on Gemini 3 Pro. The model produces 2K and 4K images with improved text rendering, multi-language support, and professional-grade controls for camera angles, lighting, depth of field, focus, and color grading.

The system can blend up to 14 distinct objects within a single composition while maintaining visual consistency across five different people. It connects to Google Search to gather real-time information for graphics like weather updates, sports scores, and educational infographics. It’s crazy.

Something to think about: Professional-quality synthetic content now takes minutes to produce. No specialized skills required, no expensive equipment, no creative bottleneck. Deepfakes and AI-generated visuals are easier to create than authentic footage in many cases. The organizations that figure out responsible deployment of these tools will have significant advantages in 2026.

Read more: Google releases Nano Banana Pro, its latest image-generation model


The Chipmunk Problem

During our conversation, I asked Zack if good public affairs people think like data scientists. His answer: absolutely.

“People in government affairs are extremely clever when it comes to using the resources that they have. If you’re going to be a successful government affairs firm, you have to be really good at finding all the relevant information and just taking that raw data and turning it to insight for your clients. That’s exactly what data scientists do as well.”

Created with Nano Banana (look at that detail!)

Watch the full episode here.

One firm didn’t tell their summer interns about Hill Genius for the first few weeks because they wanted them to have the experience of watching entire hearings manually first. When they finally showed them the platform, the interns said it changed everything.

Natural Language Data Is the Goldmine You’re Not Mining

Most firms don’t realize that every sales call, every client meeting, and every hearing contains structured information they’re throwing away.

Zack records every sales call and feeds it to AI to understand workflows. The transcripts inform how he develops features because he has complete information about how his customers actually work; without the recordings, he would have forgotten most of it.

Natural language data is extraordinarily rich—the words themselves, how long the conversation runs, what gets emphasized, what gets avoided. If you’re not capturing it, you’re losing the entire long tail of that data.
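
Zack’s exact pipeline isn’t public, but the capture-and-extract loop he describes is easy to approximate. Here’s a minimal sketch in Python, assuming you already have a call transcript as plain text and access to an OpenAI-style chat API; the model name, prompt, and file name are illustrative, not Hill Genius’s actual setup:

```python
# Minimal sketch: turn a recorded sales call into structured workflow
# notes. Assumes the transcript already exists (e.g., from a
# transcription service) and the `openai` Python SDK is installed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EXTRACTION_PROMPT = """You are analyzing a sales call with a government
affairs firm. From the transcript, extract as bullet points:
- the workflows the customer described
- pain points and roughly how much time they cost
- feature requests, explicit or implied"""

def extract_insights(transcript: str) -> str:
    # Illustrative model name; any capable chat model works here.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": EXTRACTION_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("sales_call_transcript.txt") as f:  # hypothetical file
        print(extract_insights(f.read()))
```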

Hill Genius does this for congressional hearings with voice identification for every statement and timestamps linking every claim to the exact moment in video. When you hover over a citation in a hearing memo, it shows you the transcript with relevant parts highlighted, and clicking takes you straight to that specific moment in the video.
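
Hill Genius’s internals aren’t public, but the citation mechanics imply a simple underlying record: every transcript segment carries a speaker label from voice identification plus timestamps, so any claim can deep-link back to the video. A hypothetical sketch of that structure:

```python
# Hypothetical record behind hover-to-verify citations; not Hill
# Genius's actual schema.
from dataclasses import dataclass

@dataclass
class TranscriptSegment:
    speaker: str          # resolved via voice identification
    start_seconds: float  # offset into the hearing video
    end_seconds: float
    text: str

    def citation_url(self, video_url: str) -> str:
        """Deep-link a citation to the exact moment in the video."""
        return f"{video_url}?t={int(self.start_seconds)}"

seg = TranscriptSegment("Sen. Example", 4312.0, 4330.5,
                        "We need to revisit the reporting requirements.")
print(seg.citation_url("https://example.com/hearings/ag-oversight"))
# -> https://example.com/hearings/ag-oversight?t=4312
```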

The hearing memo feature launched this week, with early feedback showing 75% time savings on memo production. You input the topics relevant to your clients, and the system generates a draft memo in your firm’s voice, structured around those topics instead of a generic summary.
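
“Structured around those topics” suggests a retrieval step before drafting: filter the transcript to the segments that match each client topic, then draft each memo section only from that evidence. A sketch continuing the TranscriptSegment example above, with naive keyword matching standing in for whatever relevance scoring the real product uses:

```python
# Sketch: select topic-relevant segments before drafting a memo section.
# Reuses TranscriptSegment from the sketch above; keyword matching is a
# stand-in for real semantic search.
def segments_for_topic(segments: list[TranscriptSegment],
                       keywords: list[str]) -> list[TranscriptSegment]:
    lowered = [kw.lower() for kw in keywords]
    return [s for s in segments
            if any(kw in s.text.lower() for kw in lowered)]

# Drafting each section only from its topic's evidence is what keeps
# every claim in the memo traceable to a timestamped moment.
evidence = segments_for_topic([seg], ["reporting requirements"])
print(len(evidence))  # -> 1
```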

The broader principle applies everywhere: if you’re having meetings without capturing the natural language data, you’re doing the analysis once and throwing away the raw material.

When You Can Listen to Everything, What Do You Ignore?

Zack’s long-term vision is Hill Genius as the operating system for government affairs: you log in, see what’s new and directly relevant to you and your clients, and information comes to you proactively instead of you hunting for needles in haystacks.

Zack described using clustering techniques to analyze how members’ stances shift over time, looking for patterns in voting behavior and changes in rhetoric. Instead of meeting with an entire delegation telling the same story, you focus on the two or three members whose positions are actually movable based on the data.
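
Zack didn’t spell out the method, but here’s one plausible shape for it: embed each member’s statements in an early window and a recent window, measure how far the rhetoric has drifted, and cluster members by that drift. The high-drift cluster is your shortlist of movable targets. A sketch using scikit-learn with synthetic data; Hill Genius’s actual approach may differ:

```python
# Sketch: flag "movable" members by how much their rhetoric has drifted.
# Stance embeddings are synthetic here; in practice they would come from
# embedding each member's statements over two time windows.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
members = ["A", "B", "C", "D", "E", "F"]
early = rng.normal(size=(6, 32))  # early-period stance embeddings
# Most members hold steady; members D and E drift noticeably.
shift_scale = [[0.1], [0.1], [0.1], [1.5], [1.4], [0.1]]
recent = early + rng.normal(scale=shift_scale, size=(6, 32))

# Cluster members by drift magnitude into "stable" vs. "movable".
drift = np.linalg.norm(recent - early, axis=1)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    drift.reshape(-1, 1))

# Members in the high-drift cluster are worth a meeting; the rest of
# the delegation is likely locked in.
for member, label, d in sorted(zip(members, labels, drift),
                               key=lambda t: -t[2]):
    print(f"Member {member}: cluster {label}, drift {d:.2f}")
```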

“We’re looking at everything holistically,” he told me. “What are the patterns there and how is that relevant to the goals we’re trying to accomplish for our customers?”

This is where the data science shows up. Everyone will eventually have access to comprehensive intelligence, so advantage shifts to synthesis: connecting dots others miss, identifying persuadable targets before consensus forms, and understanding which signals predict action versus noise.

With AI, the data science work you’re already doing just gets faster, the natural language data you’re already generating becomes usable, and the time you spend on toil becomes time you spend doing the work that actually requires you.

Watch the full conversation on Spotify.


Thanks for reading this edition of The Influence Model.

Best,
Ben

One more thing: If you’re reading this, you’re part of a small circle in Washington thinking seriously about how AI is reshaping opinion formation. If someone else should be in this conversation, feel free to pass this along.
