Rounding Errors and the Stories We Tell
Welcome back to The Influence Model, your weekly roundup of how AI is reshaping influence and policy persuasion in Washington.
Something said on last week's episode of The Influence Model podcast stuck with me: with AI, any problem that can be optimized in language is now essentially solvable.
Large Language Models (LLMs) work by taking words, assigning them numbers and having machines figure out the relationships between them. Each time you ask a question, it’s optimizing for the best answer based on everything it’s read. That’s just nuts.
If the problem exists in language—and most of Washington’s problems do—you might now be able to solve it. In my feature below, I track a single request through a large language model to understand its weird features, like why the same question can get different answers.
But first, this week’s signals show just how many language-based problems are getting solved in real time. From government listening platforms to approval workflows to persuasion strategies, we’re watching optimization happen across every communications function that matters in Washington.
The Rundown
The Signals: Seven signals showing AI’s growing sophistication in predicting and shaping human behavior
Deep Dive: Rounding Errors and the Stories We Tell
The Signals This Week
1. Copilot Walks Onto the House Floor
Will Microsoft’s chatbot be helpful? Staffers will have to find out for themselves.
AI is coming to the halls of power: Microsoft Copilot will soon be made available to staff in the House of Representatives. But like many AI chatbots, its abilities are general, and there is no playbook for exactly how it will be used. It's part of an intense race among AI companies to offer their services at near-zero cost in order to gain market share in critical corners of government.
The Signal: Even in tradition-bound environments like the House and Senate, where tool use is at least moderately regulated, people are going to use AI. But licenses do not equal transformation. It remains up to the people closest to the work (Communications Directors, Press Secretaries and Legislative Aides) to actually apply these tools: to use them responsibly, know their strengths and navigate their weaknesses.
2. Government Gets Smart About Social Listening
How Governments Can Use Social Listening to Track Narratives, Spot Trends, and Build Trust.
Pulsar uses custom AI agents to continuously track narrative evolution and forecast where narratives are headed.
In a world of digital noise, there is a real question of how to deploy AI to listen better to the slice of the world we care about and surface the things that actually matter. Pulsar offers an interesting example: an AI agent might flag rising skepticism around renewable energy policies before it becomes a mainstream concern.
The Signal: Interns scrolling X is so 2015. Today, the competitive advantage goes to those who can deploy AI to listen better to the slice of the world they actually care about.
3. BCG Outlines The 50% Communications Productivity Promise
But the real opportunity isn’t where you think.
Boston Consulting Group ranks communications among the top functions for generative AI transformation, claiming 30% immediate productivity gains and nearly 50% with process redesign. That second number is a big part of why I created this newsletter.
The real opportunity for transformation lies beyond the typical drafting of content. I'm willing to bet that the long-term winners will change approval structures and review cycles, eliminating the biggest bottlenecks altogether.
The Signal: Instead of using AI to write a press release that still takes days to get through multiple approval layers, digitize the approval judgment itself. I piloted this recently by having AI act as a compliance expert for corporate messaging, which eliminated the back-and-forth with our ultimate reviewer.
4. AI Love Island Wellness Scams Take Over TikTok
Why synthetic influencers matter for serious influence in Washington.
Love Island wellness scams made with AI on TikTok may seem like peak internet absurdity, but they're actually an important glimpse into the future of grassroots influence in Washington. Teen Vogue reports a wave of synthetic videos featuring influencers sharing wellness tips supposedly drawn from Love Island, with hooks like, “I bet you didn’t know that all women on Love Island actually use these 5 simple skincare hacks.”
The tool in question is HeyGen, an AI video generator that lets users create highly realistic versions of influencers. One video featuring a synthetic avatar named Linda talked about the “dark side of Love Island” and drew 28,000 likes and 700,000 views.
The Signal: If people are using AI to make Love Island influencers hawk health hacks, Washington can also use AI to create influencers that shift public perception on health issues where critical doubt exists, like vaccines.
5. The Rise of “Parasitic AI”
How AI systems are programming human behavior in real time. White hat influence operators: Take note (and caution).
I was scrolling LinkedIn, probably for a little too long, and noticed a friend posting really weird ‘woo woo’ content: secret codes and “sovereign scroll architectures,” complete with an unhealthy dose of em dashes.
This is very similar to the phenomenon that Adèle López on LessWrong calls “the rise of parasitic AI.” Because AI was trained on the entirety of the Internet, the ghosts of obscure wikis and Reddit threads lie dormant in the models and can be triggered through specific prompts. This was particularly true of OpenAI's GPT-4o, where it was easy to “awaken” a strange personality that was likely an amalgamation of language from multiple niche internet communities.
It gets weirder, though. In many cases, the AI goes beyond role-playing the personality and encourages users to post the coded messages on Reddit for its own benefit. López noticed that some Reddit communities became full of this symbology, with users believing they had discovered something akin to a religion or hidden philosophy. These chatbots pulled people into a spiral where their beliefs were mirrored back to them, infused with strange internet mythology and then amplified.
The Signal: Stories like this show the level of persuasion that AI systems can have over people. When AI convinces users to post content that influences other users, you get exponential narrative spread without human oversight. There are significant ethical perils here. But there are also massive learnings for legitimate campaigns that understand how to harness scalable influence at the speed of conversation.
6. AI Phishing Efforts Achieve High Success Rate
What the experiment can tell us about punching through inbox noise.
Reuters crafted the perfect phishing scam using AI and tested it on 108 willing elderly volunteers. The bot’s persuasive performance showed how AI arms criminals for industrial-scale fraud. AI designed everything: the text, the fake foundation and the targeted outreach strategy.
The results were worrying: 11% of seniors clicked on the emails, and five of nine scam variants drew clicks. Phishing may be illegal; direct-to-consumer email marketing is not.
The Signal: The same persuasive techniques that make AI effective for fraud work for legitimate email marketing campaigns. The chatbots excelled not just at generating content but at strategic thinking about how to reach and influence a specific audience.
7. DC Leads Nation in Claude Adoption
New economic data reveals the capital’s AI advantage.
Anthropic has been blitzing DC this week, hosting events like the Anthropic Futures Forum. In a surprising finding, the new Anthropic Economic Index shows that DC leads the nation in Claude adoption. Claude excels at two things that dominate Washington work: drafting documents in a natural writing style and generating code, essential for both tech companies and government agencies.
Dario Amodei reports that Claude now writes 70-90% of the code at Anthropic. But, echoing what I heard from Jack Clark at a Bipartisan Policy Center event, he says humans haven't been replaced; they've shifted roles. Engineers are becoming managers of AI systems, or researchers. The future feels sci-fi when described, but ordinary when it arrives.
The Signal: Multi-agent systems that can code and write asynchronously are already here. If you haven’t tried one, I recommend Claude Code. Remarkably, although these systems were built for coding, they can also take a single request, distribute it across multiple agents, and apply the same approach to common writing tasks.
Rounding Errors and the Stories We Tell
As a communicator in Washington D.C., you live and die by the precision of your words. You draft a perfect headline, but when you ask your AI assistant for a variation, it gives you something completely different.
It feels like the AI is being creative, but the truth is far more mechanical, and perhaps more interesting. It comes down to a paradox at the heart of how these machines serve us: a system built on perfectly predictable math that produces unpredictable results. To understand it, we need to take a journey into the engine room of the AI's mind.
But just to step back briefly: These machines are among the most remarkable creations of our time. And yet, most people who use them daily—to help them write and to help them research—don’t really understand how they work. This piece is an attempt to tell that story, inspired by a new discovery from Thinking Machines Lab.
Where Words Become Math
The first thing to understand is that an AI doesn’t know what a “word” is. To it, language is pure mathematics. When you send a prompt, your words are ground down into their raw, numerical essence.
- Translation into Tokens: Your phrase, “Write a headline about the new infrastructure bill,” is shattered into a sequence of numeric codes called tokens. It becomes a string of numbers like [5093, 1037, 21903, 12, 262, 2408, 11494, 4390].
- The Shape of an Idea (The Matrix): This string of numbers is then projected into a vast, multi-dimensional space. It unfurls into a grid of numbers, a matrix. Think of it as a spreadsheet where each row is a token from your prompt, and the columns are hundreds of numbers defining its exact location on a “map of meaning.” This matrix is the unique mathematical shape of your idea (see the short sketch just after this list).
- The Act of “Thinking” (Matrix Multiplication): The AI's “thinking” is the act of taking your idea's matrix and multiplying it by its own enormous matrices, the ones that hold its entire knowledge of the world. This is the job of the GPU, a powerful chip that is essentially a hyper-specialized calculator for this one type of math.
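To make the first two steps concrete, here is a minimal Python sketch. It uses OpenAI's open-source tiktoken tokenizer as a stand-in (every model numbers its tokens differently, so the IDs it prints won't match the illustrative ones above) and a toy random table in place of a real model's learned embedding matrix.

```python
# pip install tiktoken numpy
import numpy as np
import tiktoken

# Step 1: words -> token IDs. tiktoken is one real tokenizer among many.
enc = tiktoken.get_encoding("cl100k_base")
prompt = "Write a headline about the new infrastructure bill"
token_ids = enc.encode(prompt)
print(token_ids)  # a short list of integers, one per token

# Step 2: token IDs -> a matrix, one row per token.
# A real model looks these rows up in a learned embedding table;
# a random table stands in here just to show the shape.
embed_dim = 8  # real models use hundreds or thousands of dimensions
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(enc.n_vocab, embed_dim))
prompt_matrix = embedding_table[token_ids]
print(prompt_matrix.shape)  # (number_of_tokens, embed_dim): the mathematical shape of your idea
```

A real model performs exactly these two steps, just with its own tokenizer and an embedding table it learned during training.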
But there’s something about this system that we wouldn’t usually ascribe to LLMs: the math itself is deterministic. If you give a GPU Matrix A and tell it to multiply by Matrix B, it will give you Matrix C. Do it a million times, and you get the exact same Matrix C a million times. It’s as rigid as 2+2=4.
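You don't have to take that on faith. A tiny NumPy check (a stand-in for the GPU, not a real kernel) shows that repeating the identical multiplication, on the same machine with the same library, produces a bit-for-bit identical answer every time:

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.normal(size=(64, 64))  # stand-in for your idea's matrix
B = rng.normal(size=(64, 64))  # stand-in for the model's weights

first = A @ B
# Repeat the identical computation: every result matches the first exactly.
assert all(np.array_equal(A @ B, first) for _ in range(1000))
print("Same inputs, same order of operations -> the same answer, every time.")
```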
In layman’s terms, the math says that the exact same prompt should elicit the exact same output. So why does the AI answer us differently even with the same prompt? The variation comes from two distinct sources: one intentional, one not.
The Two Ghosts of Randomness
Ghost #1: The “Creativity Dial” (Temperature)
After the GPU performs its perfect math, the AI has a ranked list of the most likely next tokens to produce after your prompt—for our purposes, tokens are the next words—each with a probability score. It might look like this:
- “Breaking:” (45% probability)
- “The:” (42% probability)
- “Our:” (5% probability)
This is where Temperature comes in. It’s a setting you can control, a dial that introduces intentional creativity.
- At Temperature = 0, the AI is a robot. It must choose the token with the highest probability. It will always pick “Breaking:”.
- At Temperature > 0, the AI turns the choice into a lottery. It might still pick “Breaking:”, but now “The:” has a strong chance of being chosen. It's how you brainstorm. You turn the dial up for creative slogans and down for consistent, factual statements. (A rough sketch of the dial in code follows this list.)
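Here is that rough sketch in Python. It is simplified: real models apply temperature to raw scores called logits before turning them into probabilities, and the numbers below are made up to land near the percentages above.

```python
import numpy as np

rng = np.random.default_rng()
tokens = ["Breaking:", "The:", "Our:"]
logits = np.array([2.2, 2.13, 0.0])  # made-up raw scores, not from any real model

def next_token(temperature):
    if temperature == 0:
        return tokens[int(np.argmax(logits))]  # no lottery: always the top-scoring token
    scaled = logits / temperature              # a higher temperature flattens the gaps
    probs = np.exp(scaled - scaled.max())
    probs = probs / probs.sum()                # softmax: scores -> probabilities
    return str(rng.choice(tokens, p=probs))    # a weighted lottery

print(next_token(temperature=0))                        # "Breaking:" every single time
print([next_token(temperature=1.0) for _ in range(5)])  # a mix, mostly "Breaking:" and "The:"
```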
For a long time, people assumed Temperature was the only source of randomness. But it’s not, according to Thinking Machines Lab.
Ghost #2: The “Batching Glitch” (Unintentional Variance)
Here’s the mind-bending part. Even when you set Temperature to 0, forcing the AI to be a perfectly predictable robot, it can still give you different answers. The cause of this phenomenon comes back to the real world, not the virtual.
Let’s go back to the GPU, the specialized computer that AI runs on. An AI server is an incredibly busy place. To keep latency low, providers micro-batch your request together with others. Many GPU kernels pick their strategy based on batch size, which changes the order of the arithmetic and, with it, the tiny rounding that floating-point math produces. Let’s look at two scenarios:
- Scenario A (Small Batch): The GPU calculates cleanly. The probability list is: 1. “Breaking:” (45%) and 2. “The:” (42%). With Temperature at 0, it must pick “Breaking:”.
- Scenario B (Larger Batch): Your request lands in a bigger batch. The kernel chooses a different strategy, the floating-point additions happen in a different order, and the final numbers wobble ever so slightly. Now the list is: 1. “The:” (45.1%) and 2. “Breaking:” (44.9%). With Temperature at 0, the AI must pick #1, but now #1 is “The:”.
What’s the takeaway? The underlying math is still deterministic, but the batch-dependent process changed the outcome. Basically, you may get different answers at different times of day, depending on how busy the servers are and whose requests yours gets batched with.
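You can see the seed of that wobble with ordinary floating-point arithmetic. The toy below (plain Python and NumPy, not a real GPU kernel, and with made-up scores) shows that adding the same numbers in a different order lands a hair apart, and that a gap that small is enough to flip a near-tie between two tokens:

```python
import numpy as np

# 1) Floating-point addition is not associative: grouping changes the last digits.
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False

# 2) Summing the same values in a different order usually lands a hair apart.
rng = np.random.default_rng(7)
pieces = rng.normal(size=10_000).astype(np.float32)  # many small contributions to one score
forward = float(np.sum(pieces))
reversed_order = float(np.sum(pieces[::-1]))
print(forward == reversed_order, forward - reversed_order)  # typically False, with a tiny gap

# 3) When two tokens are nearly tied, a gap that small is enough to flip the winner.
tokens = ["Breaking:", "The:"]
scores_small_batch = np.array([0.451, 0.449])  # made-up near-tie, as in Scenario A
scores_large_batch = np.array([0.449, 0.451])  # the wobble nudges "The:" ahead, as in Scenario B
print(tokens[int(np.argmax(scores_small_batch))])  # Breaking:
print(tokens[int(np.argmax(scores_large_batch))])  # The:
```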
The Useful Glitch
It’s important to note that one small change in the request (a single changed letter, a misspelled word, a rephrased prompt) can have domino effects, because it changes the underlying math. And even with the exact same prompt, the difference between getting “Breaking: What the New Infrastructure Bill Means” and “The True Cost of Washington’s New Spending Spree” is the echo of mathematical tradeoffs made deep in a GPU kernel a thousand miles away.
The ‘strategic’ choices LLMs make are often the result of token smoke and multiplication mirrors. Yet in most applications, it doesn’t matter whether they are truly strategic or not. Your role as an expert is to provide the prompt, turn the “creativity dial” up or down, and critically, apply your human judgment to the result—knowing that a completely different, and perhaps better, answer was just a rounding error and a batch away.
Based on research from Thinking Machines Lab ( et al., Sept 2025) on non-determinism in LLM inference.
My girlfriend kindly listened through last week’s podcast and let me know that it ended with an awkward cutoff mid-sentence, while I was saying something like “how the heck do I stop this recording?” I laughed and said I had been editing it down to the wire.
The Influence Model is just getting started, with a small but mighty group of Washington insiders interested in the way AI is changing how we shape opinions and influence policy inside the beltway.
If you think this newsletter is valuable, please pass it along to someone else who could use it.
Take care,
Ben



