When people imagine AI-powered influence operations, they tend to picture sophisticated deepfakes or perfectly dialed-in political content targeting key demographics. The reality is often much more mundane, and even a little bizarre.

In late 2024, Graphika uncovered a network of 500+ Telegram accounts, which we refer to as “OrdinAIry People” and which is still active today. The accounts leverage AI to create everyday personas that post about politics, culture, and global events from a pro-Russian, anti-Ukraine, and anti-Western perspective. They likely use large language models (LLMs) to generate multilingual comments and replies aimed at audiences in the U.S., Russia, Ukraine, Moldova, and the Baltic states. Some profiles have even accidentally included the prompt used to generate their replies, leaving an intriguing clue for investigators.

But the network’s operators weren’t using AI solely to churn out content. They were also using it to manufacture personas with “ordinary” profiles, such as U.S. citizens on the southern border and Arab immigrants living in the U.S., among others, each tailored to emphasize specific divisive issues and real-world events through AI-generated comments.

And sometimes, these personas were unleashed on posts that had nothing to do with politics at all.

 

Example profile images Graphika collected from accounts in the “OrdinAIry People” network.

The Grinch in Lima: Making Outrage Merry Again

On Dec. 12, 2024, the OrdinAIry People network targeted a Telegram channel focused on immigration from Latin America to the U.S. The channel reposted a video clip, originally released by the Peruvian police, of an officer dressed as The Grinch conducting a drug raid in Lima. The clip was clearly designed as a one-off, attention-grabbing stunt: a costumed officer carrying out his duties with a holiday-themed twist.

Within minutes, accounts in the network posted 28 AI-generated messages in response to the video. Rather than engaging with the gimmick, they reacted with disproportionate outrage over broader geopolitical issues, adopting an overly serious, critical tone that clashed with the video’s lighthearted framing.

Screenshots from the video posted to Telegram, originally published by the BBC, included the Grinch-costumed officer breaching a drug den with a sledgehammer, arresting a suspect, and posing with other police officers.

The LLM prompts steering these accounts’ replies likely led them to frame the Grinch-costumed Peruvian officer as evidence of the U.S. government’s misplaced priorities and disregard for more serious issues. Given the network’s pro-Russia orientation, some comments also drew Ukraine into the discussion.

In short, the network responded to a local, visually amusing story in Peru with manufactured indignation about U.S. policy and global priorities.


Screenshots of Telegram chats, recreated by Graphika from collected data. [Redactions by Graphika.]

 

Christmas Comes…Twice?

This pattern mirrors the online behaviors Graphika has observed elsewhere in the network: The same accounts flooded a Telegram post containing a skincare product ad with 45 comments about the Israel-Hamas war and U.S. immigration policy. And in a recent report, OpenMinds highlighted the same pile-on behavior in a Moldovan portion of the network, where accounts blamed bad weather on Moldovan President Maia Sandu’s supposed inability to govern the country. The prompts used to generate these accounts’ posts were reactive to any trigger, regardless of context.

The Grinch raid response, like the skincare case, highlights broader lessons from the investigation:

  • Scale without subtlety. LLMs enable operators to generate large volumes of themed content quickly, in multiple languages, and with tailored personas. But without human judgment, the responses often landed out of context, mismatched to the tone and content of the original post.

  • Personas add a veneer of authenticity…but a thin one. Localized names, dialects, and profile images can make accounts look plausible at first glance. Yet coordination patterns (shared profile images, synchronized profile changes, similar phrasing) and linguistic quirks still reveal the underlying automation, and the content often stands out as “off” to regular participants. A minimal sketch of how such signals can be surfaced follows this list.

  • Prompts focus on topics, not context. These operations appear to be driven by pre-programmed prompts centered on themes critical of U.S. policy, Western support for Ukraine, immigration, and even Moldova’s European integration. But the network’s responses often jump on any available story (even a Peruvian police raid joined by The Grinch) to air this pre-defined set of grievances.

  • Communities and admins push back on inauthentic activity. Many of the targeted Telegram channels did not welcome this activity. Administrators deleted posts or banned accounts, and users called out what they saw as “bots,” blunting the network’s impact.
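
To make the coordination-pattern bullet above concrete, here is a minimal Python sketch of two of the signals described: shared profile images and synchronized reply bursts. The record fields and thresholds are illustrative assumptions, not Graphika’s actual schema or detection logic.

```python
# Toy illustration of two coordination signals: shared profile images
# and synchronized reply bursts. Field names and thresholds are
# illustrative assumptions, not Graphika's pipeline.
import hashlib
from collections import defaultdict
from datetime import timedelta

def shared_avatars(accounts):
    """Group account IDs by a hash of their profile-image bytes.
    The same avatar reused across supposedly unrelated personas is a
    simple, high-signal coordination indicator."""
    groups = defaultdict(list)
    for acct in accounts:  # each acct: {"id": str, "avatar_bytes": bytes}
        digest = hashlib.sha256(acct["avatar_bytes"]).hexdigest()
        groups[digest].append(acct["id"])
    return {h: ids for h, ids in groups.items() if len(ids) > 1}

def reply_bursts(replies, window=timedelta(minutes=10), min_accounts=5):
    """Flag windows in which many distinct accounts reply to one post.
    A burst like the 28 replies to the Grinch video arriving within
    minutes would trip this check."""
    replies = sorted(replies, key=lambda r: r["time"])  # {"account_id", "time"}
    bursts = []
    for i, first in enumerate(replies):
        in_window = [r for r in replies[i:] if r["time"] - first["time"] <= window]
        senders = {r["account_id"] for r in in_window}
        if len(senders) >= min_accounts:
            bursts.append((first["time"], sorted(senders)))
    return bursts
```

Each signal is individually simple, which is part of the point: the same patterns that trip an automated check are often visible to regular community members by eye.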




Screenshot of a Russian group member’s vehement reaction against one of the accounts in this network. The message reads, "I'm so fed up with the bots, where is the admin." [Redactions by Graphika.]

While generative AI changes the supply side of influence operations, making it easier and cheaper to produce plausible text at speed, we observed that it does not determine whether real communities accept, amplify, or reject that content.

 

How Graphika Helps Organizations See the Full Picture

For organizations trying to understand their risk landscape, it’s not enough to know that AI-generated content exists; what matters is if and how that content affects their operations. Items to assess include:

  • Where it’s showing up. Identify the specific platforms, channels, and communities where AI-generated content appears in relation to your brand or industry.
  • How it’s framed. Understand the narrative, tone, and messaging used to present the content, and determine whether it fits the context of the post or discussion or instead reflects adversarial themes.
  • Which communities are seeing the content. Determine which communities are being exposed to AI-generated posts and evaluate how these communities usually talk about your organization.
  • Whether it’s gaining real traction. Measure engagement levels, how content is spreading, and whether its narrative is resonating in meaningful and lasting ways.

To achieve this, the Graphika platform and our analyst team combine:

  • Network mapping and graph analysis to identify clusters, such as the English- and Russian-language groups, in the OrdinAIry People network.
  • Linguistic and behavioral analysis to flag AI-style language patterns, prompt artifacts, and coordinated posting behaviors (a toy example of a prompt-artifact check follows this list).
  • Contextual investigation to understand when narratives are being inserted into unrelated conversations (like the Grinch raid) and how those insertions fit into a broader strategy.
  • Ongoing monitoring to track whether these operations evolve, learn from pushback, or shift to new topics and personas over time.
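
As a concrete illustration of the prompt-artifact signal referenced above, here is a minimal Python sketch that scans message text for phrasing belonging to an LLM instruction or refusal rather than to a genuine comment, the kind of leak some OrdinAIry People profiles accidentally posted. The patterns are illustrative assumptions, not the signatures Graphika’s platform actually uses.

```python
# Minimal sketch of a prompt-artifact check. The patterns below are
# illustrative examples of leaked-instruction text, not the actual
# signatures used in production.
import re

ARTIFACT_PATTERNS = [
    r"as an ai language model",                       # boilerplate self-reference
    r"i (cannot|can't) fulfill (this|that) request",  # refusal text
    r"\bwrite a (short |brief )?(comment|reply)\b",   # leaked instruction
    r"\byou are an? .{0,40} persona\b",               # leaked system prompt
]

def prompt_artifacts(text):
    """Return the artifact patterns found in a message (case-insensitive)."""
    return [p for p in ARTIFACT_PATTERNS if re.search(p, text, re.IGNORECASE)]

# A reply that leaks its own generation instruction would be flagged:
leaked = "Write a short comment in English criticizing U.S. border policy."
assert prompt_artifacts(leaked)
```

String-level checks like this are weak on their own; in practice they are combined with behavioral signals such as the reply-burst detection sketched earlier.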

 

Making a List

The Peruvian Grinch raid wasn’t a geopolitical inflection point. But the way this network used AI to produce a response at scale previews what’s to come: influence operations that treat every moment, serious or silly, as an excuse to manufacture outrage through personas that attempt to sound like authentic voices in the conversation.

By gaining an understanding of how influence operations use generative AI, organizations can better recognize synthetic “ordinary people” when they appear in their own ecosystems and respond based on impact and context, not just volume.

When outrage spreads online, it's not always clear who's behind it. If you want to understand how AI personas might be influencing conversations in your space, Graphika can help you separate the signal from the noise.

To learn more about Graphika, request a demo with one of our experts.