When online outrage against a company or brand pops up, it can be tempting to attribute these incidents solely to armies of bots and inauthentic accounts. Our research at Graphika has shown that focusing on bots alone oversimplifies the complexity of online narratives and communities, obscuring the actual undercurrents that make certain narratives resonate.
In our Ask an Expert webinar, Bot or Not: Understanding Automated Attacks and When They Matter, Graphika Principal Analyst Cristina López G. and Intelligence Specialist Angie Waller discussed the obstacles of detecting bot accounts and how they should be considered within the context of online communities and conversations.
The Challenge of Bot Detection
Detecting bots is increasingly difficult as account patterns and tactics differ by platform, and the motivations of bot network operators vary. The increasing use of AI also complicates matters. Our November 2025 report, Cheap Tricks: How AI Slop Is Powering Influence Campaigns, outlines how AI tools enable inauthentic accounts to create impersonating and misleading content in greater volumes.
Even so, several signals can suggest that an account may not be what it initially seems: high ratios of reposts to original content, frequent changes in usernames or geographic/language settings, and posting patterns at consistent but unusual times — such as an account claiming to be U.S.-based posting around 4 a.m. daily. Yet these characteristics alone cannot reliably identify inauthentic accounts.
Consider Botometer, a once widely used tool for estimating bots on Twitter (now X). When researchers tested its scores against known accounts, they found it classified roughly half of U.S. Congress members as bots and wrongly labeled 18% of Reuters journalists as inauthentic. While it was a helpful first step in bot detection, these false positives demonstrate that the signals that may indicate an account is a bot also appear in legitimate human behavior.
Fandoms illustrate this perfectly. K-pop communities generated over 6 billion tweets in 2019 and 100 million during the Mnet Asian Music Awards — numbers that can look like bot activity. Yet real humans were behind this "spammy" behavior, leading to real organizing power. K-pop fandoms mobilized to reverse Twitter's removal of Kim Jong-hyun's account and coordinated during Black Lives Matter protests. The same high-volume posting that traditionally flags accounts as inauthentic is how these communities actually organize.
At Graphika, we closely study how narratives travel through communities, and understanding the norms of how those communities engage online is crucial for determining whether a backlash is taking hold or is just a blip in the daily churn of online activities. Companies investing in bot detection without considering community networks and behaviors waste resources chasing false positives rather than addressing actual threats.
Who Is Driving the Narrative?
By examining brand backlashes, foreign influence operations, and the spread of other online narratives, Graphika has found that participation from automated or inauthentic accounts is rarely sufficient to truly drive a conversation. Bots and inauthentic accounts can certainly contribute activity around a narrative, but they often lack a core feature needed to foster community engagement: trust.
Instead, our intelligence team has regularly found that brand backlashes often originate with influential individuals within an online community reacting to company actions, such as product launches or leadership changes, or to broader culture war flashpoints. These individuals communicate to their audiences their perception that the brand has done something wrong, and the audiences spread the narrative further. If a backlash garners enough attention, other like-minded influencers, communities that disagree with the backlash, and news outlets can pick it up, amplifying its spread.
Bots or inauthentic accounts can then latch onto narratives already in motion, led by real people with influence in a community. They may make a narrative appear larger than it is, but this doesn’t necessarily equate to impact and action. When researchers measured exposure rather than volume within vaccine-focused communities, they found that the vast majority of people never engaged with bot-generated posts that were critical of vaccines. Instead, they engaged with human-authored content.
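The distinction between volume and exposure can be sketched in code: counting posts makes a bot contingent look large, while counting the unique users who actually engaged with bot-authored content often tells a much smaller story. The data shapes below are hypothetical assumptions for illustration, not the cited researchers' actual methodology.

```python
def volume_vs_exposure(posts, engagements):
    """Contrast two ways of sizing bot influence in a conversation.

    Hypothetical data shapes (illustrative only):
      posts: {post_id: {"author_is_bot": bool}}
      engagements: list of {"user_id": str, "post_id": str}

    Returns (volume_share, exposure_share):
      volume_share   -- fraction of posts authored by bots
      exposure_share -- fraction of engaging users who touched bot content
    """
    total_posts = len(posts)
    bot_posts = sum(p["author_is_bot"] for p in posts.values())
    volume_share = bot_posts / total_posts if total_posts else 0.0

    all_users = {e["user_id"] for e in engagements}
    users_touching_bots = {
        e["user_id"] for e in engagements
        if posts[e["post_id"]]["author_is_bot"]
    }
    exposure_share = len(users_touching_bots) / len(all_users) if all_users else 0.0
    return volume_share, exposure_share
```

In a conversation where bots author most of the posts but almost everyone engages only with human-authored content, `volume_share` would be high while `exposure_share` stays near zero — the pattern the vaccine-community research describes.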
Our research into Spamouflage, a China state-linked influence operation that Graphika has monitored since 2019, further demonstrates this point. In our September 2024 report, The #Americans, we documented accounts tied to Spamouflage impersonating “ordinary” Americans while posting inflammatory political content designed to amplify partisan divisions. Yet these posts gained minimal organic engagement. A YouTube spokesperson told Reuters that a channel we flagged was terminated as part of YouTube’s “investigations into coordinated influence operations” and also noted that the channel had “a very small number of views.”
When Backlashes Occur, Graphika Considers the Whole Ecosystem
Graphika takes a comprehensive approach to examining online narratives, including brand backlashes. By combining advanced cyber threat intelligence and AI-driven social media analysis with the subject matter expertise of our analysts and open-source investigative methodologies, Graphika goes beyond measuring the volume of a conversation. Our platform transforms billions of online interactions into real-time, actionable insight, helping organizations monitor emerging narratives, detect threats, and safeguard their brand before issues turn into headlines.
Our research covers how a narrative originated, how it spread through the communities that matter most to our clients, and whether the narratives may have any lasting impact. This deep and detailed understanding of how communities interact, what motivates them, and their historical activities provides clients with the essential context needed to respond. Our analysis enables companies to be strategic rather than reactive, moving beyond the narrow framing of counting inauthentic accounts and seeing the full landscape of the challenge at hand.
To learn more about Graphika, request a demo with one of our experts.
