Wednesday August 12, 2020
Spamouflage Goes to America
Pro-Chinese Inauthentic Network Debuts English-Language Videos
In June, as the rhetorical confrontation between the United States and China escalated, social media accounts from the pro-Chinese political spam network Spamouflage Dragon began posting English-language videos attacking American policy and the administration of U.S. President Donald Trump.
The videos were clumsily made, marked by language errors and awkward automated voice-overs. Some of the accounts on YouTube and Twitter used AI-generated profile pictures, a technique that appears to be increasingly common in disinformation campaigns. The network did not appear to receive any engagement from authentic users across social media platforms, nor did it appear to seriously attempt to conceal its Chinese origin as it pivoted toward messaging related to U.S. politics.
Spamouflage Dragon’s politically focused disinformation campaigns appear to have started in the summer of 2019. The network began with Chinese-language content attacking the Hong Kong protesters and exiled Chinese billionaire Guo Wengui, a frequent critic of the Chinese Communist Party (CCP). In early 2020, it started commenting on the coronavirus pandemic, praising the CCP’s response at a time when the party was being accused of covering up the outbreak.
The latest wave of Spamouflage activity differs in two key ways from its predecessors. First, it includes a wealth of videos in English and targets the United States, especially its foreign policy, its handling of the coronavirus outbreak, its racial inequalities, and its moves against TikTok. This is the first time the network has published substantial volumes of English-language content alongside its ongoing Chinese coverage, a clear expansion of its scope. The network was particularly active, and reactive to current events, during the period of investigation: videos commenting on recent U.S. official statements were created and uploaded in less than 36 hours.
Second, this is the first time that we have seen Spamouflage Dragon use clusters of accounts with AI-generated profile pictures. Other operations are known to have done so, but this particular network had not previously adopted the practice. Given the ease with which threat actors can now use publicly available services to generate fake profile pictures, this tactic is likely to become increasingly prevalent.