Australian Outlook

Spamouflage: A Hidden Weapon in Influence Operations

17 Sep 2024
By Yuxuan Lai and Dr Gavin Mount

Spamouflage, a form of disinformation spread through fake social media accounts, poses a growing threat to democratic processes by manipulating public opinion and amplifying polarisation. Governments must take proactive steps to curb the impact of these campaigns and protect the integrity of elections.

Spamouflage refers to the dissemination of propaganda or misleading content, often for political motives, through networks of fake social media accounts that exploit platform algorithms. These campaigns are mainly initiated by governments or non-profit organisations and aim to influence public opinion by using technical means to push specified content across social media platforms. Spamouflage is an instrument of covert geopolitical influence: it degrades trust in information, manipulates public perceptions, and risks destabilising the democratic process.

The operation of spamouflage propaganda is complex and multifaceted, relying on the coordination of a large-scale online force to swarm designated topics. These operators masquerade as locals, adopting local language, cultural references, and political stances to blend into online conversations. They express a range of views on political issues in the target country, typically taking extreme positions to attract the attention of real users actively debating the topic. This tactic drives engagement and helps to sway public opinion through manufactured controversy.

Once a debate takes hold, the platform’s recommendation algorithm judges the content or video to have viral potential and pushes it to more users, expanding its reach. According to a recent Graphika report on US social media platforms, these operations amplify extreme opinions on sensitive issues such as gun control, homelessness, and racial inequality. When users are repeatedly exposed to content that confirms their established views, the information cocoon is reinforced. The personalised nature of algorithmic feeds, which promote content based on user engagement (likes, shares, and comments), ensures that individuals continue to receive similar content, reinforcing their beliefs and isolating them from opposing viewpoints. This creates an environment where biases are amplified rather than challenged, increasing social polarisation and magnifying the impact of disinformation.
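To make this feedback loop concrete, the deliberately simplified Python sketch below models an engagement-weighted feed. The scoring rule, figures, and post labels are illustrative assumptions, not any platform’s actual algorithm: a handful of coordinated fake accounts boost one post, the feed promotes it, and organic engagement then compounds the advantage.

```python
# A minimal sketch (illustrative only) of how engagement-weighted ranking
# can amplify coordinated activity. The scoring rule and all numbers are
# invented for demonstration; real platform algorithms are far more complex.

import random

random.seed(1)

# Each post tracks raw engagement (likes, shares, and comments combined).
posts = {"moderate take": 50, "extreme take": 50}

def rank(posts):
    # Rank purely by engagement: the more interactions, the more exposure.
    return sorted(posts, key=posts.get, reverse=True)

def simulate_round(posts, fake_accounts=20):
    # Coordinated fake accounts all engage with the extreme post...
    posts["extreme take"] += fake_accounts
    # ...which wins the ranking and is shown to more real users, some of
    # whom engage with whatever they are shown (the feedback loop).
    top = rank(posts)[0]
    posts[top] += random.randint(5, 15)

for day in range(5):
    simulate_round(posts)
    print(f"day {day + 1}: {posts} -> top of feed: {rank(posts)[0]}")
```

Even in this toy model, once the loop closes the coordinated post stays at the top of the feed, and organic engagement alone keeps it there.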

In the political arena, spamouflage poses a significant risk to the democratic process, especially during elections, when public trust in information is crucial. The free-speech protections of democratic societies also give spamouflage room to operate. With the US elections approaching, analysts have documented a sharp increase in Chinese netizens impersonating Americans to post inflammatory comments on American social issues, deepening conflict and antagonism between ethnic groups and pushing voters towards irrational choices at the ballot box.

By exploiting algorithms to re-share and amplify content within platforms, spamouflage creates a distorted media environment in which voters are subtly influenced by disinformation, ultimately eroding trust in the democratic process. When elections and voters are swayed by such manufactured confusion, the long-term stability of national policy is also put at risk.

Spamouflage continues to thrive on social media platforms in large part because Big Tech is unable, or unwilling, to address it effectively. Because user activity and volume are key metrics of these companies’ success, bots, spamouflage, and misinformation help to boost daily active users (DAUs) and thus inflate valuations. Although this damages platforms’ reputations in the long run, the short-term gains prove too tempting for many to forgo. It is unrealistic to expect these companies to discipline themselves through community management alone.

Governments must take responsibility and adopt proactive strategies to reduce the spread of spamouflage and disinformation, especially before elections. There are several mitigation strategies that can be used.

Firstly, governments need to invest in tools that detect and curb the spread of disinformation at scale. This is crucial for countering saturation media attacks in the run-up to an election. With such identification mechanisms, governments can detect disinformation with viral potential early and intervene before it takes hold. For instance, the cybersecurity firm Mandiant uses attribution techniques to identify disinformation from state actors, helping governments block coordinated fake campaigns before they affect elections.
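To illustrate what such detection can look like at its simplest, the Python sketch below flags groups of accounts posting near-identical text within minutes of one another, one common signature of coordinated campaigns. It is a toy heuristic built on invented data and thresholds, not Mandiant’s actual attribution method.

```python
# A minimal sketch of one common detection heuristic: flagging groups of
# accounts that post near-identical text within a short time window.
# The sample data, similarity threshold, and window size are assumptions
# chosen for illustration only.

from collections import defaultdict
from difflib import SequenceMatcher

# (account, timestamp in minutes, post text)
posts = [
    ("acct_01", 0, "Gun control is destroying this country, wake up America"),
    ("acct_02", 2, "Gun control is destroying this country wake up, America!"),
    ("acct_03", 3, "Gun control is destroying this country. Wake up America"),
    ("acct_99", 5, "Local bakery on Main St has great sourdough"),
]

def similar(a, b, threshold=0.9):
    # Near-duplicate check: high character-level similarity ratio.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def find_coordinated(posts, window_minutes=10):
    # Group posts whose text is near-identical and whose timestamps fall
    # within the same short window -- a signature of copy-paste campaigns.
    clusters = defaultdict(set)
    for i, (acct_a, t_a, text_a) in enumerate(posts):
        for acct_b, t_b, text_b in posts[i + 1:]:
            if abs(t_a - t_b) <= window_minutes and similar(text_a, text_b):
                clusters[text_a].update({acct_a, acct_b})
    # Only flag messages pushed by three or more distinct accounts.
    return {text: accts for text, accts in clusters.items() if len(accts) >= 3}

for text, accounts in find_coordinated(posts).items():
    print(f"possible coordination ({len(accounts)} accounts): {text[:50]}...")
```

Real systems layer many more signals on top of this, such as account creation dates, posting rhythms, and shared infrastructure, but the underlying logic of clustering suspiciously synchronised behaviour is the same.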

Secondly, public education, as exemplified by the initiative “Addressing Misinformation with Media Literacy through Cultural Institutions,” can cultivate critical thinking and scepticism among the general public, limiting the ability of disinformation to gain traction. Similarly, Taiwan’s media literacy program teaches the public how to evaluate information critically and identify disinformation. These efforts help to counter China’s influence operations, which seek to manipulate public opinion, especially during election periods.

Thirdly, governments need to establish a regulatory framework for platform companies. The EU’s Code of Practice on Disinformation offers clear guidelines for tackling disinformation: it encourages companies to implement stronger verification processes, audit their systems regularly, and increase transparency, ensuring that platforms take active steps to monitor misleading content and prevent it from spreading.

Finally, a range of diplomatic approaches is required to coordinate global and regional efforts aimed at building confidence and resilience against widespread information influence campaigns. Such measures could include developing a verified database of propaganda and disinformation experts; appointing special envoys; establishing global monitoring and debunking strategies; cooperating on intelligence and information sharing; and holding regular conventions to guide and negotiate regulatory standards.

The combination of media literacy, technological development, and government oversight can greatly enhance a society’s ability to resist disinformation. Media literacy equips individuals to assess information critically; technology detects and reduces disinformation; and government oversight holds platforms accountable while limiting external interference in democracy. Together, these efforts create a resilient society that can withstand attempts to undermine trust, reduce polarisation, and preserve the integrity of public discourse. It must be recognised that the fight against disinformation is a long one, and its techniques will only become harder to detect. Only by remaining proactive and flexible can governments hold the initiative on the media and information battlefield of the future.

Yuxuan Lai is an expert in social media management, having worked as a user growth manager for major social media companies. During his tenure, he led several commercially motivated spamouflage campaigns. He is currently exploring how AI technology can be used to detect spamouflage.

Dr Gavin Mount is a Senior Lecturer and Nexus Fellow at UNSW Canberra. His research focuses on the intersection of geopolitics, ethnic conflict, and disruptive capabilities. Gavin is a Commissioning Editor of Australian Outlook.

This article is published under a Creative Commons License and may be republished with attribution.