To fight disinformation, treat it as organised crime
20 Jan 2025

The Australian government’s regulatory approach to tackling disinformation misses the mark by focusing on content moderation and controlling access to platforms. This focus on symptoms is like fighting a flood by mopping the floor: it feels like you’re dealing with the immediate problem, but it ignores the root cause.

The government should instead treat disinformation like organised crime and focus on dismantling networks.

Laws governing organised crime are effective because they focus on patterns and networks, not necessarily the commodities criminal syndicates trade in. Laws treating disinformation similarly would focus on scale, coordinated inauthentic behaviour, financial patterns and systematic manipulation for profit or influence, not content or controlling platform access. This would target orchestrated disinformation infrastructure while preserving freedom of expression.

The approach would allow governments, social media companies and their cyber allies to better tackle disinformation networks and actors. They would be able to take down malign disinformation enterprises, instead of playing Whac-A-Mole with content—shifting to controversial community notes or applying ineffective and unenforceable blanket access bans to groups of citizens.

Every disinformation campaign begins with an initiator, someone who deliberately spreads untruthful content to distort our view of reality. Disinformation differs from misinformation, which is false content shared unknowingly: an honest mistake.

We used to think that content moderation and fact checking were the solution, but alone they are ineffective.

Human content moderation costs too much time and money, so companies have been experimenting with AI-assisted processes.

But automated moderation can’t reliably understand nuance, context or intent, which all help determine whether content is truly harmful or simply controversial. AI systems struggle with basic linguistic challenges. They often fail to catch harmful content in non-English languages, regional dialects and culturally specific contexts. They also frequently misclassify content, struggling to distinguish between disinformation and legitimate discussion about disinformation.

Controlling platform access, such as recent regulation in Australia banning children under 16 years old from using social media, is another approach. But enforcement is difficult.

Yet the biggest problem is neither technical nor practical. It is philosophical.

Liberal democratic societies value freedom of speech. Content moderation is problematic because it treats freedom of speech as merely a legal or technical problem to be solved through rules and algorithms. But freedom of speech, open discourse and the marketplace of ideas are central to the democratic process.

Age-based social media bans sit in fundamental tension with democratic and liberal philosophical principles because they impede young people’s development as democratic citizens. Social media is a key space for civic engagement and public discourse. Blanket age bans prevent young people from gradually developing digital citizenship skills; once the ban lifts at 16, they would suddenly gain access to digital spaces without prior experience navigating them.

Approaching disinformation as organised crime focuses on the root cause of the problem—the malicious actors and networks creating harmful content—rather than trying to regulate the average citizen’s platform access or speech. Such an approach would target specific malicious groups, whether traced back to foreign information manipulation and interference, domestic coordinated inauthentic networks, or financially motivated groups creating fake news for profit.

Laws that treat disinformation as organised crime would require the prosecution to show several elements: criminal intent, harm or risk to public safety, structured and coordinated efforts, and proceeds of crime.

The first two elements should be covered by defining disinformation as content spread with intent to deceive for malicious purposes. For the past four years, the Australian Security Intelligence Organisation has warned of the threat of foreign interference. In 2022, foreign interference supplanted terrorism as ASIO’s main security concern, and in 2024 ASIO described it as a real, sophisticated and aggressive danger.

ASPI data supports this assessment, exposing widespread cyber-enabled foreign interference and online information operations targeting Australia’s elections and referendums, originating from China, Russia, Iran and North Korea.

Together, ASIO’s warnings and ASPI’s data indicate intent and harm—to individuals, institutions, organisations and society—for financial or political purposes.

Structured and coordinated efforts are equally provable. Disinformation is already known to involve coordination by organised networks, akin to organised crime syndicates. Meta, Google, Microsoft, OpenAI and TikTok already detect and disrupt covert online influence operations. They understand the tactics, techniques and procedures malicious actors use on their platforms—including identity obfuscation, impostor news sites, bot networks, coordinated amplification activity, and systematic exploitation of platform vulnerabilities.

Finally, disinformation is a funded enterprise, so its profits can be classed as proceeds of crime. Like any criminal venture, disinformation is a calculated operation, funded through advertising revenue, fraudulent schemes or foreign backing, to undermine society. Laws that target the financial aspects of disinformation operations—such as shell companies, front organisations, suspicious financial transactions or the use of fake, compromised or stolen accounts—would differentiate malign enterprises from authentic individuals expressing genuine beliefs, however controversial.

Regulating content and platform access risks either over-censorship that chills legitimate discourse or under-moderation that allows harmful content to spread. We already have the tools and legal frameworks to prove malign online influence without undermining liberal democratic values. It’s time to change our approach and classify disinformation as organised crime.