Examining Australia’s bid to curb online disinformation
4 Sep 2023

Hardly a day had passed after the government unveiled its initial draft of the Combatting Misinformation and Disinformation Bill 2023 before critics descended on it.

‘Hey Peasants, Your Opinions (Hell, your facts) Are Fake News’, posted Canadian right-wing professor Jordan Peterson on X (then Twitter) in response to the announcement.

Since then, commentary on the bill has only grown more fervent. The bill sets up Canberra to be a ‘back-up censor’ ready to urge the big tech companies to ‘engage in the cancellation of wrong-speak’, wrote Peta Credlin in The Australian under the headline ‘“Ministry of Truth” clamps down on free expression’. For Tim Cudmore, writing in The Spectator, the bill represents nothing less than ‘the most absurdly petty, juvenile, and downright moronic piece of nanny-state governmental garbage ever put to paper’.

In reality, the intentions of the bill are far more modest than the establishment of a so-called Ministry of Truth. Indeed, they’re so modest that it may come as a surprise to many that the powers don’t already exist.

Put simply, the bill is designed to ensure that all digital platforms have systems in place to deal with mis- and disinformation and that those systems are transparent. It doesn’t give the Australian Communications and Media Authority any special ability to request that specific content or posts be removed from online platforms.

If the bill were to pass, it would mean that digital platforms like WeChat would finally have to come clean about their censorship practices and how they’re applying them, or not, to content aimed at Australian users. It would also mean that digital platforms like X that once devoted resources to ensuring trust and safety on their platforms, but are now walking away from those efforts, are held accountable for those decisions.

If there’s one thing that Elon Musk’s stewardship of X has shown, it’s that even with an absolutist approach to free speech, content-moderation decisions still need to be made. Inevitably, any embrace of absolute free-speech principles soon gives way to the complexities of addressing issues like child exploitation, hate speech, copyright infringement and other forms of legal compliance. Every free-speech Twitter clone has had to come to this realisation, including Parler, Gettr and even Donald Trump’s Truth Social.

So, if all digital platforms inevitably engage in some sort of content moderation, why not have some democratic oversight over that process? The alternative is to stick with a system where interventions against mis- and disinformation take place every day, but they’re done according to the internal policies of each different platform and the decisions are often hidden from their users. What the Combatting Misinformation and Disinformation Bill does is make sure that those decisions aren’t made behind closed doors.

Under the current system, when platforms overreach in their efforts to moderate content, it’s only the highest-profile cases that attract attention. To take one recent example, a report by Credlin was labelled ‘false information’ on Facebook based on a fact-check by RMIT FactLab. The shadow minister for home affairs and cyber security wrote to Facebook’s parent company, Meta, to complain, and the ABC’s Media Watch sided with Credlin.

Would it not be better if this ad hoc approach were replaced with something more systematic that applied to regular members of the public and not just high-profile commentators? Under the proposed bill, all the platforms would have to have systems in place to deal with mis- and disinformation while also balancing the need for free expression. The risk of the status quo is not just that the platforms will not moderate content enough, but that they will overdo it at times.

When digital platforms refrain from moderating content, harmful content proliferates. But as platforms become more active in filtering content without publicly disclosing their decision-making, there’s an increased risk that legitimate expression will be stifled. Meta executives admitted at a recent Senate committee hearing, for example, that the company had gone too far when moderating content on the origin of Covid-19.

In contrast to the Australian government’s modest approach is the EU’s Digital Services Act, which came into effect just last week. That act heaps multiple requirements on the platforms to stop them from spreading mis- and disinformation. Many of these requirements are worthwhile, and in a future article I’ll make the case for which elements we might adopt to improve our own legislation. Fundamentally, the act represents a positive step by mandating that major social networks such as Facebook, Instagram and TikTok enhance transparency over their content-moderation processes and provide EU residents with a means to appeal content-moderation decisions.

But if critics wish to warn about Orwellian overreach, they’d do better to scrutinise the EU’s Digital Services Act, not Australia’s proposed Combatting Misinformation and Disinformation Bill. In particular, they should take a look at one element of the European legislation that enables the European Commission to declare a ‘crisis’ and force platforms to moderate content according to the state’s orders. That sets a worrying precedent that authoritarian rulers around the world are sure to point to when they shut down internet services in their own countries.

After years of relative laissez-faire policymaking, the world’s biggest tech companies are finally becoming subject to more stringent regulation. The risk of regulatory overreach is real and critics are right to be wary. But the Australian government’s proposed solution, with its focus on scrutinising the processes the platforms have in place to deal with mis- and disinformation, is a flexible approach for dealing with a problem that will inevitably continue to grow. And unlike the European strategy, it avoids overreach by both the platforms and the government.