More stick, less carrot: Australia’s new approach to tackling fake news
17 Jul 2023

An urgent problem for governments around the world in the digital age is how to tackle the harms caused by mis- and disinformation, and Australia is no exception.

Together, mis- and disinformation fall under the umbrella term of ‘fake news’. While this phenomenon isn’t new, the internet makes its rapid, vast spread unprecedented.

It’s a tricky problem and hard to police because of the sheer amount of misinformation online. But if it’s left unchecked, public health and safety, electoral integrity, social cohesion and ultimately democracy itself are at risk. The Covid-19 pandemic taught us not to be complacent, as fake news about Covid treatments led to deadly consequences.

But what’s the best way to manage the spread of fake news? How can it be done without government overreach, which risks the freedom and diversity of expression necessary for deliberation in healthy democracies?

Last month, Minister for Communications Michelle Rowland released an exposure draft of a bill to step up Australia’s fight against harmful online mis- and disinformation.

It offers more stick (hefty penalties) and less carrot (voluntary participation) than the current approach to managing online content.

If passed, the bill will bring us closer to the European Union-style model of mandatory co-regulation.

According to the draft, disinformation is spread intentionally, while misinformation is not.

But both can cause serious harms including hate speech, financial harm and disruption of public order, according to the Australian Communications and Media Authority (ACMA).

Research has shown that countries tend to approach this problem in three distinct ways:

  • non-regulatory ‘supporting activities’ such as digital literacy campaigns and fact-checking units to debunk falsehoods
  • voluntary or mandatory co-regulatory measures involving digital platforms and media authorities
  • anti-fake-news laws.

Initial opinions about the bill are divided. Some commentators have called the proposed changes ‘censorship’, arguing they will have a chilling effect on free speech.

These comments are often unhelpful because they conflate co-regulation with more draconian measures such as anti-fake-news laws adopted in illiberal states like Russia, where governments arbitrarily rule what information is ‘fake’.

For example, Russia amended its Criminal Code in 2022 to make the spread of ‘fake’ information an offence punishable with jail terms of up to 15 years, to suppress the media and political dissent about its war in Ukraine.

To be clear, under the proposed Australian bill, platforms continue to be responsible for the content on their services—not governments.

The new powers allow ACMA to look under a platform’s hood to see how it deals with online mis- and disinformation that can cause serious harm, and to request changes to processes (not content). ACMA can set industry standards as a last resort.

The proposed changes don’t give ACMA arbitrary powers to determine what content is true or false, nor can it direct specific posts to be removed. The content of private messages, authorised electoral communications, parody and satire, and news media all remains outside the scope of the proposed changes.

None of this is new. Since 2021, Australia has had a voluntary Code of Practice on Disinformation and Misinformation, developed for digital platforms by their industry association, the Digital Industry Group (known as DIGI).

This followed government recommendations arising out of a lengthy Australian Competition and Consumer Commission inquiry into digital platforms. This first effort at regulating online content, using an opt-in model, was a good start at stemming harmful material.

But voluntary codes have shortfalls. The most obvious is that not all platforms decide to participate, and some cherry-pick the areas of the code they will respond to.

The Australian government is now seeking to deliver on a bipartisan promise to strengthen the regulator’s powers to tackle online mis- and disinformation by shifting to a mandatory co-regulatory model.

Under the proposed changes, ACMA will be given new information-gathering powers and capacity to formally request that an industry association (such as DIGI) vary or replace codes that aren’t up to scratch.

Platform compliance with registered codes will be compulsory, with noncompliance attracting warnings, fines and, if unresolved, hefty court-approved penalties.

These penalties are steep—as much as 5% of a platform’s annual global turnover if it is repeatedly in breach of industry standards.

The move from voluntary to mandatory regulation in Australia is logical. The EU has set the foundation for other countries to hold digital technology companies responsible for curbing mis- and disinformation on their platforms.

But the draft bill raises important questions that need to be addressed before it is legislated as planned for later this year. Among them are:

  • how best to define mis- and disinformation (the definitions in the draft differ from DIGI’s)
  • how to deal with the interrelationship between mis- and disinformation, especially for election content. There’s a potential issue because research shows that the same content can be labelled ‘disinformation’ or ‘misinformation’ depending on the online user’s motive, which can be hard to divine
  • why online news media content is excluded, given that research has shown news media can also be a source of harmful misinformation (such as 2019 election stories about the ‘death tax’).

While aiming to mitigate harmful mis- and disinformation is noble, how it will work in practice remains to be seen.

An important guard against unintended consequences is to carefully define ACMA’s powers, the key terms and the circumstances likely to require action, and to provide mechanisms for appeal.

The closing date for public submissions is 6 August.

The Conversation