Bushfires, bots and the spread of disinformation
15 Jan 2020

As fire continues to wreak havoc across large parts of the country, Australia is battling another crisis online: the waves of misinformation and disinformation spreading across social media. Much of the media reporting on this has referred to ‘bots and trolls’, citing a study by Queensland University of Technology (QUT) researchers that found that about a third of the Twitter accounts tweeting on a particular bushfire-related hashtag showed signs of inauthentic activity.

We can’t fight disinformation with misinformation, however. It is important to be clear about what is, and what is not, happening.

There’s no indication as yet that Australia is the target of a coordinated disinformation ‘attack’. Instead, what we’re seeing online is a reflection of the changing information environment, in which high-profile national crises attract international attention and become fuel for a wide array of actors looking to promote their own narratives—including many who are prepared to use disinformation and inauthentic accounts.

As online discussion of the bushfire crisis becomes caught up in more and more of these tangled webs, from conspiracy theories to Islamophobia, more and more disinformation gets woven into the feeds of real users. Before long, it reaches the point where someone who starts off looking for information on #AustraliaFires winds up 10 minutes later reading about a UN conspiracy to take over the world.

The findings of the QUT study have been misconstrued in some of the media reporting (through no fault of the researchers themselves). There are a few factors to keep in mind.

First, a certain amount of inauthentic activity will be present on any high-profile hashtag. Twitter is full of bot accounts programmed to identify popular hashtags and use them to sell products or build an audience, regardless of what those hashtags are about. The QUT study’s small sample (315 accounts) makes it difficult to determine how representative that sample is of the level of inauthentic activity across the hashtag as a whole.
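To make that pattern concrete, here is a minimal Python sketch of a hashtag-piggybacking spam bot. The TwitterClient class is a hypothetical stand-in for a real API wrapper (its method names are invented for illustration, not taken from any real library), but the logic mirrors the behaviour described above: find whatever is trending, attach it to promotional content, and post.

```python
# A minimal sketch of hashtag piggybacking, the pattern described above.
# TwitterClient is a hypothetical stand-in for a real API wrapper; its
# method names are invented for illustration, not a real library's API.

import random


class TwitterClient:
    """Hypothetical API wrapper; a real bot would call a live API here."""

    def trending_hashtags(self) -> list:
        # Hard-coded for the sketch; a real bot would fetch live trends.
        return ["#AustraliaFires", "#NSWFires", "#MondayMotivation"]

    def post(self, text: str) -> None:
        print(f"POSTED: {text}")


SPAM_TEMPLATES = [
    "Huge discounts this week only! {tag}",
    "Follow for more great content! {tag}",
]


def run_spam_bot(client: TwitterClient) -> None:
    # The bot has no idea what the hashtags mean; it simply rides
    # whatever is popular to put its content in front of a bigger audience.
    for tag in client.trending_hashtags():
        client.post(random.choice(SPAM_TEMPLATES).format(tag=tag))


if __name__ == "__main__":
    run_spam_bot(TwitterClient())
```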

Second, the QUT study relied on a tool called Bot or Not. This tool and others like it—which, as the name suggests, seek to automatically determine whether an account is a bot or not—are useful, but it’s important to understand the trade-offs they make when you’re interpreting the results. For example, one factor which many bot-detection tools look at is the age of the accounts, based on the assumption that newer accounts are more likely to be bots. That may in general be a reasonable assumption, but it doesn’t necessarily apply well in a case like the Australian bushfire crisis.

Many legitimate users may have recently joined Twitter specifically to get information about the fires. On the flip side, many bot accounts are bought, sold and repurposed, sometimes over several years (just search ‘buy aged Twitter accounts’ on Twitter to see how many are out there). Both of these things will affect the accuracy of a tool like Bot or Not. It’s not that we shouldn’t use tools that claim to detect bots automatically, but we do need to interpret their findings with an informed appreciation of the factors that go into them.
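To illustrate why account age cuts both ways, here is a toy scoring heuristic in Python. The features, thresholds and weights are invented for this sketch, and real tools like Bot or Not combine many more signals, but it shows how a newly registered human can look ‘bot-like’ while an aged, repurposed bot account slips under the threshold.

```python
# A toy bot-scoring heuristic, illustrating the trade-off discussed above.
# The features, thresholds and weights are invented for this sketch; real
# tools such as Bot or Not combine many more signals than these two.

from datetime import date


def naive_bot_score(created: date, tweets_per_day: float, today: date) -> float:
    """Return a crude 0-1 'bot likelihood' from account age and tweet volume."""
    age_days = (today - created).days
    # The assumption baked into many detectors: newer account, more bot-like.
    age_score = 1.0 if age_days < 30 else 0.5 if age_days < 365 else 0.0
    # Very high posting volume also looks automated.
    rate_score = min(tweets_per_day / 100.0, 1.0)
    return 0.5 * age_score + 0.5 * rate_score


today = date(2020, 1, 15)
# A real person who joined last week to follow the fires scores 0.6 ...
print(naive_bot_score(date(2020, 1, 8), tweets_per_day=20, today=today))
# ... while a repurposed five-year-old 'aged' bot account scores 0.025.
print(naive_bot_score(date(2015, 6, 1), tweets_per_day=5, today=today))
```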

Finally, there isn’t necessarily a link between bots and disinformation. Disinformation is often, and arguably most effectively, spread by real users from authentic accounts. Bots are sometimes used to share true, helpful information. During California’s wildfires in 2018, for example, researchers built a bot which would automatically generate and share satellite imagery time-lapses of fire locations to help affected communities.

There’s clearly a significant amount of disinformation and misleadingly framed discussion being spread on social media about the bushfires, particularly in relation to the role of arsonists in starting the fires.

However, the bulk of it doesn’t appear to be coming from bots, nor is it anything so straightforward as an attack. Instead, what appears to have happened is that Australia’s bushfire crisis—like other crises, including the burning of the Amazon rainforest in 2019—has been sucked into multiple overlapping fringe right-wing and conspiracy narratives which are generating and amplifying disinformation in support of their own political and ideological positions.

For example, fringe right-wing websites and media figures based in the United States are energetically driving a narrative that the bushfires are the result of arson (a claim resoundingly rejected by Australian authorities), based on ideological opposition to the consensus view on climate change. Their articles are amplified by pre-existing networks of both real users and inauthentic accounts on social media platforms including Twitter and Facebook.

QAnon conspiracy theorists have integrated the bushfires into their broader theory that US President Donald Trump is waging a secret battle against a powerful cabal of elite cannibalistic paedophiles. Believers in the ‘Agenda 21/Agenda 2030’ conspiracy theory see the crisis as proof of ‘weaponised weather control’ aimed at consolidating a United Nations–led global takeover. Islamophobes are blaming Muslim arsonists—and getting thousands of likes.

And that’s not even touching the issue of misleading information that’s been spread by some Australian mainstream media.

It’s not just the climate that has changed. The information ecosystem in which natural disasters play out, and which influences the attitudes and decisions the public makes about how to respond, is fundamentally different from what it was 50, 20 or even five years ago. Disinformation is now, sadly, a normal, predictable element of environmental catastrophes, particularly those large enough to capture international attention. Where once we had only a handful of Australian newspapers, now we have to worry about the kind of international fringe media outlets which think the US government is putting chemicals in the water to make frogs gay.

This problem is not going away. It will be with us for the rest of this crisis, and the next, and the next. Emergency services, government authorities and the media need to collaborate on strategies to identify and counter both mis- and disinformation spreading on social media. Mainstream media outlets also need to behave responsibly to ensure that their coverage—including their headlines—reflects the facts rather than optimising for clicks.

It would be easy to dismiss concern about online disinformation as insignificant in the face of the enormous scale of this crisis. That would be a mistake. Social media is a source of news for almost all Australians, and increasingly it is the main source of news for many. Responding to this crisis and all of the crises to come will require national cohesion, and a shared sense of what is true and what is just lies, smoke and mirrors.