Deep fakes will exacerbate challenges Australia already faces in the digital age

The Covid-19 crisis is revealing many truths about our society, not least that lies and misinformation can be just as infectious as any disease.

Since Australians went into lockdown in March, we have seen conspiracy theories about the virus proliferate, a spike in Covid-19-themed cyber scams and online frauds, Chinese diplomats tweeting misinformation about the virus’s origins, and the US president musing about injecting disinfectant as a potential treatment.

In our digitally connected world, separating fact from fiction and genuine opinion from geopolitical jostling is increasingly difficult. The deaths of individuals who have followed treatment misinformation, and the damage done to public health objectives by confusion and misdirection, show that bad information (or the absence of reliable information) can also cost lives.

So it’s the right time for Australians to start thinking about one of the newest tools that criminals, conspiracy theorists and nation-states will use to deceive and mislead: deep fakes. Deep fakes are digital forgeries that use deep learning (a type of artificial intelligence) to generate or manipulate video, audio, images and text.

Three years ago, the term barely registered on a Google search. Now, thanks to rapid advances in AI, deep fakes are increasingly realistic, fast and cheap to create, and are a rising cause for alarm for governments and businesses globally.

Last year, criminals used AI to impersonate a company executive’s voice, duping the CEO of a UK energy firm into transferring €220,000 to them. Intelligence operatives were discovered using a fake LinkedIn profile for a ‘Katie Jones’, complete with an AI-generated profile picture, to infiltrate security professionals’ networks. And in Gabon, a military coup was attempted after the president posted a video to counter speculation about his health and his opponents wrongly dismissed it as a deep fake.

These are early examples of how deep fakes will exacerbate challenges societies already face in the digital age, further destabilising our instincts about who and what we can trust, and distorting news and information flows online.

In the months and years ahead, deep-fake technology will only become better and more accessible. As our ASPI report has found, the proliferation of deep fakes will require society and organisations to adapt in three important ways.

First, we will need to get used to the idea that seeing (and hearing) no longer means believing. As deep-fake technology proliferates, more criminals will use AI-generated forgeries to trick or blackmail targets into paying money or revealing their secrets.

Second, text-based deep fakes will make disinformation possible at unprecedented scale and speed, and accessible to new actors. Human operatives like the workers at Russia’s infamous ‘troll farms’ currently drive the disinformation value chain. But text-based deep fakes will take humans further out of the loop.

For example, in November, Chinese government–funded researchers published a paper outlining how an AI system could be trained to scan a news article, distil its key points and then post ‘attention-maximising’ online comments. AI optimised for trolling will be able to flood online spaces to drown out authentic voices or give false momentum to extreme views.
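To make the mechanics concrete, here is a minimal, hypothetical sketch of such a pipeline in Python. It assumes the Hugging Face transformers library and generic pretrained models; it illustrates the technique described above, not the researchers’ actual system.

```python
# Hypothetical sketch of automated comment generation: summarise an
# article, then draft a comment seeded by that summary. Model choices
# are the pipeline defaults and purely illustrative.
from transformers import pipeline

summarizer = pipeline("summarization")   # distils the article's key points
generator = pipeline("text-generation")  # drafts a comment from those points

article = open("news_article.txt").read()

# Step 1: condense the article (truncated here to fit model limits).
summary = summarizer(article[:2000], max_length=60, min_length=20)[0]["summary_text"]

# Step 2: generate a comment seeded by the summary.
prompt = f"Comment on this news: {summary}\nComment:"
comment = generator(prompt, max_new_tokens=50)[0]["generated_text"]

print(comment)
```

A system optimised for trolling would add a step that scores candidate comments for engagement and posts the winners at scale, but even this bare-bones sketch shows how little human effort the loop requires.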

Finally, in a world where computer-generated audio-visual and written content is increasingly realistic, we will need to ask ourselves: how will we know whether anything is true? Arguably the biggest risk in the deep-fake age is not manipulated content itself, but the risk that our uncertainty about what is real, and what is doctored, attenuates our ability to trust each other and our institutions.

To avoid this dystopian future, governments and industry can act early across three lines of effort.

First, investing in detection technologies will be key. Unfortunately, detection researchers are unlikely to always win the ‘arms race’ against increasingly sophisticated deep fakes. However, not all detection methods need to rely on technology: new procedures, such as requiring corroborating evidence before businesses or officials act on digital content, may also be needed.
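As an illustration of what a learned detector involves, here is a minimal sketch that fine-tunes a standard pretrained image classifier to distinguish real from manipulated images. The model choice, data layout and hyper-parameters are assumptions for the example, not a production detector.

```python
# Minimal sketch of a learned deep-fake image detector: fine-tune a
# pretrained CNN as a binary real-vs-fake classifier.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Assumes a folder layout like data/real/... and data/fake/...
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("data", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real, fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # a single pass, for illustration only
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

The catch, and the reason the arms race favours attackers, is that a detector like this learns the artefacts of today’s forgery methods and can be undermined by tomorrow’s.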

Second, and even more important than detection, is developing reliable and trusted mechanisms for signalling when information is in fact authentic. Australia will likely need new standards, such as digital watermarks and digital chain-of-custody procedures, to verify that content is legitimate. Businesses can take the lead on these efforts, but governments will have a role, especially in harmonisation.
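One way such chain-of-custody assurance could work in practice is with standard digital signatures: a publisher signs content at the point of capture, and anyone can later verify that it has not been altered. The sketch below uses Python’s cryptography library; key management and distribution, the hard part of any real standard, are assumed away here.

```python
# Minimal sketch of cryptographic content authentication using
# Ed25519 signatures from the `cryptography` library.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Publisher: generate a key pair and sign the content bytes.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = open("video.mp4", "rb").read()
signature = private_key.sign(content)  # travels with the content as provenance

# Consumer: verify the signature against the publisher's public key.
try:
    public_key.verify(signature, content)
    print("Content is authentic and unmodified.")
except InvalidSignature:
    print("Content was altered or was not signed by this publisher.")
```

The signature proves only that the content is unchanged since signing, which is why it must be paired with trusted key distribution and capture-time procedures to amount to a genuine chain of custody.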

Finally, governments have a narrowing window to make meaningful investments in trusted gatekeepers of information, such as independent media, transparency bodies and fact-checking outfits. Digital literacy initiatives will also be critical.

The best responses to deep fakes will be those that address the harms they can cause, rather than attempt to fight the technology itself. In a democracy like Australia, which is connected to a global information environment, we will never be able to identify and neutralise all lies and misinformation, nor should we try.

And if we play our cards right, we won’t need to. Deep fakes are a new, albeit potent, tool for playing old games. The strength of liberal democracies is that we can build resilient institutions, stable legal frameworks, trusted channels of communication and an educated citizenry to minimise the harms that deep fakes can cause. And investment in these things will also boost the health of our democracy overall—something that is always a worthwhile goal.