The new struggle for truth in the era of deepfakes
12 Nov 2020

A billboard firm cleverly lures clients with a simple slogan on an otherwise blank canvas—unsee this.

Its genius rests on a human trait: we can’t unsee, unhear or unsmell anything. Our senses are primordial devices programmed to extract millions of data points every second, most of them, at some level, novel. Yet the brain can sort and analyse only around 50 of those points per second in order to assess a possible response.

Research suggests we make more than 200 decisions just about food every day. Because the brain also chews through calories, as animals we favour simple responses that maximise our energy reserves. At some level, we hunger to be told what to do, and believe, because it’s much less tiring than overthinking.

Propaganda, whether purveyed by Joseph Goebbels, Procter & Gamble or Mills & Boon, is calibrated to overcome complex decision-making. Seeing is believing, a picture is worth a thousand words, and repetition and sloganeering make it stick.

Similarly, conspiracy theories satisfy our human biological platform, feeding our desire for simplicity rather than complexity. They are sticky, too. One of the stickiest is the anti-Semitic screed The Protocols of the Elders of Zion, fabricated in Tsarist Russia around 1903, and still a hot favourite.

In the recent confessional Netflix documentary The Social Dilemma, some progenitors of the social media revolution explain that it was calculated to tease and slake our thirst for the novel, addictive and gossipy. Clickbait is exactly what its name describes: small pieces of tempting information that attract our clicks, hooking us and generating advertising revenue for Google and Facebook.

Moreover, research from the Massachusetts Institute of Technology suggests that fake news spreads six times faster than the truth. A drunk Nancy Pelosi makes for compelling ‘news’ that returns greater advertising revenue than a sober House speaker going about the same dull routine of law-making and politicking. One wins plenty of clicks; the other doesn’t.

Pelosi was not drunk in that viral video of May 2019. The quick discovery that a malicious garden-shed-conservative propagandist had slowed the video to slur her speech made no difference. Slander sticks, gossip is viral, and repetition of another ‘drunk’ Pelosi video a year later reinforced the original for those who believed that she was, regardless of what was likely, let alone true.

As an instinctive huckster, President Donald Trump was well suited to the era of fake news, with his mercurial temperament, lurid sense of proportion, and sledgehammer approach to bedrock political traditions that once seemed perpetual. As his grip on the political bullhorn dwindles, the world is left assessing the damage of four years of distorted reality that have shaken the foundations of convention.

But before liberal states have fully grasped the corrosive effects of fake news—or legislated to rein in the social media behemoths that trade in it—we are on the cusp of the ‘deepfake’ era that will make the past four years seem as quaint as the cinematic effects of Woody Allen’s 1983 mockumentary Zelig, in which the chameleon-like Jewish imposter Zelig supports Adolf Hitler at a Nazi rally and peers from a Vatican balcony behind Pope Pius XI.

Artificial intelligence has supercharged the ability of amateurs to acquire the voice and image of anybody who has been recorded, and to recompose voice and image into entirely fake video sequences. It’s an emanation of the so-called fourth industrial revolution that is embedding hyper-technology into every particle of our lives. It promises a gamed-up world, in which the boundaries of reality are befuddled by a propaganda Pandora’s box in the palm of every hand. Welcome to the ‘infocalypse’.

At this point, the efforts remain reasonably juvenile, and detectable with complex software. But the author of a recent book on deepfakes reckons that within a year, anybody with a mobile phone will be able to recreate and improve on the de-ageing effects applied to Robert De Niro and Al Pacino in Martin Scorsese’s 2019 movie The Irishman. On film, that effect was the result of hundreds of technicians, millions of dollars and a year of work.

For an introduction to deepfakes, or ‘synthetic media’, the makers of the satirical cartoon South Park have conjured Sassy Justice, a new series that premiered just last month. It’s a radically technologically updated version of Zelig. Deeply funny, this deepfake show is also a portent of disaster, a time when we can no longer believe our eyes as willingly as we do today.

In the video, Donald Trump has been transformed into a satin-bewigged effeminate reporter from a news station in Cheyenne, Wyoming. Mark Zuckerberg, Julie Andrews, Jared Kushner, Ivanka Trump and Michael Caine have similarly been AI-shanghaied (presumably without their permission). What the show illustrates is the potential of this seminal technological revolution.

The entertainment possibilities are boundless. Paul Robeson can be brought to life as the new James Bond. Jim Morrison will join a mashed-up K-Pop tour. There’ll be remixes of Charlie Chaplin supporting #MeToo and #BLM, Amelia Earhart spruiking space flights for Elon Musk, and Maria Callas singing duets with Taylor Swift. In fact, there’ll be no need for actors, and with a few swipes on our mobile device each of us will be able to star in Titanic—or Gaslight.

Then there’s the bad stuff. If you think that social media stokes teenaged anxiety, there is worse to come. AI has already been deployed in what the creators of one app claimed was harmless fun to strip the clothing from any photographic image of a female figure. As if that’s not bad enough, in this new dystopia a photo can feasibly be lifted from your child’s Instagram and transformed into a fully fledged pornographic video. Try unseeing that.

When writing my own book about the war in Sri Lanka, I relied on forensic reconstruction of open source audio-visual evidence with electronic fingerprints that left no doubt as to the provenance of evidence. Yet the weak link in any investigation is always doubt. In theory, AI can simply retool itself to avoid forensic detection. The implications for judicial and investigative processes are convulsive.

Identity theft is already a multibillion-dollar industry that financial institutions spend billions fighting against. An American widow was recently duped out of almost $300,000 in the course of a romance conducted entirely over Skype by an imposter posing as an American admiral. Your Nigerian scammer need no longer use their own voice, but instead can target the elderly with reconfigured voice recordings of their children lifted from Instagram grabs and reworked according to script: ‘Mother, can you wire me $10,000?’

Now contemplate the videos, photos and voice recordings that constitute the evening news, and the extent to which they drive political and social discourse and spark royal commissions, resignations of ministers and revolutions.

To conjure just one random unsettling example, an academic recently floated the disbanding of Australia’s special forces due to allegations of criminal misconduct in Afghanistan, sourced from video evidence. The rules-based order has been a central tenet of Australia’s foreign policy, and we like to think that we take our obligations under international law seriously.

Imagine, momentarily, a new deepfake body-cam video sequence showing Australian troops beheading Afghan civilians and desecrating the Koran. Grainy footage would do. Its release might ruin trade with the Middle East, lead to the killing of Australians and shatter agreements with nations such as Indonesia, as well as swing public pressure to disband the special forces.

The Russians are still best at using information wedges. After a brief pause in the 1990s, Russian disinformation changed tack. Coupled with the internet, captive social media audiences and the smartphone, Russia exchanged parsimonious Cold War ops for ‘flooding the room’ (a favourite play of Trump’s, deployed in his first debate against Joe Biden). The value proposition was proven in the manipulation of the 2016 presidential election. Simmering US culture wars did the rest.

Pluralistic societies are being shaken by information-revolution developments eroding our resilience. There is nothing new about political gossip, or the use of new technology for pornography or fraud, or companies making money, or adversary countries seeking an edge. What is new is the speed, scale, force multiplication and challenge to our singular human psychology. To quote Trump, a.k.a. Fred Sassy, ‘As human beings, we all rely on our eyes to determine reality.’

So what can we do when our eyes are no longer a measure of perception? When our senses are drowned in a flood of dubious images? How will we make truth more resilient in order to maintain stable governance, trust in institutions and faith in the evening news? Here are four suggested solutions.

1. Strengthen the gatekeepers. When the internet arrived, it seemed that everybody could be a journalist. But like it or not, there are hierarchies of competence. Iconic media organisations governed by public values are a vital element of liberal democracy. Public broadcasters should be boosted, and their reach expanded, not defunded at a time when competitor nations like Russia and China are expanding their media reach.

2. Legislatively decouple Facebook and Google from their clickbait-driven profit bases, because whatever the cost to shareholders cannot compare with the social, political and economic losses to society at large. Clickbait algorithms destroy advertising revenue streams that fertilise the small-town journalism that is the bedrock of the media’s oversight and investigative role.

3. Legislate to protect the role of the media. According to the Alliance for Journalists’ Freedom, Australia is the weakest of the Five Eyes alliance countries when it comes to protecting the media (a weakness that accounts for the raids on the ABC by the Australian Federal Police that made headlines around the world). These protections ought to include media freedom laws, a public interest defence in defamation, the protection of journalists’ data, protection for whistleblowers, and a public-interest test in matters of national security.

4. Build a farm-to-table system for news. All news and information must be traceable to sources so that hierarchies of competence can be established, and rated on a scale of indicators. Information needs an evidentiary chain, or a genealogy for consumers to establish ‘truth’ to their satisfaction. The security of such a system is complex and expensive, but possible with blockchain technology, multilateral R&D and shared purpose between like-minded nations.
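The evidentiary chain this proposal describes can be sketched in miniature as a hash chain, the core mechanism underlying blockchain ledgers: each news record carries a cryptographic hash of the record before it, so altering any earlier item invalidates every later link. The record fields and function names below are illustrative assumptions, not any existing standard or system.

```python
import hashlib
import json

def record_hash(body: dict) -> str:
    """Deterministic SHA-256 over a record's canonical JSON form."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_item(chain: list, source: str, content: str) -> None:
    """Append a news item, linking it to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"source": source, "content": content, "prev_hash": prev}
    record = dict(body, hash=record_hash(body))
    chain.append(record)

def verify_chain(chain: list) -> bool:
    """Recompute every link; a tampered item breaks all downstream hashes."""
    prev = "0" * 64
    for record in chain:
        body = {k: record[k] for k in ("source", "content", "prev_hash")}
        if record["prev_hash"] != prev or record["hash"] != record_hash(body):
            return False
        prev = record["hash"]
    return True

chain = []
append_item(chain, "wire-service", "Minister resigns over leaked video")
append_item(chain, "tv-network", "Rebroadcast, credited to wire-service")
assert verify_chain(chain)            # intact provenance verifies
chain[0]["content"] = "Doctored"      # tamper with the origin...
assert not verify_chain(chain)        # ...and verification fails
```

A production system would need distributed consensus, signed records and key management on top of this, which is where the cost and complexity the author notes comes in; the sketch only shows why tampering anywhere in the genealogy is detectable everywhere downstream.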