
Deepfakes are no longer fringe curiosities; they are an active threat to public trust. To combat them, Australia and its partners should fortify the infrastructure that supports democracy.
In Britain this year, AI-generated deepfakes flooded social media. One showed former prime minister Rishi Sunak proposing to send ‘18-year-olds to active war zones’, while another depicted Prime Minister Keir Starmer using expletives towards staff. Both clips spread widely, blurring fact and fabrication in real time and making them difficult to discredit quickly. Former home secretary James Cleverly has warned that malign state actors, including Russia and Iran, are already weaponising such tools to disrupt elections.
Australia’s social media feeds draw from the same global platforms as Britain’s. What corrodes confidence in London can do so in Canberra. These episodes warn that trust is being rerouted away from democratic institutions and towards opaque technological systems that are neither secure, neutral nor reliably truthful.
This redirection of trust is destabilising democracy’s foundations. Citizens once relied on governments, scientific experts and independent media as arbiters of truth. Now, anti-vaccination conspiracies circulate in WhatsApp groups, robocalls impersonate presidents, and deepfakes mimic trusted journalists. The outcome is not just confusion but corrosion: people grow unsure of whom to believe, or whether anyone can be believed at all.
Into this vacuum step technologies that present themselves as frictionless and authoritative. The internet of things has woven connected devices into homes, workplaces and public spaces—from smart speakers and vehicles to children’s toys. AI systems respond to queries with mechanical confidence. AI-generated news digests often pull from Wikipedia, Reddit or YouTube rather than vetted journalism, and users are increasingly likely to accept these first responses as definitive rather than interrogating those sources.
My doctoral research found that internet-of-things systems expand the threat landscape across physical, virtual and legal domains. Yet the more profound risk is not technical: these devices quietly reshape beliefs, echoing dubious perspectives with the seductive certainty of a machine. Malign actors grasp the implications. To destabilise democracies, they need not hack ballot boxes; it is enough to inject disinformation until voters doubt that their voices matter, or disengage altogether. Russia has already sought to poison AI training data with propaganda. The battlefield is no longer just digital; it is cognitive.
This is why, just as power grids and water plants are protected as critical infrastructure, so too must we shield democratic infrastructure—the institutions that sustain public trust and enable national resilience. The term covers sectors essential to the maintenance of democracy that fall outside the economic and security designations of ‘critical national infrastructure’. Democratic infrastructure encompasses a free press; a just legal system; accessible and (relatively) unbiased education, including public libraries; and other such mechanisms through which public trust is built—and can be threatened.
AI hallucinations, microtargeted propaganda and insecure devices in the internet of things are often treated as discrete cyber risks. A strategic term such as ‘democratic infrastructure’ could help policymakers and the public understand that these are in fact interconnected assaults on trust.
In July, the House of Lords Communications and Digital Committee published a report on media literacy, warning that the current deficit in critical digital skills, amid these converging technological sources, is a direct risk to social cohesion and democracy. Meanwhile, established credible sources such as traditional media outlets are battling cyberattacks on their platforms and deepfakes designed either to capitalise on or to discredit their trusted status, all while contending with the loss of human audiences.
Any credible technology security framework therefore needs to expand beyond a purely technical protection model and recognise the democratic infrastructure under threat. Public consumption of news is increasingly decentralised and vulnerable to foreign influence. Traditional media outlets are regulated for ownership and standards; modern digital platforms require equivalent oversight. Safeguards must ensure citizens are not exposed to inauthentic or manipulative content designed to distort opinion or advance state-sponsored foreign agendas.
Some governments have begun to act, but their responses remain fragmented. Australia’s 2021 News Media Bargaining Code rebalanced financial power between platforms and publishers, but it was framed as competition law, not democratic protection. Britain’s Online Safety Act compels platforms to remove harmful content, but a parliamentary report published in July warned it failed to address how experimental features such as AI overviews amplified corrosive information to mass audiences. Both are important steps, but neither confronts the systemic nature of information operations.
We need a coherent strategy that treats truth and trust as national security assets. This means protecting independent media from cyberattacks; investing in media literacy as inoculation against disinformation; and regulating technology companies for competition, consumer safety and their effect on democratic resilience.
Trust, once lost, is painfully difficult to restore. If democracies do not defend their own infrastructure of trust, adversaries will exploit the vacuum. The future of democracy will depend not only on securing our grids and pipelines, but on safeguarding the fragile networks of trust that hold societies together.