Protecting truth in the era of AI mediation

The ritual is now familiar. A user summons Grok, the AI assistant built into the social media platform X, into a political argument. Grok gives a mainstream, citation-driven answer. Instead of settling anything, it becomes fresh ammunition: one side posts it triumphantly; the other turns its outrage on the AI, accusing it of bias, censorship or foreign influence. The thread then descends into a lengthy back-and-forth between users and the machine, often more hostile than the original disagreement. This reflects a broader pattern in online conflict: AI systems are instrumentalised not for learning but to generate content for performative outrage.

If governments, platforms and users fail to recognise this as an emerging information-security problem, we shouldn’t be surprised when our newest referee becomes an active participant in an already-fragile contest over reality. This AI-mediated epistemic conflict is no longer speculative; it’s already here.

It is also observable at scale. One independent analysis identified around 370,000 publicly discoverable Grok conversation URLs indexed by search engines. That dataset is not necessarily representative, and is likely an undercount because it excludes private and unindexed interactions, but it is large enough to reveal consistent behavioural trends: multi-turn political arguments, heavy use of national-security prompts and an emerging genre of users screenshotting their arguments with Grok to perform indignation for followers.

This behaviour is unfolding within an information environment defined by high uptake, low trust and eroding common baselines for assessing truth. The Reuters Institute’s 2025 Generative AI and News Report, surveying roughly 12,000 consumers across six countries, found that only 6 percent use generative-AI systems for news each week. And although 77 percent report consuming news daily, only 19 percent say they encounter AI-labelled content daily, and 28 percent weekly. Use is rising, but AI remains a minority gateway to news, and one that many approach cautiously.

The institute’s 2025 Digital News Report, based on nearly 100,000 respondents across 48 countries (typically around 2,000 per market), shows that for the first time more Australians name social media (26 percent) as their main source of news than any traditional outlet, with online news sites chosen by just 23 percent. Yet just one percent nominate AI chatbots as their primary news source. AI is increasingly woven into Australian and global news-media environments but still floats on top of feeds dominated by creators, partisan commentary and algorithmic outrage rather than professional journalism.

The quality signal is equally worrying. This year, the European Broadcasting Union and the BBC evaluated AI assistants’ answers to 3,000 news-related questions in 14 languages. They found significant factual errors in 45 percent of responses, and broader problems, including outdated data, sourcing flaws or incoherence, in around 81 percent. For national-security debates, where precision and sourcing matter, these error rates represent a structural vulnerability.

Public sentiment reinforces this. A Pew survey of 5,023 US adults in June found 50 percent were more concerned than excited about AI’s growing role in daily life, compared with just 10 percent who were more excited than concerned. Fifty-seven percent rated the societal risks of AI as high, while only 25 percent rated the benefits as high. Across 25 countries, a median of around 55 percent trust their own government to regulate AI—around two-thirds in Australia—but far fewer trust foreign governments or global tech companies. Separate US polling has shown that 78 percent expect AI misuse to influence elections, while 58 percent believe AI will increase misinformation.

Layer these numbers over the Grok fights playing out daily, and a pattern emerges: tens of millions are using AI assistants; a small but growing proportion are using the technology for news; nearly half of AI-generated news answers contain significant errors; and majorities already assume AI will worsen misinformation. In that environment, every Grok answer on the Chinese Communist Party, AUKUS or Australian defence spending becomes a potential lightning rod. When the answer affirms a user’s worldview, it is posted as objective proof. When the answer contradicts that worldview, the user attacks the machine and often leaves the interaction even more entrenched.

Into this ecosystem steps the rage influencer: the self-appointed geopolitical analyst with no formal training, qualifications or methodological grounding. In the absence of guardrails, these influencers can claim the title of researcher or analyst, build an audience and launch personalised culture-war arguments dressed up as objective analysis. When qualified people challenge them, the influencers cry elitism or censorship. They advance a deeper narrative that expertise is inherently compromised: contaminated by academic training, corrupted by funding or skewed by discipline and methodology. In this framing, qualifications are not evidence of competence but proof of bias. This mirrors the logic of hostile information operations, in which the strategic objective is not persuasion but erosion of the very criteria by which trust is assessed. It would almost be satire if the effects on public understanding and national-security discourse were not so corrosive.

This is the post-truth environment into which AI is now embedded. The issue is no longer simply disagreement about facts, but rather the collapse of any shared infrastructure for establishing them. Epistemic security—the integrity of the processes through which societies determine what is true—is under strain. Traditional media, universities, public broadcasters and now AI systems are recast as partisan actors. That’s why watching people argue with Grok feels symptomatic: the debate has drifted from ‘what is true?’ to ‘who is allowed to define truth?’

There are clear policy implications. Platforms need to stop marketing AI assistants as neutral arbiters and instead recognise them as high-impact mediators, and indeed emerging attack surfaces, in a contested information battlespace. That requires independent threat simulations of political and strategic prompts; transparent documentation of error rates and training-data limitations; and clear labelling when AI-generated content is injected into public threads. Given their scale and potential harms, these systems should be treated as critical information infrastructure.

Governments, including Australia’s, need to approach AI-mediated discourse as an epistemic security issue. A society where millions argue with bots about defence policy and walk away believing the bot is part of a hostile plot is a society vulnerable to manipulation—whether by targeted prompting, bot networks amplifying ‘AI bias’ narratives, or opportunistic exploitation of every AI misfire.

Finally, analysts and citizens need to recalibrate expectations. AI can accelerate reasoning and help retrieve information, but it’s not a substitute for judgment. The strategic task is rebuilding the human institutions that make democratic decision-making possible: independent journalism, academic method, transparent governance and accountable public debate. AI should sit within that ecosystem as a fallible tool—not as a new, infallible arbiter of truth.