Law enforcement and social media platforms must implement real-time data sharing to stop online extremism before it leads to violence. With appropriate safeguards, we can achieve this without creating a surveillance state.
Social media companies hold vast behavioural data, but their reluctance to share it with authorities means we’re left scrambling after an attack occurs. The resulting delay facilitates radicalisation and puts lives at risk. The current, reactive system is failing: rather than responding after attacks, we should aim to prevent harm through a coordinated, data-driven approach. Speed matters. Privacy concerns are valid, but when the stakes are this high, we need to ask: how many more lives are we willing to risk?
Extremist groups exploit unregulated online spaces to recruit, radicalise and incite violence. By the time we detect it, it’s often too late. We’ve seen the deadly consequences: shootings, terrorism and violence facilitated through social media. Social media companies like to claim they are neutral platforms, but they control the algorithms that amplify content, creating an environment where radical ideas can thrive.
Take the 2019 Christchurch mosque shootings, for example. The shooter posted his manifesto on Facebook and 8chan (an online message board) before killing 51 people. Although Facebook moved quickly to remove the manifesto, the content spread to thousands. Yet his interactions with extremist groups and his violent posts could have been flagged long before the attack. Had those signals been shared with law enforcement immediately, authorities could have detected his extremist behaviour early and intervened.
Social media platforms must be more proactive in identifying extremist content and sharing it with authorities immediately. Delayed intervention leaves room for radicalisation. This is compounded by algorithms that prioritise content likely to generate engagement—likes, shares and comments. Extreme content, which often elicits strong emotional reactions, is amplified. Conspiracy theories, such as QAnon, spread widely on online platforms, drawing users deeper into radical echo chambers.
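To make that amplification dynamic concrete, here is a deliberately simplified sketch of engagement-weighted ranking. The weights and post attributes are hypothetical and do not reflect any platform’s actual algorithm; the point is only that a feed optimised for reactions will rank the most provocative post first.

```python
# Illustrative only: a toy engagement-weighted ranking, not any platform's real algorithm.
# Posts that provoke the most reactions rise to the top, regardless of what they say.

posts = [
    {"id": "a", "likes": 120, "shares": 10,  "comments": 15},
    {"id": "b", "likes": 80,  "shares": 200, "comments": 340},  # outrage-bait: heavy shares and comments
    {"id": "c", "likes": 300, "shares": 5,   "comments": 8},
]

# Hypothetical weights: shares and comments (stronger reactions) count more than likes.
def engagement_score(post):
    return post["likes"] * 1.0 + post["shares"] * 3.0 + post["comments"] * 2.0

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])  # prints ['b', 'c', 'a']: the most reaction-provoking post ranks first
```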
This isn’t about mass surveillance—it’s about content moderation. This approach should build on existing moderation systems. Authorities should only be alerted when certain thresholds of suspicious activity are crossed, much as financial institutions report suspicious transactions. For example, if activity suggests a user is being recruited by a terrorist group, or if the user shares plans for violence, social media companies should have the ability—and in fact the responsibility—to flag this behaviour to authorities.
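One way to picture that threshold mechanism is as a running risk score per account, with nothing reported until the score crosses a cut-off, much as a suspicious-transaction report is only filed once set criteria are met. The signal names, weights and threshold below are placeholders invented for illustration, not proposed values.

```python
# Sketch of threshold-based flagging, assuming hypothetical signal weights and a cut-off.
# Nothing is reported until the accumulated risk for an account crosses the threshold.

SIGNAL_WEIGHTS = {
    "shared_violent_content": 40,
    "contacted_by_known_extremist_account": 30,
    "posted_attack_planning_language": 60,
}
REPORT_THRESHOLD = 80  # hypothetical cut-off for escalation

def assess_account(signals):
    """Return a risk score and whether it meets the reporting threshold."""
    score = sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)
    return score, score >= REPORT_THRESHOLD

score, should_flag = assess_account(
    ["contacted_by_known_extremist_account", "posted_attack_planning_language"]
)
if should_flag:
    print(f"Risk score {score}: queue for human review before any report is made")
else:
    print(f"Risk score {score}: below threshold, no data leaves the platform")
```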
Of course, automated content detection can result in misjudgements. This is where human content moderators within social media companies could play a role: once an automated system flags potentially harmful activity, it could trigger a review by an employee who would assess whether the flagged behaviour meets a threshold for real-time sharing with law enforcement. If the content is likely to incite violence or indicate a credible threat, the moderator could initiate real-time data sharing with authorities for possible intervention.
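A rough sketch of that two-stage pipeline, with a human decision gating any disclosure, might look as follows; the class names, decision labels and helper functions are invented for illustration, not drawn from any existing system.

```python
# Sketch of the human-in-the-loop escalation step: an automated flag only reaches
# law enforcement if a moderator confirms a credible threat. All names are illustrative.

from dataclasses import dataclass
from enum import Enum, auto

class Decision(Enum):
    DISMISS = auto()           # false positive, no further action
    REMOVE_CONTENT = auto()    # policy violation, but no credible threat
    ESCALATE = auto()          # credible threat: share with authorities

@dataclass
class FlaggedActivity:
    account_id: str
    risk_score: int
    evidence: list[str]

def share_with_authorities(item: FlaggedActivity):
    # Placeholder for a secure, audited disclosure channel.
    print(f"Escalating {item.account_id} (score {item.risk_score}) to law enforcement")

def moderator_review(item: FlaggedActivity, decision: Decision):
    if decision is Decision.ESCALATE:
        share_with_authorities(item)   # real-time disclosure only after human sign-off
    elif decision is Decision.REMOVE_CONTENT:
        print(f"Removing content for {item.account_id}; nothing shared externally")
    else:
        print(f"Dismissed flag for {item.account_id}")

moderator_review(
    FlaggedActivity("user-123", 90, ["posted_attack_planning_language"]),
    Decision.ESCALATE,
)
```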
This verification process could be among the safeguards in place to ensure that only high-risk, potentially harmful activities are flagged, protecting the privacy of those who don’t present a threat and preventing concerns arising about the government creating a surveillance state. Shared data would follow appropriate legal channels, ensuring transparency and accountability.
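Accountability could also be supported by recording every disclosure in an audit log that oversight bodies can inspect. The record fields below are one possible shape, assumed purely for illustration rather than drawn from any legal framework.

```python
# Sketch of an audit record written for every disclosure, so oversight bodies can
# later verify that each share was justified. Field names are illustrative.

import json
from datetime import datetime, timezone

def record_disclosure(account_id, risk_score, reviewer_id, legal_basis):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "account_id": account_id,
        "risk_score": risk_score,
        "reviewer_id": reviewer_id,    # which moderator authorised the escalation
        "legal_basis": legal_basis,    # e.g. the statutory provision relied on
    }
    with open("disclosure_audit_log.jsonl", "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

record_disclosure("user-123", 90, "moderator-42", "hypothetical reporting provision")
```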
The costs of implementing real-time data-sharing systems are manageable. Social media platforms already use automated systems for content moderation, which could be adapted to flag extremist behaviour without significant additional staffing costs. Financial responsibility could also be shared: law enforcement agencies could receive funding to process flagged data, while tech companies would pay for the technology needed to detect extremist activity. By prioritising high-risk platforms and scaling the system up over time, we can manage implementation costs and focus resources where they’re most needed.
A limitation is that Australia could not impose this mechanism on platform operators with no presence in the country. But the operators of the larger platforms, such as Meta, X and Snap, do have a local presence.
Our current reactive approach isn’t working. We need real-time data sharing between tech companies and law enforcement to intercept threats before they escalate. Lives are at stake, and we can’t afford to wait for the next tragedy.