
The struggle for influence and the right to be heard has often been a dangerous and fraught one for women. But the strategies to silence us are getting darker and more sophisticated. Women in 2025, wherever they live, must contend with a growing number of digital threats in their fight to be meaningfully represented in all spheres, including humanitarian and peacekeeping operations.
As Australia’s independent regulator for online safety, eSafety is committed to working with regional partners to build safer digital environments, particularly for women in public-facing roles. Through capacity-building initiatives and targeted training, we’re helping journalists, politicians and human rights advocates across the Indo-Pacific respond to technology-facilitated abuse. But the personal toll of this abuse is escalating. Increasingly, women are questioning the price of participation—and whether they want their daughters to inherit the same burden.
As we mark the anniversary of landmark United Nations Security Council Resolution 1325—the founding document of the Women, Peace and Security agenda—and look to what’s next, we must accept that the online environment is inextricably shaping the dynamics, sentiment and politics of almost all offline interactions. We must also confront the reality that women are in the direct line of digital fire when they advocate for greater protections for women, children and the vulnerable—because they are women.
Women’s participation is essential for sustainable peace solutions. But if we want more women to be architects of global peace and security, we need to confront the tide of malicious online activity seeking to drown out their voices and jeopardise peace and healing.
The imperative for Safety by Design
While technology-facilitated gender-based violence takes many forms, its themes tend to revolve around the sexualisation of women. This includes threats of rape; assertions that women are inferior; and an obsession with appearance, fertility and traditional family roles. Tactics range from cyberattacks and rape threats to stalking, exposing personal information (doxing) and image-based abuse. Disinformation and conspiracy theories thrive in unstable environments and fill online information vacuums, fostering mistrust. The cumulative intent is painfully clear: to intimidate women back into the shadows, off the frontlines and out of public life.
One of the fastest-growing harms we’re seeing is explicit and sexualised deepfakes, which almost always feature women and girls. We predicted this threat in our 2020 issues paper and, five years later, reports of the issue from investigators in our image-based abuse scheme are increasingly common. Governments around the world are beginning to address this gendered harm. The United States’ Take It Down Act, for example, ‘criminalises the nonconsensual publication of intimate images’, including deepfakes, and requires platforms to remove such images. While these global actions are heartening, the tech sector itself must do more.
An obvious starting point for understanding online gender-based violence is to examine the DNA of the industry itself. It was principally established, and remains largely dominated, by men. While there’s no doubt that many people in the industry are motivated to harness tech for good, this gender gap may be blinding companies to how humans could exploit their designs. In short, there’s simply no lived experience reflected in the engineering. And when you zoom out, it’s fostering an industry-wide myopia about how technology is used as a vector for violence.
The language of hate, misogyny and violence can be coded and nuanced, particularly in conflict zones. Context matters. Seemingly imperceptible cultural and linguistic signals matter. With the evisceration of the industry’s trust and safety teams worldwide, and the rolling back of policies aimed at addressing harmful content, these companies have lost the valuable local knowledge needed to expertly identify and respond to serious harms playing out on their platforms.
Since 2018, we’ve been advocating for industry to apply our Safety by Design principles across the system and product lifecycle to assess these risks and help prevent some of the most egregious harms. But our calls are more often answered with lip service than meaningful measures to remediate harm. In fact, we’ve seen a regression in safety protections, especially over the past two years.
The chilling feedback loop: when abuse is magnified by generative AI
Since ChatGPT burst onto the scene in 2022—reaching 1 million users in just five days—a wave of open-source tools has followed, enabling anyone to generate not just text but hyper-realistic images and videos at the click of a button. These apps are cheap and no longer require extensive technical expertise, with few guardrails preventing weaponisation. Beyond the chaos and misinformation this tech can sow during humanitarian operations, there are huge implications for free and fair elections and the vitality of democracy.
The opacity of generative AI development and deployment is also deeply problematic, with little known about the complex processes that govern it. This is especially true for large language models. We have virtually no information about how the data inputs are weighted and balanced, raising important questions about how they could reinforce outdated or narrow gender norms.
As the pace of generative-AI adoption accelerates, it’s more important than ever for companies to embed Safety by Design principles, creating platforms equipped with robust, survivor-centric reporting tools.
Resetting the course of online history
Addressing technology-facilitated gender-based violence is central to ensuring women’s full and equal participation in all sectors and spheres. The next 25 years must not simply be about building on the legacy of Security Council Resolution 1325; they must be about scaling it for a digital age to bolster, rather than hinder, humanitarian and peace-building efforts. The range of online harms canvassed here can significantly undermine missions by eroding trust, inflaming tensions, and disrupting coordinated action.
Australia is paving the way in mitigating this range of digital harms through proportionate legislation. Using our powers under the Online Safety Act, eSafety is holding industry to account and taking on some of its biggest players. But we can’t do this alone.
As we look to 2050, partnerships such as the Global Online Safety Regulators Network (of which eSafety is a founding member) and the Global Partnership for Action on Gender-based Online Harassment and Abuse have the potential to be powerful defenders of human rights online, starting with ensuring personal safety in the digital world. But we need to swell our ranks with even more human rights-respecting regulators, unafraid to take on a powerful tech sector that is more focused on profits than safety and wellbeing.