
Deepfake technology is weaponising artificial intelligence in a way that disproportionately targets women, especially those working in public roles, compromising their dignity, safety, and ability to participate in public life. This digital abuse demands urgent global action: it not only infringes on human rights but also undermines women's democratic participation.
Britain's recent decision to criminalise explicit deepfakes is a significant step forward. It follows similar legislation passed in Australia last year and aligns with the European Union's AI Act, which emphasises accountability. However, regulation alone is not enough; effective enforcement and international collaboration are essential to combat this growing and complex threat.
Britain's move to criminalise explicit deepfakes, as part of the broader Crime and Policing Bill to be introduced to Parliament, marks a pivotal step in addressing technology-facilitated gender-based violence. It responds to a 400 percent rise in deepfake-related abuse since 2017, as reported by Britain's Revenge Porn Helpline.
Deepfakes, which fabricate hyper-realistic content, often target women and girls, objectifying them and eroding their public engagement. By criminalising both the creation and sharing of explicit deepfakes, Britain's law closes loopholes in earlier revenge porn legislation. It also places stricter accountability on platforms hosting these harmful images, reinforcing the message that businesses must play a role in combatting online abuse.
The EU has taken a complementary approach by introducing requirements for transparency in its recently adopted AI Act. The regulation does not ban deepfakes outright but mandates that creators disclose their artificial origins and provide details about the techniques used. This empowers consumers to better identify manipulated content. Furthermore, the EU’s 2024 directive on violence against women explicitly addresses cyberviolence, including non-consensual image-sharing, providing tools for victims to prevent the spread of harmful content.
While these measures are robust, enforcement remains a challenge: national laws are fragmented, and deepfake abuse often transcends borders. The EU is working to harmonise its digital governance and promote AI transparency standards to mitigate these challenges.
In Asia, concern over deepfake technology is growing in countries such as South Korea, Singapore and especially Taiwan, where it not only targets individual women but is increasingly used as a tool for politically motivated disinformation. Similarly, in the United States and Pakistan, female lawmakers have been targeted with sexualised deepfakes designed to discredit and silence them. Italy's Prime Minister Giorgia Meloni faced a similar attack but successfully brought the perpetrators to court.
Unfortunately, many countries still lack comprehensive legislation to combat deepfake abuse effectively, leaving individuals vulnerable, especially those without the resources and support to fight back. In the United States, for example, the Disrupt Explicit Forged Images and Non-Consensual Edits (Defiance) Bill and the Deepfake Accountability Bill remain stalled in the legislative pipeline.
Australia offers a strong example of legislative action. It faces similar challenges, with deepfake abuse targeting victims ranging from underage students to politicians and creating a chilling effect on women's activity in public life. Such abuse not only violates individual privacy but also deters other women from engaging publicly and pursuing leadership roles, weakening democratic representation.
In August 2024, Australia passed the Criminal Code Amendment (Deepfake Sexual Material) Act, penalising the non-consensual sharing of sexually explicit material, including deepfakes.
Formulating legislation is only the first step. To address this issue effectively, governments must enforce these regulations while ensuring that victims have accessible mechanisms to report abuse and seek justice. Digital literacy programs should be expanded to equip individuals with the tools to identify and report manipulated content. Schools and workplaces should incorporate online safety education to build societal resilience against deepfake threats.
Simultaneously, women's representation in cybersecurity and technology governance must increase: their participation in shaping policies and technologies ensures that the gendered dimensions of digital abuse are adequately addressed.
Although Meta recently decided to cut back on fact-checking, social media platforms need to be held to account for hosting and amplifying harmful content. Platforms must proactively detect and remove deepfakes while maintaining transparency about their AI applications and data practices. The EU AI Act's transparency requirements serve as a reference point for implementing similar measures globally.
Ultimately, addressing deepfake abuse is about creating a safe and inclusive online space. As digital spaces transcend borders, the fight against deepfake abuse must be inherently global. Countries need to collaborate with international partners to establish shared enforcement mechanisms, harmonise legal frameworks and promote joint research on AI ethics and governance. Regional initiatives, such as the EU AI Act and the Association of Southeast Asian Nations' guidelines for combatting fake news and disinformation, can serve as models for building capacity in nations that lack the expertise or resources to tackle these challenges alone.
In a world where AI is advancing rapidly, combatting deepfake abuse is more than regulating technology—it is about safeguarding human dignity, protecting democratic processes and ensuring that everyone, including women, can participate in society without fear of intimidation or harm. By working together, we can build a safer, more equitable digital environment for all.