
Countering deepfakes: We need to forecast AI threats

Posted on September 3, 2024 @ 13:37



Australia needs to get ahead of the AI criminality curve.

Last month, parliament criminalised [1] the use of deepfake technology to create or share non-consensual pornographic material. The legislation is commendable and important, but the government should take further action to address new forms of criminality based on AI and other technology.

As far as possible, we shouldn’t let these new forms surprise us. The government should organise a group of representatives from law enforcement and national security agencies to identify potential or emerging criminal applications of new tech and begin working on responses before people are affected. Functionally, the group would look for the early warning signs and adjust our course well before potential challenges become crises.

The legislation followed recent cases [2] in which Australians, especially young women and girls, were targeted via deepfakes and existing law was found wanting. In the past few years, there have been many incidents of non-consensual pornographic deepfakes affecting students and teachers. Most often, that content is created by young men. Similar cases have occurred internationally.

Deepfake risks were identified years ago. High-profile cases [3] of non-consensual deepfake pornography date back to 2017, when the technology was used to generate sexually explicit content depicting various celebrities; and, in 2019, AI monitoring group Sensity found that 96 percent [4] of deepfake videos were non-consensual pornography. A 2020 ASPI report [5] also highlighted the issue’s national-security implications.

Unfortunately, non-consensual deepfakes are not the only issue. Rapidly developing AI and other emerging technologies have intricate and multiplying effects and are useful for both legitimate and criminal actors. AI chatbots can be used to generate misleading resources for financial investment scams, and image generators and voice clones can be used to create divisive misinformation or disinformation or promote conspiracy theories.

The issue is not that we can’t foresee these challenges. We can and do. The problem is in the lag between identifying the emergent threat and creating policy to address it before it becomes more widespread. Legislative systems are cumbersome and complex—and policymakers and legislators alike are often focused on current challenges and crises, not those still emerging. Bringing together the right people to identify and effectively prepare for challenges is essential to good law enforcement and protecting victims.

Beyond legislation, the government should establish a group of experts—from the Department of Home Affairs, the Attorney-General’s Department, the Department of Education, the National Intelligence Community, the Australian Cyber Security Centre, the eSafety Commissioner and law enforcement agencies. The group’s key role would be to consider how emerging technology can be exploited by criminal and other actors, and how best to prepare against that misuse and protect Australians.

It would need to meet regularly, ideally quarterly, and distribute its assessments at a high level to inform strategic and operational decision-making. Meeting this challenge will also require a whole-of-society approach that draws in experts from academia, think tanks and industry, as well as social workers and representatives of community and vulnerable groups. Each of those groups offers valuable and necessary insights—especially at the coalface—and will be vital in creating change on the ground.

The need for their inclusion is evident from the current challenge of non-consensual deepfake pornography. Reports on the bill highlighted a handful of problem areas, including its effects on young offenders. A significant proportion of this content is created by young people—and, while it is now rightly a crime, the ideal long-term solution lies in preventing, not prosecuting, non-consensual deepfake pornography. The National Children’s Commissioner raised particular concerns that the law could result in higher rates of child incarceration for sharing the material.

The effects of generative AI and other technology in the community are also extensive and harmful below the criminal threshold. The technology is increasingly being used to create fake social media influencers and streamers, or fake online love interests. Users can interact in real time with often highly sexualised or explicit AI-generated content. Harms include the fostering of unhealthy parasocial attachments among vulnerable or socially isolated people—especially young men and boys.

Community and socially focused organisations and individuals will see these challenges far more immediately and clearly than government. Accessing their experience and expertise should be a priority for policymakers.

New technologies will continue to be developed, and they will have an ever greater effect on our lives. We might not always be able to predict specific technological changes, but the challenges they create are not unpredictable. While legislation is important, a proactive approach is crucial.



[1] criminalised: https://www.aph.gov.au/Parliamentary_Business/Bills_Legislation/bd/bd2324a/24bd081

[2] cases: https://www.abc.net.au/news/2024-06-24/schools-on-the-frontline-of-ai-generated-child-abuse-material/104016762

[3] cases: https://www.vice.com/en/article/gal-gadot-fake-ai-porn/

[4] 96 percent: https://medium.com/sensity/mapping-the-deepfake-landscape-27cb809e98bc

[5] report: https://www.aspi.org.au/report/weaponised-deep-fakes