
It used to be so simple. An intelligence officer could fly to a country, change passports and, with a false identity, emerge as a completely different person. But those days are long gone. Biometrics and facial recognition technologies can easily detect people travelling on false identities. And even if you can travel on false documents, a simple Google search will expose your lack of an online profile and digital legend.
Solving this problem is a generational challenge for defence and intelligence agencies. As technology rapidly develops—including encryption, smart cities and generative AI—security agencies and defence communities have a golden window to research, develop and build new identity technologies. Failure to develop the right technology now will change espionage forever.
There are two related capability challenges: creating genuine-looking false profiles for offensive intelligence operations and, conversely, developing the ability to detect artificially generated profiles and identify who is behind them. This is a classic poacher-turned-gamekeeper challenge that domestic and foreign intelligence agencies must navigate: they must develop the capabilities to catch the bad guys while simultaneously devising ways to defeat those same investigative technologies.
In a world where Google, Meta, commercial data brokers and even your bank know a lot more about you than the government does, creating a false identity is hard. However, governments retain one big advantage: they are the authority on, and ultimate backstop for, authenticating a person's identity, typically by validating birth certificates and passports.
Australia, like many countries, has embraced technology as a partial solution to the identity-authentication challenge. It adopted an enhanced tax file number system in 1988 following the defeat of the then government's attempt to introduce a national identity card. Backed by voiceprint recognition for authenticating identity, the tax file number is now used for data matching across much of the Australian government.
Fraudsters, adulterers, government agents and police forces are perhaps the only people with an overwhelming need to develop convincing deepfake technologies. An arms race is developing. As companies build more security and authentication into generative-AI tools (such as embedded watermarks and improved deepfake-detection tools), it becomes harder to use those tools nefariously. Even if you can spoof official forms of identification (as Russian sleeper agents were recently shown to have done via South American countries) and generate a realistic digital twin, your online profile, or the lack of one, will still give you away.
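To see why this is an arms race, consider how fragile a naive watermark is. The sketch below is a toy illustration in plain Python, not any vendor's actual scheme (real generative-AI watermarks are statistical and more robust, though they face the same removal pressure): it hides bits in pixel least-significant bits, then shows mild re-encoding noise destroying them.

```python
import numpy as np

def embed_watermark(img: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide watermark bits in the least significant bit of the first pixels."""
    out = img.copy()
    flat = out.reshape(-1)                      # view into the copy
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return out

def extract_watermark(img: np.ndarray, n_bits: int) -> np.ndarray:
    return img.reshape(-1)[:n_bits] & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
mark = rng.integers(0, 2, size=128, dtype=np.uint8)

tagged = embed_watermark(image, mark)
assert np.array_equal(extract_watermark(tagged, 128), mark)

# The arms-race problem: mild re-encoding noise wipes out a naive watermark.
noise = rng.integers(-2, 3, size=tagged.shape)
noisy = np.clip(tagged.astype(int) + noise, 0, 255).astype(np.uint8)
survived = np.mean(extract_watermark(noisy, 128) == mark)
print(f"watermark bits surviving noise: {survived:.0%}")
```

The recovery rate falls to roughly chance after even this trivial perturbation, which is why detection and embedding techniques keep leapfrogging each other.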
Generating believable false social media profiles, posts and digital devices is only a small part of the capability requirement. Spooks also need to trace every aspect of the digital profile they already have, including biometrics such as voiceprint, gait and face descriptors, and keep them consistent across multiple devices and platforms. They will need to track where that digital identity already exists and how exposed it is to adversaries.
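What might the bookkeeping for that look like? A minimal sketch follows, with entirely hypothetical type and field names, of a record that ties biometric descriptors to every platform where an identity is exposed.

```python
from dataclasses import dataclass, field

@dataclass
class BiometricSet:
    # Fixed-length embedding vectors, as typical face/voice/gait models produce.
    face: list[float]
    voice: list[float]
    gait: list[float]

@dataclass
class PlatformPresence:
    platform: str          # e.g. "linkedin"
    handle: str
    last_activity: str     # ISO-8601 date
    exposed_fields: set[str] = field(default_factory=set)  # what an adversary could scrape

@dataclass
class Legend:
    codename: str
    biometrics: BiometricSet
    presences: list[PlatformPresence] = field(default_factory=list)

    def exposure_report(self) -> dict[str, set[str]]:
        """Where does this identity already exist, and what is visible there?"""
        return {p.platform: p.exposed_fields for p in self.presences}

legend = Legend(
    codename="SEAGULL",    # hypothetical
    biometrics=BiometricSet(face=[0.1] * 4, voice=[0.2] * 4, gait=[0.3] * 4),
    presences=[PlatformPresence("linkedin", "j.doe.1988", "2025-06-01",
                                {"photo", "employer"})],
)
print(legend.exposure_report())   # {'linkedin': {'photo', 'employer'}}
```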
Operational officers will want assurance that profiles have not been contaminated through data breaches or tradecraft errors (such as a recent cyberattack that exposed face descriptors and biographical data). They will need to be able to delete or modify aspects of these profiles that are already in the world (their digital shadow). They may want to operate online under an only slightly modified profile (a digital twin) rather than under a completely different persona. And they will need online profile-management tools to track and trace where digital profiles have been used and exposed.
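One piece of such tooling could be a contamination check: hashing a profile's identifiers and looking them up against an index built from known breach dumps. A minimal sketch, with hypothetical helpers and toy data (no real breach service or API is used):

```python
import hashlib

def sha1_hex(value: str) -> str:
    return hashlib.sha1(value.encode("utf-8")).hexdigest()

def contaminated_fields(profile: dict[str, str], breach_index: set[str]) -> list[str]:
    """Return which profile attributes appear (hashed) in known breach dumps."""
    return [k for k, v in profile.items() if sha1_hex(v.lower()) in breach_index]

# In practice the index would be built offline from acquired dump data.
breach_index = {sha1_hex("jane.roe@example.org")}
profile = {"email": "jane.roe@example.org", "phone": "+61 400 000 000"}
print(contaminated_fields(profile, breach_index))   # ['email']
```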
At the same time, investigators will need to be able to spot other countries doing this to us, which raises its own challenges. How can we track profiles across platforms? How do we validate identities online? And how do we fuse identity information in a way that conveys the level of uncertainty to the analyst? Saying that a profile is a 60 percent match to a known criminal isn't much help to a busy analyst.
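One hedged way to do better than a bare percentage is to treat each piece of evidence as a calibrated log-likelihood ratio and combine it with an explicit prior, so the analyst sees a posterior probability instead. A minimal sketch (the scores and the prior are invented for illustration):

```python
import math

def fuse_posterior(llrs: list[float], prior: float) -> float:
    """Combine independent log-likelihood ratios and a prior into a posterior probability.

    Each LLR is log P(evidence | same person) - log P(evidence | different person).
    """
    log_odds = math.log(prior / (1 - prior)) + sum(llrs)
    return 1 / (1 + math.exp(-log_odds))

# Hypothetical calibrated scores: face match (strong), writing style (weak),
# shared device identifier (moderate).
llrs = [3.2, 0.4, 1.5]
print(f"posterior: {fuse_posterior(llrs, prior=0.001):.1%}")   # about 14%
```

Under a one-in-a-thousand prior, three individually supportive matches still leave a posterior of only about 14 percent, which is exactly the nuance a raw "60 percent match" hides.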
Ultimately, the difficulties in developing these technologies will start to challenge the assumptions of what kind of espionage can be done in-person or online. For example, if bots can generate profiles and hold conversations with targets online, nurturing the relationship until a human role-player can pick up the engagement (while keeping the conversation in line with the digital-forensics profile of the bot), why do you need to meet the target in person at all? Bots can have thousands of conversations in parallel whereas a role-player is generally limited to two or three engagements at any one time.
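That asymmetry is easy to see in code. Below is a minimal asyncio sketch (the dialogue itself is stubbed out, and all names are illustrative) in which a thousand bot conversations run concurrently, each queuing its transcript for a human role-player to pick up.

```python
import asyncio

async def nurture(target: str, handover_queue: asyncio.Queue) -> None:
    """Toy stand-in for a bot holding one conversation until it warrants a human."""
    transcript = []
    for turn in range(3):                 # placeholder for a real dialogue loop
        transcript.append(f"msg {turn} to {target}")
        await asyncio.sleep(0.01)         # waiting on replies costs no operator time
    # Hand over the full transcript so a human continuation stays consistent
    # with the bot's established digital-forensics profile.
    await handover_queue.put((target, transcript))

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    targets = [f"target-{i}" for i in range(1000)]
    await asyncio.gather(*(nurture(t, queue) for t in targets))  # thousands in parallel
    print(f"{queue.qsize()} conversations ready for human pickup")

asyncio.run(main())
```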
A broad architecture of different technologies will be required to solve this generational challenge. But it is not all about technology. We need to develop these capabilities in ways consistent with western democratic values, manage ethical and privacy concerns, address the public's lack of trust in government and large tech companies, and account for the increasing globalisation of social media platforms. There's no easy solution.