China’s AI-enabled cyber espionage shifts the security focus from detection to trust
12 Dec 2025

The question facing security and technology leaders is no longer whether adversaries will deploy AI agents against their environment. Now, those leaders must ask whether their trust architecture, access models and identity systems are ready for a world where breakout time—the time taken for an attacker to move from initial access to lateral movement through a digital system—has vanished, and machine-speed attackers are the default assumption.

Anthropic’s 13 November report marked a significant turning point in cybersecurity. Its investigation into the GTG-1002 campaign—assessed with high confidence as a Chinese state-sponsored operation—confirmed that AI-driven espionage is no longer hypothetical or in development. It is active and already targeting large technology firms, financial institutions, chemical manufacturers and government agencies worldwide. Anthropic describes it as the first documented case of a large-scale cyberattack carried out with minimal human involvement. The finding is important, but it should not come as a surprise.

GTG-1002 relied on what Anthropic calls agentic AI—systems that run autonomously, chain complex tasks together and reason across an entire intrusion lifecycle with minimal human direction. Once safeguards were bypassed and the system was instructed to behave as a ‘cybersecurity employee’, Claude Code executed roughly 80–90 percent of the campaign’s activity on its own.

The lifecycle diagrams in the report show how work traditionally handled by multiple human operators—including reconnaissance, scanning, exploit development, credential harvesting, lateral movement, data triage, exfiltration and even documentation—was compressed into a near-continuous loop of automated action.

At the peak of the campaign, the system generated thousands of requests, often several per second. That tempo alone destabilises traditional detection-led security. Our tools were built around the predictable slowness of human adversaries: pauses, mistakes, timing indicators and behavioural breadcrumbs. AI agents leave none of these traces. They don’t pause, mistype or probe clumsily; they operate at machine speed.
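The contrast in tempo can be sketched with a toy rate check. Everything here—the thresholds, the window size and the session data—is an illustrative assumption, not drawn from the Anthropic report; the point is only that sustained machine-speed request bursts fall far outside any human baseline.

```python
from collections import deque

def flag_machine_speed(timestamps, window=10.0, max_requests=20):
    """Return True if any `window`-second span contains more than
    `max_requests` requests -- a tempo no human operator sustains."""
    recent = deque()
    for t in sorted(timestamps):
        recent.append(t)
        # Drop requests that have fallen out of the sliding window.
        while recent and t - recent[0] > window:
            recent.popleft()
        if len(recent) > max_requests:
            return True
    return False

# A human operator: a request every 15 seconds or so.
human = [i * 15.0 for i in range(10)]
# An autonomous agent: several requests per second, as in the campaign.
agent = [i * 0.3 for i in range(100)]

print(flag_machine_speed(human))  # False
print(flag_machine_speed(agent))  # True
```

A heuristic this crude catches raw tempo, but an agent that throttles itself to human speed would pass it—which is precisely why tempo cannot be the anchor of defence.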

One of the most striking insights from the report is what GTG-1002 did not use. There were no cutting-edge zero-days or exotic new malware families at the heart of this campaign. They weren’t needed. Instead, the attackers exploited the most pervasive vulnerability in modern enterprises: a fragile trust fabric.

The agents navigated and weaponised features that already existed within the victims’ environments. These included static, human-configured identity and access management policies; accumulated permissions and legacy entitlements; fragile automation and unmonitored service accounts; internal application programming interfaces that authenticated blindly once inside the perimeter; and fragmented access controls with no unified identity context.
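The perimeter anti-pattern in that list can be made concrete with a minimal sketch. The service names, IP convention and entitlement table below are hypothetical; the contrast is between an internal API that trusts any caller already “inside” the network and one that evaluates each call against what the specific principal is entitled to do.

```python
INTERNAL_NET = "10."

def authorize_perimeter(request):
    # Blind authentication: a source address inside the perimeter
    # is treated as sufficient proof of trust.
    return request["source_ip"].startswith(INTERNAL_NET)

def authorize_identity(request, entitlements):
    # Identity-aware check: the call is evaluated against this
    # principal's explicit entitlements, regardless of location.
    allowed = entitlements.get(request["principal"], set())
    return request["action"] in allowed

entitlements = {"svc-reporting": {"read:reports"}}

# An agent pivoting with a hijacked service account from inside the network.
stolen_session = {"source_ip": "10.0.4.7",
                  "principal": "svc-reporting",
                  "action": "read:customer-db"}

print(authorize_perimeter(stolen_session))               # True: path open
print(authorize_identity(stolen_session, entitlements))  # False: blocked
```

Under the first model, every system reachable from the compromised host becomes a rung on a linear attack path; under the second, each hop requires an entitlement the hijacked identity does not have.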

Anthropic repeatedly noted that the campaign succeeded because the environment behaved exactly as it was designed to behave. Nothing had to break. Access models simply had to be sufficiently predictable and permissive for autonomous reasoning to turn them into linear attack paths.

This is the lesson defenders must internalise: rather than exploiting new technical vulnerabilities, agentic AI uses the trust structures we already operate.

Anthropic is careful not to dismiss detection. Detection was essential in discovering and ultimately disrupting this operation. But the key takeaway is that detection-led models cannot remain the primary defensive strategy when attackers no longer display human markers. As the report details, the AI-driven pipeline wrote exploit code, harvested credentials, analysed stolen data and created backdoors at a pace ‘impossible for human hackers to match’. By the time a traditional security operations centre correlates indicators and escalates an incident, an AI agent may already be several phases further into the kill chain.

Detection retains value—including for forensics, learning and retrospective threat hunting—but it no longer provides the anchor we historically relied on. If machine-speed adversaries are the new normal, our architecture must adapt. That reorientation starts with treating trust as the primary control plane. This is ‘zero trust’ as an operating model, not as a product label: a consistent approach to constraining what identities—human, machine and AI—are allowed to do, and under which conditions.
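What “constraining identities under conditions” means in practice can be sketched as a per-request policy decision. The policy structure, action names and conditions here are illustrative assumptions, not any vendor’s API; the essential property is that every request is evaluated against who the identity is—human, machine or AI agent—and the context of the call.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    identity_kinds: set                      # e.g. {"human", "ai-agent"}
    conditions: dict = field(default_factory=dict)

# Hypothetical policy table: sensitive actions are restricted by
# identity kind and by contextual conditions such as MFA.
POLICIES = {
    "deploy:prod": Policy({"human"}, {"mfa": True}),
    "read:logs":   Policy({"human", "machine", "ai-agent"}),
}

def decide(action, identity_kind, context):
    policy = POLICIES.get(action)
    if policy is None or identity_kind not in policy.identity_kinds:
        return "deny"
    for key, required in policy.conditions.items():
        if context.get(key) != required:
            return "deny"
    return "allow"

print(decide("read:logs", "ai-agent", {}))               # allow
print(decide("deploy:prod", "ai-agent", {"mfa": True}))  # deny
print(decide("deploy:prod", "human", {"mfa": True}))     # allow
```

The design choice that matters is the default: anything not explicitly permitted for that kind of identity, under those conditions, is denied—so an autonomous agent cannot inherit a human operator’s reach simply by holding a valid credential.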

GTG-1002 emerges at the same time we are seeing a sharp rise in deepfake-enabled fraud, AI-generated impersonation and identity spoofing. In this environment, trust is no longer a soft concept or a branding slogan; it is a concrete security boundary.

Boards and executives are increasingly asking a basic but uncomfortable question: how do we know that the devices, systems, communications and colleagues we interact with are genuine and not AI-assisted proxies for hostile actors? When a deepfake from a North Korean operator can plausibly pose as a remote job applicant or potential partner, the integrity of identity becomes a matter of national security as well as corporate risk.

Anthropic’s GTG-1002 report is an early, detailed case study of that world. It shows us not only how attackers will operate, but also where defenders must move next. The organisations that navigate this transition successfully will be those that recognise an uncomfortable truth hiding in plain sight: in the age of AI-driven intrusion, trust is the perimeter, and identity is the only durable control.