Cyber stability: why retaliation won’t deter
20 Oct 2015


Nuclear deterrence theory is often seen as the go-to solution to cyber instability. After suffering a sequence of alleged Chinese hacks on its corporations and government departments, the US prepared a suite of potential economic sanctions for China in the hope of bringing Xi Jinping to the negotiating table and deterring future attacks. This move came in the wake of public commentary citing the need for a ‘cyber equivalent of a nuclear deterrent’ and a US shift towards a more offensive cyber posture, as seen in the Department of Defense’s April Cyber Strategy, which outlines the importance of ‘effective response capabilities to deter an adversary from initiating an attack.’

That language is a clear attempt to apply nuclear deterrence theory to international cyber relations. Deterrence by punishment uses the threat of an unacceptable cost to make an attacker’s perceived reward no longer justifiable. This strategy is credited with preventing the Cold War from turning ‘hot’, and some hope this stabilising effect can be brought to bear in cyberspace. The threat of sanctions may have helped facilitate the Sino–US ‘common understanding’ in Washington last month, which has been interpreted as an ‘historic’ shift in relations. This was followed soon after by news that Chinese police recently arrested hackers at the request of the US government. Unfortunately, the Washington agreement lacks tangible enforcement measures, and the arrests, likely an attempt to ease tensions in the weeks leading up to Xi’s trip, weren’t the first time China has obliged the US in this way. The lingering threat of US punishment is unlikely to be a successful deterrent in the long term, with experts expecting hacks to continue unabated. Such attempts to apply nuclear deterrence theory to cyberspace will likely generate no lasting change for three reasons.

First, for an adversary to be deterred from taking an unwanted action, the deterring state must be able to identify and articulate the behavioural red line past which the adversary will be punished. The binary nature of nuclear weapons makes this simple. However, it’s far more challenging to establish a threshold for retaliation in cyberspace, due to the continuous spectrum of possible actions and the absence of a ‘red button’.

This challenge is visible in the US’ current deliberations over how to respond to China’s alleged hacking of the Office of Personnel Management (OPM) in June this year. The former Director of both the CIA and the NSA, Michael Hayden, argued that the OPM breach represented a ‘legitimate foreign intelligence target’. However, other US officials are divided over whether the sheer size of that intrusion changes things. Norms of behaviour in cyberspace have yet to be determined and entrenched, so the credibility of a deterrent threat is undermined: there’s no confidence about what behaviour will or won’t be punished. As The Diplomat’s headline put it: ‘America Can’t Deter What It Can’t Define in Cyberspace’.

Second, the low detection rate of most hacks poses another obstacle to deterrence. A threat is only effective if the perpetrator believes they will be caught. This isn’t an issue for nuclear deterrence, thanks to missile trajectory analysis and the limited number of potential culprits. However, high frequency/low intensity cyber intrusions often slip under the radar: 35–70% of all hacks go undetected. These hacks are individually insignificant, but in aggregate they represent a persistent syphoning of intellectual property and government data through a salami-slicing tactic. So, even if an adversary were convinced of the credibility of a threat, it may fail as a deterrent if they think that they can succeed unnoticed.

Third, the difficulty and questionable desirability of the attribution process undermine deterrent threats. Network technologies weren’t designed with identity in mind, and as a result it’s challenging to determine the specific computer that launched an attack, let alone who was operating it. Adversaries aren’t discouraged by a threat if they don’t expect to be identified. For example, ISIS was thought to be responsible for the highly sophisticated hack of TV5 Monde earlier this year, and it was only recently discovered to be the work of a Russian group called APT28. Attribution is a risky business: misinformed retaliation could translate into an attack on an innocent party, the creation of a new enemy and an escalation of conflict.

Moreover, even if a perpetrator could be accurately identified, attributing blame may be a Pyrrhic victory. In the rules-based international order, a state may have to expose valuable data resources, detection capabilities or assets in order to prove the guilt of the party it intends to punish. Revealing those capabilities may compromise ongoing operations, resulting in a tactical win but a strategic loss.

Cyber retaliation is all well and good if the end is punishment itself; however, if a state is seeking to establish a deterrent, this approach is likely to leave it disappointed. Governments must be cautious of pursuing a policy that’s not only unlikely to work, but also risks escalating tensions and exposing vital intelligence assets. As former Deputy Secretary of Defense William Lynn foreshadowed in 2010, the unique qualities of cyberspace necessitate that cyber deterrence ‘be based more on denying any benefit to attackers than on imposing costs through retaliation.’