
The argument over US officials’ misuse of secure but non-governmental messaging platform Signal falls into two camps. Either it is a gross error that undermines national security, or it is a bit of a blunder but no harm was done.
US Defense Secretary Pete Hegseth has twice used Signal for sensitive national security conversations, including one in which he and other officials discussed planned military operations against the Houthis in Yemen. When we consider the security implications of this, we see that classification systems are complicated, subjective and nuanced. Many people, even those who have worked within government for years, don’t understand them.
A classification is simply a label that the originator of the information attaches to it based on his or her perception of the damage that would happen if it became public. Information can be declassified by group consensus and through a proper process, but it is generally up to the originator to make a reasonable judgment at the time.
Only certain government systems are permitted to hold the most sensitive information, and these are highly protected and monitored. They are usually air gapped, meaning they are separated from other networks and not connected to the internet. Even today, the most sensitive information is shared only on specific coloured paper that is destroyed after being read in a special room. In 2013, the Russians even bought up a stock of typewriters to make sure they were truly offline. The much-discussed sensitive compartmented information facility (SCIF) is there to protect against physical attacks such as eavesdropping or covert cameras.
A classification is different to a handling caveat, which indicates with whom the information can be shared, and a nationality caveat (in Britain, followed by ‘Eyes Only’, as in ‘UK Eyes Only’), indicating which country produced the information and which countries can read it. These elements (plus codewords and other markings for more sensitive information) make up what we generally call a classification. It is technically possible to have unclassified information with other caveats. However, because classified information is for most practical purposes allowed only on particular systems, people tend to rely on classifications alone.
Technically, the person who first writes down the information is meant to classify it based on a set of guidelines. These definitions are very clear. In the British system, something is classified based on the effect if it is compromised. Top secret information, for instance, ‘could lead to loss of human life’ or damage military operations, international relations or prosperity. To avoid compromises, people tend to over-classify or add handling caveats that limit the people to whom the information may be distributed.
On the one hand, it is possible to argue that the information in that Signal chat about the planned operation was not classified, as the ‘accidental or deliberate compromise’ of the information did not result in anything ‘more than moderate, short-term damage to the operational effectiveness’. There is probably a longer conversation to be had around causing ‘moderate damage to the work or reputation of the organisation’. Regardless, it is up to the person who writes the information down to classify it; so technically whatever Hegseth says goes.
To add to the complication, classifications mean nothing outside of government systems. Although some apps claim to offer ‘government or military-grade encryption’, they are not accredited to hold classified information; so, by definition, they don’t have classified information on them.
The more you think about the definition of a secret, the more things start to break down. Governments spend millions on sophisticated (and highly classified) cyber tools but then send them to people’s computers and phones to hack those devices. These tools bring back highly sensitive information from these devices over the internet; and only when they get back into a government system does the information become ‘classified’.
When does this information become ‘secret’? The answer is: when it sits on a government IT system.
Furthermore, people who meet agents around the world hold highly classified conversations in coffee shops or hotel rooms, protected by obscurity rather than technology. The content only becomes ‘classified’ when it is written up on an IT system.
When diving into these matters, it often helps to turn the question around. Instead of asking ‘what is the effect if this leaks?’ ask ‘what are you trying to protect?’ It could be a capability, operation, intention or limitation, a set of factors commonly abbreviated as COIL. It is also worth asking: who are you trying to protect this from, and how long for? Striking the Houthi bases isn’t going to be a secret for long once the planes have taken off.
Asking these deeper questions should have a significant effect on your security posture, considerations and planned approach. These questions start getting to the heart of the issue and mitigating the systemic risks, which is a lot more important. In most cases, the classification of some information is the bluntest of instruments to protect what could be a person, a sensitive operation, a policy plan or your own reputation.
Applying blame for a breach in security essentially comes down to subjective judgements on the person and the error they made. In the case of SignalGate it was a pretty bad one, no matter the classification of the information.