Not dark yet—strong encryption and security (part 2)
27 Jun 2017

In the previous part of my exploration of the impact of strong encryption on our security agencies, I described the unsophisticated days of intercepting telephony in the 1970s. With voice communications, it’s largely a case of ‘grab it or it’s gone’. Most of the history of signals intelligence is about eavesdropping on moving data. But the advent of internet communications introduced a new angle, as data ‘at rest’ in a computer or smartphone at either end of the communications channel became a potential source of intelligence.

The Apple handset at the centre of a 2016 court case in the US provides an intriguing case study. Even after being presented with a court order to give the FBI access to the handset, Apple declined, on the grounds that it would have to create an access channel that could be used to compromise any iPhone running the same system. Encryption wasn’t the sticking point; the problem was getting past the phone’s passcode. It’s a more complicated story than is sometimes appreciated, but it brought the tension between customer privacy, information security across the wider economy, and the requirements of law enforcement and intelligence agencies very much into public view.

There’s something a little puzzling about the pushback in the iPhone case. As I pointed out last time, we all lived happily enough in the post-1979 world of legislatively guaranteed warranted access to our telecommunications. Philosophically at least, it seems reasonable for governments to want that level of access to be preserved (or, perhaps more accurately, reinstated). In principle I’m inclined to agree, with the proviso that there’s robust and effective oversight, including the stipulation of warranted collection.

It must be said that some governments haven’t helped themselves in that respect. The public is more tolerant of focused investigations of suspicious behaviour and individuals than it is of wider ‘fishing expeditions’ into big data pools. In 1979, it was hard to do much of the latter, but more recently the US National Security Agency was caught out hoovering up large quantities of telephone metadata under its Section 215 bulk-collection program without sufficient oversight. A UK system called Tempora went well beyond metadata, and was indiscriminate in its targeting. And the Australian government did a horrible job of explaining its own ambitions for metadata collection.

And in practice, I don’t think we can get there from here. Encryption isn’t just a tool used by bad people to plan bad things: it’s now a critical part of the rapidly growing online economy. Banking and e-commerce couldn’t function effectively without it. As we saw in part 1, the US government rolled out strong encryption for exactly that reason in the 1970s (and continues to support it today). And individuals have perfectly valid reasons to use security mechanisms such as virtual private networks; any traveller doing internet banking over someone else’s Wi-Fi network has good reason to want the additional protection. In fact, given how poor network security can be, it makes good sense for users to apply protective measures to sensitive data.
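For the technically curious, here’s a minimal sketch of what that everyday protection looks like: the Python standard library’s ssl module negotiating the kind of TLS connection that sits beneath every online banking session. The hostname is an illustrative placeholder, not a real banking site.

```python
import socket
import ssl

hostname = "example.com"  # illustrative placeholder
context = ssl.create_default_context()  # verifies certificates against the system's trusted CAs

with socket.create_connection((hostname, 443)) as sock:
    # The handshake does the heavy lifting: key exchange, certificate
    # verification, and agreement on a symmetric cipher for the session.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())  # e.g. 'TLSv1.3'
        print(tls.cipher())   # the negotiated cipher suite
```

Everything after the handshake travels encrypted, which is why an eavesdropper on that hotel Wi-Fi sees only ciphertext.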

Perhaps most important are end-to-end encryption systems, used by applications such as WhatsApp, Signal, iMessage, and Facebook Messenger (in its opt-in ‘secret conversations’ mode). Only the two users at the endpoints hold the keys needed to decrypt a message. Companies such as Apple and Facebook, on whose products the messages are transmitted, have access to neither the unencrypted messages nor the encryption keys.
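To make the end-to-end idea concrete, here’s a sketch using the open-source PyNaCl library (a Python binding to libsodium). Real messaging apps layer much more on top, notably Signal’s double-ratchet protocol, but the essential property is the same: the private keys exist only on the two endpoints, so whoever relays the message sees only ciphertext.

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; only the public halves are shared.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
alice_box = Box(alice_key, bob_key.public_key)
ciphertext = alice_box.encrypt(b"meet at noon")

# The relaying service holds neither private key, so this is all it sees.
relayed = bytes(ciphertext)

# Bob decrypts with his private key and Alice's public key.
bob_box = Box(bob_key, alice_key.public_key)
print(bob_box.decrypt(relayed))  # b'meet at noon'
```

A warrant served on the relaying company can compel it to hand over the ciphertext, but not the keys, because it never had them.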

There have been calls to outlaw strong encryption so that law enforcement and intelligence agencies can crack communications between targets of interest. That raises many questions. Who decides how strong is ‘too strong’? Does ASIO or the AFP need to be able to access data in an hour, a day, or a week? Moore’s Law tells us that what the NSA can do today, others will be doing in the not-too-distant future. So how can we ensure the protection of innocent but sensitive communications? Or is the government going to decree that some privacy measures won’t be available to the public at large?
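The arithmetic behind those questions is worth seeing. The sketch below is purely illustrative (the assumed guess rate is not an estimate of any agency’s capability), but it shows why key length, rather than computing power, wins the race in the long run.

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_exhaust(key_bits: int, guesses_per_second: float) -> float:
    """Worst-case years needed to try every key of the given length."""
    return 2 ** key_bits / guesses_per_second / SECONDS_PER_YEAR

rate = 1e12  # assumed guesses per second; purely illustrative
for bits in (56, 128, 256):  # DES-era versus modern key lengths
    print(f"{bits}-bit keyspace: {years_to_exhaust(bits, rate):.3g} years")
# 56-bit:  ~0.002 years (under a day), the DES-era problem
# 128-bit: ~1e19 years
# 256-bit: ~4e57 years

# The Moore's Law point: if guess rates double every two years, the time to
# exhaust any given keyspace halves every two years, which is how today's
# 'NSA only' capability becomes tomorrow's commodity.
```

Note the asymmetry: each doubling of attacker speed buys back only one bit of key length, so even modest increases in key size keep defenders comfortably ahead.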

Finally, even if we managed to tie up all of the loose ends in the Australian telecommunications marketplace, how would we quarantine local users from apps and hardware that are compatible with Australian networks and readily available from offshore vendors? Australia, the UK, and even the US can’t legislate for the totality of the messaging app universe, and any lawful-intercept legislation would quickly move serious threats onto other platforms that could be even worse for law enforcement, or even for wider society. High-profile companies like Apple, Google, and Facebook tend to help when it’s clearly a public duty to do so (they work with authorities to identify and eliminate child pornography, for example). But smaller firms, especially those in other countries, might feel no such obligation. And any vulnerabilities engineered into products will be available to be exploited by entities other than our own security agencies.

I think it’s an intractable problem. The horse has bolted, and the access to data through lawful intercept that our security agencies once enjoyed will never be possible again. As Bob Dylan might put it, it’s not dark yet, but it’s getting there.


Note: I had a lot of useful feedback from my ASPI colleagues on these two posts. I thank them, but don’t blame them for anything here.