I’m glad to see Klee Aiken offer his thoughts on cybersecurity, and I share his suspicion of intrusive surveillance. But it’s for this very reason that I’m raising alternative uses of ‘big data’, and I fear Klee’s assessment of the potential role for government neglects several key issues.
To begin with, I’m not particularly confident the private sector will prove capable of protecting its own systems as cybersecurity becomes more burdensome. I’d welcome Klee’s input on this point, but I see a classic case of market failure behind industry’s lagging response to the system-wide costs inflicted by malware. After all, the US government has been announcing plans to foster more security-conscious behaviour since 2003, yet meaningful changes that can keep pace with new dangers have still to materialise.
Klee suggests that we should put this sense of urgency in perspective, given the implications of more government control. But he does so by drawing a spurious line between ambitious counter-terrorism programmes and cybersecurity on the grounds that the latter is private-sector based. I can think of several terrorism analysts in the 80s and 90s who would have been tickled pink by his suggestion that their field comprised a ‘traditional’ area of national security. How times change!
Threats, and by extension security, are largely matters of perception. The September 11 attacks are representative of this, in that they jolted the US public into a new frame of thinking about the definition of national security. This is why the fallout from routine terrorist activity in India has less strategic impact than in relatively untouched Australia, and it’s why Klee can assert that counter-terrorism surveillance is more critical than monitoring computer networks.
But for how long? Can we afford to remain sanguine about the severity of cyberweapons in the future? We should push back against doomsday thinking, but in my view it’s worth considering more effective arrangements now, if only so that we’re better prepared for a truly shocking event down the line. What’s the risk, for example, that a rogue nation like North Korea unleashes a malware attack which leads to financial panic, and we overreact? Excessive cybersecurity measures taken in response to disaster could harm our strategic objectives. I don’t mean to be flippant, but a ‘cyber 9/11’ might lead to a ‘cyber Guantanamo’ in the absence of more preparation and thinking.
So I’d like to hear more from Klee about the limited model of big data security he outlined. Rather than involve the government in direct monitoring, he argues, this model would confine data mining to local operators, with safeguards against disclosure of information. But I have some reservations: by this stage it seems we’re back to thinking in terms of tracking terrorists, not examining cyber threats. In the James Baker address I cited (PDF), Baker argues that the content of digital transmissions, not just metadata, will be needed to understand complex malware. Will ‘need to know’ filtering systems in ISPs prove sufficient in real time, when cyber threats are so innovative and immediate? Metadata provided on inquiry won’t reveal as much as we’d like, and probably not within the short time span necessary to avert damage.
In any case, as long as we’d like to conduct targeted investigations, Klee admits these ISPs would retain enormous volumes of digital information. And while we need more work on the privacy issues this raises, as he says, we must also consider the real potential for abuse that this arrangement would open up in the meantime. I think that big data of this nature would be more worrisome for our liberal society than government monitoring. Paul Pillar has valuable things to say about the risks and incentives that could influence the private and public actors who are privy to large volumes of personal data. His view is that the balance comes out in favour of government: we have no idea how ISPs monitor their own employees’ compliance with ‘need to know’ filtering, whereas rule-breaking is more easily detected in public agencies scrutinised by the media and politicians. Personally, I’d place more faith in public servants undergoing routine security checks than in Telstra technicians.
Klee would likely respond that an unwieldy system could be misused by intelligence agencies because they’re afforded secrecy, and I agree. That’s why I’m talking about a security framework under a civilian department. We can better avoid abuses of power by locating this authority inside the Attorney General’s office, not the ASD. I don’t know what the best system would look like, but this seems like a reasonable place to start.
If we step back from the daily hype of the Snowden affair, it appears there’s been no serious abuse of lawful authority by intelligence agencies. The controversy is about transformational shifts in policy that were initiated behind closed doors and overseen by bodies without the technical skills to properly scrutinise their activity. This would be of less concern if government ministers operated a cybersecurity system along clearly defined lines subject to parliamentary accountability. Think tax returns, not Watergate.
David Schaefer is a sessional tutor in the School of Global, Urban and Social Studies at RMIT University.