The dangers of a ‘zero trust’ digital world
16 Feb 2022

In the early days of cybersecurity, organisations adopted the model of Berlin during the Cold War: a wall high enough to prevent unwanted border crossings and a Checkpoint Charlie to regulate the rest.

But the physical world doesn’t map easily onto the digital. Perimeter-based approaches fail in an ever-shifting network of highly interconnected systems, where even physical disconnection does not suffice for separation, given wi-fi, Bluetooth and other electromagnetic phenomena. And because software is ever changing—that’s one of its strengths—digital systems are never complete or fully known. That means static structures and single solutions, such as walls and checkpoints, generally fail to prevent evolving threats.

So, cybersecurity thinking has adapted to the reality of constantly shifting, often unknowable systems, replete with continual interactions and adjustments between users, technology, data and the environment. Since 2010, the favoured approach to this perpetual state of insecurity—to harden the ‘chewy centre’ of information systems—has been ‘zero trust’.

Zero trust acknowledges that malware and intruders may penetrate barriers and checkpoints. Every packet of data moving into, out of and within organisational systems is regarded with suspicion. Nor is it just about the technology. Core to its premise is that users cannot be trusted. Gaining access is deliberately hard; once granted, access is typically limited to ‘least privilege’ role-based permissions, and user behaviour is monitored to identify aberrant patterns.
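The deny-by-default, least-privilege logic described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the roles, resources and logging structure are hypothetical, chosen only to show the principle that every request is denied unless an explicit role-based permission allows it, and every decision is recorded for later behavioural review.

```python
from collections import defaultdict

# Hypothetical role-to-permission map: each role lists the only
# (resource, action) pairs it may use -- everything else is denied.
ROLE_PERMISSIONS = {
    "analyst": {("reports", "read")},
    "admin": {("reports", "read"), ("reports", "write"), ("users", "write")},
}

# Per-user decision history, kept so aberrant patterns can be reviewed.
access_log = defaultdict(list)

def authorize(user: str, role: str, resource: str, action: str) -> bool:
    """Deny by default; allow only if the role explicitly grants the action."""
    allowed = (resource, action) in ROLE_PERMISSIONS.get(role, set())
    access_log[user].append((resource, action, allowed))
    return allowed

# Every request is checked, even from an already-authenticated user.
print(authorize("alice", "analyst", "reports", "read"))    # True
print(authorize("alice", "analyst", "users", "write"))     # False: never granted
print(authorize("mallory", "unknown", "reports", "read"))  # False: unknown role
```

Note that there is no ‘allow’ fallback: an unrecognised role or an unlisted action simply fails, which is the defining posture of a zero-trust control.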

Zero trust is neither a cheap nor a quick fix. The considerable setup, operational and compliance costs are most often justified by the prospective or realised costs of a breach, data loss or ransomware attack.

In zero-trust environments, nothing is trusted. In the words of one cybersecurity executive, ‘Trust is a vulnerability and, like all vulnerabilities, should be eliminated.’ The premise of zero trust, after all, is not limited by boundaries or platforms, but seeps, stepwise, to include partners, supply chains and regulatory systems.

As governments struggle to meet the challenges of fast-changing technological disruption, a growing plethora of threats to stability and an increasingly precarious geopolitical environment, all exacerbated by an ongoing pandemic, there’s a temptation to latch onto concepts that promise control and certainty. Security and safety often trump other arguments in policy debates, especially as politics becomes partisan. As such, the ideas that motivate zero-trust approaches, facilitated by digital technology, appeal more and more.

But that path leads ever down into darkness. Considerable dangers exist in extending approaches that may suit digital needs within contained environments to the broader spheres of social, political and economic life.

There’s the question of fit. Digital systems are particularly parsimonious. That may sound odd, given the apparent tangle of modern technological systems and their ubiquity. But as the American political scientist Herbert A. Simon demonstrated, artificial systems cannot replicate the real world; they are always incomplete representations.

Moreover, digital systems are fundamentally unlike social systems. Not only do they lack the richness, multiplicity and ambiguity that characterise human relations, but their underlying network structure and behaviours differ. Applying misaligned and overly rigid order through a zero-trust approach to social systems would force disassociation within those systems: ‘they will cut our life within to pieces’.

Then there’s the question of cost. Beyond establishing the necessary surveillance infrastructure, the costs include the burden on and erosion of human relationships, culture and practice. It’s not simply the extra time and effort needed to negotiate internal rules and boundaries imposed by others; the lack of privacy inevitably generates self-censorship, an unwillingness to participate or debate, and an avoidance of risky ideas or ventures. A zero-trust culture valorises control—at the cost of efficiency, effectiveness, innovation, creativity and contestability.

There’s also the question of power. Technological design and operation comprise a series of choices—purpose, costs, compromises and privileges. Those making design and operational decisions, typically hidden from scrutiny, exert a tremendous amount of power through access control, surveillance and defining acceptable behaviour. Rights once assumed—privacy, freedom of expression, intellectual property, increasingly identity and avenues for redress—are eroded or lost.

Zero trust embeds and deepens an imbalance of power in favour of the few over the many. Zero-trust systems are not democratic systems: they are inherently authoritarian, even totalitarian, in nature.

And there’s the rub: our society is fundamentally based on trust. To imagine a zero-trust social order, think not Cold War West Berlin, but a supercharged Stasi-run East Germany, where every individual, device and interaction is continuously tracked, interrogated and measured against a profile set by a data-enabled intelligence apparatus.

There are strong national security reasons for containing the damage that untrustworthy technologies can wreak on our society. But there are stronger reasons to ensure that security doesn’t come at the cost of weakening societal fabric, crippling innovative or productive capacity, or damming the wellsprings of democracy.

Zero trust is the latest effort to tame the inherently wicked problem of cybersecurity; there will be others. The public, policymakers and even security experts need nobler concepts to ensure a healthy, resilient civil society. It’s now, as authoritarian states engage in wordplay, disinformation and lawfare, that trust matters most.