On the eve of destruction? (Part 1)
17 Aug 2018

For a long time, I thought that nuclear weapons were a tolerable evil. I bought into the argument that they played a positive role in some circumstances by deterring nations from going to war because they feared the consequences. But I recently read two books—one newly published and the other over half a century old—that together convinced me that I was quite wrong to think those things.*

Daniel Ellsberg’s recent book The doomsday machine: confessions of a nuclear war planner contains many sobering observations about nuclear weapons. I didn’t need to be told how destructive they are (though he makes an important point about the size of thermonuclear yields compared to the Hiroshima and Nagasaki bombs), but I hadn’t before appreciated the alarming scope for human error in their employment.

In particular, Ellsberg points out that reducing the risk of a pre-emptive ‘decapitation’ attack—one that kills most of a nation’s senior officials—requires the authority to use nuclear weapons to be delegated to subordinates down the chain of command. So the popular view that the national leader alone has a ‘finger on the button’ can’t be true. (And it wasn’t true right from the start—the two nuclear weapons used against Japan weren’t individually authorised by President Truman.) Ellsberg gives multiple examples of Cold War circumstances in which delegation could have led to disastrous escalation, plausibly describing how a single pilot or submarine commander could have set off World War III on their own initiative.

In the mid-1960s, Pentagon planners estimated that hundreds of millions of people would die as a result of a US nuclear attack on the Soviet bloc. And the suffering would not be limited to the ‘enemy’; most of those killed would be civilians rather than combatants, and allied and neutral countries in Western Europe would suffer huge casualties from radioactive fallout. You’d hope that numbers like that would have given pause to nuclear-war planners, but history shows that the growth in the nuclear arsenals of both sides accelerated from that point, reaching colossal numbers in the years that followed.

As it happens, even those horrendous estimates of casualties were optimistic. The ‘nuclear winter’ effect that would follow a large-scale exchange of nuclear weapons was unknown in the 1960s, but we now know that it would have global consequences and would kill a very large proportion of the world’s population.

After that, I was looking for something more uplifting to read, so I re-read Carl Sagan’s Cosmos. But rather than providing a respite from the bleakness, Sagan’s book turned out to have an unexpected but worrying overlap with Ellsberg’s. After a magisterial tour of the universe (as it was understood in the late 1970s), there’s a chapter that speculates about the density of intelligent life in the universe—which ought to be high, given what we know—and wonders why we’re yet to see any sign of it elsewhere.

The apparent disparity between the estimates of alien intelligence and what’s (not) observed is known by researchers in the field as Fermi’s paradox. It might be that we haven’t been looking hard enough for long enough, but the unsuccessful search has now been going on for 40 years. As a potential resolution of the paradox, Sagan points out that one of the unknowns in estimates of the number of alien civilisations is their average lifetime before coming to an end, for whatever reason. If advanced civilisations (meaning ‘capable of broadcasting information into space’) prosper for many millions of years on average, the chance of finding one or more is relatively high. But if they last only a few decades, that could explain the radio silence.

In that context, Sagan refers to a classic 1960 book: Lewis F. Richardson’s The statistics of deadly quarrels. (It seems to be a collector’s item now, but if your library doesn’t have it you can read a good summary of Richardson’s results here.) Richardson assembled data on 315 wars after 1800. He was hoping to identify systematic trends or indicators for the outbreak of wars, which would provide some optimism for thinking that we could get better at avoiding them. Alas, that wasn’t to be. The depressing conclusion of his statistical analysis was that the outbreak of war is essentially random—that is, the probability of a war breaking out tomorrow is pretty much the same as on any other day.† The review linked above observes that ‘the data offer no reason to believe that wars are anything other than randomly distributed accidents’.
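Richardson’s finding amounts to saying that war onsets behave like a memoryless random process: under a Poisson model, the probability of an outbreak in the next day (or year) is the same no matter how long the current peace has lasted. A small sketch of that property, with the annual onset rate chosen purely for illustration rather than fitted to Richardson’s data:

```python
import math

# Under a Poisson process with rate lam (onsets per year), the probability
# of at least one war beginning in a window of given length is fixed,
# regardless of the history before the window ("memorylessness").
# lam = 2.0 is an illustrative figure, not Richardson's fitted rate.

lam = 2.0  # assumed average war onsets per year

def p_outbreak(days):
    """Probability of at least one war beginning within the next `days` days."""
    return 1.0 - math.exp(-lam * days / 365.0)

print(f"tomorrow:      {p_outbreak(1):.4f}")
print(f"next year:     {p_outbreak(365):.4f}")
print(f"next 50 years: {p_outbreak(50 * 365):.4f}")
```

The per-day figure is small, but because it never decays, the cumulative probability over decades approaches certainty—which is Richardson’s depressing point.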

Between them, Ellsberg and Richardson paint an extremely gloomy picture. Richardson tells us that there’s a finite—and not especially small—probability that a sizeable war will break out at any given time. The randomness in the historical data tells us that we can’t be confident that sensible policy settings will appreciably reduce that probability. And Ellsberg reminds us that a war involving nuclear-armed combatants has the potential to bring our civilisation to an abrupt end.

The rational thing to do in light of those observations is to assume that wars will always be with us and take steps to reduce their potential to destroy our civilisation. If we want to be around to chat with our interstellar neighbours in the future, keeping nuclear weapons around is just dumb.


* I received several sets of comments on a draft of this piece. Perhaps not surprisingly, the consensus was that the rarefied air of an academic environment had softened my critical faculties. But I ultimately found the arguments unconvincing, as I’ll explain in part 2.

† For the technically minded, the frequency of the number of wars that begin in a year is very closely matched by a Poisson distribution. See Figure 3 in the summary paper linked above.
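Concretely, a Poisson model assigns probability e^(−λ)·λ^k / k! to k wars beginning in a given year, where λ is the average annual onset rate. A short sketch with an illustrative λ (not the value fitted in the summary paper):

```python
import math

lam = 1.7  # illustrative mean number of war onsets per year

def poisson_pmf(k, lam):
    """Probability of exactly k war onsets in a year under a Poisson model."""
    return math.exp(-lam) * lam**k / math.factorial(k)

for k in range(5):
    print(f"P({k} wars beginning in a year) = {poisson_pmf(k, lam):.3f}")
```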