The odds are high that the phrase ‘quantum supremacy’ will soon cross your desk. In mid-2016 Google engineers announced plans to achieve quantum supremacy in the near term; the likely timescale is within five years, possibly much sooner. Quantum supremacy is a sexy idea (they had me at quantum) and it’ll be trumpeted in the media. But in the lull, it’s a good time to inoculate against some of the more militant readings invited by the ‘[adjective] supremacy’ construction.
There’s rhetorical currency in the term ‘quantum supremacy’, and it plays well in cut-throat grant applications. But the technical meaning is less racy: quantum supremacy simply means a clear demonstration of a ‘quantum speedup’, that is, a quantum computer that outperforms the best classical supercomputer on a specific task. ‘Outperforms’ is measured by how the calculation time grows with the size of the input, not by raw clock speed. This is a big deal in computer science, and would effectively show that there are some things a quantum computer can do that a classical computer can’t, at least on a practical timescale.
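For the code-inclined, here’s a toy sketch of why that scaling matters more than raw speed. The exponents are purely illustrative (roughly the shape of the contest between brute-force factoring and a polynomial-cost quantum algorithm), not measured figures:

```python
# Toy scalings only: illustrative step counts, not a benchmark.

def classical_steps(n):
    # an exponential-time approach on an n-bit input (illustrative)
    return 2 ** (n / 2)

def quantum_steps(n):
    # a polynomial-time approach on the same input (illustrative)
    return n ** 3

for n in (16, 32, 64, 128):
    print(f"n = {n:>3}: classical ~{classical_steps(n):.1e} steps, "
          f"quantum ~{quantum_steps(n):.1e} steps")
```

A machine running the polynomial-time approach wins long before the inputs get interesting, even if each of its steps is much slower.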
So, we should understand that the demonstrated superiority isn’t at all like air supremacy, for example. The victory is not over an adversary but over particular classical limits. That’s the cue for hardened sceptics to ask: okay, now what? Supposing quantum supremacy is achieved soon, it’s going to make physicists happy. Everyone else, being interested in practical computing tasks, wants to know (a) how scalable quantum computers will be and (b) what quantum computers can actually do. It’s worth dwelling on these points, because most questions about the viability of quantum computing are
asked and
answered along those lines.
First, the question of scalability. Proof-of-concept problems can be difficult to translate to real problems. Quantum computers are error-prone due to background noise (we say that the quantum states suffer decoherence), and there’s a risk that, as we scale up, the accumulated errors will swamp longer calculations. But there’s some hope, due to what’s called the threshold theorem. The theorem says that if we can keep the noise below a certain level, we’ll always be able to correct noise-induced errors faster than they’re created. That means that fault-tolerant quantum computers of almost arbitrary size should be possible. It’s a powerful finding, because the theorem doesn’t specify the source of the noise: it could come from the environment, from inaccuracies in the initial set-up, or from measuring the computer’s state.
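The flavour of the threshold theorem can be captured with a deliberately simple classical stand-in: a three-bit repetition code decoded by majority vote. Real quantum codes are far more sophisticated, so treat this purely as an illustration of the idea that encoding only helps when the per-bit error rate sits below a threshold:

```python
import random

def logical_error_rate(p, trials=100_000):
    """Chance that majority-vote decoding of three noisy copies is wrong."""
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(3))
        if flips >= 2:            # two or more flips defeat the majority vote
            failures += 1
    return failures / trials

for p in (0.01, 0.1, 0.3, 0.6):
    print(f"per-bit error {p:<4}  encoded error ~{logical_error_rate(p):.4f}")

# Below the toy threshold (p = 0.5) the encoded error rate beats p, and
# layering the encoding suppresses errors further; above it, encoding hurts.
```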
The trouble is that
error correction is notoriously difficult to implement. Quantum supremacy as an initial benchmark doesn’t need much of it, but more ambitious targets will. One way to manage the challenge is to think in terms of
modules, each of which is small enough to allow for effective error correction. Another possibility,
championed by Microsoft, lies in 'topological quantum computing', in which 'circuitry' is encoded in the arrangement of exotic matter states. Such a setup is robust against environmental decoherence.
Second, the question of utility. The conventional narrative is that current cryptography will soon be defeated by Shor’s algorithm, a quantum algorithm for factoring numbers into their prime factors (a problem whose computational difficulty lies at the heart of encryption schemes like the widespread RSA system). But it’s not so straightforward: developments in quantum key distribution and post-quantum cryptography might mean that would-be codebreakers armed with quantum computers are defeated before they begin.
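For the curious, the number theory behind Shor’s algorithm can be played out classically on a toy modulus. The quantum speedup comes entirely from finding the period of a^x mod N efficiently; in this sketch the period is brute-forced (the exponentially slow part), and the factors usually fall out of a greatest-common-divisor step, just as in the algorithm’s classical post-processing:

```python
from math import gcd

def find_period(a, N):
    """Brute-force the smallest r with a**r mod N == 1; this is the step
    Shor's algorithm replaces with an efficient quantum subroutine."""
    r, value = 1, a % N
    while value != 1:
        r += 1
        value = (value * a) % N
    return r

def try_factor(N, a):
    """Turn a period into factors of N (a must share no factor with N)."""
    r = find_period(a, N)
    if r % 2 == 1:
        return None                    # odd period: retry with another a
    half = pow(a, r // 2, N)
    if half == N - 1:
        return None                    # trivial case: retry with another a
    return gcd(half - 1, N), gcd(half + 1, N)

print(try_factor(15, 7))               # -> (3, 5)
```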
It’s tempting to end there, and notch up one for the Luddites, but that misses the big picture. The promise lies in some less well-known uses: Grover’s search algorithm, the HHL algorithm for linear equations and quantum simulation. None of these is really an application in its own right; rather, they are processes that underpin calculations across most scientific fields, so speeding them up speeds up the overall task. Grover’s algorithm enables more efficient inversion of functions and faster database queries. The HHL algorithm offers, under the right conditions, an exponential speedup when solving systems of linear equations; that could allow for faster (and therefore more detailed) modelling in everything from weather forecasts to radar extrapolations.
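To make the Grover claim concrete, here’s a small state-vector simulation in plain Python and NumPy (a textbook sketch, not any vendor’s API): the marked item is found with near-certainty after roughly (π/4)√N steps, against an average of N/2 classical guesses.

```python
import numpy as np

n_qubits = 8
N = 2 ** n_qubits
marked = 42                             # index of the item we're looking for

state = np.full(N, 1 / np.sqrt(N))      # start in the uniform superposition
iterations = int(round(np.pi / 4 * np.sqrt(N)))

for _ in range(iterations):
    state[marked] *= -1                 # oracle: flip the marked item's sign
    state = 2 * state.mean() - state    # diffusion: reflect about the mean

print(f"{iterations} Grover iterations on {N} items; "
      f"probability of reading out item {marked}: {state[marked] ** 2:.3f}")
```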
Most machine learning techniques also involve linear equations, so the combination of quantum computing and artificial intelligence might act as a technological force multiplier. Finally, a quantum simulator would let us model atomic-scale interactions efficiently, something that a classical computer can’t do. There’s big interest in areas like medicine, chemistry and engineering, which currently rely on advanced supercomputers to approximate the behaviour of drugs, organic molecules and materials.
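A back-of-the-envelope calculation shows why the classical approach hits a wall: just writing down a general quantum state of n interacting particles takes 2^n complex numbers, so the memory a classical simulator needs doubles with every particle added.

```python
BYTES_PER_AMPLITUDE = 16   # one double-precision complex number

for n in (30, 40, 50, 60):
    gigabytes = (2 ** n) * BYTES_PER_AMPLITUDE / 1e9
    print(f"{n} particles: ~{gigabytes:,.0f} GB just to store the state")
```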
The final piece of the puzzle might be the floodgates that a demonstration of quantum supremacy opens in the investing world. D-Wave is making waves with its claimed quantum computers, but so far there’s no evidence of quantum speedups. IBM is expanding a cloud-based, publicly accessible platform (though it won’t outperform classical computing), and there’s a growing start-up ecosystem looking to take quantum computing out of the lab. Although computer giants have already invested in quantum computing, a successful demonstration of quantum supremacy could attract a new wave of private money and larger-scale public investment.
Properly realised, quantum computing will extend beyond the commonly touted cryptographic uses, enabling faster scientific calculations across the board. One step along the way will be the practical achievement of quantum supremacy, followed by wider commercialisation. As ever, though, the promise is commensurate with the massive labour needed to translate the theory, noise and all, into reality.