Introduction to fault tolerant quantum computation

Right from the conception of quantum computers, it was realised that controlling the build-up of errors in quantum computations would be a key problem. Theoretical progress over the last 25 years has led to the development of techniques ready to meet this challenge.

This introduction is for readers interested in the terminology of quantum error correcting codes (QECC) and fault tolerant quantum computation (FTQC) but who would like to skip the mathematics. (For those interested in some gentle maths, please see the introduction recently published by QCDA [60]).

A naïve understanding of quantum mechanics might make quantum error correction look impossible.

Quantum Mechanics
Superposition – qubits are not just 1 or 0; they exist as a weighted superposition of both. In addition to bit errors, we also have phase errors. Worse, these errors are continuous rather than binary in nature.

Entanglement – qubits don’t stand alone, quantum calculations depend on the subtle correlations between entangled qubits that behave as one system.

Collapse – attempting to measure (and perhaps then correct) the value of a qubit results in collapse of the state to either 1 or 0, along with the similar collapse of the state of any entangled qubits. Without care, this breaks the underlying quantum calculation.

No-cloning theorem – one result of these fundamental quantum properties is that it’s not possible to simply make a ‘copy’ of a quantum state (a function taken for granted in conventional error correction).

Schemes to overcome these problems often borrow from the theory of error correcting codes as used to protect signals from noise in conventional communication channels.

Classical Coding Theory
[n, k, d] codes – we encode k bits using a greater number of bits n. The distance d is the minimum Hamming distance between valid codewords – the smallest number of ‘bit flips’ needed to turn one into another. Errors affecting up to d − 1 bits can be detected; errors affecting fewer than half the distance can be corrected (see the worked example below).

[[n, k, d]] codes – quantum error correcting codes are often denoted via ‘double brackets’.
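
As a purely classical worked example (the well-known Hamming code, which is not discussed in the article itself), the parameters read as follows:

```latex
% The classical Hamming code encodes k = 4 data bits into n = 7 bits with distance d = 3.
[n,k,d] = [7,4,3]: \quad \text{detects up to } d-1 = 2 \text{ bit flips}, \quad
\text{corrects } t = \left\lfloor \tfrac{d-1}{2} \right\rfloor = 1 \text{ bit flip}.
```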

The golden age of quantum error correcting codes

Early work in the field solved the problem of ‘digitising’ seemingly analogue quantum errors. Multiple noisy physical qubits are encoded into a robust logical qubit. Carefully designed syndrome measurements are used to detect and correct errors without revealing or disturbing the logical qubit state. Fault tolerant operations are used to perform quantum gates. In the end it was shown that FTQC is possible provided the fidelity of the underlying hardware surpasses a sufficient threshold.
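
To give a feel for how syndrome measurements locate errors without revealing the protected information, here is a minimal sketch using the classical 3-bit repetition code rather than a true quantum code (the function names are illustrative only):

```python
# Minimal sketch: a 3-bit repetition code protecting one logical bit.
# The two parity checks (the 'syndrome') locate a single flip without
# ever reading out the encoded value itself.

def encode(bit):
    return [bit, bit, bit]                      # 1 logical bit -> 3 physical bits

def syndrome(code):
    return (code[0] ^ code[1], code[1] ^ code[2])   # two parity checks

CORRECTION = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}  # syndrome -> flipped position

def correct(code):
    pos = CORRECTION[syndrome(code)]
    if pos is not None:
        code[pos] ^= 1                          # undo the flip
    return code

noisy = encode(1)
noisy[2] ^= 1                                   # a single bit-flip error
print(correct(noisy))                           # [1, 1, 1] -- error repaired
```

A real quantum code must handle phase flips as well as bit flips, but the principle of measuring only parities (stabilizers) rather than the data itself carries over directly.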

Shor 9-qubit code (1995) – the first quantum error correcting code [[9,1,3]]

Steane 7-qubit code (1996) – another early code [[7,1,3]]

Laflamme/Bennett 5-qubit code (1996) – the smallest possible code able to correct an arbitrary single-qubit error [[5,1,3]]

CSS codes (1996) – a generalised family of codes based on the work of Calderbank, Shor and Steane. Such codes show how certain classes of conventional error correcting codes can be adapted into quantum codes.

Stabilizer codes (1997) – a general framework for defining codes based on ‘stabilizers’: a powerful way to describe and manipulate the required quantum states. A common set of quantum gates, known as the Clifford group, is sufficient to encode and decode stabilizer states.

Threshold Theorem (1997) – provided that the fidelity of the underlying hardware exceeds a threshold, code concatenation or other techniques to increase the code distance can be used to arbitrarily suppress logical errors, allowing even large calculations to succeed.
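
As a rough illustration of the concatenation argument (the exact constants depend on the code and noise model, so this is indicative only): each level of concatenation roughly squares the ratio of the physical error rate p to the threshold p_th, so after k levels the logical error rate behaves approximately as

```latex
p_L \;\approx\; p_{\mathrm{th}} \left( \frac{p}{p_{\mathrm{th}}} \right)^{2^{k}},
\qquad p < p_{\mathrm{th}},
```

which falls doubly exponentially in k, at the cost of a rapidly growing qubit overhead.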

John Preskill called this “the golden age of quantum error correction”. Most QECCs under discussion today are still examples of stabilizer codes.
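
To give a flavour of the stabilizer language (a sketch not taken from the article, revisiting the repetition code above as a quantum bit-flip code), the code is defined by the Pauli operators Z⊗Z⊗I and I⊗Z⊗Z that leave its states unchanged:

```python
# Sketch: stabilizer generators of the 3-qubit bit-flip code as Pauli matrices.
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def kron(*ops):
    """Tensor product of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

stabilizers = [kron(Z, Z, I), kron(I, Z, Z)]    # the stabilizer generators
error = kron(X, I, I)                           # a bit flip on the first qubit

# The generators commute with each other, so both can be measured together.
print(np.allclose(stabilizers[0] @ stabilizers[1], stabilizers[1] @ stabilizers[0]))  # True

# The error anticommutes with the first generator but commutes with the second,
# giving syndrome (-1, +1), which identifies the flipped qubit.
for S in stabilizers:
    print(+1 if np.allclose(S @ error, error @ S) else -1)                            # -1, +1
```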

Topological codes

A key realisation has been that topological properties can make a code easier to implement. Suitable configurations ensure that stabilizer measurements are ‘local’ in character. This avoids the need for long-range qubit interactions that would be difficult to achieve in many proposed architectures.

Toric code – pioneered by Kitaev, this was the forerunner of other topological codes.

Surface code – implemented on a 2D lattice and requiring just nearest neighbour interactions; this ‘2DNN’ property, together with its favourable error threshold, has made this family of codes part of the standard model of FTQC. Many variations on the layout and technique have been developed.

Color code – another family of topological codes, based on a three-colourable lattice. These codes seek to improve on the surface code, for example by allowing more logical gates to be implemented transversally.

Topological qubits – a parallel approach being developed by Microsoft and others seeks to build in the benefits of topological error protection at the hardware level.

An attractive feature of topological codes is that their capacity to detect and correct errors can typically be increased by stepping up the size of the unit cell. However, for larger codes the task of ‘decoding’ error syndrome measurements to determine the corrective action that must be applied becomes non-trivial. The availability of an efficient decoding algorithm is therefore a key practical consideration for the implementation of a code.
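
A back-of-the-envelope sketch (assuming a rotated surface code layout with roughly d² − 1 stabilizer checks; the figures are illustrative, not from the article) shows why naive lookup-table decoding quickly becomes impractical, and why efficient algorithms such as minimum-weight matching matter:

```python
# Sketch: the number of possible syndromes grows exponentially with code size,
# assuming ~d^2 - 1 stabilizer checks for a distance-d rotated surface code.
for d in (3, 5, 11, 25):
    checks = d * d - 1
    print(f"d={d:2d}: {checks:3d} checks, ~{2**checks:.1e} possible syndromes")
```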

Syndrome decoding

Typically, error syndrome information is read out by making measurements on additional ancilla qubits. These are an additional overhead, and their measurement must itself be fault tolerant. Often the assumption is that repeated rounds of measurement will be made, introducing yet another overhead.

In the standard model of FTQC the decoding and interpretation of error syndrome information is performed using a conventional processor. This must be done in real time so that any required corrective action can be taken. This places further constraints on the system: the processor, together with the heat it produces and the control lines it requires, must be engineered into the device environment.

Fault Tolerant Gates

A universal quantum computer is one able to perform, in principle, any quantum calculation. Stably encoding logical qubits is in itself not enough: we must also be able to perform a universal set of quantum gate operations on those qubits in a fault tolerant way.
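
As a concrete example of such a set (standard textbook material rather than anything specific to this article): the Clifford gates H, S and CNOT can be completed to a universal set by adding the non-Clifford T gate.

```latex
H = \tfrac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \quad
S = \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix}, \quad
T = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{pmatrix}, \quad
\mathrm{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}.
```

The two theorems below explain why the Clifford gates alone are not enough, and why the missing non-Clifford gate cannot simply be applied transversally.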

Gottesman-Knill Theorem (1998) – a quantum computer that uses gates only from the Clifford group can be efficiently simulated by a conventional computer (i.e. it would not be able to offer a quantum advantage over conventional computation).

Eastin-Knill Theorem (2009) – a universal set of quantum gates cannot be achieved using only transversal gates (i.e. just the corresponding qubits in each block interact).

Topological codes support a variety of approaches to fault tolerant computation, the theoretical sophistication of which has evolved over time.

Transversal gates – qubit-to-qubit interactions that don’t spread errors uncontrollably.
(Unfortunately this often breaks the attractive 2DNN nature of the code)

Braiding – defects are created in the lattice that can be manipulated as quasiparticles.
(This can restore 2DNN, but typically at the expense of extra qubit overheads)

Lattice surgery – planar code surfaces are split and merged to implement gate operations.
(Currently envisaged as the standard model for large scale surface code devices)

A key drawback of the surface code is that it doesn’t offer an easy way to implement a universal set of quantum gates. In the standard model of FTQC the solution to completing the gate set is a process called magic state injection. Unfortunately this is resource intensive and often considered to be a bottleneck in current architectures (the rough numbers sketched below give a feel for why).

Magic state distillation – the process of preparing high fidelity magic state qubits from lower quality inputs.

Magic state factory – a hypothesized part of a quantum processor optimised for the production of magic states.
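
As a rough illustration of why this is costly (assuming the widely quoted 15-to-1 distillation protocol, whose output error scales as roughly 35p³; the figures are illustrative, not from the article):

```python
# Sketch: iterated 15-to-1 magic state distillation.
# Each round consumes 15 noisy states per output and suppresses the error as ~35 p^3.
p = 1e-2                      # assumed raw magic state error rate
inputs = 1
for round_number in (1, 2):
    p = 35 * p ** 3
    inputs *= 15
    print(f"after round {round_number}: error ~{p:.1e}, {inputs} raw states per output")
```

Even two rounds consume hundreds of raw states per distilled output, which is why magic state factories dominate many resource estimates.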

LDPC Codes

Most discussions of QECCs assume that a single logical qubit is encoded as a block of physical qubits. Unfortunately, with current approaches the ratio of physical to logical qubits increases as we go to larger circuits (by a polylog factor). Can we do better by encoding multiple logical qubits in a single block?

In conventional information theory low density parity check (LDPC) codes are used to allow channel capacity to approach its limit. Work by Gottesman (2013) points to the potential existence of new families of quantum LDPC codes.

Quantum LDPC Codes – in principle the required ratio of physical to logical qubits can be made constant, with the code rate approaching the channel capacity, if we encode not just individual qubits but instead large blocks of qubits together (a rough comparison with the surface code is sketched below).
(Though such codes probably require long-range qubit interactions, place tougher demands on the hardware threshold and pose a difficult decoding challenge)
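
A rough comparison (using the common surface code heuristic p_L ≈ 0.1 (p/p_th)^((d+1)/2) with assumed rates p = 10⁻³ and p_th = 10⁻²; none of these figures come from the article) illustrates how the surface code overhead grows with the target logical error rate, whereas a constant-rate quantum LDPC code would keep the ratio fixed:

```python
# Sketch: surface code overhead per logical qubit for various target error rates,
# using the heuristic p_L ~ 0.1 * (p/p_th)^((d+1)/2) and ~2d^2 qubits per logical qubit.
p, p_th = 1e-3, 1e-2

def physical_per_logical(target_pL):
    d = 3
    while 0.1 * (p / p_th) ** ((d + 1) / 2) > target_pL:
        d += 2                                  # code distance is conventionally odd
    return d, 2 * d * d

for target in (1e-6, 1e-9, 1e-12):
    d, n = physical_per_logical(target)
    print(f"target p_L={target:.0e}: distance d={d}, ~{n} physical qubits per logical qubit")
```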

These and other new approaches hold out the promise of an exciting future for the field.

David Shaw

About the Author

David Shaw has worked extensively in consulting, market analysis & advisory businesses across a wide range of sectors including Technology, Healthcare, Energy and Financial Services. He has held a number of senior executive roles in public and private companies. He has a degree in Physics from Oxford University, a PhD in Particle Physics from UCL and is a member of the Institute of Physics.
