[PYTHON] Basics of Quantum Information Theory: Fault Tolerant Quantum Computation

\def\bra#1{\mathinner{\left\langle{#1}\right|}} \def\ket#1{\mathinner{\left|{#1}\right\rangle}} \def\braket#1#2{\mathinner{\left\langle{#1}\middle|#2\right\rangle}}

Introduction

Last time, we covered error correction codes. With those, you might hope that even if noise hits the quantum computer the errors can be corrected and the correct result obtained (at least in theory!), but it is not that simple, and that is the topic of this article. Noise can arise anywhere in the quantum computing process, and each place it can arise needs its own special treatment. Only by putting all of those treatments together do we arrive at fault-tolerant quantum computation, that is, quantum computation that can tolerate errors.

Let me first outline what the sections below explain. In "Theoretical explanation", after defining what it means to be fault-tolerant, we introduce concrete fault-tolerant constructions for quantum gates, measurement, and initial state preparation, and then explain concatenated codes and the threshold theorem. In "Operation check", we take a very simple example of fault-tolerant quantum computation and verify its behavior with the quantum computation simulator qlazy.

The following documents were used as references.

  1. Nielsen, Chuang "Quantum Computer and Quantum Communication (3)" Ohmsha (2005)

Theoretical explanation

Definition of fault tolerant

Error correction codes deal with the noise that arises while a quantum state is stored or transmitted. Since an actual quantum computation consists of a series of processes such as initial state preparation, gate operations, and measurement, an error correction code alone is not enough. For example, how should a gate operation be performed? Couldn't we simply decode the encoded state, apply the gate, and immediately re-encode? That idea does not work: noise can creep in during the decoding, the gate operation, and the re-encoding. A better way is to perform the gate operation directly on the encoded state, but merely arranging the gate so that decoding is unnecessary is still not enough. The gate must also be arranged so that noise arising somewhere does not propagate and multiply. For example, the controlled-NOT gate (CNOT gate) is useful in almost every situation, but if $U_{1,2}$ denotes a CNOT with qubit 1 as control and qubit 2 as target, then $U_{1,2} X_{1} = X_{1} X_{2} U_{1,2}$ holds, so bit-flip noise arising on the control input propagates to both the control and the target output. Likewise, $U_{1,2} Z_{2} = Z_{1} Z_{2} U_{1,2}$ holds, so phase-flip noise arising on the target input propagates to both the control and the target output. Drawn as diagrams, the propagation looks as follows.

<Bit flip on the control side [X] propagates to both control and target>

--[X]--*----     ----*--[X]--
       |      =>     |
-------X----     ----X--[X]--


<Phase flip on the target side [Z] propagates to both control and target>

-------*----     ----*--[Z]--
       |      =>     |
--[Z]--X----     ----X--[Z]--

Having such a part in a gate operation on encoded states is very bad. An error can be corrected if it occurs only once per code block, but with the configuration above one error turns into two, which can no longer be corrected. The circuit must therefore be arranged so that this does not happen. Furthermore, noise can also occur within the error correction procedure itself, so countermeasures are needed there as well. On top of that, noise can creep into the initial state preparation and into the measurement process.
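
For readers who want to check the two propagation relations numerically, here is a small numpy sketch (an illustrative addition, not part of the original text; it assumes numpy and orders the qubits so that the control is the first tensor factor):

import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
# CNOT with qubit 1 as control (first tensor factor) and qubit 2 as target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# bit flip on the control propagates to both control and target
assert np.allclose(CNOT @ np.kron(X, I), np.kron(X, X) @ CNOT)
# phase flip on the target propagates to both control and target
assert np.allclose(CNOT @ np.kron(I, Z), np.kron(Z, Z) @ CNOT)
print("propagation relations verified")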

Thinking about it this way, fault tolerance may seem impossible, because CNOT propagates noise. Can a CNOT on encoded states be realized without using CNOT? Can error correction be performed without CNOT? But don't worry: it does not mean that CNOT must never be used. There is no problem as long as the propagation satisfies a certain condition. That condition, which is the definition of fault tolerance, is: "when one error occurs somewhere in the system executing the quantum computation, at most one error is produced in each code block." Since one error per code block can be corrected, propagation within the same code block where the error occurred is not allowed (that block would then contain two errors), but propagation to other blocks is allowed (one error per block can still be corrected). If the quantum computing system is arranged in this way, it can be called fault-tolerant.

Unfortunately, errors can occur in more than one place at the same time [^ 1], and in that unlucky case propagation can put two errors into the same code block, so the computation result will be wrong. However, if the probability of an error at any one place is $p$, the probability of this happening is at most $O(p^{2})$. This is clearly an improvement: if nothing were done about errors, the probability of a wrong result would be $O(p)$, whereas with the fault-tolerant construction it becomes $O(p^{2})$. In other words, a computation error rate of at most $O(p^{2})$ can be regarded as one of the characteristics of fault-tolerant quantum computation.

[^ 1]: "At the same time" means that another noise is generated before error correction is performed for one noise (I think).

Structure of fault-tolerant quantum computation

Now let's look at how gate operations, measurement, and initial state preparation can each be made fault-tolerant in concrete terms. Since it is easiest to think in terms of the 7-qubit Steane code, all of the discussion below assumes the Steane code [^ 2].

[^ 2]: In the case of the 5-qubit code, making the gate operations fault-tolerant is quite difficult. Since both $X$ and $Z$ can apparently be made transversal (a notion explained later), one might think it should not be so hard, but there must be a wall somewhere that I have not yet understood, or perhaps I am making some terrible mistake. Thinking about fault tolerance with the 5-qubit code is homework for the future.

Gate operation

Pauli gates: X, Y, Z

Let's start with Pauli $Z$. The logical $Z$ of the Steane code must commute with all six generators, and can be defined as

\bar{Z} = Z_{1} Z_{2} Z_{3} Z_{4} Z_{5} Z_{6} Z_{7}  \tag{1}

The generators are

generator   operator
g_1         I I I X X X X
g_2         I X X I I X X
g_3         X I X I X I X
g_4         I I I Z Z Z Z
g_5         I Z Z I I Z Z
g_6         Z I Z I Z I Z

so you can check that the definition in equation (1) commutes with every generator and is indeed a good choice [^ 3].

[^ 3]: The same operator may be written as $X_{1} Z_{2} X_{3}$, $X \otimes Z \otimes X$, or $XZX$; the substance is the same, so don't be confused.
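
As an illustrative aside (an addition, not in the original), the claim that $Z^{\otimes 7}$ commutes with every generator can also be checked numerically with numpy by building the $128 \times 128$ matrices:

import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
P = {'I': I, 'X': X, 'Z': Z}

def op(pauli_string):
    # tensor product of single-qubit Paulis, e.g. 'IIIXXXX'
    return reduce(np.kron, [P[c] for c in pauli_string])

generators = ['IIIXXXX', 'IXXIIXX', 'XIXIXIX',
              'IIIZZZZ', 'IZZIIZZ', 'ZIZIZIZ']
logical_z = op('ZZZZZZZ')

for g in generators:
    G = op(g)
    assert np.allclose(G @ logical_z, logical_z @ G)
print("logical Z commutes with all six generators")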

Let $\ket{0_L}$ be the stabilizer state uniquely determined by the seven operators consisting of the six generators plus $\bar{Z}$, and let $\ket{1_L}$ be the stabilizer state uniquely determined when $\bar{Z}$ is replaced by $-\bar{Z}$. Then

\bar{Z} \ket{0_L} = \ket{0_L}, \space \bar{Z} \ket{1_L} = - \ket{1_L}  \tag{2}

holds, and if $\bar{Z}$ is expressed in the logical basis $\{\ket{0_L}, \ket{1_L}\}$, it becomes

\bar{Z} = 
\begin{pmatrix}
1 & 0\\
0 & -1
\end{pmatrix}  \tag{3}

which should look familiar. Up to this point this is just a review of what we did when studying error correction codes. What I want to draw attention to here is the form of equation (1). Written as a quantum circuit, it is

--Z--
--Z--
--Z--
--Z--
--Z--
--Z--
--Z--

As you can see, the $Z$ operation is applied qubit by qubit, so noise on any one qubit does not affect the other qubits in the code block. The property that an encoded gate can be realized by qubit-by-qubit gate operations is called "transversality", and a transversal gate is immediately fault-tolerant.

Next is Pauli $ X $.

\bar{X} \bar{Z} \bar{X} = -\bar{Z}  \tag{4}

All we have to do is choose $\bar{X}$ so that this holds. For example,

\bar{X} = X_{1} X_{2} X_{3} X_{4} X_{5} X_{6} X_{7}  \tag{5}

works. Then

\bar{X} \ket{0_L} = \ket{1_L}, \space \bar{X} \ket{1_L} = \ket{0_L}  \tag{6}

holds [^ 4], so the matrix representation in the logical basis is

[^ 4]: Since $\bar{X} \ket{0_L} = \bar{X} \bar{Z} \ket{0_L} = -\bar{Z} \bar{X} \ket{0_L}$, the state $\bar{X} \ket{0_L}$ is the eigenstate of $\bar{Z}$ with eigenvalue $-1$, that is, $\bar{X} \ket{0_L} = \ket{1_L}$. Applying $\bar{X}$ to both sides gives $\bar{X} \ket{1_L} = \ket{0_L}$. Strictly speaking one could define $\bar{X} \ket{0_L} = e^{i\alpha} \ket{1_L}$, but taking $\alpha = 0$ is, I think, the usual convention.

\bar{X} =
\begin{pmatrix}
0 & 1\\
1 & 0
\end{pmatrix}  \tag{7}

As you can see from the form of equation (5), this is also transversal, so a fault-tolerant Pauli $X$ has been realized.

For Pauli $ Y $,

\bar{Y} = i\bar{X}\bar{Z} = -Y_{1} Y_{2} Y_{3} Y_{4} Y_{5} Y_{6} Y_{7}  \tag{8}

if we define it this way, everything fits together nicely: $\bar{Y}$ anticommutes with both $\bar{X}$ and $\bar{Z}$, and $\bar{Y}$ itself is Hermitian. Since

\bar{Y}\ket{0_L} = i\ket{1_L}, \space \bar{Y}\ket{1_L} = -i\ket{0_L} \tag{9}

So the matrix representation is

\bar{Y} =
\begin{pmatrix}
0 & -i\\
i & 0
\end{pmatrix}  \tag{10}

Since equation (8) is transversal (the overall minus sign is just a global phase), a fault-tolerant Pauli $Y$ has also been realized.

Hadamard Gate: H

Based on $\bar{X}, \bar{Y}, \bar{Z}$, let's now construct a fault-tolerant Hadamard gate for the encoded state. We need to choose $\bar{H}$ satisfying

\bar{H} \bar{X} \bar{H} = \bar{Z}, \space \bar{H} \bar{Z} \bar{H} = \bar{X}  \tag{11}

and if we take

\bar{H} = H_{1} H_{2} H_{3} H_{4} H_{5} H_{6} H_{7}  \tag{12}

then equation (11) is satisfied. Moreover, using $\bar{Z}\ket{0_L} = \ket{0_L}$ and $\bar{Z}\ket{1_L} = -\ket{1_L}$ together with equation (11),

\begin{align}
\bar{H}\ket{0_L} &= \bar{H}\bar{Z}\ket{0_L} = \bar{X}\bar{H}\ket{0_L} \\
\bar{H}\ket{1_L} &= -\bar{H}\bar{Z}\ket{1_L} = -\bar{X}\bar{H}\ket{1_L} \tag{13}
\end{align}

so $\bar{H}\ket{0_L}$ is the eigenstate of $\bar{X}$ with eigenvalue $+1$ and $\bar{H}\ket{1_L}$ is the eigenstate with eigenvalue $-1$. Therefore,

\begin{align}
\bar{H}\ket{0_L} &= \frac{1}{\sqrt{2}} (\ket{0_L} + \ket{1_L}) \\
\bar{H}\ket{1_L} &= \frac{1}{\sqrt{2}} (\ket{0_L} - \ket{1_L}) \tag{14}
\end{align}

The matrix representation can be written as

\bar{H} = \frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & 1\\
1 & -1
\end{pmatrix}  \tag{15}

As you can see from equation (12), it is transversal, so a fault-tolerant Hadamard gate has been realized.

Phase shift gate: S
\bar{S}\bar{X}\bar{S}^{\dagger} = \bar{Y}, \space \bar{S}\bar{Z}\bar{S}^{\dagger} = \bar{Z} \tag{16}

We need to choose $\bar{S}$ satisfying these relations. One might think for a moment that

\bar{S} = S_{1} S_{2} S_{3} S_{4} S_{5} S_{6} S_{7}  \tag{17}

would work, but that choice does not satisfy equation (16). Taking instead

\bar{S} = (Z_{1}S_{1}) (Z_{2}S_{2}) (Z_{3}S_{3}) (Z_{4}S_{4}) (Z_{5}S_{5}) (Z_{6}S_{6}) (Z_{7}S_{7})  \tag{18}

does the job (I think you can check this easily). The matrix representation, consistent with equation (16), is

\bar{S} =
\begin{pmatrix}
1 & 0\\
0 & i
\end{pmatrix}  \tag{19}

(this too is easy to check). As you can see from equation (18), it is transversal, so a fault-tolerant phase shift gate has been realized.

I doubt anyone would get this wrong, but just in case, the circuit diagram is

--S-Z--
--S-Z--
--S-Z--
--S-Z--
--S-Z--
--S-Z--
--S-Z--

Note that the order of operators in a formula is the reverse of their order in a circuit diagram, as is well known; it is easy to implement this incorrectly by accident (I did, with some sweat).
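
Collecting the transversal constructions so far, here is a minimal qlazy-style sketch (an illustrative addition; `qs` is assumed to be a QState and `cod` the list of the 7 qubit indices of one encoded block, following the conventions of the program in the "Operation check" section):

# transversal logical gates on one Steane block (sketch)
def logical_x(qs, cod):
    [qs.x(i) for i in cod]       # X-bar: X on every qubit, eq. (5)

def logical_z(qs, cod):
    [qs.z(i) for i in cod]       # Z-bar: Z on every qubit, eq. (1)

def logical_h(qs, cod):
    [qs.h(i) for i in cod]       # H-bar: H on every qubit, eq. (12)

def logical_s(qs, cod):
    [qs.s(i).z(i) for i in cod]  # S-bar: S then Z on every qubit, eq. (18)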

Control NOT gate: CNOT

Now for the controlled-NOT gate in question. Can it be arranged so that noise does not propagate within the same code block? Write $\bar{U}_{A,B}$ for the controlled-NOT operator on encoded states that takes code block $A$ as the control and code block $B$ as the target. We look for a controlled-NOT operation satisfying

\begin{align}
\bar{U}_{A,B} \bar{X_A} \bar{U}_{A,B} &= \bar{X_A} \bar{X_B} \\
\bar{U}_{A,B} \bar{X_B} \bar{U}_{A,B} &= \bar{X_B} \\
\bar{U}_{A,B} \bar{Z_A} \bar{U}_{A,B} &= \bar{Z_A} \\
\bar{U}_{A,B} \bar{Z_B} \bar{U}_{A,B} &= \bar{Z_A}\bar{Z_B} \tag{20}
\end{align}

Label the 7 qubits in code block A as $A_{1}, A_{2}, \ldots, A_{7}$ and those in code block B as $B_{1}, B_{2}, \ldots, B_{7}$, and write $U_{i,j}$ for an ordinary controlled-NOT with qubit $i$ as control and qubit $j$ as target. Then the $\bar{U}_{A,B}$ that satisfies equation (20) is

\bar{U}_{A,B} = U_{A_1,B_1} U_{A_2,B_2} U_{A_3,B_3} U_{A_4,B_4} U_{A_5,B_5} U_{A_6,B_6} U_{A_7,B_7}  \tag{21}

This is easy to verify. The first relation in equation (20) follows from

\begin{align}
\bar{U}_{A,B} \bar{X_A} \bar{U}_{A,B} &= (U_{A_1,B_1} \cdots U_{A_7,B_7}) (X_{A_1} \cdots X_{A_7}) (U_{A_1,B_1} \cdots U_{A_7,B_7}) \\
&= (U_{A_1,B_1} X_{A_1} U_{A_1,B_1}) \cdots (U_{A_7,B_7} X_{A_7} U_{A_7,B_7}) \\
&= (X_{A_1} X_{B_1}) \cdots (X_{A_7} X_{B_7}) \\
&= (X_{A_1} \cdots X_{A_7}) (X_{B_1} \cdots X_{B_7}) = \bar{X_A} \bar{X_B} \tag{22}
\end{align}

so equation (21) reproduces it; the second, third, and fourth relations can be checked in the same way. That took a little while, but the final form is very simple. Written as a circuit diagram,

           A1 --*--------------------
[block A]  A2 --|--*-----------------
           A3 --|--|--*--------------
           A4 --|--|--|--*-----------
           A5 --|--|--|--|--*--------
           A6 --|--|--|--|--|--*-----
           A7 --|--|--|--|--|--|--*--
                |  |  |  |  |  |  |
           B1 --X--|--|--|--|--|--|--
[block B]  B2 -----X--|--|--|--|--|--
           B3 --------X--|--|--|--|--
           B4 -----------X--|--|--|--
           B5 --------------X--|--|--
           B6 -----------------X--|--
           B7 --------------------X--

This is a beautiful shape, and it means a fault-tolerant controlled-NOT has been realized. As you can see, bit-flip noise arising in code block A propagates to code block B, but that is fine because it does not propagate within the same block. Phase-flip noise arising in code block B likewise propagates to code block A, but again not within the same block, so it is also fine (each block can still be error-corrected).
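
In qlazy-style code the transversal CNOT is equally simple (an illustrative sketch; `A` and `B` are the index lists of the control and target blocks, as in the faulttolerant_t function shown later):

# transversal CNOT between two Steane blocks (sketch), eq. (21)
def logical_cnot(qs, A, B):
    [qs.cx(A[i], B[i]) for i in range(7)]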

Phase shift gate: T

This completes the Clifford gates. To make quantum computation universal, one more phase shift gate, the $T$ gate, is needed. The $T$ gate is defined by

T =
\begin{pmatrix}
1 & 0\\
0 & e^{i\pi /4}
\end{pmatrix}  \tag{23}

so, given an encoded input $\ket{\psi} = a \ket{0_L} + b \ket{1_L}$, we want a fault-tolerant operation that outputs $\ket{\psi^{\prime}} = a \ket{0_L} + b e^{i\pi /4} \ket{1_L}$. This is not so easy; the construction is a slightly confusing but clever trick [^ 5], so let me give the answer first. The circuit diagram is

[^ 5]: Exercises 10.65 and 10.66 of Nielsen, Chuang describe how to derive this circuit. Working through them gives a real "aha!" moment. Exercise 10.68 describes how to construct a Toffoli gate fault-tolerantly in a similar way.

               |C>
                v
|0L>  --/--H--T----*----SX-- T|psi>
                   |    |
|psi> --/----------X----M

You might ask what it means for the first code block to contain the very $T$ gate we are trying to define. Think of the Hadamard and $T$ gate on the first code block as standing for some operation that spits out a state equal to what would be computed this way. In the figure it is written as $\ket{C}$, that is,

\ket{C} = \frac{1}{\sqrt{2}} (\ket{0_L} + e^{i\pi /4} \ket{1_L})  \tag{24}

I will explain how to create this state later; for now, assume it has been prepared and follow the rest of the circuit. First, apply the controlled-NOT. This is the fault-tolerant controlled-NOT just constructed; since it acts on the logical basis exactly like an ordinary controlled-NOT, it can be written as

\ket{0_L} \bra{0_L} \otimes I + \ket{1_L} \bra{1_L} \otimes \bar{X}  \tag{25}

Therefore, its effect on $\ket{C}\ket{\psi}$ is

\begin{align}
\ket{C}\ket{\psi} &\rightarrow (\ket{0_L} \bra{0_L} \otimes I + \ket{1_L} \bra{1_L} \otimes \bar{X}) \ket{C}\ket{\psi} \\
&= \frac{1}{\sqrt{2}} (\ket{0_L} \bra{0_L} \otimes I + \ket{1_L} \bra{1_L} \otimes \bar{X}) (\ket{0_L} + e^{i\pi /4} \ket{1_L}) (a\ket{0_L} + b\ket{1_L}) \\
&= \frac{1}{\sqrt{2}} (\ket{0_L} (a\ket{0_L}+b\ket{1_L}) + e^{i\pi /4} \ket{1_L} (a\ket{1_L}+b\ket{0_L})) \\
&= \frac{1}{\sqrt{2}} ((a\ket{0_L}+b e^{i\pi /4} \ket{1_L}) \ket{0_L} + (b\ket{0_L}+a e^{i\pi /4} \ket{1_L}) \ket{1_L}) \tag{26}
\end{align}

If the second code block is now measured and the value is $+1$ (i.e. $\ket{0_L}$), nothing further is done, and the output in the first code block is

\ket{\psi^{\prime}} = a\ket{0_L}+b e^{i\pi /4} \ket{1_L} = \bar{T} \ket{\psi}  \tag{27}

If the measured value is $-1$ (i.e. $\ket{1_L}$), $\bar{S}\bar{X}$ is applied instead, and the state of equation (27), up to a global phase, is obtained as the output of the first code block.
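
Before moving on to the encoded version, the logic of this circuit can be sanity-checked on a single unencoded qubit. The following numpy sketch (an illustrative addition, with arbitrarily chosen amplitudes a and b) prepares $\ket{C}$, applies the CNOT, and confirms that both measurement branches give $T\ket{\psi}$ up to a global phase:

import numpy as np

a, b = 0.6, 0.8                                # arbitrary input amplitudes (a^2 + b^2 = 1)
psi = np.array([a, b])
C = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)

S = np.diag([1, 1j])
X = np.array([[0, 1], [1, 0]])
T = np.diag([1, np.exp(1j * np.pi / 4)])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

state = CNOT @ np.kron(C, psi)                 # two-qubit state after the CNOT

# outcome 0 on the second qubit: keep the |00>, |10> amplitudes
out0 = np.array([state[0], state[2]])
out0 /= np.linalg.norm(out0)
# outcome 1 on the second qubit: keep the |01>, |11> amplitudes and apply SX
out1 = S @ X @ np.array([state[1], state[3]])
out1 /= np.linalg.norm(out1)

target = T @ psi
for out in (out0, out1):
    phase = out[0] / target[0]                 # global phase difference
    assert np.allclose(out, phase * target)
print("both measurement branches give T|psi> up to a global phase")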

Now, as promised, how to produce a state like equation (24). In the stabilizer picture, the transformation from $\ket{0_L}$ to $\bar{T}\bar{H}\ket{0_L}$ is expressed by replacing the stabilizer $\bar{Z}$ with $\bar{T}\bar{H}\bar{Z}\bar{H}\bar{T}^{\dagger}$, and this operator

\begin{align}
\bar{T}\bar{H}\bar{Z}\bar{H}\bar{T}^{\dagger} &= \bar{T}\bar{X}\bar{T}^{\dagger}  \\
&=
\begin{pmatrix}
1 & 0\\
0 & e^{i\pi /4}
\end{pmatrix}
\begin{pmatrix}
0 & 1\\
1 & 0
\end{pmatrix}
\begin{pmatrix}
1 & 0\\
0 & e^{-i\pi /4}
\end{pmatrix} \\
&= e^{-i\pi /4} \bar{S} \bar{X}  \tag{28}
\end{align}

can be rewritten as above. Therefore the state $\ket{C}$ is the eigenstate of the operator $e^{-i\pi /4}\bar{S}\bar{X}$ with eigenvalue $+1$. So if we measure $e^{-i\pi /4}\bar{S}\bar{X}$ indirectly and obtain the value $+1$, the resulting state can be used as $\ket{C}$; if the value is $-1$, apply $\bar{Z}$, which flips it into the $+1$ eigenstate [^ 6].

[^ 6]: $\bar{Z}$ anticommutes with $e^{-i\pi /4}\bar{S}\bar{X}$, so, using $\bar{Z}\ket{0_L} = \ket{0_L}$, we get $\bar{Z}\,\frac{1}{2}(I - e^{-i\pi /4}\bar{S}\bar{X})\ket{0_L} = \frac{1}{2}(I + e^{-i\pi /4}\bar{S}\bar{X})\ket{0_L}$, i.e. the eigenvalue is flipped.

There is one caveat here. I said that $e^{-i\pi /4}\bar{S}\bar{X}$ is measured indirectly, but a little ingenuity is needed. Schematically, the circuit is

|0>  -----H--*--*----*-----------H-- M
             |  |    |
|0L> --/-----X--S--exp(-i pi/4)-----

However, the controlled $e^{-i\pi /4} I$ gate is a problem. It satisfies

|0>  ----*-----------    ---T+--
         |             = 
|0L> --exp(-i pi/4)--    -------

and using this identity the circuit can be transformed into

|0>  -----H--*--*--T+-H-- M
             |  |
|0L> --/-----X--S--------

The $T^{\dagger}$ gate appearing here is just an ordinary (unencoded) $T^{\dagger}$ gate acting on the single auxiliary qubit, so there is no circularity.

So we now have a $T$ gate for encoded states. The $CNOT$, $S$, and $X$ gates used in it are all transversal, so the whole construction is fault-tolerant, except that we have not yet considered the measurements. Only when the measurement part is also made fault-tolerant can we say that a truly fault-tolerant $T$ gate has been realized.

So, let's talk about fault-tolerant measurements next.

Measurement

An indirect measurement of an operator $ABC$ acting on the encoded state can be drawn as

|0> --H--*--*--*--H-- M
         |  |  |
    -----A--|--|-----
    --------B--|-----
    -----------C-----

(since the Steane code is assumed, the code block actually needs 7 lines, but for simplicity only 3 are drawn). From the viewpoint of fault tolerance, however, this is a very bad circuit: if the auxiliary qubit suffers noise, the noise can propagate to every qubit of the code block. To avoid this, the following configuration is known to work.

|0> --H--*----*----------*--H-- M
    -----X----|--*-------X-----
    -----X----|--|--*----X-----
              |  |  |
    ----------A--|--|----------
    -------------B--|----------
    ----------------C----------

The auxiliary register is expanded to 3 qubits, prepared by the circuit

|0> --H--*--
    -----X--
    -----X--

This creates the so-called cat state $(\ket{000} + \ket{111})/\sqrt{2}$. The indirect measurement is carried out as shown above, and even if noise occurs on one of the auxiliary qubits, it propagates to only one qubit of the code block.
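
A minimal qlazy-style sketch of this cat-state indirect measurement (an illustrative addition; for concreteness it takes $A = B = C = Z$ and a 3-qubit "code" register prepared in $\ket{000}$, and uses only the qlazy calls that appear in the program later in this article):

from qlazypy import QState

qs = QState(6)
anc, cod = [0, 1, 2], [3, 4, 5]

qs.h(anc[0])
[qs.cx(anc[0], anc[i]) for i in (1, 2)]      # make the cat state (|000>+|111>)/sqrt(2)
[qs.cz(anc[i], cod[i]) for i in range(3)]    # controlled-A, -B, -C (here all taken to be Z)
[qs.cx(anc[0], anc[i]) for i in (1, 2)]      # undo the cat state
qs.h(anc[0])
mval = qs.m(qid=[anc[0]]).last               # '0' -> eigenvalue +1, '1' -> eigenvalue -1
print(mval)
qs.free()

For the all-zero code register used here, this always prints '0' (the $+1$ eigenvalue of $Z \otimes Z \otimes Z$).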

This is fine, but noise can also occur while the cat state itself is being created, and if noise occurs on any of the 3 auxiliary qubits there, it easily spreads within that register. What to do? In such a case, after creating the cat state we check whether it was created correctly, and if it was not, we discard that state and try again. Concretely, a parity check circuit is inserted immediately after the cat state is created.

Parity check
      |0> --X--X-- M
            |  |
|0> --H--*--*--|--*----------*--H-- M
    -----X-----*--|--*-------X-----
    -----X--------|--|--*----X-----
                  |  |  |
    --------------A--|--|----------
    -----------------B--|----------
    --------------------C----------

It looks like this. For simplicity only one parity check is drawn, but since every pair of qubits should be checked, the parity checks involving the remaining pairs are also necessary. The creation of the cat state is repeated until every parity comes out even.

That's a little involved, but now it's fault-tolerant! I'd like to say so, but there is one more problem. We must also consider phase-flip noise on the auxiliary qubit added for the parity check. In that case it propagates to two qubits of the 3-qubit cat register used for the indirect measurement, which cannot be corrected, and the rightmost measurement then outputs the wrong result. Since this too cannot be helped, the measurement is repeated three times and a majority vote is taken. The error rate can then be suppressed to $O(p^2)$ (where $p$ is the probability of a noise event).
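
The majority vote itself is trivial to write down; a tiny sketch (an addition for illustration):

from collections import Counter

def majority(outcomes):
    # return the most frequent outcome, e.g. majority(['0', '1', '0']) -> '0'
    return Counter(outcomes).most_common(1)[0][0]

print(majority(['0', '1', '0']))   # -> '0'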

We have now constructed a fault-tolerant measurement. Measurement is needed not only for reading out the final result, but also for the syndrome measurements used in error correction, for the $T$ gate construction explained above, and for the initial state preparation described next.

Initial state creation

So, let's look at how to make initial state preparation fault-tolerant. Last time we described in detail how to create the initial state $\ket{0_L}$ (the logical basis state) of the 5-qubit code. We simply redo the same thing with the Steane code, replacing the indirect measurements with the fault-tolerant measurement just described.

The six generators of the Steane code and the logical $Z$ operator were presented earlier. To create the initial state, these seven operators are measured in turn, projecting step by step onto their $+1$ eigenspaces; whenever a measurement instead projects onto the $-1$ eigenspace, we apply an operator that anticommutes with the measured generator and commutes with all the others. For the Steane code the following seven operators can be used [^ 7]. I list them here because they will be needed later when checking the operation.

[^ 7]: There should be other choices as well; these are simply the ones I came up with as an example.

            operator
g_1'        Z Z Z Z I I I
g_2'        Z Z I I Z Z I
g_3'        I I Z I Z I Z
g_4'        X X X X I I I
g_5'        X X I I X X I
g_6'        I I X I X I X
g_7'        X X X X X X X

We now know how to make every process required for quantum computation (initial state preparation, gate operations, measurement) fault-tolerant.

The next question is how small the error rate of the computation can be made. If the probability that any individual component of the system fails is $p$, the error rate of the result can be suppressed to at most $O(p^2)$; but how small can it actually be made? In fact, if $p$ is below a certain threshold, it can be made as small as you like. That is explained below.

Concatenation code and threshold theorem

The components making up a quantum computing system include the qubits, the control devices that drive them, the measurement devices that read out the quantum state, and so on, and each of them malfunctions or is disturbed by the outside world with some probability. Very simply, let's assume this probability is $p$ on average for every component. As explained in the previous sections, at most one error per code block can be corrected; the problem is when two errors occur at the same time, or when propagation puts two errors into one particular code block. That probability is proportional to $p^2$, say $cp^2$, where the proportionality constant $c$ counts the number of ways two errors can end up in a particular code block. As a very rough estimate based on a simple example circuit, Nielsen, Chuang quote $c \approx 10^{4}$.

To reduce this error rate further, consider encoding once again each qubit that makes up the code blocks and the auxiliary blocks used for error correction. One qubit of the original system became 7 qubits plus $\alpha$ under the Steane code; each of those now becomes another 7 plus $\alpha$. The system gets larger, but in exchange the error rate becomes $c(cp^2)^2 = c^{3}p^{4}$. Encoding every qubit with the Steane code yet again multiplies the size by another factor of 7 plus $\alpha$, and the error rate becomes $c(c(cp^2)^2)^2 = c^{7}p^{8}$. Repeating this operation $k$ times, if one level of encoding multiplies the system size by $d$, the required number of qubits scales as $d^k$ and the error rate becomes $(cp)^{2^k}/c$. Building up the encoding hierarchically in this way is called a "concatenated code".
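
To get a feel for how fast $(cp)^{2^k}/c$ shrinks with the concatenation level $k$, here is a tiny illustrative calculation (an addition; the physical error probability $p = 10^{-5}$ is made up, and $c \approx 10^{4}$ is the rough estimate quoted above):

c = 1.0e4    # rough estimate from Nielsen, Chuang
p = 1.0e-5   # assumed (made-up) component error probability, below 1/c
for k in range(5):
    print(k, (c * p) ** (2 ** k) / c)

With these numbers the logical error rate drops from $10^{-5}$ at $k = 0$ to about $10^{-20}$ at $k = 4$.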

Suppose the quantum computing system we have in mind contains $p(n)$ gates, where $n$ represents the size of the problem and $p(n)$ is a polynomial in $n$. If the error rate we ultimately want to achieve is $\epsilon$, the relation

\frac{(cp)^{2^k}}{c} \leq \frac{\epsilon}{p(n)}  \tag{29}

must hold. As you can see, if $p$ is below the threshold $p_{th} = 1/c$, the error rate can be made as small as desired by taking $k$ large enough. With the rough estimate $c \approx 10^{4}$, this gives $p_{th} \approx 10^{-4} = 0.01\%$.

So how large is the system size (number of qubits) $d^k$ needed to achieve the error rate $\epsilon$? It can be found by replacing the inequality in equation (29) with an approximate equality and rearranging:

d^k \approx \left( \frac{\log(p(n)/c\epsilon)}{\log(1/pc)} \right)^{\log_{2} d} = O(poly(\log(p(n)/\epsilon)))  \tag{30}

and the number of gates it contains can be estimated as

O(poly(\log(p(n)/\epsilon))\, p(n))  \tag{31}

In other words, a quantum circuit containing $p(n)$ gates can be simulated on hardware whose components fail with probability at most $p$ using

O(poly(\log(p(n)/\epsilon))\, p(n))  \tag{32}

gates, achieving an overall error rate of $\epsilon$, provided that $p$ is below a certain threshold. This is the "threshold theorem".

Operation check

Now that the theory is understood, let's take a very simple example of fault-tolerant quantum computation and check its behavior with the quantum computation simulator qlazy. The way the $T$ gate is made fault-tolerant, explained above, has a slightly tricky circuit, so let's confirm that it really works. Applying the $T$ gate to the $\ket{0_L}$ state just leaves it as $\ket{0_L}$, which is not interesting, so we sandwich it between Hadamards, performing $\bar{H}\bar{T}\bar{H}\ket{0_L}$, and finally measure in the $\bar{Z}$ basis. All gate operations and measurements are constructed fault-tolerantly. However, I omitted adding noise and correcting it (that was done last time), and also omitted the parity checks on the cat states and the repeat-and-majority-vote measurement. This time the focus is only on confirming that the operations and measurement on encoded states are achieved correctly in a fault-tolerant configuration.

Implementation

Now let's look at the whole Python code.

from qlazypy import QState

def logical_zero():

    anc = [0,1,2,3,4,5,6]     # registers for ancilla
    cod = [7,8,9,10,11,12,13] # registers for Steane code
    qs_total = QState(14)

    # g1
    qs_total.h(anc[0])
    [qs_total.cx(anc[0], anc[i]) for i in range(1,4)]
    qs_total.cx(anc[0], cod[3]).cx(anc[1], cod[4]).cx(anc[2], cod[5]).cx(anc[3], cod[6])
    [qs_total.cx(anc[0], anc[i]) for i in range(1,4)]
    qs_total.h(anc[0])
    mval = qs_total.m(qid=[anc[0]]).last
    if mval == '1': qs_total.z(cod[0]).z(cod[1]).z(cod[2]).z(cod[3])
    qs_total.reset(qid=anc)

    # g2
    qs_total.h(anc[0])
    [qs_total.cx(anc[0], anc[i]) for i in range(1,4)]
    qs_total.cx(anc[0], cod[1]).cx(anc[1], cod[2]).cx(anc[2], cod[5]).cx(anc[3], cod[6])
    [qs_total.cx(anc[0], anc[i]) for i in range(1,4)]
    qs_total.h(anc[0])
    mval = qs_total.m(qid=[anc[0]]).last
    if mval == '1': qs_total.z(cod[0]).z(cod[1]).z(cod[4]).z(cod[5])
    qs_total.reset(qid=anc)

    # g3
    qs_total.h(anc[0])
    [qs_total.cx(anc[0], anc[i]) for i in range(1,4)]
    qs_total.cx(anc[0], cod[0]).cx(anc[1], cod[2]).cx(anc[2], cod[4]).cx(anc[3], cod[6])
    [qs_total.cx(anc[0], anc[i]) for i in range(1,4)]
    qs_total.h(anc[0])
    mval = qs_total.m(qid=[anc[0]]).last
    if mval == '1': qs_total.z(cod[2]).z(cod[4]).z(cod[6])
    qs_total.reset(qid=anc)

    # g4
    qs_total.h(anc[0])
    [qs_total.cx(anc[0], anc[i]) for i in range(1,4)]
    qs_total.cz(anc[0], cod[3]).cz(anc[1], cod[4]).cz(anc[2], cod[5]).cz(anc[3], cod[6])
    [qs_total.cx(anc[0], anc[i]) for i in range(1,4)]
    qs_total.h(anc[0])
    mval = qs_total.m(qid=[anc[0]]).last
    if mval == '1': qs_total.x(cod[0]).x(cod[1]).x(cod[2]).x(cod[3])
    qs_total.reset(qid=anc)
    
    # g5
    qs_total.h(anc[0])
    [qs_total.cx(anc[0], anc[i]) for i in range(1,4)]
    qs_total.cz(anc[0], cod[1]).cz(anc[1], cod[2]).cz(anc[2], cod[5]).cz(anc[3], cod[6])
    [qs_total.cx(anc[0], anc[i]) for i in range(1,4)]
    qs_total.h(anc[0])
    mval = qs_total.m(qid=[anc[0]]).last
    if mval == '1': qs_total.x(cod[0]).x(cod[1]).x(cod[4]).x(cod[5])
    qs_total.reset(qid=anc)

    # g6
    qs_total.h(anc[0])
    [qs_total.cx(anc[0], anc[i]) for i in range(1,4)]
    qs_total.cz(anc[0], cod[0]).cz(anc[1], cod[2]).cz(anc[2], cod[4]).cz(anc[3], cod[6])
    [qs_total.cx(anc[0], anc[i]) for i in range(1,4)]
    qs_total.h(anc[0])
    mval = qs_total.m(qid=[anc[0]]).last
    if mval == '1': qs_total.x(cod[2]).x(cod[4]).x(cod[6])
    qs_total.reset(qid=anc)

    # g7
    qs_total.h(anc[0])
    [qs_total.cx(anc[0], anc[i]) for i in range(1,7)]
    [qs_total.cz(anc[i], cod[i]) for i in range(7)]
    [qs_total.cx(anc[0], anc[i]) for i in range(1,7)]
    qs_total.h(anc[0])
    mval = qs_total.m(qid=[anc[0]]).last
    if mval == '1': [qs_total.x(q) for q in cod]
    qs_total.reset(qid=anc)

    qs = qs_total.partial(qid=cod)
    qs_total.free()

    return qs

def faulttolerant_t(qs_in):

    A = [0,1,2,3,4,5,6]         # registers for ancilla
    B = [7,8,9,10,11,12,13]     # registers for output quantum state
    C = [14,15,16,17,18,19,20]  # registers for input quantum state
    qs_A = QState(7)
    qs_B = logical_zero()
    qs_AB = qs_A.tenspro(qs_B)
    qs_ABC = qs_AB.tenspro(qs_in)

    # -H-T-
    qs_ABC.h(A[0])
    [qs_ABC.cx(A[0], A[i]) for i in range(1,7)]
    # [qs_ABC.cx(A[i],B[i]).cs(A[i],B[i]).cz(A[i],B[i]).t_dg(A[i]) for i in range(7)]
    [qs_ABC.cx(A[i],B[i]).cs(A[i],B[i]).cz(A[i],B[i]) for i in range(7)]

    [qs_ABC.cx(A[0], A[i]) for i in range(1,7)]
    # qs_ABC.h(A[0])
    qs_ABC.t_dg(A[0]).h(A[0])
    mval = qs_ABC.m(qid=[A[0]]).last
    if mval == '1': [qs_ABC.z(q) for q in B]
    qs_ABC.reset(qid=A)

    # -CNOT-
    [qs_ABC.cx(B[i], C[i]) for i in range(7)]
    
    # -M-, -SX-
    qs_ABC.h(A[0])
    [qs_ABC.cx(A[0], A[i]) for i in range(1,7)]
    [qs_ABC.cz(A[i],C[i]) for i in range(7)]
    [qs_ABC.cx(A[0], A[i]) for i in range(1,7)]
    qs_ABC.h(A[0])
    mval = qs_ABC.m(qid=[A[0]]).last
    if mval == '1': [qs_ABC.x(q).s(q).z(q) for q in B]

    qs_out = qs_ABC.partial(qid=B)
    QState.free_all(qs_A, qs_B, qs_AB, qs_ABC)

    return qs_out

def faulttolerant_m(qs_in):

    anc = [0,1,2,3,4,5,6]      # ancilla
    cod = [7,8,9,10,11,12,13]  # Steane code
    qs_anc = QState(7)
    qs_total = qs_anc.tenspro(qs_in)

    qs_total.h(anc[0])
    [qs_total.cx(anc[0], anc[i]) for i in range(1,7)]
    [qs_total.cz(anc[i], cod[i]) for i in range(7)]
    [qs_total.cx(anc[0], anc[i]) for i in range(1,7)]
    qs_total.h(anc[0])
    md = qs_total.m(qid=[0], shots=10000)
    QState.free_all(qs_anc, qs_total)

    return md

if __name__ == '__main__':

    qs = QState(1).h(0).t(0).h(0)    # not fault-tolerant -HTH-
    qs.show()
    
    qs_ini = logical_zero()          # make initial state (logical zero)
    [qs_ini.h(i) for i in range(7)]  # operate fault-tolerant H
    qs_fin = faulttolerant_t(qs_ini) # operate fault-tolerant T
    [qs_fin.h(i) for i in range(7)]  # operate fault-tolerant H
    md = faulttolerant_m(qs_fin)     # execute fault-tolerant measurement
    print(md.frequency)

    QState.free_all(qs, qs_ini, qs_fin)

Let me briefly explain what the program is doing. First, look at the main processing section.

qs_ini = logical_zero()          # make initial state (logical zero)

creates the logical ground state $\ket{0_L}$ of the Steane code. The details are in the function logical_zero; it is rather long, but the basic structure is simple:

# g1
qs_total.h(anc[0])
[qs_total.cx(anc[0], anc[i]) for i in range(1,4)]
qs_total.cx(anc[0], cod[3]).cx(anc[1], cod[4]).cx(anc[2], cod[5]).cx(anc[3], cod[6])
[qs_total.cx(anc[0], anc[i]) for i in range(1,4)]
qs_total.h(anc[0])
mval = qs_total.m(qid=[anc[0]]).last
if mval == '1': qs_total.z(cod[0]).z(cod[1]).z(cod[2]).z(cod[3])
qs_total.reset(qid=anc)

code like this is repeated. The auxiliary register is put into the cat state and the generator is measured indirectly; if the measured value is 1 (i.e. the eigenvalue $-1$), the operator that anticommutes with that generator is applied. When all seven operators have been processed, the state (excluding the auxiliary qubits) is returned. The method last used to get the measured value returns a binary string (added in v0.0.41). Next,

[qs_ini.h(i) for i in range(7)]  # operate fault-tolerant H

applies the fault-tolerant Hadamard; being transversal, this is easy. Then,

qs_fin = faulttolerant_t(qs_ini) # operate fault-tlerant T

runs the fault-tolerant $T$ gate. Let's look inside the function faulttolerant_t. The first code block explained above is named B, the second code block C, and the auxiliary block needed for the measurements A; each consists of 7 qubits. Please read the code with that in mind.

# -H-T-
qs_ABC.h(A[0])
[qs_ABC.cx(A[0], A[i]) for i in range(1,7)]
[qs_ABC.cx(A[i],B[i]).cs(A[i],B[i]).cz(A[i],B[i]) for i in range(7)]
[qs_ABC.cx(A[0], A[i]) for i in range(1,7)]
qs_ABC.t_dg(A[0]).h(A[0])
mval = qs_ABC.m(qid=[A[0]]).last
if mval == '1': [qs_ABC.z(q) for q in B]
qs_ABC.reset(qid=A)

Here we create the encoded state corresponding to applying $H$ and $T$. The auxiliary register is put into the cat state and the operator $e^{-i\pi /4}\bar{S}\bar{X}$ is measured indirectly; since the transversal form of $S$ is $ZS$, the controlled operation applied to each code qubit is $ZSX$. Furthermore, since the phase $e^{-i\pi /4}$ is equivalent to a $T^{\dagger}$ gate on the control side, that operation is applied to the control qubit before the final Hadamard.

# -CNOT-
[qs_ABC.cx(B[i], C[i]) for i in range(7)]

So, do a transversal CNOT,

# -M-, -SX-
qs_ABC.h(A[0])
[qs_ABC.cx(A[0], A[i]) for i in range(1,7)]
[qs_ABC.cz(A[i],C[i]) for i in range(7)]
[qs_ABC.cx(A[0], A[i]) for i in range(1,7)]
qs_ABC.h(A[0])
mval = qs_ABC.m(qid=[A[0]]).last
if mval == '1': [qs_ABC.x(q).s(q).z(q) for q in B]

measures code block C, and if the result is 1, applies $\bar{S}\bar{X}$ to code block B. Here again, remember that the transversal form of $S$ is $ZS$. The result should now be in code block B, so

qs_out = qs_ABC.partial(qid=B)

extracts the subsystem corresponding to code block B with the partial method and returns it.

Return to the main processing section.

To the output of the $T$ gate,

[qs_fin.h(i) for i in range(7)]  # operate fault-tolerant H

we apply the fault-tolerant Hadamard once more, and finally,

md = faulttolerant_m(qs_fin)     # execute fault-tolerant measurement
print(md.frequency)

we carry out the measurement. See the function faulttolerant_m for details; it is just an indirect measurement with the auxiliary register in the cat state. The measurement result is obtained as the variable md (an instance of the MData class), and the frequency data is obtained in Counter format with frequency (added in v0.0.41). That's all.

Result

The execution result is shown below.

c[0] = +0.9239-0.0000*i : 0.8536 |++++++++++
c[1] = +0.0000-0.3827*i : 0.1464 |++
Counter({'0': 8524, '1': 1476})

The first two lines are the result of an ordinary, non-fault-tolerant -H-T-H- on a single qubit; this is the reference answer. It means the probabilities of measuring $\ket{0}$ and $\ket{1}$ are 0.8536 and 0.1464, respectively. The Counter on the third line is the result of running the fault-tolerant -H-T-H- and measuring 10000 times. It agrees well with the reference probabilities (it works!).

In conclusion

Fault-tolerant quantum computation turns out to be a very elaborate technique, realized only by combining many different tricks. To build a truly practical quantum computer, this kind of steady accumulation matters, and the threshold theorem was a hugely important theoretical result that guarantees the effort will be rewarded. Even so, an error rate of $10^{-4} = 0.01\%$ is a severe requirement; as long as we assume stabilizer codes and concatenation, roughly this level of accuracy is necessary. It seems, however, that the threshold can be raised with a different coding method. Next time I would like to study the "topological code (toric code)", one such coding method.

That's all.
