[PYTHON] Basics of Quantum Information Theory: Holevo Bound

\def\bra#1{\mathinner{\left\langle{#1}\right|}} \def\ket#1{\mathinner{\left|{#1}\right\rangle}} \def\braket#1#2{\mathinner{\left\langle{#1}\middle|#2\right\rangle}}

Introduction

The previous article helped me understand the basics of "quantum entropy", so I thought I would move on to applied topics such as "quantum cryptography" next. However, there seem to be a few more basic matters worth covering first, so I will keep studying the fundamentals for a while (there is no end to them, but there are many interesting topics, so it can't be helped). This time I take up the "Holevo bound". After explaining what it means in quantum communication and how it is defined, I would like to confirm the "bound" using the quantum computing simulator qlazy.

The following documents were used as references.

  1. Nielsen, Chuang "Quantum Computers and Quantum Communication (3)" Ohmsha (2005)
  2. Ishizaka, Ogawa, Kawachi, Kimura, Hayashi "Introduction to Quantum Information Science" Kyoritsu Shuppan (2012)
  3. Tomita "Quantum Information Engineering" Morikita Publishing (2017)

What is the Holevo bound?

Unlike a classical bit, a qubit can express a superposition of 0 and 1. In other words, a single qubit has infinitely many degrees of freedom, so it seems that, in principle, an infinite amount of information could be packed into it. Using such qubits, it feels as though communication with tremendous capacity should be possible. For example, Alice somehow writes a bit string of unlimited length into the single qubit she has and transmits it to Bob [^1]. Bob has to make a measurement in order to extract meaningful information from the qubit he receives, but if we allow generalized measurements (POVM measurements), the number of possible measurement outcomes is unbounded. If he can design a good POVM that correctly retrieves the information of unlimited length that was prepared, this dream of information communication should be realizable!

[^1]: For example, prepare the quantum state $\ket{\psi} = a \ket{0} + b \ket{1}$ with the coefficient $a$, the coefficient $b$, or both specified to an unlimited number of digits (in practice, a very large number of digits). Concretely, some rotation gate is applied before transmission (for a photon, for instance, it is polarized at some angle), and the bit string is encoded in that angle.

Is this story true?

The world is not that kind. In fact, there is a limit to the amount of information that can be retrieved, and that limit is called the "Holevo bound". The sad conclusion is that this limit, that is, the upper bound on the amount of information that can be extracted from one qubit, is actually just one bit!

Theoretically, it is stated as follows [^2].

[^2]: It is stated as a "Theorem" on p.145 of "Quantum Information Engineering".

"Alice's source outputs the symbols $ \ {X_i \}, \ space i = 1, \ cdots, n $ with a probability of $ p_i $. It is represented by $ \ rho_X $ according to this symbol series $ X $. Create a quantum state $ Q $ and send it to Bob. Bob creates POVM $ \ {E_Y \} = \ {E_ {1}, \ cdots, E_ {m} \} for the sent state. Do $ to get the classical information $ Y $. At this time, the upper limit of the mutual information amount $ I (X: Y) $ between $ X $ and $ Y $ is given as follows.

I(X:Y) \leq S(\rho) - \sum_{i=1}^{n} p_i S(\rho_i)  \tag{1}

where

\rho = \sum_{i=1}^{n} p_i \rho_i  \tag{2}

This upper bound is called the "Holevo bound". "

Let's prove it [^3].

[^3]: The proof in "Quantum Information Engineering" was very easy to follow, so I have simply traced it here.

[Proof]

Let system A be Alice's system (the information source), system Q be the system of quantum states prepared according to the generated information, and system B be the system of the measuring device with which Bob measures the received quantum state. The source in system A produces the classical information $\{X_i\}$ with probabilities $\{p_i\}$; since $X_i$ can be regarded as the eigenvalue corresponding to the eigenstate $\ket{i}^{A}$ of an observable $X$, the occurrence of $X_i$ is equivalent to the occurrence of the quantum state $\ket{i}^{A} \bra{i}^{A}$. Next, the quantum state $\rho_i$ is assigned according to the generated $X_i$. Initially nothing has arrived at Bob, so writing his state as $\ket{0}^{B} \bra{0}^{B}$, the initial state $\rho^{total}$ of the whole system

\rho^{total} = \sum_{i} p_i \ket{i}^{A} \bra{i}^{A} \otimes \rho_i \otimes \ket{0}^{B} \bra{0}^{B}  \tag{3}

can be written. Here Bob performs a measurement. If $\Gamma$ is the CPTP map corresponding to this measurement and $M_y$ is the Kraus operator that yields measurement result $y$, the state after the measurement is

\Gamma(\rho^{total}) = \sum_{i} p_i \ket{i}^{A} \bra{i}^{A} \otimes (\sum_{y} M_y \rho_i M_y^{\dagger} \otimes \ket{y}^{B} \bra{y}^{B})  \tag{4}

Since the mutual information $I(A:Q)$ between system A and system Q in the initial state does not change when system B, which is in a product state independent of them, is appended,

I(A:Q) = I(A:Q,B)  \tag{5}

holds. Writing the systems after the measurement as $A^{\prime}, Q^{\prime}, B^{\prime}$ respectively, the mutual information does not increase under a quantum channel [^4], so

[^4]: See property (13) in the previous article.

I(A:Q,B) \geq I(A^{\prime}:Q^{\prime},B^{\prime})  \tag{6}

is established. Furthermore, discarding the measured system $Q^{\prime}$ does not increase the mutual information [^5], so

[^5]: See property (12) in the previous article.

I(A^{\prime}:Q^{\prime},B^{\prime}) \geq I(A^{\prime}:B^{\prime})  \tag{7}

holds. Combining equations (5), (6) and (7),

I(A^{\prime}:B^{\prime}) \leq I(A:Q)  \tag{8}

is established. Here $I(A^{\prime}:B^{\prime})$ is a classical mutual information, so we may write it as $I(X:Y)$. Equation (8) then becomes

I(X:Y) \leq I(A:Q)  \tag{9}

Next, $I(A:Q)$ can be written as

\begin{align}
I(A:Q) &= S(A) + S(Q) - S(A,Q) \\
&= H(A) + S(\rho) - S(A,Q)  \tag{10}
\end{align}

(here $S(A) = H(A)$ because system A is classical, diagonal in the basis $\{\ket{i}^{A}\}$, and $S(Q) = S(\rho)$). Substituting this into equation (9),

I(X:Y) \leq H(A) + S(\rho) - S(A,Q)  \tag{11}

Now consider $S(A,Q)$. Performing the spectral decomposition of $\rho_i$ as

\rho_i = \sum_{j} q_{i}^{j} \ket{e_{i}^{j}} \bra{e_{i}^{j}}  \tag{12}

we can compute $S(A,Q)$ as

\begin{align}
S(A,Q) &= S(Tr_{B}(\rho^{total})) \\
&= -Tr((\sum_{i,j} p_i \ket{i}^{A} \bra{i}^{A} \otimes q_{i}^{j} \ket{e_{i}^{j}} \bra{e_{i}^{j}}) \log(\sum_{i,j} p_i \ket{i}^{A} \bra{i}^{A} \otimes q_{i}^{j} \ket{e_{i}^{j}} \bra{e_{i}^{j}})) \\
&= - \sum_{i,j} p_{i} q_{i}^{j} \log(p_{i} q_{i}^{j}) \\
&= - \sum_{i} p_{i} \log p_{i} - \sum_{i} p_i \sum_{j} q_{i}^{j} \log q_{i}^{j} \\
&= H(A) + \sum_{i} p_{i} S(\rho_{i})  \tag{13}
\end{align}

Substituting this into equation (11),

I(X:Y) \leq S(\rho) - \sum_{i} p_{i} S(\rho_{i})  \tag{14}

is established, which is exactly the claimed bound (1). (End of proof)

Here, the quantity on the right-hand side of equation (1),

\chi \equiv S(\rho) - \sum_{i} p_i S(\rho_i)  \tag{15}

This is called "holevo quantity". This amount of Holebaud information is calculated using quantum relativistic entropy.

\chi = \sum_{i} p_i S(\rho_i||\rho)  \tag{16}

can also be expressed as in equation (16). Indeed,

\begin{align}
\sum_{i} p_{i} S(\rho_i || \rho) &= \sum_{i} p_{i} Tr(\rho_{i} \log \rho_{i} - \rho_{i} \log \rho) \\
&= \sum_{i} p_{i} Tr(\rho_{i} \log \rho_{i}) - \sum_{i} p_{i} Tr(\rho_{i} \log \rho) \\
&= - \sum_{i} p_{i} S(\rho_{i}) - Tr(\sum_{i} p_{i} \rho_{i} \log \rho) \\
&= S(\rho) - \sum_{i} p_{i} S(\rho_{i})  \tag{17}
\end{align}

which confirms equation (16).
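As a small sanity check of equations (15)-(17), here is a minimal numpy sketch, independent of qlazy, that computes the Holevo quantity both as $S(\rho) - \sum_{i} p_i S(\rho_i)$ and as $\sum_{i} p_i S(\rho_i||\rho)$ and confirms that they agree. The ensemble is an arbitrary toy example, and entropy / logm are helper names introduced only for this sketch.

import numpy as np

def entropy(rho):
    # von Neumann entropy in bits: S(rho) = -sum_k lambda_k log2(lambda_k)
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def logm(rho):
    # matrix logarithm (base 2) via eigendecomposition; assumes rho has full rank
    ev, U = np.linalg.eigh(rho)
    return U @ np.diag(np.log2(ev)) @ U.conj().T

# an arbitrary ensemble of two mixed qubit states (toy example)
p = [0.3, 0.7]
rho_list = [np.array([[0.9, 0.1], [0.1, 0.1]]),
            np.array([[0.2, 0.0], [0.0, 0.8]])]
rho = sum(pi * ri for pi, ri in zip(p, rho_list))

# equation (15): chi = S(rho) - sum_i p_i S(rho_i)
chi_1 = entropy(rho) - sum(pi * entropy(ri) for pi, ri in zip(p, rho_list))
# equation (16): chi = sum_i p_i S(rho_i||rho) = sum_i p_i Tr(rho_i (log rho_i - log rho))
chi_2 = sum(pi * np.trace(ri @ (logm(ri) - logm(rho))).real for pi, ri in zip(p, rho_list))

print(chi_1, chi_2)  # the two values coincide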

Now, the upper limit on the amount of information that Bob can receive (that is, the amount of information that can be transmitted over this communication channel) is given by equation (15). Let's examine this bound with a slightly more concrete example.

Suppose Alice has a device that can set any one of M kinds of single-qubit quantum states $\{\ket{\phi_1}, \ket{\phi_2}, \cdots, \ket{\phi_M}\}$ and fire it at Bob. Alice uses this device to send Bob a message written in an alphabet of M symbols. Now, how much information can Bob receive from Alice? We only need to calculate the "Holevo quantity" just explained. Let's try. Writing

\rho_i = \ket{\phi_i} \bra{\phi_i}  \tag{18}

the overall quantum state generated by Alice is

\rho = \sum_{i=1}^{M} p_i \ket{\phi_i} \bra{\phi_i}  \tag{19}

Let's use this to calculate the Holevo quantity. Since every state in equation (18) is pure, the second term on the right-hand side of equation (15) is zero. Therefore,

\chi = S(\rho)  \tag{20}

Here $\rho$ is a density operator of a single qubit (the Hilbert space has dimension 2), so its entropy never exceeds $\log 2 = 1$. This means that however important the message Alice worked so hard to send, Bob can only ever recognize one binary digit (= 1 bit) of it per qubit. This is not due to a lack of effort on Bob's part; it is the Holevo bound. No matter how hard Bob tries (no matter how cleverly he designs his POVM measurement), this is a wall he cannot overcome [^6].

[^6]: Rather, one could say that it is Alice who builds this wall by preparing only one qubit, but let's keep that a secret (laughs).

But what if Alice's device evolved so that it could fire two qubits at the same time? In this case the Hilbert space has dimension 4, so the upper limit of $S(\rho)$ in equation (20) becomes $\log 4 = 2$, and Bob can receive up to 2 bits of information if he works hard. More generally, when N qubits can be fired at once, $S(\rho) \leq \log 2^N = N$, so Bob can receive up to N bits.

Therefore, the fact that the "dream information communication" of packing a bit string of unlimited length into one qubit and transmitting it is absolutely impossible is made clear here [^7]. (Alas!)

[^7]: Exercise 12.3 of Nielsen-Chuang asks the reader to use the Holevo bound to argue that n qubits cannot be used to transmit more than n bits of classical information, so I think this is the answer.

A concrete example of a quantum communication channel

Well then, I would like to confirm this "bound" using the simulator. While looking for a good concrete example, I found a suitable one in Exercise 12.4 of Nielsen-Chuang, so I will try the quantum communication channel described there. Here is the setup.

The single-qubit quantum states that Alice, the sender, can prepare are

\begin{align}
\ket{X_1} &= \ket{0} \\
\ket{X_2} &= \sqrt{\frac{1}{3}} (\ket{0} + \sqrt{2} \ket{1}) \\
\ket{X_3} &= \sqrt{\frac{1}{3}} (\ket{0} + \sqrt{2} e^{2\pi i/3} \ket{1}) \\
\ket{X_4} &= \sqrt{\frac{1}{3}} (\ket{0} + \sqrt{2} e^{4\pi i/3} \ket{1}) \tag{21}
\end{align}

of which there are four kinds. Alice selects one according to the character string (classical information) she wants to send and fires them at Bob one after another. Bob, who receives them, obtains classical information by measuring with some POVM; this is the quantum communication channel. To evaluate the properties of this channel, we will have Alice select among these four states uniformly at random. From the previous discussion, since there is only one qubit per symbol, at most 1 bit can be transmitted. In other words, if we calculate the Holevo quantity from the density operator of the states Alice fires, it should be 1. In reality, however, unless Bob designs an appropriate POVM, the amount of information he can extract (= the mutual information) will not reach 1 bit. Nielsen-Chuang states that "a POVM achieving 0.415 bits is known." What is that POVM? That is the question of this exercise.

So, let's take Bob's point of view and design a POVM with the goal of "0.415" for the time being.

The first thing that comes to mind is a measurement built by projecting onto the four states of equation (21). That is,

【POVM #1】

\begin{align}
E_1 &= \frac{1}{2} \ket{X_1} \bra{X_1} \\
E_2 &= \frac{1}{2} \ket{X_2} \bra{X_2} \\
E_3 &= \frac{1}{2} \ket{X_3} \bra{X_3} \\
E_4 &= \frac{1}{2} \ket{X_4} \bra{X_4} \tag{22}
\end{align}

where the coefficient $1/2$ has been applied to every element so that $\sum_{i} E_i = I$.

The next pattern that comes to mind uses the states orthogonal to each of the states in equation (21),

\begin{align}
\ket{\tilde{X}_1} &= \ket{1} \\
\ket{\tilde{X}_2} &= \sqrt{\frac{1}{3}} (\sqrt{2} \ket{0} - \ket{1}) \\
\ket{\tilde{X}_3} &= \sqrt{\frac{1}{3}} (\sqrt{2} \ket{0} - e^{2 \pi i/3} \ket{1}) \\
\ket{\tilde{X}_4} &= \sqrt{\frac{1}{3}} (\sqrt{2} \ket{0} - e^{4 \pi i/3} \ket{1}) \tag{23}
\end{align}

and builds the measurement by projecting onto these. That is,

【POVM #2】

\begin{align}
\tilde{E}_1 &= \frac{1}{2} \ket{\tilde{X}_1} \bra{\tilde{X}_1} \\
\tilde{E}_2 &= \frac{1}{2} \ket{\tilde{X}_2} \bra{\tilde{X}_2} \\
\tilde{E}_3 &= \frac{1}{2} \ket{\tilde{X}_3} \bra{\tilde{X}_3} \\
\tilde{E}_4 &= \frac{1}{2} \ket{\tilde{X}_4} \bra{\tilde{X}_4} \tag{24}
\end{align}

where, again, the coefficient $1/2$ ensures $\sum_{i} \tilde{E}_i = I$. Let's simulate with these two patterns and check what happens to the Holevo quantity and the mutual information in each case.
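Before running the full simulation, here is a small numpy sketch (using the same coefficients as in the main code below) that checks that both sets of operators really satisfy $\sum_{i} E_i = I$, i.e. that they are valid POVMs.

import cmath
import numpy as np

SQRT1 = cmath.sqrt(1/3)
SQRT2 = cmath.sqrt(2/3)
EXT2  = cmath.exp(2*cmath.pi*1j/3)
EXT4  = cmath.exp(4*cmath.pi*1j/3)

basis      = [np.array([1.0, 0.0]),
              np.array([SQRT1, SQRT2]),
              np.array([SQRT1, SQRT2*EXT2]),
              np.array([SQRT1, SQRT2*EXT4])]
basis_orth = [np.array([0.0, 1.0]),
              np.array([SQRT2, -SQRT1]),
              np.array([SQRT2, -SQRT1*EXT2]),
              np.array([SQRT2, -SQRT1*EXT4])]

for name, vecs in [('#1', basis), ('#2', basis_orth)]:
    # E_i = |v_i><v_i| / 2 ; the sum of the four elements must be the 2x2 identity
    total = sum(np.outer(v, v.conjugate()) / 2.0 for v in vecs)
    print(name, np.allclose(total, np.eye(2)))  # expect True for both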

Check with the simulator

The whole Python code is below.

import random
import cmath
import numpy as np
import pandas as pd
from qlazypy import QState, DensOp

MIN_DOUBLE = 0.000001

def classical_joint_entropy(A,B):

    code_num_A = max(A) + 1
    code_num_B = max(B) + 1

    prob = np.zeros((code_num_A,code_num_B))
    for i in range(len(A)):
        prob[A[i]][B[i]] += 1
    prob = prob / sum(map(sum, prob))

    ent = 0.0
    for i in range(code_num_A):
        for j in range(code_num_B):
            if abs(prob[i][j]) < MIN_DOUBLE:
                ent -= 0.0
            else:
                ent -= prob[i][j] * np.log2(prob[i][j])

    return ent
    
def classical_entropy(A):

    code_num = max(A) + 1

    prob = np.zeros(code_num)
    for a in A:
        prob[a] += 1.0
    prob = prob / sum(prob)

    ent = 0.0
    for p in prob:
        if abs(p) < MIN_DOUBLE:
            ent -= 0.0
        else:
            ent -= p * np.log2(p)

    return ent

def classical_mutual_information(A,B):

    ent_A = classical_entropy(A)
    ent_B = classical_entropy(B)
    ent_AB = classical_joint_entropy(A,B)

    return ent_A + ent_B - ent_AB
    
# Holevo quantity chi = S(rho) - sum_i p_i S(rho_i), with p_i estimated from the sample X
def holevo_quantity(X,de):

    samp_num = len(X)
    code_num = len(de)

    prob = np.zeros(code_num)
    for x in X:
        prob[x] += 1.0
    prob = prob / sum(prob)
    
    de_total = DensOp.mix(densop=de, prob=prob)

    holevo = de_total.entropy()
    for i in range(code_num):
        holevo -= prob[i]*de[i].entropy()

    de_total.free()
    
    return holevo

# simulate the channel: for each transmitted code X[i], measure de[X[i]] with the POVM and sample an outcome
def transmit(X,de,povm):

    samp_num = len(X)
    dim_X = len(de)
    dim_Y = len(povm)

    Y = np.array([0]*samp_num)
    
    prob_list = [None]*len(X)
    for i in range(samp_num):
        prob_list[i] = de[X[i]].probability(povm=povm)
        r = random.random()
        p = 0.0
        mes = dim_Y - 1
        for k in range(dim_Y-1):
            p += prob_list[i][k]
            if r < p:
                mes = k
                break
        Y[i] = mes
        
    return Y

# turn a list of state vectors into a list of (pure-state) density operators
def make_densop(basis):

    qs = [QState(vector=b) for b in basis]
    de = [DensOp(qstate=[q], prob=[1.0]) for q in qs]

    for n in range(len(qs)):
        qs[n].free()
    
    return de
    
# build POVM elements E_i = |v_i><v_i| / 2 from the given state vectors
def make_povm(vec):

    return [np.outer(v,v.conjugate())/2.0 for v in vec]

def random_sample(code_num,samp_num):

    return np.array([random.randint(0,code_num-1) for _ in range(samp_num)])

if __name__ == '__main__':

    SQRT1      = cmath.sqrt(1/3)
    SQRT2      = cmath.sqrt(2/3)
    EXT2       = cmath.exp(2*cmath.pi*1j/3)
    EXT4       = cmath.exp(4*cmath.pi*1j/3)

    basis      = [np.array([1.0, 0.0]),
                  np.array([SQRT1, SQRT2]),
                  np.array([SQRT1, SQRT2*EXT2]),
                  np.array([SQRT1, SQRT2*EXT4])]
    
    basis_orth = [np.array([0.0, 1.0]),
                  np.array([SQRT2, -SQRT1]),
                  np.array([SQRT2, -SQRT1*EXT2]),
                  np.array([SQRT2, -SQRT1*EXT4])]

    de = make_densop(basis)

    code_num = 4
    samp_num = 100
    trial = 100

    povm_name  = ['#1','#2']
    povm_basis = [basis, basis_orth]
    
    for b in povm_basis:

        povm = make_povm(b)
    
        mutual = []
        holevo = []
        for _ in range(trial):
    
            X = random_sample(code_num,samp_num)
            Y = transmit(X, de, povm)

            mutual.append(classical_mutual_information(X,Y))
            holevo.append(holevo_quantity(X,de))

        df = pd.DataFrame({'holevo quantity':holevo,'mutual information':mutual})
        holevo_mean = df['holevo quantity'].mean()
        holevo_std  = df['holevo quantity'].std()
        mutual_mean = df['mutual information'].mean()
        mutual_std  = df['mutual information'].std()

        print("== povm: {:} ==".format(povm_name.pop(0)))
        print("[holevo quantity]")
        print("- mean = {0:.4f} (std = {1:.4f})".format(holevo_mean, holevo_std))
        print("[mutual information]")
        print("- mean = {0:.4f} (std = {1:.4f})".format(mutual_mean, mutual_std))
        print()
        
    for n in range(len(de)):
        de[n].free()

Let me briefly explain what the code does. Look at the main processing section. The first four lines simply store numerical constants in the variables SQRT1, SQRT2, EXT2 and EXT4 for convenience in later calculations.

The variables basis and basis_orth are the bases used to create POVM #1 and #2; in other words, the states of equations (21) and (23) are defined here.

Since these basis states are also the quantum states assigned to the codes to be sent, the line

de = make_densop(basis)

creates the list of density operators corresponding to those quantum states.

code_num = 4
samp_num = 100
trial = 100

code_num is the number of code symbols (4 this time), samp_num is the number of randomly generated codes (= data points), and trial is the number of trials used to compute the mean and standard deviation.

povm_name  = ['#1','#2']
povm_basis = [basis, basis_orth]

for b in povm_basis:

    povm = make_povm(b)
    ...

Then the outer for loop runs over POVM #1 and #2. make_povm is a function that creates a POVM from a basis.

The inner for loop is as follows:

mutual = []
holevo = []
for _ in range(trial):

    X = random_sample(code_num,samp_num)
    Y = transmit(X, de, povm)

    mutual.append(classical_mutual_information(X,Y))
    holevo.append(holevo_quantity(X,de))
    ...

It repeats the trial (100 times in this case).

The random_sample function creates a random code (data) sequence, and the transmit function simulates the transmission: the POVM measurement is simulated inside the function, and the decoded data sequence is output. The function classical_mutual_information calculates the mutual information from the input and output data. The function holevo_quantity calculates the Holevo quantity from the input data and the list of density operators. This yields the mutual information and the Holevo quantity for each trial. Finally, pandas is used to compute the mean and standard deviation of each, and the program is essentially done.
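Inside transmit, the key step is obtaining the outcome distribution of the POVM measurement for each transmitted state; the code delegates this to qlazy's DensOp.probability, but conceptually it is just the Born rule $p(y|x) = Tr(E_y \rho_x)$. Here is a minimal numpy sketch of that step (povm_probabilities and sample_outcome are hypothetical helper names introduced only for illustration).

import numpy as np

def povm_probabilities(rho, povm):
    # Born rule for a POVM: p(y) = Tr(E_y rho)
    return np.array([np.trace(E @ rho).real for E in povm])

def sample_outcome(rho, povm, rng=np.random.default_rng()):
    # draw one measurement outcome according to p(y) = Tr(E_y rho)
    p = povm_probabilities(rho, povm)
    return rng.choice(len(povm), p=p / p.sum())

# tiny example: measure |0><0| with the computational-basis POVM
rho  = np.array([[1.0, 0.0], [0.0, 0.0]])
povm = [np.array([[1.0, 0.0], [0.0, 0.0]]), np.array([[0.0, 0.0], [0.0, 1.0]])]
print(povm_probabilities(rho, povm))  # -> [1. 0.]
print(sample_outcome(rho, povm))      # -> 0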

Now, the execution result is as follows.

== povm: #1 ==
[holevo quantity]
- mean = 0.9921 (std = 0.0066)
[mutual information]
- mean = 0.2714 (std = 0.0748)

== povm: #2 ==
[holevo quantity]
- mean = 0.9923 (std = 0.0057)
[mutual information]
- mean = 0.4547 (std = 0.0244)

Where mean is the mean and std is the standard deviation.

First, let's look at the Holevo quantity. The Holevo quantity is determined once the probability distribution of the source and the set of quantum states used for encoding are fixed, so it takes the same value no matter which POVM is chosen. In the program the codes are generated with the random function, so the probability distribution is not exactly uniform and the value deviates slightly from 1, but the limiting value of 1 is essentially confirmed.

On the other hand, what about the mutual information? Since a virtually randomly generated source is encoded into qubits and Bob performs POVM measurements, the result varies greatly depending on the POVM. As you can see, "POVM #2" gives the better result (closer to 1). It is close to the target value of "0.415" from Nielsen-Chuang; or rather, it even exceeds it a little. This appears to be an error caused by using the random function with a number of samples and trials that is not very large. In any case, it seems that this "POVM #2" is the intended answer.

One question arises here. One POVM is parallel and the other orthogonal to the signal states, but why did such similar-looking POVMs give different results? For "POVM #1", when Bob's measurement result is "1", Alice's symbol is most likely to have been "1", but with a certain probability it may also have been "2", "3" or "4". For "POVM #2", on the other hand, if Bob's measurement result is "1", the possibility that Alice's symbol was "1" is eliminated; it must have been "2", "3" or "4". In other words, "POVM #2" leaves less uncertainty about the information Bob receives. I think this is the difference between the two [^8].

[^8]: In the previous article explaining generalized measurement, I simulated a POVM measurement following an example in "Quantum Information Engineering". There, constructing the POVM from states orthogonal to the transmitted qubits made "unambiguous state discrimination" possible. I think this is related to that.

However, the value of about "0.415" seems rather small compared to the limiting value of 1. The exercise in Nielsen-Chuang asks, "Can we build a measurement that comes even closer to the Holevo bound?" Let me leave that as a question for the future.

Confirmation by theoretical calculation

Well, I could end the story here, but I have not yet shown theoretically that "POVM #2" really gives a mutual information of "0.415". It was only found experimentally, by simulation, and with considerable error (it is already off in the second decimal place). So I would like to confirm this value by theoretical calculation (or rather, by hand calculation).

The definition of mutual information is

\begin{align}
I(X:Y) &= H(X) + H(Y) - H(X,Y) \\
&= \sum_{i,j} p(X_i, Y_j) \log \frac{p(X_i,Y_j)}{p(X_i) p(Y_j)} \tag{25}
\end{align}

So once $p(X_i, Y_j)$ is known, $p(X_i)$ and $p(Y_j)$ can be obtained by marginalizing over each random variable, and the mutual information can be calculated. Here $p(X_i, Y_j)$ is the joint probability that $X_i$ is generated by the source and $Y_j$ is decoded on the receiving side. Since $i$ and $j$ each run over the 4 values from 1 to 4, we only need to know a total of 16 values.

Let's work them out. Since it is tedious to write $p(X_i, Y_j)$ every time, put $p_{ij} \equiv p(X_i, Y_j)$.

\begin{align}
p_{ij} &= Tr(\tilde{E}_{j} \rho_{i}) \\
&= Tr(\frac{1}{2} \ket{\tilde{X}_j} \braket{\tilde{X}_j}{X_i} \bra{X_i}) \\
&= \frac{1}{2} \braket{\tilde{X}_j}{X_i} Tr(\ket{\tilde{X}_j} \bra{X_i}) \\
&= \frac{1}{2} |\braket{\tilde{X}_j}{X_i}|^{2} \tag{26}
\end{align}

Substituting equations (21) and (23) into this is a matter of straightforward calculation. Note that these $p_{ij}$ are not yet normalized as a joint probability distribution, so at the end we will normalize them so that their sum is 1.

First, when $ i = 1 $.

p_{11} = 0, \space p_{12} = \frac{1}{3}, \space p_{13} = \frac{1}{3}, \space p_{14} = \frac{1}{3}  \tag{27}

Similarly, if $ i = 2 $,

p_{21} = \frac{1}{3}, \space p_{22} = 0, \space p_{23} = \frac{2}{9} (1-\cos \frac{2\pi}{3}), \space p_{24} = \frac{2}{9} (1-\cos \frac{4\pi}{3}) \tag{28}

If $ i = 3 $,

p_{31} = \frac{1}{3}, \space p_{32} = \frac{2}{9} (1-\cos \frac{2\pi}{3}), \space p_{33} = 0, \space p_{34} = \frac{2}{9} (1-\cos \frac{2\pi}{3}) \tag{29}

If $ i = 4 $,

p_{41} = \frac{1}{3}, \space p_{42} = \frac{2}{9} (1-\cos \frac{4\pi}{3}), \space p_{43} = \frac{2}{9} (1-\cos \frac{2\pi}{3}), \space p_{44} = 0 \tag{30}

Can be calculated. here,

\begin{align}
\frac{2}{9} (1-\cos \frac{2\pi}{3}) &= \frac{2}{9} (1 - (-\frac{1}{2})) = \frac{1}{3} \\
\frac{2}{9} (1-\cos \frac{4\pi}{3}) &= \frac{2}{9} (1 - (-\frac{1}{2})) = \frac{1}{3} \tag{31}
\end{align}

So $p_{ij}$ becomes

\begin{pmatrix}
p_{11} & p_{12} & p_{13} & p_{14} \\
p_{21} & p_{22} & p_{23} & p_{24} \\
p_{31} & p_{32} & p_{33} & p_{34} \\
p_{41} & p_{42} & p_{43} & p_{44}
\end{pmatrix}
= \frac{1}{12}
\begin{pmatrix}
0 & 1 & 1 & 1 \\
1 & 0 & 1 & 1 \\
1 & 1 & 0 & 1 \\
1 & 1 & 1 & 0
\end{pmatrix}  \tag{32}

which is a very simple form (here the joint probability distribution has been normalized so that it sums to 1). Setting

\begin{align}
& p(X_i,Y_j) = p_{ij} \\
\\
& p(X_i) = \sum_{j} p_{ij} \\
& p(Y_j) = \sum_{i} p_{ij} \tag{33}
\end{align}

and substituting into equation (25) gives

I(X:Y) = 12 \times \frac{1}{12} \log \frac{\frac{1}{12}}{\frac{1}{4} \times \frac{1}{4}} = \log \frac{4}{3} = 2 - \log 3 = 0.415037\cdots  \tag{34}

It was confirmed that the mutual information is indeed "0.415". All's well that ends well.
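As a quick numerical cross-check of this hand calculation, a few lines of plain numpy (independent of qlazy) applied to the joint distribution matrix of equation (32) reproduce the same number; this is just equation (25) evaluated directly.

import numpy as np

# joint probability distribution p(X_i, Y_j) from equation (32)
p_xy = (np.ones((4, 4)) - np.eye(4)) / 12.0

p_x = p_xy.sum(axis=1)  # marginal p(X_i)
p_y = p_xy.sum(axis=0)  # marginal p(Y_j)

# I(X:Y) = sum_ij p_ij log2( p_ij / (p_i p_j) ), skipping the zero entries
mask = p_xy > 0
mutual_info = np.sum(p_xy[mask] * np.log2(p_xy[mask] / np.outer(p_x, p_y)[mask]))
print(mutual_info)  # -> 0.41503... = 2 - log2(3)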

In conclusion

Thanks to this basic knowledge of quantum entropy, I feel I could follow this story about quantum information communication fairly easily (though perhaps I have only scratched the surface). I intend to keep pressing ahead at this pace!

However, the next topic is undecided as usual.

That's all.
