[PYTHON] Qiskit: Implementation of quantum Boltzmann machine

Introduction

I posted an article with the same content the other day, but I accidentally deleted it while half asleep. I'm sorry.

This time, I implemented a quantum Boltzmann machine using Qiskit. The reference paper is "A quantum algorithm to train neural networks using low-depth circuits". A Boltzmann machine on a quantum annealing machine is covered in minato's article "RBM Boltzmann learning from D-Wave to quantum gate machine", so I will leave that approach to it.

The restricted Boltzmann machine is said to be the smallest unit of a neural network. That said, the author is not an expert in machine learning, so I will leave the detailed explanation to others.

QABoM

In the paper, the method is introduced as the Quantum Approximate Boltzmann Machine (QABoM) algorithm. I don't have space to go into all the details, so I will walk through the flow of QABoM as presented in the appendix.

The indices used below are as follows:
・v: visible layer
・h: hidden layer
・u: all units (visible + hidden)

step 0

Define the full and partial initial Hamiltonians.

H_I = \sum_{j \in u} Z_j \\
H_{\tilde{I}} = \sum_{j \in h} Z_j 

Looking at the formulas, the full version acts on visible + hidden, while the partial version acts on the hidden layer only. Next, define the full and partial mixer Hamiltonians.

H_M = \sum_{j \in u} X_j \\
H_{\tilde{M}} = \sum_{j \in h} X_j 

Define the weights $J_{jk}^0$ and biases $B_j^0$, which are the variables of the Boltzmann machine. Then repeat steps 1 to 5 below from epoch 1 onward.
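As a concrete reference, here is a minimal NumPy sketch (my own, not code from the paper or from the implementation below) that builds these full and partial Hamiltonians as dense matrices via Kronecker products. The names v, h, n, and single_site are my own choices for illustration.

python

import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def single_site(op, site, n):
    """Tensor `op` acting on qubit `site`, identity on the others."""
    factors = [op if k == site else I for k in range(n)]
    return reduce(np.kron, factors)

v, h = 2, 1          # visible and hidden units
n = v + h            # total units u

H_I = sum(single_site(Z, j, n) for j in range(n))            # full initial
H_I_tilde = sum(single_site(Z, j, n) for j in range(v, n))   # partial: hidden only
H_M = sum(single_site(X, j, n) for j in range(n))            # full mixer
H_M_tilde = sum(single_site(X, j, n) for j in range(v, n))   # partial mixer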

step 1

At epoch $n$, define the full and partial cost Hamiltonians.

\hat{H}_C^n = \sum_{j,k \in u} J_{jk}^n \hat{Z}_j \hat{Z}_k + \sum_{j \in u} B_j^n \hat{Z}_j \\
\hat{H}_{\tilde{C}}^n = \sum_{j,k \in u} J_{jk}^n \hat{Z}_j \hat{Z}_k + \sum_{j \in h} B_j^n \hat{Z}_j
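Continuing the NumPy sketch from step 0 (reusing I, Z, single_site, v, h, and n defined there), the cost Hamiltonians could be assembled as follows. The values of J and B are hypothetical, and I restrict the couplings to visible-hidden pairs, matching the bipartite connectivity used in the code later.

python

def two_site(op, s1, s2, n):
    """Tensor `op` on qubits s1 and s2, identity on the others."""
    factors = [op if k in (s1, s2) else I for k in range(n)]
    return reduce(np.kron, factors)

J = np.full((v, h), 0.1)  # couplings J_{jk}^n (hypothetical values)
B = np.full(n, 0.05)      # biases B_j^n (hypothetical values)

H_C = sum(J[j, k] * two_site(Z, j, v + k, n)
          for j in range(v) for k in range(h)) \
    + sum(B[j] * single_site(Z, j, n) for j in range(n))

# partial version: same couplings, biases on the hidden units only
H_C_tilde = sum(J[j, k] * two_site(Z, j, v + k, n)
                for j in range(v) for k in range(h)) \
    + sum(B[j] * single_site(Z, j, n) for j in range(v, n))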

step 2: Unclamped thermalization

QAOA is executed using the formulas defined in step 0 and step 1.

a

Initialize the pulse parameters $\gamma, \beta$.

b

Run QAOA using $H_M$ and $H_C$ (see other sites for an explanation of QAOA...) to find the optimal parameters. Here, the expectation value is computed with $\hat{H}_C^n$.

c

Execute the parameterized circuit with the optimal parameters and compute the expectation values $\langle Z_j Z_k \rangle$ and $\langle Z_j \rangle$.
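For intuition, these expectation values can be read directly off the measurement counts; below is a small helper of my own (the code later instead uses Qiskit's average_data with dense matrices). Note that Qiskit bitstrings are little-endian, so qubit 0 is the rightmost character.

python

def expval_z(counts, qubits):
    """Estimate <Z_{q1} Z_{q2} ...> from a Qiskit counts dict."""
    shots = sum(counts.values())
    total = 0.0
    for bitstring, c in counts.items():
        # Qiskit bitstrings are little-endian: qubit q is bitstring[-1 - q]
        parity = sum(int(bitstring[-1 - q]) for q in qubits) % 2
        total += ((-1) ** parity) * c
    return total / shots

counts = {'000': 512, '101': 512}   # hypothetical measurement counts
print(expval_z(counts, [0, 2]))     # <Z_0 Z_2> = 1.0
print(expval_z(counts, [0]))        # <Z_0> = 0.0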

step 3: Clamped thermalization

For each data string $x$:

a

Initialize the pulse parameters $\gamma, \beta$.

b

Encode $x$ into the visible layer.

c

Run QAOA using $H_M$ and $H_C$ (see other sites for an explanation of QAOA...) to find the optimal parameters. Here, the expectation value is computed with $\hat{H}_{\tilde{C}}^n$.

d

Execute the parameterized circuit with the optimal parameters and compute the expectation values $\langle Z_j Z_k \rangle$ and $\langle Z_j \rangle$.

step 4

Now we update the parameters, where $\langle \bar{\cdots} \rangle_D$ is the average of the step 3 (clamped) results over the data.

\delta J_{jk}^n = \langle \bar{Z_j Z_k} \rangle_D - \langle Z_j Z_k \rangle \\
\delta B_j^n = \langle \bar{Z_j} \rangle_D - \langle Z_j \rangle \\
J_{jk}^{n+1} = J_{jk}^n + \delta J_{jk}^n \\
B_j^{n+1} = B_j^n + \delta B_j^n
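In NumPy, this update is just elementwise arithmetic; a minimal sketch with hypothetical expectation values for a machine with 2 visible and 1 hidden unit:

python

import numpy as np

# hypothetical expectation values for v = 2, h = 1
clamped_zz = np.array([[0.8], [0.6]])    # <bar{Z_j Z_k}>_D from step 3
unclamped_zz = np.array([[0.3], [0.1]])  # <Z_j Z_k> from step 2

J = np.zeros((2, 1))                     # J^n
J = J + (clamped_zz - unclamped_zz)      # J^{n+1} = J^n + delta J^n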

step 5

Set the epoch to $n+1$ and return to step 1.

code

import and class initialization

python


# coding: utf-8

from qiskit.aqua.utils import tensorproduct
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit.quantum_info.analysis import average_data

import numpy as np
from copy import deepcopy

from QAOA import QAOA              # self-made QAOA library, introduced below
from my_functions import sigmoid   # self-made helper: the standard logistic function

class QBM:

    def __init__(self, num_visible=2, num_hidden=1, steps=3,
                 temp=1.0, quant_meas_num=None, bias=False, reduced=False):
        self.visible_units = num_visible  # v
        self.hidden_units = num_hidden  # h
        self.total_units = self.visible_units + self.hidden_units
        self.quant_meas_num = quant_meas_num
        self.qaoa_steps = steps
        self.beta_temp = temp
        self.state_prep_angle = np.arctan(np.exp(-1 / self.beta_temp)) * 2.0
        self.param_wb = 0.1 * np.sqrt(6. / self.total_units)
        self.WEIGHTS = np.asarray(np.zeros(shape=[self.visible_units, self.hidden_units]))
        if bias:
            self.BIAS = np.asarray(np.zeros(shape=[self.hidden_units]))
        else:
            self.BIAS = None
        self.reduced = reduced
        self.obs = self.observable()
        self.obs_tilde = self.observable(tilde=True)
        self.data_point = None

Creating the observable. You can create the partial (tilde) version $\hat{H}_{\tilde{C}}$ by setting tilde=True.

python



    def Zi_Zj(self, q1, q2):
        # Dense matrix for Z_{q1} Z_{q2} on the full register (identity elsewhere)
        I_mat = np.array([[1, 0], [0, 1]])
        Z_mat = np.array([[1, 0], [0, -1]])
        if q1 == 0 or q2 == 0:
            tensor = Z_mat
        else:
            tensor = I_mat
        for i in range(1, self.total_units):
            if i == q1 or i == q2:
                tensor = tensorproduct(tensor, Z_mat)
            else:
                tensor = tensorproduct(tensor, I_mat)
        return tensor

    def Zi(self, q):
        # Dense matrix for Z_q on the full register (identity elsewhere)
        I_mat = np.array([[1, 0], [0, 1]])
        Z_mat = np.array([[1, 0], [0, -1]])
        if q == 0:
            tensor = Z_mat
        else:
            tensor = I_mat
        for i in range(1, self.total_units):
            if i == q:
                tensor = tensorproduct(tensor, Z_mat)
            else:
                tensor = tensorproduct(tensor, I_mat)
        return tensor

    def observable(self, tilde=False):
        visible_indices = [i for i in range(self.visible_units)]
        hidden_indices = [i + self.visible_units for i in range(self.hidden_units)]
        total_indices = [i for i in range(self.total_units)]

        obs = np.zeros((2**self.total_units, 2**self.total_units))

        for q1 in visible_indices:
            for q2 in hidden_indices:
                obs += -1.0 * self.Zi_Zj(q1, q2)

        if self.BIAS is not None:
            if tilde:
                for q in hidden_indices:
                    obs += -1.0 * self.Zi(q)
            elif not tilde:
                for q in total_indices:
                    obs += -1.0 * self.Zi(q)
        return obs

Create the parameter circuit.

python



    def U_circuit(self, params, qc):
        visible_indices = [i for i in range(self.visible_units)]
        hidden_indices = [i + self.visible_units for i in range(self.hidden_units)]
        total_indices = [i for i in range(self.total_units)]

        p = 2  # QAOA depth; must match the p passed to the QAOA class below

        alpha = self.state_prep_angle  # thermal-state preparation angle set in __init__
        for i in total_indices:
            qc.rx(alpha, i)
        for i in visible_indices:
            for j in hidden_indices:
                qc.cx(i, j)

        beta, gamma = params[:p], params[p:]

        def add_U_X(qc, beta):
            # mixer layer: RX rotation by -2 * beta on every qubit
            for i in total_indices:
                qc.rx(-2 * beta, i)
            return qc

        def add_U_C(qc, gamma):
            # cost layer: ZZ interaction (CX-RZ-CX) with angle -2 * gamma
            # on each visible-hidden pair
            for q1 in visible_indices:
                for q2 in hidden_indices:
                    qc.cx(q1, q2)
                    qc.rz(-2 * gamma, q2)
                    qc.cx(q1, q2)
            return qc

        for i in range(p):
            qc = add_U_C(qc, gamma[i])
            qc = add_U_X(qc, beta[i])

        return qc

Execute step 2.

python



    def unclamped_circuit(self, params):
        qr = QuantumRegister(self.total_units)
        cr = ClassicalRegister(self.total_units)
        qc = QuantumCircuit(qr, cr)
        qc = self.U_circuit(params, qc)
        qc.measure(range(self.total_units), range(self.total_units))
        return qc

    def make_unclamped_QAOA(self):
        qaoa = QAOA(qc=self.unclamped_circuit, observable=self.obs, num_shots=10000, p=2)
        counts = qaoa.qaoa_run()
        return counts

I use a self-made library called QAOA here; it is introduced below. Next is step 3. The difference from step 2 is that the data encoding is included.

python



    def clamped_circuit(self, params):
        qr = QuantumRegister(self.total_units)
        cr = ClassicalRegister(self.total_units)
        qc = QuantumCircuit(qr, cr)
        # encode the data string x into the visible layer
        for i in range(len(self.data_point)):
            if self.data_point[i] == 1:
                qc.x(i)
        qc = self.U_circuit(params, qc)
        qc.measure(range(self.total_units), range(self.total_units))
        return qc

    def make_clamped_QAOA(self, data_point):
        self.data_point = data_point
        qaoa = QAOA(qc=self.clamped_circuit, observable=self.obs_tilde, num_shots=10000, p=2)
        counts = qaoa.qaoa_run()
        return counts

Finally, the training routine.

python



    def train(self, DATA, learning_rate=1, n_epochs=100, quantum_percentage=1.0, classical_percentage=0.0):

        assert (quantum_percentage + classical_percentage == 1.0)
        DATA = np.asarray(DATA)
        assert (len(DATA[0]) <= self.visible_units)

        for epoch in range(n_epochs):

            print('Epoch: {}'.format(epoch+1))

            visible_indices = [i for i in range(self.visible_units)]
            hidden_indices = [i + self.visible_units for i in range(self.hidden_units)]
            total_indices = [i for i in range(self.total_units)]

            new_weights = deepcopy(self.WEIGHTS)
            if self.BIAS is not None:
                new_bias = deepcopy(self.BIAS)

            counts = self.make_unclamped_QAOA()
            unc_neg_phase_quant = np.zeros_like(self.WEIGHTS)
            for i in range(self.visible_units):
                for j in range(self.hidden_units):
                    model_expectation = average_data(counts, self.Zi_Zj(visible_indices[i], hidden_indices[j]))
                    unc_neg_phase_quant[i][j] = model_expectation

            unc_neg_phase_quant *= (1. / float(len(DATA)))

            if self.BIAS is not None:
                unc_neg_phase_quant_bias = np.zeros_like(self.BIAS)
                for i in range(self.hidden_units):
                    model_expectation = average_data(counts, self.Zi(hidden_indices[i]))
                    unc_neg_phase_quant_bias[i] = model_expectation

                unc_neg_phase_quant_bias *= (1. / float(len(DATA)))

            pos_hidden_probs = sigmoid(np.dot(DATA, self.WEIGHTS))
            pos_hidden_states = pos_hidden_probs > np.random.rand(len(DATA), self.hidden_units)
            pos_phase_classical = np.dot(DATA.T, pos_hidden_probs) * 1. / len(DATA)

            c_pos_phase_quant = np.zeros_like(self.WEIGHTS)
            if self.BIAS is not None:
                c_pos_phase_quant_bias = np.zeros_like(self.BIAS)

            if not self.reduced:

                iter_dat = len(DATA)
                pro_size = len(DATA)
                pro_step = 1

                for data in DATA:
                    counts = self.make_clamped_QAOA(data)
                    ct_pos_phase_quant = np.zeros_like(self.WEIGHTS)

                    for i in range(self.visible_units):
                        for j in range(self.hidden_units):
                            model_expectation = average_data(counts, self.Zi_Zj(visible_indices[i], hidden_indices[j]))
                            ct_pos_phase_quant[i][j] = model_expectation
                    c_pos_phase_quant += ct_pos_phase_quant

                    if self.BIAS is not None:
                        ct_pos_phase_quant_bias = np.zeros_like(self.BIAS)
                        for i in range(self.hidden_units):
                            model_expectation = average_data(counts, self.Zi(hidden_indices[i]))
                            ct_pos_phase_quant_bias[i] = model_expectation
                        c_pos_phase_quant_bias += ct_pos_phase_quant_bias

                    pro_bar = ('==' * pro_step) + ('--' * (pro_size - pro_step))
                    print('\r[{0}] {1}/{2}'.format(pro_bar, pro_step, pro_size), end='')
                    pro_step += 1

                c_pos_phase_quant *= (1. / float(len(DATA)))
                if self.BIAS is not None:
                    c_pos_phase_quant_bias *= (1. / float(len(DATA)))

            neg_visible_activations = np.dot(pos_hidden_states, self.WEIGHTS.T)
            neg_visible_probs = sigmoid(neg_visible_activations)

            neg_hidden_activations = np.dot(neg_visible_probs, self.WEIGHTS)
            neg_hidden_probs = sigmoid(neg_hidden_activations)

            neg_phase_classical = np.dot(
                neg_visible_probs.T, neg_hidden_probs) * 1. / len(DATA)

            new_weights += learning_rate * \
                           (classical_percentage * (pos_phase_classical - neg_phase_classical) +
                            quantum_percentage * (c_pos_phase_quant - unc_neg_phase_quant))

            if self.BIAS is not None:
                new_bias = new_bias + learning_rate * \
                           (quantum_percentage * (c_pos_phase_quant_bias - unc_neg_phase_quant_bias))

            self.WEIGHTS = deepcopy(new_weights)
            print(self.WEIGHTS)
            if self.BIAS is not None:
                self.BIAS = deepcopy(new_bias)
            with open("RBM_info.txt", "w") as f:
                np.savetxt(f, self.WEIGHTS)
                if self.BIAS is not None:
                    np.savetxt(f, self.BIAS)
            with open("RBM_history.txt", "a") as f:
                np.savetxt(f, self.WEIGHTS)
                if self.BIAS is not None:
                    np.savetxt(f, self.BIAS)
                f.write(str('*' * 72) + '\n')
            print('')

        print("Training Done! ")

    def transform(self, DATA):
        return sigmoid(np.dot(DATA, self.WEIGHTS))


if __name__ == '__main__':

    qbm = QBM(num_visible=6, num_hidden=2, bias=True)

    train_data = [[1, 1, 1, 1, 1, 1],
                  [1, 1, -1, -1, 1, 1], [1, -1, -1, -1, 1, 1], [1, 1, 1, -1, -1, -1]]

    qbm.train(DATA=train_data, n_epochs=100, quantum_percentage=1.0, classical_percentage=0.0)

    print(qbm.transform(train_data))

QAOA

The self-made QAOA library is a general-purpose extension of the code from the previously introduced article, Implementation of QAOA without Qiskit Aqua.

QAOA.py


# coding: utf-8

from qiskit import BasicAer, execute
from qiskit.quantum_info.analysis import average_data

from scipy.optimize import minimize
import numpy as np


def classical_minimize(cost_func, initial_params, options, method='powell'):
    result = minimize(cost_func, initial_params, options=options, method=method)
    return result.x


class QAOA:

    def __init__(self, qc, observable, num_shots, p=1, initial_params=None):
        self.QC = qc
        self.obs = observable
        self.SHOTS = num_shots
        self.P = p
        if initial_params is None:
            self.initial_params = [0.1 for _ in range(self.P * 2)]
        else:
            self.initial_params = initial_params

    def QAOA_output_layer(self, params):
        qc = self.QC(params)
        backend = BasicAer.get_backend('qasm_simulator')
        results = execute(qc, backend, shots=self.SHOTS).result()
        counts = results.get_counts(qc)
        expectation = average_data(counts, self.obs)
        return expectation

    def minimize(self):
        initial_params = np.array(self.initial_params)
        opt_params = classical_minimize(self.QAOA_output_layer, initial_params,
                                        options={'maxiter': 500}, method='powell')
        return opt_params

    def qaoa_run(self):
        opt_params = self.minimize()
        qc = self.QC(opt_params)
        backend = BasicAer.get_backend('qasm_simulator')
        results = execute(qc, backend, shots=self.SHOTS).result()
        counts = results.get_counts(qc)
        return counts

if __name__ == '__main__':
    pass
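
As a quick smoke test (my own toy example, not from the article), the class can be driven with any function that maps a parameter array to a measured circuit. Here a single RX rotation is tuned to minimize $\langle Z \rangle$, using the same BasicAer-era API as above:

python

import numpy as np
from qiskit import QuantumCircuit

def toy_circuit(params):
    qc = QuantumCircuit(1, 1)
    qc.rx(params[0], 0)  # only the first parameter is used in this toy
    qc.measure(0, 0)
    return qc

Z = np.array([[1, 0], [0, -1]])  # observable whose expectation is minimized
qaoa = QAOA(qc=toy_circuit, observable=Z, num_shots=1000, p=1)
counts = qaoa.qaoa_run()
print(counts)  # counts should concentrate on '1', where <Z> = -1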

Calculation result

This time, I ran computer experiments on the same data with a classical restricted Boltzmann machine and compared the results. For the classical restricted Boltzmann machine, I used another person's implementation as a reference: Implementation of restricted Boltzmann machine with python.

Execution environment

OS: macOS Catalina ver 10.15.5
CPU: Intel Core i7
Memory: 16 GB

Parameters

Learning rate: 0.2
Epochs: 100

Results of the classic Boltzmann machine

[[7.57147606e-220 4.69897282e-106]
 [1.00548918e-308 1.00000000e+000]
 [1.00000000e+000 3.79341879e-197]
 [1.00000000e+000 7.56791638e-202]
 [1.00000000e+000 2.85625356e-196]
 [1.00000000e+000 2.51860403e-243]
 [0.00000000e+000 1.00000000e+000]
 [9.02745172e-142 1.00000000e+000]]

QABoM results

[[0.59991237 0.68602485]
 [0.74497707 0.89553788]
 [0.26436565 0.113528  ]
 [0.2682931  0.1131361 ]
 [0.58273232 0.3268427 ]
 [0.41413436 0.14647751]
 [0.74056361 0.94317095]
 [0.7317069  0.8868639 ]]

Incidentally, this took almost half a day to compute (I forgot to measure the exact time...). For the time being, the results agree with the classical RBM, although the solution is not as sharp as the classical one.

Summary

This time, I introduced an implementation of the QABoM algorithm published in the paper. I plan to run a few more computer experiments and update the article.

I'm sorry for my poor writing. I will keep striving to improve.

List of references

QABoM: A quantum algorithm to train neural networks using low-depth circuits
RBM: Implementation of restricted Boltzmann machine with python
