As you study quantum mechanics, you will find that linear algebra provides a natural language for it: the Hamiltonian is represented by a matrix, an energy by an eigenvalue, and an eigenstate by an eigenvector. The Schrödinger equation then becomes an eigenvalue equation. In a condensed-matter physics laboratory, this equation is solved on a computer to predict physical properties (among other things, of course).
In university lectures, however, you rarely touch a computer for this; that usually begins only after you join a laboratory. I hope this article is useful to readers who, as I was before being assigned to a laboratory, are in the situation of "using laboratory equipment but not a computer" and want a first look at the theory of physical properties.
The purpose of this article is to bridge the problems you learn to "solve by hand" in classroom lectures and the problems you learn to "solve numerically" with a computer.
The relationship between the spin operator $\hat{S_z}$ and its eigenstates is given as follows.
\hat{S_z} \ket{\uparrow} = \frac{\hbar}{2} \ket{\uparrow} ,~~~~~\hat{S_z} \ket{\downarrow} = -\frac{\hbar}{2} \ket{\downarrow}
Expressed in matrix form, these relations become
\frac{\hbar}{2}
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}
\begin{pmatrix}
1 \\
0
\end{pmatrix}
= \frac{\hbar}{2} \begin{pmatrix}
1 \\
0
\end{pmatrix},~~~~~
\frac{\hbar}{2}
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}
\begin{pmatrix}
0 \\
1
\end{pmatrix}
= \frac{-\hbar}{2} \begin{pmatrix}
0 \\
1
\end{pmatrix}
As you can see from this matrix representation,
\begin{pmatrix}
a \\
b
\end{pmatrix} = a \ket{\uparrow} + b \ket{\downarrow}
It does not matter what function $\ket{\uparrow}$ actually is; only its coefficients enter the calculation. This rests on the fact that any state can be expanded in an orthonormal basis.
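As a minimal sketch of this correspondence (assuming NumPy, and working in units where $\hbar = 1$), the kets become plain vectors and $\hat{S_z}$ a 2×2 matrix:

```python
import numpy as np

hbar = 1.0  # work in units where hbar = 1

# Basis vectors representing |up> and |down> in the S_z eigenbasis
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# Matrix representation of S_z
S_z = hbar / 2 * np.array([[1.0, 0.0],
                           [0.0, -1.0]])

# A general state a|up> + b|down> is just the coefficient vector (a, b)
a, b = 0.6, 0.8
phi = a * up + b * down

print(S_z @ up)    # acts as +hbar/2 on |up>
print(S_z @ down)  # acts as -hbar/2 on |down>
print(phi)         # the coefficient vector (a, b)
```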
Now, is an eigenstate of $\hat{S_z}$ also an eigenstate of $\hat{S_y}$? Of course not. So what happens if we let $\hat{S_y}$ act on $\ket{\uparrow}$?
This can be understood from the ladder operators. That is, combining the relations
S^{+}\ket{\uparrow} = 0,~~~~~ S^{+}\ket{\downarrow} = \hbar \ket{\uparrow}\\
S^{-}\ket{\uparrow} = \hbar \ket{\downarrow},~~~~~S^{-} \ket{\downarrow} = 0
with the definition of the ladder operators, $S^{\pm} = \hat{S_x} \pm i\hat{S_y}$,
\hat{S_x}\ket{\uparrow} = \frac{\hbar}{2} \ket{\downarrow}, ~~~~~\hat{S_x}\ket{\downarrow} = \frac{\hbar}{2} \ket{\uparrow}\\
\hat{S_y} \ket{\uparrow} = i\frac{\hbar}{2}\ket{\downarrow}, ~~~~~\hat{S_y} \ket{\downarrow} = -i\frac{\hbar}{2} \ket{\uparrow}
is obtained. Indeed, $\ket{\uparrow}$ is not an eigenstate of $\hat{S_y}$.
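This derivation can be checked numerically: build $S^{+}$ and $S^{-}$ as matrices in the $S_z$ eigenbasis and invert the definitions $S^{\pm} = \hat{S_x} \pm i\hat{S_y}$ (a quick sketch with NumPy, in units where $\hbar = 1$):

```python
import numpy as np

hbar = 1.0  # units where hbar = 1

# Ladder operators: S+|down> = hbar|up>, S-|up> = hbar|down>
S_plus = hbar * np.array([[0.0, 1.0],
                          [0.0, 0.0]])
S_minus = hbar * np.array([[0.0, 0.0],
                           [1.0, 0.0]])

# Inverting S± = S_x ± i S_y:
S_x = (S_plus + S_minus) / 2
S_y = (S_plus - S_minus) / 2j

print(S_y)  # equals (hbar/2) times the Pauli matrix sigma_y
```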
So what are the eigenstates of $\hat{S_y}$? You could probably guess them, but let us derive them systematically. First, the matrix representation of $\hat{S_y}\ket{\uparrow} = i\frac{\hbar}{2}\ket{\downarrow}$ is
\frac{\hbar}{2}\begin{pmatrix}
0 & -i\\
i & 0
\end{pmatrix}
\begin{pmatrix}1\\0\end{pmatrix} = i\frac{\hbar}{2}\begin{pmatrix}0\\1\end{pmatrix}
Here the matrix on the left-hand side (apart from the factor $\frac{\hbar}{2}$) is the Pauli matrix $\sigma_y$. Diagonalizing $\sigma_y$ yields the eigenstates of $\hat{S_y}$. The eigenvalues and eigenvectors are
\lambda_1=1, ~~ u_1=\frac{1}{\sqrt{2}}\begin{pmatrix}1 \\ i \end{pmatrix} \tag{1}\\
\lambda_2=-1, ~~ u_2=\frac{1}{\sqrt{2}}\begin{pmatrix}-1\\i\end{pmatrix} \tag{2}
To verify that $u_1$ is indeed an eigenvector:
\hat{\sigma}_yu_1=\begin{pmatrix}
0 & -i\\
i & 0
\end{pmatrix}\frac{1}{\sqrt{2}}\begin{pmatrix}1 \\ i \end{pmatrix}=
\frac{1}{\sqrt{2}}\begin{pmatrix}-i^2\\i\end{pmatrix} =
\frac{1}{\sqrt{2}}\begin{pmatrix}1\\i\end{pmatrix} = \lambda_1\cdot u_1
confirming that $u_1$ is an eigenvector with eigenvalue $\lambda_1 = 1$.
Let us also calculate the expectation value of $\sigma_x$ in this eigenstate $u_1$.
\langle u_1 | \sigma_x | u_1 \rangle = \frac{1}{\sqrt{2}}\begin{pmatrix}1 & -i \end{pmatrix}
\begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix}
\frac{1}{\sqrt{2}}\begin{pmatrix}1\\i\end{pmatrix}
=0
Therefore, the expectation value is 0.
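The same bra-ket arithmetic can be reproduced with NumPy (a small sketch; the bra is obtained by conjugating the ket):

```python
import numpy as np

sigma_x = np.array([[0, 1],
                    [1, 0]])

# Eigenvector u1 of sigma_y from the analytic calculation above
u1 = np.array([1, 1j]) / np.sqrt(2)

# <u1| sigma_x |u1> : conjugate the ket to form the bra
expval = u1.conj() @ sigma_x @ u1
print(expval.real)  # -> 0.0
```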
Let us now implement the analytic calculation above in Python.
import numpy as np

sigma_y = np.array([
    [0, -1j],
    [1j, 0]
])
eigenvalues, eigenvectors = np.linalg.eigh(sigma_y)
print(eigenvalues[0], end=" ")
print(eigenvectors[0])
# output : -1.0 [-0.70710678+0.j -0.70710678+0.j]
This does not match the analytic result. The discrepancy comes from how the eigenvectors are stored, which the following operation makes clear.
print(eigenvectors.conj().T @ sigma_y @ eigenvectors)
# output : [[-1.+0.j  0.+0.j]
#           [ 0.+0.j  1.+0.j]]   (off-diagonal entries ~1e-16)
In other words, np.linalg.eigh() stores the two eigenvectors as the columns of the returned array:
eigenvectors[0] = [u1[0], u2[0]]
eigenvectors[1] = [u1[1], u2[1]]
So to pick out an eigenvector by row indexing, the array returned by np.linalg.eigh() must first be transposed.
import numpy as np

sigma_y = np.array([
    [0, -1j],
    [1j, 0]
])
eigenvalues, eigenvectors = np.linalg.eigh(sigma_y)
evs = np.transpose(eigenvectors)
print(eigenvalues[0], end=" ")
print(evs[0])
# output : -1.0 [-0.71+0.j 0.+0.71j]   ( 1/sqrt(2) = 0.71 )
This matches the eigenvalue and eigenvector of $\sigma_y$ calculated analytically in (2).
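Equivalently, instead of transposing, the $i$-th eigenvector can be read off directly as the $i$-th column, `eigenvectors[:, i]`; checking the eigenvalue equation confirms this storage convention:

```python
import numpy as np

sigma_y = np.array([[0, -1j],
                    [1j, 0]])
eigenvalues, eigenvectors = np.linalg.eigh(sigma_y)

# The i-th eigenvector is the i-th COLUMN of the returned array
for i in range(2):
    u = eigenvectors[:, i]
    print(np.allclose(sigma_y @ u, eigenvalues[i] * u))  # -> True
```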
Diagonalization of matrices arises in many physical situations. As an example, let us find the ground state of a one-dimensional two-site model with spin.
As a Hamiltonian
H=H_{kin}+H_{\mu}+H_{U}\\
H_{kin} = -t\sum_{\sigma}(c^{\dagger}_{2,\sigma}c_{1,\sigma}+c^{\dagger}_{1,\sigma}c_{2,\sigma})\\
H_{\mu} = \sum_{\sigma}\sum_{i=1,2}(-\mu_i)c^{\dagger}_{i,\sigma}c_{i,\sigma}\\
H_{U} = U\sum_{i=1,2}n_{i,\uparrow}n_{i,\downarrow}
As the state space, consider the half-filled sector with one up spin and one down spin; the particle number and total spin are conserved. There are then four possible states.
\psi_1 = \ket{(\uparrow\downarrow)_1,(0)_2}\\
\psi_2 = \ket{(0)_1,(\uparrow\downarrow)_2}\\
\psi_3 = \ket{(\uparrow)_1,(\downarrow)_2}\\
\psi_4 = \ket{(\downarrow)_1,(\uparrow)_2}
Each state is assumed to be normalized. For example, $\psi_1$ is the state in which both the up spin and the down spin occupy site 1.
Expanding a general state $\phi$ in this basis, any state of this sector can be represented as
\phi=a\psi_1+b\psi_2+c\psi_3+d\psi_4=\begin{pmatrix}a \\ b \\ c \\ d \end{pmatrix}
Acting with $H$ on this state gives
H\phi=\begin{pmatrix}
-2\mu_1+U & 0 & -t & -t \\
0 &-2\mu_2+U & -t & -t \\
-t & -t & -\mu_1-\mu_2 & 0 \\
-t & -t & 0 & -\mu_1-\mu_2
\end{pmatrix}\phi
so the Hamiltonian is represented by the matrix above. By diagonalizing this matrix, let us find the eigenstates. The procedure is the same as in the previous section.
import numpy as np

# Calculation conditions
mu_1 = 0.1
mu_2 = 0.4
t = 1.0
U = 3.0

H = np.array([
    [-2*mu_1 + U, 0, -t, -t],
    [0, -2*mu_2 + U, -t, -t],
    [-t, -t, -mu_1 - mu_2, 0],
    [-t, -t, 0, -mu_1 - mu_2]
])
eigenvalues, eigenvectors = np.linalg.eigh(H)
evs = np.transpose(eigenvectors)
for i in range(4):
    print(eigenvalues[i], end=" ")
    print(evs[i])
# output
# -1.51 [ 0.29  0.34  0.63  0.63 ]
# -0.50 [ 1e-17  1e-17 -0.71  0.71 ]
#  2.44 [ 0.54 -0.83  0.10  0.10 ]
#  3.57 [-0.79 -0.44  0.30  0.30 ]
Looking at the results, the state with the smallest eigenvalue, i.e. the ground state, is $[0.29 ~~ 0.34 ~~ 0.63 ~~ 0.63]$. Because the Coulomb repulsion $U$ is large under these conditions, the ground state contains large contributions from $\psi_3$ and $\psi_4$. The weights of $\psi_1$ and $\psi_2$ are not in a 1:1 ratio because the site energies differ.
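This mixing can be quantified by the probability weights $|c_i|^2$ of the ground state; a short sketch (np.linalg.eigh returns eigenvalues in ascending order, so the ground state is column 0):

```python
import numpy as np

mu_1, mu_2, t, U = 0.1, 0.4, 1.0, 3.0
H = np.array([
    [-2*mu_1 + U, 0, -t, -t],
    [0, -2*mu_2 + U, -t, -t],
    [-t, -t, -mu_1 - mu_2, 0],
    [-t, -t, 0, -mu_1 - mu_2]
])
eigenvalues, eigenvectors = np.linalg.eigh(H)

# eigh sorts eigenvalues in ascending order: column 0 is the ground state
ground = eigenvectors[:, 0]

# Probability weight |c_i|^2 of each basis state psi_i in the ground state
weights = np.abs(ground)**2
for i, w in enumerate(weights, start=1):
    print(f"psi_{i}: {w:.2f}")
```

The weights sum to 1, and by symmetry of the Hamiltonian the $\psi_3$ and $\psi_4$ weights are equal.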