Pauli matrices and biquaternions in Clifford algebra

A length with a direction can be expressed by a vector. Clifford algebra makes it possible to handle areas and volumes as well, by combining vectors. We introduce the Pauli matrices as representation matrices of the three-dimensional Clifford algebra and confirm that the algebra is isomorphic to the biquaternions. Calculations in SymPy are attached.

This is a series of articles.

  1. Complex vector considered as a real vector
  2. Bicomplex numbers considered by representation matrix
  3. Quaternion considered by representation matrix
  4. Pauli matrices and biquaternions in Clifford algebra ← This article

There are also related articles.

If you've read this article and are interested in Clifford algebra, we recommend the following articles:

Clifford algebra

This is a calculation method that allows you to handle geometric objects (length, area, volume, etc.) algebraically.

I will briefly explain it to the extent needed for this article.

Basis vector

The vectors that represent the coordinate axes are called basis vectors. In 3D $xyz$ coordinates, let the basis vector along the $x$ axis be $\mathbf{e_1}$, the one along the $y$ axis be $\mathbf{e_2}$, and the one along the $z$ axis be $\mathbf{e_3}$.

\mathbf{e_1},\mathbf{e_2},\mathbf{e_3}=
\left(\begin{matrix} 1 \\ 0 \\ 0 \end{matrix}\right),
\left(\begin{matrix} 0 \\ 1 \\ 0 \end{matrix}\right),
\left(\begin{matrix} 0 \\ 0 \\ 1 \end{matrix}\right)

1-vector

Basis vectors allow you to write vectors in the style of algebraic expressions.

\begin{align}
   \left(\begin{matrix} x \\ y \\ z \end{matrix}\right)
&= \left(\begin{matrix} x \\ 0 \\ 0 \end{matrix}\right)
 + \left(\begin{matrix} 0 \\ y \\ 0 \end{matrix}\right)
 + \left(\begin{matrix} 0 \\ 0 \\ z \end{matrix}\right) \\
&=x\left(\begin{matrix} 1 \\ 0 \\ 0 \end{matrix}\right)
 +y\left(\begin{matrix} 0 \\ 1 \\ 0 \end{matrix}\right)
 +z\left(\begin{matrix} 0 \\ 0 \\ 1 \end{matrix}\right) \\
&=x\mathbf{e_1}+y\mathbf{e_2}+z\mathbf{e_3}
\end{align}

Here is a concrete example.

\left(\begin{matrix} 2 \\ 3 \\ 4 \end{matrix}\right)
=2\mathbf{e_1}+3\mathbf{e_2}+4\mathbf{e_3}

A vector represented in this way, as a linear combination (scalar multiples and sums) of the basis vectors, is called a **1-vector** in Clifford algebra terminology.

2-vector

In everyday life we treat units so that the product of two lengths is an area: $\mathrm{m}\times\mathrm{m}=\mathrm{m^2}$.

Applying this to the bases, the product $\mathbf{e_1e_2}$ plays the role of a basis for area, and its coefficient gives the value of the area.

Here is a simple example.

(2\mathbf{e_1})(3\mathbf{e_2})=6\mathbf{e_1e_2}

The left side represents a rectangle with sides 2 and 3, and the $6$ on the right side represents its area.

The area represented by combining two bases in this way is called a **2-vector** in Clifford algebra terminology. Just as a 1-vector represents the size (length) of a line segment, a 2-vector represents the size (area) of a surface.

The cross product of two 1-vectors that intersect obliquely represents the area of the parallelogram they span.

Direction

Just as a 1-vector is a length with a direction, a 2-vector is an area with a direction. How the direction is expressed depends on the dimension.

2D

Since the parallelogram always lies in the $xy$ plane, its orientation is represented by a sign. The sign depends on how the basis is taken: it is positive if the two 1-vectors that span the surface are in the same positional relationship as the basis vectors (counterclockwise in a right-handed system).

3D

The plane on which the parallelogram lies can face in various directions. The orientation is expressed by the ratio of the components projected onto the coordinate planes.

3-vector

In the same way, you can define a **3-vector** that represents a volume. Four and more dimensions are out of scope this time, so we assume three dimensions.

Here is a simple example.

(2\mathbf{e_1})(3\mathbf{e_2})(4\mathbf{e_3})=24\mathbf{e_1e_2e_3}

The left side represents a rectangular parallelepiped with sides 2, 3 and 4, and the $24$ on the right side represents its volume.

The product of three mutually oblique 1-vectors represents the volume of the parallelepiped they span.

For 3-vectors, the orientation is represented by a sign. The sign depends on how the basis is taken: it is positive if the three 1-vectors that span the volume are in the same positional relationship as the basis vectors. The picture is a little hard to visualize, but keep in mind that the sign is inverted by mirror-image inversion.
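
As a quick illustration (my addition, not in the original), the sign flip under mirror inversion can be seen with a determinant, which measures oriented volume:

>>> from sympy import Matrix
>>> M=Matrix([[2,0,0],[0,3,0],[0,0,4]])  # columns: the edges 2e1, 3e2, 4e3
>>> M.det()
24
>>> Matrix.hstack(M.col(1),M.col(0),M.col(2)).det()  # mirror image: swap two edges
-24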

Expression expansion

Once vectors are written in the style of algebraic expressions, their products can be treated like ordinary algebraic expressions.

For comparison, consider the expansion of a normal expression that does not use vectors.

2 variables

>>> from sympy import *
>>> x,y=symbols("x y")
>>> a,b,c,d=symbols("a b c d")
>>> (a*x+b*y)*(c*x+d*y)
(a*x + b*y)*(c*x + d*y)
>>> expand((a*x+b*y)*(c*x+d*y))
a*c*x**2 + a*d*x*y + b*c*x*y + b*d*y**2
>>> expand((a*x+b*y)*(c*x+d*y)).collect(x*y)
a*c*x**2 + b*d*y**2 + x*y*(a*d + b*c)
\begin{align}
&(ax+by)(cx+dy) \\
&=ax(cx+dy) \\
&\quad +by(cx+dy) \\
&=acx^2+adxy \\
&\quad +bc\underbrace{yx}_{xy}+bdy^2 \\
&=acx^2+bdy^2+\underbrace{(ad+bc)xy}_{Cross term}
\end{align}

Note that we use commutativity ($xy=yx$) to collect like terms.

3 variables

>>> x,y,z=symbols("x y z")
>>> a1,a2,a3,b1,b2,b3=symbols("a1:4 b1:4")
>>> (a1*x+a2*y+a3*z)*(b1*x+b2*y+b3*z)
(a1*x + a2*y + a3*z)*(b1*x + b2*y + b3*z)
>>> expand((a1*x+a2*y+a3*z)*(b1*x+b2*y+b3*z))
a1*b1*x**2 + a1*b2*x*y + a1*b3*x*z + a2*b1*x*y + a2*b2*y**2 + a2*b3*y*z + a3*b1*x*z + a3*b2*y*z + a3*b3*z**2
>>> expand((a1*x+a2*y+a3*z)*(b1*x+b2*y+b3*z)).collect([x*y,y*z,z*x])
a1*b1*x**2 + a2*b2*y**2 + a3*b3*z**2 + x*y*(a1*b2 + a2*b1) + x*z*(a1*b3 + a3*b1) + y*z*(a2*b3 + a3*b2)
\begin{align}
&(a_1x+a_2y+a_3z)(b_1x+b_2y+b_3z) \\
&=a_1x(b_1x+b_2y+b_3z) \\
&\quad +a_2y(b_1x+b_2y+b_3z) \\
&\quad +a_3z(b_1x+b_2y+b_3z) \\
&=a_1b_1x^2+a_1b_2xy+a_1b_3\underbrace{xz}_{zx} \\
&\quad +a_2b_1\underbrace{yx}_{xy}+a_2b_2y^2+a_2b_3yz \\
&\quad +a_3b_1zx+a_3b_2\underbrace{zy}_{yz}+a_3b_3z^2 \\
&=a_1b_1x^2+a_2b_2y^2+a_3b_3z^2 \\
&\quad +\underbrace{(a_1b_2+a_2b_1)xy+(a_2b_3+a_3b_2)yz+(a_3b_1+a_1b_3)zx}_{Cross term} \\
\end{align}

Cross-term variables are arranged cyclically ($x → y → z → x → \cdots$).

Geometric product

Clifford algebra provides the **geometric product** as a multiplication of vectors. It yields the inner product and the outer product of two vectors at the same time.

The product of different bases combines them.

[Example]\ \mathbf{e_1e_2}

The product of a basis with itself is $1$. This is the rule from which the inner product arises.

[Example]\ \mathbf{e_1e_1}=1

Exchanging two bases in a product reverses the sign. This property is called **anticommutativity**. This is the rule from which the cross product arises.

[Example]\ \mathbf{e_1e_2}=-\mathbf{e_2e_1}

2 components

A two-dimensional 1-vector has two components. Let us calculate the geometric product of two such vectors and compare it with the expansion of the two-variable expression above.

\begin{align}
&(a\mathbf{e_1}+b\mathbf{e_2})(c\mathbf{e_1}+d\mathbf{e_2}) \\
&=a\mathbf{e_1}(c\mathbf{e_1}+d\mathbf{e_2}) \\
&\quad +b\mathbf{e_2}(c\mathbf{e_1}+d\mathbf{e_2}) \\
&=ac\underbrace{\mathbf{e_1e_1}}_{1}+ad\mathbf{e_1e_2} \\
&\quad +bc\underbrace{\mathbf{e_2e_1}}_{-\mathbf{e_1e_2}}+bd\underbrace{\mathbf{e_2e_2}}_{1} \\
&=\underbrace{(ac+bd)}_{inner product}+\underbrace{(ad-bc)\mathbf{e_1e_2}}_{Cross product} \\
\end{align}

You can see that the inner product arises from the rule that the square of a basis is $1$, and the outer product arises from anticommutativity. The cross-product part is the outer product.

The outer-product coefficient $ad-bc$ is the area of the parallelogram spanned by the two 1-vectors. It matches the determinant of the matrix whose columns are the two 1-vectors.

\det\left(\begin{array}{c|c}a&c\\b&d\end{array}\right)=ad-bc
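
A one-line SymPy confirmation (my addition):

>>> a,b,c,d=symbols("a b c d")
>>> Matrix([[a,c],[b,d]]).det()
a*d - b*c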

3 components

A three-dimensional 1-vector has three components. Let us calculate the geometric product of two such vectors and compare it with the expansion of the three-variable expression above.

\begin{align}
&(a_1\mathbf{e_1}+a_2\mathbf{e_2}+a_3\mathbf{e_3})(b_1\mathbf{e_1}+b_2\mathbf{e_2}+b_3\mathbf{e_3}) \\
&=a_1\mathbf{e_1}(b_1\mathbf{e_1}+b_2\mathbf{e_2}+b_3\mathbf{e_3}) \\
&\quad +a_2\mathbf{e_2}(b_1\mathbf{e_1}+b_2\mathbf{e_2}+b_3\mathbf{e_3}) \\
&\quad +a_3\mathbf{e_3}(b_1\mathbf{e_1}+b_2\mathbf{e_2}+b_3\mathbf{e_3}) \\
&=a_1b_1\underbrace{\mathbf{e_1e_1}}_{1}+a_1b_2\mathbf{e_1e_2}+a_1b_3\underbrace{\mathbf{e_1e_3}}_{-\mathbf{e_3e_1}} \\
&\quad +a_2b_1\underbrace{\mathbf{e_2e_1}}_{-\mathbf{e_1e_2}}+a_2b_2\underbrace{\mathbf{e_2e_2}}_{1}+a_2b_3\mathbf{e_2e_3} \\
&\quad +a_3b_1\mathbf{e_3e_1}+a_3b_2\underbrace{\mathbf{e_3e_2}}_{-\mathbf{e_2e_3}}+a_3b_3\underbrace{\mathbf{e_3e_3}}_{1} \\
&=\underbrace{(a_1b_1+a_2b_2+a_3b_3)}_{inner product} \\
&\quad +\underbrace{(a_1b_2-a_2b_1)\mathbf{e_1e_2}+(a_2b_3-a_3b_2)\mathbf{e_2e_3}+(a_3b_1-a_1b_3)\mathbf{e_3e_1}}_{Cross product} \\
\end{align}

The cross product has three components because the parallelogram spanned by the two vectors is projected onto the coordinate planes $xy, yz, zx$ represented by the 2-vector bases. The orientation of the surface is expressed by these three components.

The area of a parallelogram is obtained as the square root of the sum of the squares of the three components.

\sqrt{(a_1b_2-a_2b_1)^2+(a_2b_3-a_3b_2)^2+(a_3b_1-a_1b_3)^2}
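
As a cross-check (my addition), SymPy's built-in cross product yields exactly these three components; the coefficients of $\mathbf{e_2e_3},\mathbf{e_3e_1},\mathbf{e_1e_2}$ appear as the $x,y,z$ components of the vector product (the Hodge dual that appears later):

>>> A=Matrix([a1,a2,a3])
>>> B=Matrix([b1,b2,b3])
>>> A.cross(B)
Matrix([
[ a2*b3 - a3*b2],
[-a1*b3 + a3*b1],
[ a1*b2 - a2*b1]])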

Formal sum

In three dimensions there are eight kinds of bases in total, including the scalar, which carries no basis mark. Let us formally line them all up in the form of a polynomial. (Such a polynomial is called a **formal sum**.)

\underbrace{a_0}_{scalar}+\underbrace{a_1\mathbf{e_1}+a_2\mathbf{e_2}+a_3\mathbf{e_3}}_{1-vector}+\underbrace{a_4\mathbf{e_1e_2}+a_5\mathbf{e_2e_3}+a_6\mathbf{e_3e_1}}_{2-vector}+\underbrace{a_7\mathbf{e_1e_2e_3}}_{3-vector}

Pseudovector / pseudoscalar

Focusing on the number of terms, the scalar and 3-vector are one term each, and the 1-vector and 2-vector are three terms each.

Since the numbers of terms match, a 2-vector can be treated like a vector. A 1-vector derived from a 2-vector in this way is called a **pseudovector**. Typical pseudovectors appear as the vector product, a kind of cross product.

Similarly, 3-vectors may be treated as pseudo-scalars. Scalars derived from such 3-vectors are called **pseudoscalars**.

\underbrace{a_0}_{scalar}+\underbrace{a_1\mathbf{e_1}+a_2\mathbf{e_2}+a_3\mathbf{e_3}}_{vector}+\underbrace{a_4\mathbf{e_1e_2}+a_5\mathbf{e_2e_3}+a_6\mathbf{e_3e_1}}_{pseudovector}+\underbrace{a_7\mathbf{e_1e_2e_3}}_{pseudoscalar}

The three-dimensional pseudovector corresponds to the normal to the surface represented by the 2-vector.

Pauli matrices

In the explanation so far, the 1-vector has been represented by a vector, while the 2-vector and 3-vector have been shown as algebraic expressions. Could we express all of these together as matrices of the same size and compute the geometric product directly as a matrix product? In other words, can the Clifford algebra be represented by matrices?

Since the formal sum has 8 terms, an amount of information equal to $ℝ^8$ is needed to store all the coefficients. Square matrices make products easy to compute, and a $2×2$ complex matrix ($ℂ^4≅ℝ^8$) fits perfectly.

Let us find such matrices. We will see that they are known as the **Pauli matrices**.

scalar

The basis of the scalar part is $1$. Since it is the identity element of multiplication, we assign the identity matrix to it.

1↦I=\left(\begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix}\right)

1-vector

Complexify each basis vector, then add a column and fill it with unknowns.

\begin{align}
\left(\begin{matrix} 1 \\ 0 \\ 0 \end{matrix}\right),
\left(\begin{matrix} 0 \\ 1 \\ 0 \end{matrix}\right),
\left(\begin{matrix} 0 \\ 0 \\ 1 \end{matrix}\right)
&\xrightarrow{Complexification}
\left(\begin{matrix} 1 \\ 0 \end{matrix}\right),
\left(\begin{matrix} i \\ 0 \end{matrix}\right),
\left(\begin{matrix} 0 \\ 1 \end{matrix}\right) \\
&\xrightarrow{Add more columns}
\left(\begin{matrix} 1 & x_1 \\ 0 & y_1 \end{matrix}\right),
\left(\begin{matrix} i & x_2 \\ 0 & y_2 \end{matrix}\right),
\left(\begin{matrix} 0 & x_3 \\ 1 & y_3 \end{matrix}\right)
\end{align}

Solve for the unknowns, using the condition that the square of each basis is the identity. Since `solve` expects an expression set equal to $0$, the identity matrix `_1` on the right-hand side is moved to the left-hand side as `-_1`.

>>> x1,x2,x3,y1,y2,y3=symbols("x1:4 y1:4")
>>> _1=eye(2)
>>> solve(Matrix([[1,x1],[0,y1]])**2-_1,[x1,y1])
[(0, -1), (0, 1)]
>>> solve(Matrix([[I,x2],[0,y2]])**2-_1,[x2,y2])
[]
>>> solve(Matrix([[0,x3],[1,y3]])**2-_1,[x3,y3])
[(1, 0)]
\begin{align}
\left(\begin{matrix} 1 & x_1 \\ 0 & y_1 \end{matrix}\right)^2&=I&
∴(x_1,y_1)&=(0,-1),(0,1) \\
\left(\begin{matrix} i & x_2 \\ 0 & y_2 \end{matrix}\right)^2&=I&
∴(x_2,y_2)&=No solution\\
\left(\begin{matrix} 0 & x_3 \\ 1 & y_3 \end{matrix}\right)^2&=I&
∴(x_3,y_3)&=(1,0)
\end{align}

$(x_1,y_1)$ has two solutions, but $(0,1)$ gives the identity matrix, which has already been assigned to the scalar, so we adopt $(0,-1)$.

$(x_2,y_2)$ has no solution. Checking the components of the squared matrix shows why: the constant components already disagree, before the unknowns even come into play.

>>> Matrix([[I,x2],[0,y2]])**2
Matrix([
[-1, x2*y2 + I*x2],
[ 0,        y2**2]])
\left(\begin{matrix} i & x_2 \\ 0 & y_2 \end{matrix}\right)^2
=\left(\begin{matrix} -1 & x_2y_2+x_2i \\ 0 & y_2^2 \end{matrix}\right)
≠\left(\begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix}\right)

This arrangement doesn't yield a basis, so let's move the $i$ to the second row instead.

>>> x4,y4=symbols("x4 y4")
>>> solve(Matrix([[0,x4],[I,y4]])**2-_1,[x4,y4])
[(-I, 0)]
\begin{align}
\left(\begin{matrix} 0 & x_4 \\ i & y_4 \end{matrix}\right)^2&=I&
∴(x_4,y_4)&=(-i,0)
\end{align}

Now we have three bases. Call them $E$.

>>> E=[Matrix([[1,0],[0,-1]]),Matrix([[0,1],[1,0]]),Matrix([[0,-I],[I,0]])]
E_0,E_1,E_2:=
\left(\begin{matrix} 1 & 0 \\ 0 &-1 \end{matrix}\right),
\left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right),
\left(\begin{matrix} 0 &-i \\ i & 0 \end{matrix}\right)

These are not quite what we originally expected, so we need to consider which one to assign to each of $\mathbf{e_1},\mathbf{e_2},\mathbf{e_3}$.

3-vector

To determine the representation matrix of the 1-vector basis, consider the 3-vector basis.

In Clifford algebra, the square of the 3-vector basis $\mathbf{e_1e_2e_3}$ is $-1$. You can check this by repeatedly exchanging adjacent bases, just as in bubble sort.

\begin{align}
(\mathbf{e_1e_2e_3})^2
&=\mathbf{e_1e_2}\underbrace{\mathbf{e_3e_1}}_{Exchange}\mathbf{e_2e_3} \\
&=-\mathbf{e_1}\underbrace{\mathbf{e_2e_1}}_{Exchange}\mathbf{e_3e_2e_3} \\
&=\mathbf{e_1e_1e_2}\underbrace{\mathbf{e_3e_2}}_{Exchange}\mathbf{e_3} \\
&=-\underbrace{\mathbf{e_1e_1}}_{1}\underbrace{\mathbf{e_2e_2}}_{1}\underbrace{\mathbf{e_3e_3}}_{1} \\
&=-1
\end{align}

$\mathbf{e_1e_2e_3}$ is identified with the imaginary unit $i$ because it becomes $-1$ when squared ($\mathbf{e_1e_2e_3}=i$).

Product order

Check the products of the matrices $E$ obtained earlier in every possible order.

>>> import itertools
>>> r=list(range(3))
>>> r
[0, 1, 2]
>>> p=list(itertools.permutations(r))
>>> p
[(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]
>>> for i,j,k in p: print(i,j,k,E[i]*E[j]*E[k])
...
0 1 2 Matrix([[I, 0], [0, I]])
0 2 1 Matrix([[-I, 0], [0, -I]])
1 0 2 Matrix([[-I, 0], [0, -I]])
1 2 0 Matrix([[I, 0], [0, I]])
2 0 1 Matrix([[I, 0], [0, I]])
2 1 0 Matrix([[-I, 0], [0, -I]])
\begin{align}
E_0E_1E_2
&=\left(\begin{matrix} 1 & 0 \\ 0 &-1 \end{matrix}\right)
  \left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right)
  \left(\begin{matrix} 0 &-i \\ i & 0 \end{matrix}\right)
 =\left(\begin{matrix} i & 0 \\ 0 & i \end{matrix}\right)=iI \\
E_0E_2E_1
&=\left(\begin{matrix} 1 & 0 \\ 0 &-1 \end{matrix}\right)
  \left(\begin{matrix} 0 &-i \\ i & 0 \end{matrix}\right)
  \left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right)
 =\left(\begin{matrix}-i & 0 \\ 0 &-i \end{matrix}\right)=-iI \\
E_1E_0E_2
&=\left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right)
  \left(\begin{matrix} 1 & 0 \\ 0 &-1 \end{matrix}\right)
  \left(\begin{matrix} 0 &-i \\ i & 0 \end{matrix}\right)
 =\left(\begin{matrix}-i & 0 \\ 0 &-i \end{matrix}\right)=-iI \\
E_1E_2E_0
&=\left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right)
  \left(\begin{matrix} 0 &-i \\ i & 0 \end{matrix}\right)
  \left(\begin{matrix} 1 & 0 \\ 0 &-1 \end{matrix}\right)
 =\left(\begin{matrix} i & 0 \\ 0 & i \end{matrix}\right)=iI \\
E_2E_0E_1
&=\left(\begin{matrix} 0 &-i \\ i & 0 \end{matrix}\right)
  \left(\begin{matrix} 1 & 0 \\ 0 &-1 \end{matrix}\right)
  \left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right)
 =\left(\begin{matrix} i & 0 \\ 0 & i \end{matrix}\right)=iI \\
E_2E_1E_0
&=\left(\begin{matrix} 0 &-i \\ i & 0 \end{matrix}\right)
  \left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right)
  \left(\begin{matrix} 1 & 0 \\ 0 &-1 \end{matrix}\right)
 =\left(\begin{matrix}-i & 0 \\ 0 &-i \end{matrix}\right)=-iI
\end{align}

We find that the three combinations $E_0E_1E_2$, $E_1E_2E_0$, and $E_2E_0E_1$ equal $i$ times the identity matrix.

Selection

The rest is a matter of convention. We choose $E_1$ as the representation matrix for $\mathbf{e_1}$ because it has a simple form (no signs; a mirror image of the identity matrix). Once $\mathbf{e_1}$ is fixed, the remaining assignments are determined automatically by the cyclic order above. This combination is called the **Pauli matrices** and is written $σ_1,σ_2,σ_3$.

>>> s1,s2,s3=Matrix([[0,1],[1,0]]),Matrix([[0,-I],[I,0]]),Matrix([[1,0],[0,-1]])
\mathbf{e_1},\mathbf{e_2},\mathbf{e_3} \mapsto σ_1,σ_2,σ_3 :=
\left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right),
\left(\begin{matrix} 0 &-i \\ i & 0 \end{matrix}\right),
\left(\begin{matrix} 1 & 0 \\ 0 &-1 \end{matrix}\right)

Check the representation of the 3-vector basis again.

\mathbf{e_1e_2e_3}=i \quad\cong\quad σ_1σ_2σ_3=iI
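
This can be confirmed directly with the matrices defined above (a quick check I added):

>>> s1*s2*s3==I*eye(2)
True
>>> (s1*s2*s3)**2==-eye(2)
True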

In this article, $\mathbf{e_n}$ and $σ_n$ are used distinctly: $\mathbf{e_n}$ denotes the Clifford algebra bases themselves, and $σ_n$ denotes their representation matrices.

2-vector

Since the representation matrices have been decided, the rest is mechanical calculation.

Check the representation matrices for all combinations.

>>> s1*s2
Matrix([
[I,  0],
[0, -I]])
>>> s2*s1
Matrix([
[-I, 0],
[ 0, I]])
>>> s2*s3
Matrix([
[0, I],
[I, 0]])
>>> s3*s2
Matrix([
[ 0, -I],
[-I,  0]])
>>> s3*s1
Matrix([
[ 0, 1],
[-1, 0]])
>>> s1*s3
Matrix([
[0, -1],
[1,  0]])
\begin{align}
\mathbf{e_1e_2}&\mapsto σ_1σ_2=\left(\begin{matrix} i & 0 \\ 0 &-i \end{matrix}\right) \\
\mathbf{e_2e_1}&\mapsto σ_2σ_1=\left(\begin{matrix}-i & 0 \\ 0 & i \end{matrix}\right) \\
\mathbf{e_2e_3}&\mapsto σ_2σ_3=\left(\begin{matrix} 0 & i \\ i & 0 \end{matrix}\right) \\
\mathbf{e_3e_2}&\mapsto σ_3σ_2=\left(\begin{matrix} 0 &-i \\-i & 0 \end{matrix}\right) \\
\mathbf{e_3e_1}&\mapsto σ_3σ_1=\left(\begin{matrix} 0 & 1 \\-1 & 0 \end{matrix}\right) \\
\mathbf{e_1e_3}&\mapsto σ_1σ_3=\left(\begin{matrix} 0 &-1 \\ 1 & 0 \end{matrix}\right)
\end{align}

Check the anticommutative property.

>>> s1*s2==-s2*s1
True
>>> s2*s3==-s3*s2
True
>>> s3*s1==-s1*s3
True

Hodge dual

Using the property that the geometric product of a basis with itself is $1$, we can break products with the 3-vector basis $i=\mathbf{e_1e_2e_3}$ down to 2-vector bases.

\begin{align}
i\mathbf{e_1}&=\mathbf{e_1e_2e_3e_1}=\mathbf{e_1e_1e_2e_3}=\mathbf{e_2e_3} \\
i\mathbf{e_2}&=\mathbf{e_1e_2e_3e_2}=-\mathbf{e_1e_2e_2e_3}=-\mathbf{e_1e_3}=\mathbf{e_3e_1} \\
i\mathbf{e_3}&=\mathbf{e_1e_2e_3e_3}=\mathbf{e_1e_2}
\end{align}

The bases shared with $i$ cancel each other out, leaving the remaining bases behind. Such a complementary relationship is called **Hodge duality**.

\mathbf{e_1} \overset{Hodge dual}{\longleftrightarrow} \mathbf{e_2e_3} \\
\mathbf{e_2} \overset{Hodge dual}{\longleftrightarrow} \mathbf{e_3e_1} \\
\mathbf{e_3} \overset{Hodge dual}{\longleftrightarrow} \mathbf{e_1e_2}

Let's do the same calculation with Pauli matrices.

>>> I*s1
Matrix([
[0, I],
[I, 0]])
>>> I*s2
Matrix([
[ 0, 1],
[-1, 0]])
>>> I*s3
Matrix([
[I,  0],
[0, -I]])
>>> I*s1==s2*s3
True
>>> I*s2==s3*s1
True
>>> I*s3==s1*s2
True
iσ_1,iσ_2,iσ_3=
\left(\begin{matrix} 0 & i \\ i & 0 \end{matrix}\right),
\left(\begin{matrix} 0 & 1 \\-1 & 0 \end{matrix}\right),
\left(\begin{matrix} i & 0 \\ 0 &-i \end{matrix}\right) \\
iσ_1=σ_2σ_3,\ iσ_2=σ_3σ_1,\ iσ_3=σ_1σ_2

The Clifford algebra swaps bases and cancels them, while the Pauli matrices merely multiply components by an imaginary number. It is interesting that the same result is obtained even though the calculation methods are completely different. Looking only at the Pauli-matrix calculation, you cannot see that the 3-vector basis is being decomposed, so it helps to interpret it together with the Clifford-algebra viewpoint.

Summary

In Clifford algebra, the standard notation lines up the bases (e.g. $\mathbf{e_1e_2}$). In Pauli-matrix calculations, on the other hand, it is standard to compute the components and factor out $i$.

The correspondence of the bases is shown below.

\begin{align}
1                  &\mapsto  I   =\left(\begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix}\right) \\
\mathbf{e_1}       &\mapsto  σ_1=\left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right) \\
\mathbf{e_2}       &\mapsto  σ_2=\left(\begin{matrix} 0 &-i \\ i & 0 \end{matrix}\right) \\
\mathbf{e_3}       &\mapsto  σ_3=\left(\begin{matrix} 1 & 0 \\ 0 &-1 \end{matrix}\right) \\
\mathbf{e_1e_2}    &\mapsto iσ_3=\left(\begin{matrix} i & 0 \\ 0 &-i \end{matrix}\right) \\
\mathbf{e_2e_3}    &\mapsto iσ_1=\left(\begin{matrix} 0 & i \\ i & 0 \end{matrix}\right) \\
\mathbf{e_3e_1}    &\mapsto iσ_2=\left(\begin{matrix} 0 & 1 \\-1 & 0 \end{matrix}\right) \\
\mathbf{e_1e_2e_3} &\mapsto iI   =\left(\begin{matrix} i & 0 \\ 0 & i \end{matrix}\right)
\end{align}

Compare formal sum expressions.

\begin{align}
&\underbrace{a_0}_{scalar}+\underbrace{a_1\mathbf{e_1}+a_2\mathbf{e_2}+a_3\mathbf{e_3}}_{1-vector}+\underbrace{a_4\mathbf{e_1e_2}+a_5\mathbf{e_2e_3}+a_6\mathbf{e_3e_1}}_{2-vector}+\underbrace{a_7\mathbf{e_1e_2e_3}}_{3-vector} \\
&\mapsto \underbrace{a_0I}_{scalar}+\underbrace{a_1σ_1+a_2σ_2+a_3σ_3}_{1-vector}+\underbrace{a_4iσ_3+a_5iσ_1+a_6iσ_2}_{2-vector}+\underbrace{a_7iI}_{3-vector} \\
&=a_0\underbrace{\left(\begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix}\right)}_{I}
 +a_1\underbrace{\left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right)}_{σ_1}
 +a_2\underbrace{\left(\begin{matrix} 0 &-i \\ i & 0 \end{matrix}\right)}_{σ_2}
 +a_3\underbrace{\left(\begin{matrix} 1 & 0 \\ 0 &-1 \end{matrix}\right)}_{σ_3} \\
&\quad
 +a_4\underbrace{\left(\begin{matrix} i & 0 \\ 0 &-i \end{matrix}\right)}_{iσ_3}
 +a_5\underbrace{\left(\begin{matrix} 0 & i \\ i & 0 \end{matrix}\right)}_{iσ_1}
 +a_6\underbrace{\left(\begin{matrix} 0 & 1 \\-1 & 0 \end{matrix}\right)}_{iσ_2}
 +a_7\underbrace{\left(\begin{matrix} i & 0 \\ 0 & i \end{matrix}\right)}_{iI} \\
&=\left(\begin{matrix} a_0+a_3+a_4i+a_7i & a_1-a_2i+a_5i+a_6 \\ a_1+a_2i+a_5i-a_6 & a_0-a_3-a_4i+a_7i \end{matrix}\right)
\end{align}

In the Pauli-matrix representation, pseudoscalars and pseudovectors are obtained by multiplying the scalar and 1-vector bases by $i$, which expresses the Hodge duality. Below, the terms are rearranged so that this correspondence is easy to see.

\begin{align}
&\underbrace{a_0I}_{scalar}+\underbrace{a_1σ_1+a_2σ_2+a_3σ_3}_{vector} \\
&+\underbrace{i}_{Hodge dual}
(\underbrace{a_7I}_{Pseudoscalar}+\underbrace{a_5σ_1+a_6σ_2+a_4σ_3}_{Pseudovector})
\end{align}

Quaternion

The square of a 2-vector basis is $-1$. This may be easier to see with the Pauli matrices.

(\mathbf{e_1e_2})^2=\mathbf{e_1e_2e_1e_2}=-\mathbf{\underbrace{e_1e_1}_{1}\underbrace{e_2e_2}_{1}}=-1 \\
(iσ_3)^2=\underbrace{i^2}_{-1}\underbrace{σ_3^2}_{1}=-1
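
This can be verified in the SymPy session above (a small check I added):

>>> (I*s3)**2==-eye(2)
True
>>> (I*s1)**2==-eye(2) and (I*s2)**2==-eye(2)
True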

Since there are three kinds of 2-vector bases, they look like they can correspond to the quaternion units $i,j,k$.

Sign

Let us try to reproduce $ij=k$. However, simply multiplying the bases produces an extra minus sign.

(\mathbf{e_2e_3})(\mathbf{e_3e_1})=\mathbf{e_2e_3e_3e_1}=-\mathbf{e_1e_2} \\
(iσ_1)(iσ_2)=i^2σ_1σ_2=-iσ_3

The trick is to reverse this: attach a minus sign to the bases corresponding to $i,j,k$ from the start.

\begin{align}
i,j,k
&\mapsto -\mathbf{e_2e_3},-\mathbf{e_3e_1},-\mathbf{e_1e_2} \\
&\mapsto -iσ_1,-iσ_2,-iσ_3
\end{align}

This way, $ij=k$ is reproduced correctly.

\underbrace{(-\mathbf{e_2e_3})}_{i}\underbrace{(-\mathbf{e_3e_1})}_{j}=\underbrace{-\mathbf{e_1e_2}}_{k} \\
\underbrace{(-iσ_1)}_{i}\underbrace{(-iσ_2)}_{j}=\underbrace{-iσ_3}_{k} \\
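
Here is a SymPy check of these quaternion relations (my addition; the names `qi,qj,qk` standing for $i,j,k$ are mine):

>>> qi,qj,qk=-I*s1,-I*s2,-I*s3
>>> qi*qj==qk
True
>>> qi**2==-eye(2) and qj**2==-eye(2) and qk**2==-eye(2)
True
>>> qi*qj*qk==-eye(2)
True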

Summary

Quaternions thus correspond to scalars (sign kept as is) and 2-vectors (sign inverted).

\begin{align}
&\underbrace{a}_{scalar}+\underbrace{bi+cj+dk}_{2-vector} \\
&\mapsto a-b\mathbf{e_2e_3}-c\mathbf{e_3e_1}-d\mathbf{e_1e_2} \\
&\mapsto aI-biσ_1-ciσ_2-diσ_3 \\
&=a\left(\begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix}\right)
 -b\left(\begin{matrix} 0 & i \\ i & 0 \end{matrix}\right)
 -c\left(\begin{matrix} 0 & 1 \\-1 & 0 \end{matrix}\right)
 -d\left(\begin{matrix} i & 0 \\ 0 &-i \end{matrix}\right) \\
&=a \underbrace{\left(\begin{matrix} 1& 0 \\ 0& 1 \end{matrix}\right)}_{1}
 +b \underbrace{\left(\begin{matrix} 0&-i \\-i& 0 \end{matrix}\right)}_{i}
 +c \underbrace{\left(\begin{matrix} 0&-1 \\ 1& 0 \end{matrix}\right)}_{j}
 +d \underbrace{\left(\begin{matrix}-i& 0 \\ 0& i \end{matrix}\right)}_{k} \\
&=\left(\begin{matrix}a-di & -bi-c \\ -bi+c & a+di\end{matrix}\right) \\
&=\left(\begin{matrix}(a+di)^* & -(c+bi) \\ (c+bi)^* & a+di\end{matrix}\right)
\end{align}
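
As a supplementary check (my addition), we can build this representation matrix in SymPy and confirm that its determinant is the quaternion norm squared, $a^2+b^2+c^2+d^2$:

>>> a,b,c,d=symbols("a b c d")
>>> Q=a*eye(2)-b*I*s1-c*I*s2-d*I*s3
>>> expand(Q.det())
a**2 + b**2 + c**2 + d**2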

Conversely, taking the Clifford algebra as the starting point:

\begin{align}
&a+b\mathbf{e_1e_2}+c\mathbf{e_2e_3}+d\mathbf{e_3e_1} \\
&\mapsto aI+biσ_3+ciσ_1+diσ_2 \\
&\mapsto a-bk-ci-dj
\end{align}

Biquaternion

Interpreting the quaternion in terms of Clifford algebra shows that it corresponds to scalars and 2-vectors. However, that is incomplete as a Clifford algebra, so we consider extending the quaternion so that it can also handle 1-vectors.

You can convert a 2-vector into a 1-vector by multiplying it by the imaginary unit (the 3-vector basis).

i(\mathbf{e_2e_3})=\mathbf{e_1e_2e_3e_2e_3}=-\mathbf{e_1e_2e_2e_3e_3}=-\mathbf{e_1} \\
i(iσ_1)=-σ_1

If we add to the quaternion a new imaginary unit $h$ serving as the 3-vector basis, then combining it with the existing elements $i,j,k$ gives something isomorphic to the Clifford algebra. Such an extended quaternion is called a **biquaternion**.

\begin{align}
&(a_0+a_1i+a_2j+a_3k)+\underbrace{(a_4+a_5i+a_6j+a_7k)}_{combined}\underbrace{h}_{added} \\
&=\underbrace{a_0}_{scalar}+\underbrace{a_1i+a_2j+a_3k}_{2-vector}+\underbrace{a_4h}_{3-vector}+\underbrace{a_5hi+a_6hj+a_7hk}_{1-vector}
\end{align}

Check the correspondence between $hi,hj,hk$ and the Pauli matrices.

\begin{align}
hi &\mapsto \underbrace{i}_{h}\underbrace{(-iσ_1)}_{i}=σ_1 \\
hj &\mapsto \underbrace{i}_{h}\underbrace{(-iσ_2)}_{j}=σ_2 \\
hk &\mapsto \underbrace{i}_{h}\underbrace{(-iσ_3)}_{k}=σ_3 \\
∴hi,hj,hk &\mapsto σ_1,σ_2,σ_3
\end{align}
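
These correspondences can be confirmed in SymPy (my addition; `h` is represented by $iI$, and `qi,qj,qk` are the quaternion units from the earlier check):

>>> h=I*eye(2)
>>> h**2==-eye(2)
True
>>> h*qi==s1 and h*qj==s2 and h*qk==s3
True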

The correspondence of the biquaternion with the Clifford algebra and the Pauli matrices is as follows.

\begin{align}
&\underbrace{a_0}_{scalar}+\underbrace{a_1i+a_2j+a_3k}_{2-vector}+\underbrace{a_4h}_{3-vector}+\underbrace{a_5hi+a_6hj+a_7hk}_{1-vector} \\
&\mapsto a_0-a_1\mathbf{e_2e_3}-a_2\mathbf{e_3e_1}-a_3\mathbf{e_1e_2}+a_4\mathbf{e_1e_2e_3}+a_5\mathbf{e_1}+a_6\mathbf{e_2}+a_7\mathbf{e_3} \\
&\mapsto a_0I-a_1iσ_1-a_2iσ_2-a_3iσ_3+a_4iI+a_5σ_1+a_6σ_2+a_7σ_3 \\
&=a_0\left(\begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix}\right)
 -a_1\left(\begin{matrix} 0 & i \\ i & 0 \end{matrix}\right)
 -a_2\left(\begin{matrix} 0 & 1 \\-1 & 0 \end{matrix}\right)
 -a_3\left(\begin{matrix} i & 0 \\ 0 &-i \end{matrix}\right) \\
&\quad
 +a_4\left(\begin{matrix} i & 0 \\ 0 & i \end{matrix}\right)
 +a_5\left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right)
 +a_6\left(\begin{matrix} 0 &-i \\ i & 0 \end{matrix}\right)
 +a_7\left(\begin{matrix} 1 & 0 \\ 0 &-1 \end{matrix}\right) \\
&=a_0\underbrace{\left(\begin{matrix} 1& 0 \\ 0& 1 \end{matrix}\right)}_{1}
 +a_1\underbrace{\left(\begin{matrix} 0&-i \\-i& 0 \end{matrix}\right)}_{i}
 +a_2\underbrace{\left(\begin{matrix} 0&-1 \\ 1& 0 \end{matrix}\right)}_{j}
 +a_3\underbrace{\left(\begin{matrix}-i& 0 \\ 0& i \end{matrix}\right)}_{k} \\
&\quad
 +a_4\underbrace{\left(\begin{matrix} i& 0 \\ 0& i \end{matrix}\right)}_{h}
 +a_5\underbrace{\left(\begin{matrix} 0& 1 \\ 1& 0 \end{matrix}\right)}_{hi}
 +a_6\underbrace{\left(\begin{matrix} 0&-i \\ i& 0 \end{matrix}\right)}_{hj}
 +a_7\underbrace{\left(\begin{matrix} 1& 0 \\ 0&-1 \end{matrix}\right)}_{hk} \\
&=\left(\begin{matrix} a_0-a_3i+a_4i+a_7 & -a_1i-a_2+a_5-a_6i \\ -a_1i+a_2+a_5+a_6i & a_0+a_3i+a_4i-a_7 \end{matrix}\right) \\
&=\left(\begin{matrix} (a_0+a_7)-(a_3-a_4)i & -\{(a_2-a_5)+(a_1+a_6)i\} \\ (a_2+a_5)-(a_1-a_6)i & (a_0-a_7)+(a_3+a_4)i \end{matrix}\right) \\
&=\left(\begin{matrix} \{(a_0+a_7)+(a_3-a_4)i\}^* & -\{(a_2-a_5)+(a_1+a_6)i\} \\ \{(a_2+a_5)+(a_1-a_6)i\}^* & (a_0-a_7)+(a_3+a_4)i \end{matrix}\right)
\end{align}
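
The whole eight-term expansion can be checked at once in SymPy (my addition), confirming the matrix in the middle of the calculation above:

>>> a0,a1,a2,a3,a4,a5,a6,a7=symbols("a0:8")
>>> B=a0*eye(2)-a1*I*s1-a2*I*s2-a3*I*s3+a4*I*eye(2)+a5*s1+a6*s2+a7*s3
>>> B==Matrix([[a0-a3*I+a4*I+a7,-a1*I-a2+a5-a6*I],[-a1*I+a2+a5+a6*I,a0+a3*I+a4*I-a7]])
True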

Conversely, taking the Clifford algebra as the starting point:

\begin{align}
&a_0+a_1\mathbf{e_1}+a_2\mathbf{e_2}+a_3\mathbf{e_3}+a_4\mathbf{e_1e_2}+a_5\mathbf{e_2e_3}+a_6\mathbf{e_3e_1}+a_7\mathbf{e_1e_2e_3} \\
&\mapsto a_0I+a_1σ_1+a_2σ_2+a_3σ_3+a_4iσ_3+a_5iσ_1+a_6iσ_2+a_7iI \\
&\mapsto a_0+a_1hi+a_2hj+a_3hk-a_4k-a_5i-a_6j+a_7h
\end{align}

Thus the biquaternions, the three-dimensional Clifford algebra, and the Pauli matrices are isomorphic. From this point of view, the space handled by quaternions is three-dimensional Euclidean space, and the quaternion is completed in the form of the biquaternion.

It is interesting that $hi,hj,hk$, which are quadratic in the biquaternion, correspond to the linear $\mathbf{e_1},\mathbf{e_2},\mathbf{e_3}$ in the Clifford algebra, and that conversely $i,j,k$, which are linear in the biquaternion, correspond to the quadratic $\mathbf{e_3e_2},\mathbf{e_1e_3},\mathbf{e_2e_1}$ in the Clifford algebra (the quaternion units and the Clifford 2-vectors differ in sign, which is absorbed by swapping the order via anticommutativity). It can be confusing which concepts are atomic and which are composite, but where a geometric meaning exists, it seems safest to follow the Clifford-algebra interpretation.

Split quaternion

In the previous article (Bicomplex numbers considered by representation matrix), we saw that bicomplex numbers and the two-dimensional Clifford algebra almost correspond, with differences in some calculation results. If instead we restrict the biquaternion to two dimensions, we obtain a perfect isomorphism.

In 2D, 1-vectors can be handled with just $hj,hk$. The only 2-vector, obtained as their product, corresponds to $i$. In 2D the 2-vector is the pseudoscalar, and there is no 3-vector.

\underbrace{a}_{scalar}+\underbrace{bi}_{2-vector}+\underbrace{chj+dhk}_{1-vector}

For convenience, we relabel the symbols. The number obtained by restricting the biquaternion to two dimensions and relabeling in this way is called a **split-quaternion**.

a+bi+chj+dhk\ \mapsto\ a+bi+cj+dk

The correspondence of the split-quaternion with the Clifford algebra and the Pauli matrices is as follows. Unlike the bicomplex numbers, it is a complete isomorphism.

\begin{align}
&\underbrace{a}_{scalar}+\underbrace{bi}_{2-vector}+\underbrace{cj+dk}_{1-vector} \\
&\mapsto a-b\mathbf{e_2e_3}+c\mathbf{e_2}+d\mathbf{e_3} \\
&\mapsto aI-biσ_1+cσ_2+dσ_3 \\
&=a\left(\begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix}\right)
 -b\left(\begin{matrix} 0 & i \\ i & 0 \end{matrix}\right)
 +c\left(\begin{matrix} 0 &-i \\ i & 0 \end{matrix}\right)
 +d\left(\begin{matrix} 1 & 0 \\ 0 &-1 \end{matrix}\right) \\
&=a\underbrace{\left(\begin{matrix} 1& 0 \\ 0& 1 \end{matrix}\right)}_{1}
 +b\underbrace{\left(\begin{matrix} 0&-i \\-i& 0 \end{matrix}\right)}_{i}
 +c\underbrace{\left(\begin{matrix} 0&-i \\ i& 0 \end{matrix}\right)}_{j}
 +d\underbrace{\left(\begin{matrix} 1& 0 \\ 0&-1 \end{matrix}\right)}_{k} \\
&=\left(\begin{matrix} a+d & -bi-ci \\ -bi+ci & a-d \end{matrix}\right)
\end{align}
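
The split-quaternion relations $i^2=-1$, $j^2=k^2=1$, $ij=k$ can also be checked in SymPy (my addition; the names `si,sj,sk` for the split units are mine):

>>> si,sj,sk=-I*s1,s2,s3
>>> si**2==-eye(2)
True
>>> sj**2==eye(2) and sk**2==eye(2)
True
>>> si*sj==sk and sj*sk==-si
True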

Conversely, taking the Clifford algebra as the starting point:

\begin{align}
&a+b\mathbf{e_2}+c\mathbf{e_3}+d\mathbf{e_2e_3} \\
&\mapsto aI+bσ_2+cσ_3+diσ_1 \\
&\mapsto a+bj+ck-di
\end{align}

Related article

The description of Clifford algebra in this article has been reorganized based on the following article.

See the following articles for Hodge duals and bubble sort.

See the following articles for biquaternions and split-quaternions.

Reference

I referred to the relationship between biquaternions and Clifford algebras. Thanks to this article, I was able to fill in the last piece.

It explains in detail how quaternions and Pauli matrices form an eight-dimensional space (the same space represented by biquaternions) and how to find representation matrices.

I referred to how to solve equations with SymPy and the output format of equations.

I referred to it for generating permutations in Python.

Hypercomplex number

The variations of hypercomplex numbers are summarized below.

| Basic | Bi | Split | Split-bi | Hyperbolic | Dual | Multi |
|---|---|---|---|---|---|---|
| Real number | | | | | Dual number | |
| Complex number | Bicomplex number | Split-complex number | | | Dual-complex number | Multicomplex number |
| Quaternion | Biquaternion | Split-quaternion | Split-biquaternion | Hyperbolic quaternion | Dual quaternion | |
| Octonion | Bioctonion | Split-octonion | | | | |
| Sedenion | | | | | | |
