[Scientific/technical computation with Python] Comparing the convergence speeds of the SOR, Gauss-Seidel, and Jacobi methods for the Laplace equation: elliptic partial differential equation, boundary value problem

Introduction

This article describes how to solve elliptic partial differential equations numerically with Python.

In the Qiita article [Scientific/technical computation with Python] Numerical solution of the two-dimensional Laplace-Poisson equation for the electrostatic potential by the Jacobi method; elliptic partial differential equation, boundary value problem [1], the Laplace equation was solved with the Jacobi method. However, the Jacobi method is said to converge slowly and to be impractical. **In this article, we solve the Laplace equation with the Gauss-Seidel method and the successive over-relaxation method (SOR method), both said to be faster than the Jacobi method, and compare their convergence speeds.**

Each method is briefly summarized in the Addendum, so please refer to it as needed.


Contents

We use almost the same calculation code as in the Qiita article [1].

We compare the convergence speeds of the Jacobi, Gauss-Seidel, and SOR methods. Specifically, for each scheme we examine the number of iterations required for convergence as a function of the number of mesh points $N$, at four sizes, roughly $N$ = 100, 1000, 10000, and 100000 (see the table below for the exact values).

Recall that the SOR method reduces to the Gauss-Seidel method as a special case ($\omega = 1$; see the Addendum).


Code: SOR method


%%time
"""
Laplace equation:Numerical solution,SOR method: 
ω =Gauss at 1-Become the Seidel method
Rectangle area
14 Aug. 2017
"""
#%matplotlib nbagg 
import numpy as np
import matplotlib.pyplot as plt



# mesh spacing and region size
delta_Lx=0.03
delta_Ly=0.03
LLx = 10 # width of the rectangle (x direction)
LLy = 10 # width of the rectangle (y direction)

Lx = int(LLx/delta_Lx)
Ly = int(LLy/delta_Ly)

V = 2.0 # boundary voltage
convergence_criterion = 10**-5

phi = np.zeros([Lx+1,Ly+1])

phi_Bound= np.zeros([Lx+1,Ly+1])
phi_Bound[0,:] = V #boundary condition

# for SOR method
aa_recta=0.5*(np.cos(np.pi/Lx)+np.cos(np.pi/Ly)) # lambda in the optimal-omega formula
omega_SOR_recta = 2/(1+np.sqrt(1-aa_recta**2)) #Optimal acceleration parameters for rectangular areas
print("omega_SOR_rect=",omega_SOR_recta)


#Main
delta = 1.0
n_iter=0
conv_check=[]
while delta > convergence_criterion:
    phi_in=phi.copy()
    if n_iter % 40 ==0:  #Monitoring of convergence status
        print("iteration No =", n_iter, "delta=",delta)
    conv_check.append([n_iter, delta])
    for i in range(Lx+1):
        for j in range(Ly+1):
            if i ==0 or i ==Lx or j==0 or j==Ly:
                phi[i,j] = phi_Bound[i,j]
            else:
                phi[i,j] = phi[i,j]+omega_SOR_recta *((phi[i+1,j] + phi[i-1,j] + phi[i,j+1] + phi[i,j-1])/4-phi[i,j]) # update by the SOR method

    delta = np.max(abs(phi- phi_in))

    n_iter+=1
    
print ("The number of total iteration =",n_iter)
print("data_points=",Lx*Ly)
#for plot        
plt.xlabel('X')
plt.ylabel('Y')
plt.imshow(phi,cmap='hsv')

plt.show()
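For the comparison below, it is convenient to wrap the sweep in a function and switch schemes via the relaxation parameter. The following is a minimal sketch under the same setup (the function name `solve_laplace` and its interface are illustrative, not from the original code; the Jacobi variant, which reads all neighbors from the previous iterate as in [1], is omitted for brevity):

```python
import numpy as np

def solve_laplace(N, omega, criterion=1e-5, V=2.0):
    """Solve the Laplace equation on an (N+1)x(N+1) grid by in-place
    relaxation; omega = 1 is Gauss-Seidel, the optimal omega is SOR."""
    phi = np.zeros((N + 1, N + 1))
    phi[0, :] = V  # boundary condition, as in the code above
    delta, n_iter = 1.0, 0
    while delta > criterion:
        phi_in = phi.copy()  # previous iterate, kept for the convergence check
        for i in range(1, N):
            for j in range(1, N):
                phi[i, j] += omega * ((phi[i+1, j] + phi[i-1, j]
                                       + phi[i, j+1] + phi[i, j-1]) / 4 - phi[i, j])
        delta = np.max(np.abs(phi - phi_in))
        n_iter += 1
    return n_iter

lam = np.cos(np.pi / 33)                    # lambda for a 33x33 mesh
omega_opt = 2 / (1 + np.sqrt(1 - lam**2))   # optimal acceleration parameter
print("Gauss-Seidel:", solve_laplace(33, 1.0))        # roughly 721, per the table below
print("SOR:         ", solve_laplace(33, omega_opt))  # roughly 68, per the table below
```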

Result (1): Display of the solution

First, the solution of the Laplace equation itself is shown in the figure below, taken from the Qiita article [1] and calculated on a 100x100 mesh.

[t.png: electrostatic potential obtained by solving the Laplace equation on a 100x100 mesh]


Result (2) Comparison of convergence speed

The table below shows how the number of iterations required for convergence depends on the number of data points ($N_x \times N_y$).

| Number of grid points ($N_x \times N_y$) | SOR method | Gauss-Seidel method | Jacobi method |
|---|---|---|---|
| 100 (10x10) | 21 | 89 | 166 |
| 1089 (33x33) | 68 | 721 | 1302 |
| 10000 (100x100) | 201 | 4416 | 7476 |
| 110889 (333x333) | 663 | 22334 | 30913 |

**The SOR method is overwhelmingly fast.** It can also be seen that, as expected (see the Addendum), the number of iterations of the SOR method is proportional to $N$, the number of grid points per side. For example, on the 333x333 mesh the Gauss-Seidel method needs $22334/663 \approx 34$ times as many iterations as the SOR method. The Gauss-Seidel method converges about twice as fast as the Jacobi method, but the gap to the SOR method remains large, so it is still not practical.

A log-log graph of this table is shown below; the slopes show how the number of iterations depends on the number of grid points.

[tt.png: log-log plot of the number of iterations versus the number of grid points for the three methods]

The horizontal axis shows the number of grid points, and the vertical axis shows the number of iterations required for convergence. The linear trends indicate that the exponent of the power-law dependence of the iteration count on the grid size is constant for each method. The slope for the SOR method is smaller than that of the Gauss-Seidel and Jacobi methods, i.e., its order of convergence is better.
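The plot can be reproduced directly from the iteration counts in the table above; a minimal sketch:

```python
import matplotlib.pyplot as plt

# iteration counts from the table above
N      = [100, 1089, 10000, 110889]   # number of grid points
sor    = [21, 68, 201, 663]
gs     = [89, 721, 4416, 22334]
jacobi = [166, 1302, 7476, 30913]

for y, label in [(sor, "SOR"), (gs, "Gauss-Seidel"), (jacobi, "Jacobi")]:
    plt.loglog(N, y, "o-", label=label)
plt.xlabel("number of grid points")
plt.ylabel("iterations to converge")
plt.legend()
plt.show()
```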


Conclusion

**If you know the optimal acceleration parameter, you should use the SOR method. For realistic calculations ($N_x, N_y > 100$), it is more than 20 times faster than the Gauss-Seidel method.**


Addendum

Addendum (1): Gauss-Seidel method -a slight improvement over the Jacobi method-

The [Gauss-Seidel method](https://ja.wikipedia.org/wiki/%E3%82%AC%E3%82%A6%E3%82%B9%EF%BC%9D%E3%82%B6%E3%82%A4%E3%83%87%E3%83%AB%E6%B3%95) starts from equation (12) of the Jacobi method in the addendum to the Qiita article [1]:

$ \phi(x_i,y_j) = \frac{1}{4} [\phi(x_{i+1},y_{j})+\phi(x_{i-1},y_{j})+\phi(x_{i},y_{j+1})+\phi(x_{i},y_{j-1})]+\frac{\rho(x_{i},y_{j})}{4\epsilon_0}\Delta^2 $

In the $(n+1)$-step representation, this reads:

$ \phi(x_i,y_j)^{(n+1)} = \frac{1}{4} [\phi(x_{i+1},y_{j})^{(n)}+\phi(x_{i-1},y_{j})^{(n)}+\phi(x_{i},y_{j+1})^{(n)}+\phi(x_{i},y_{j-1})^{(n)}]+\frac{\rho(x_{i},y_{j})}{4\epsilon_0}\Delta^2 \tag{1} $

Here, when updating $\phi(x_i,y_j)$, the neighbors $\phi(x_{i-1},y_j)$ and $\phi(x_i,y_{j-1})$ **have in fact already been updated earlier in the sweep, so rather than their step-$n$ values we use $\phi(x_{i-1},y_j)^{(n+1)}$ and $\phi(x_i,y_{j-1})^{(n+1)}$**. In other words,

$ \phi(x_i,y_j)^{(n+1)} = \frac{1}{4} [\phi(x_{i+1},y_{j})^{(n)}+\phi(x_{i-1},y_{j})^{(n+1)}+\phi(x_{i},y_{j+1})^{(n)}+\phi(x_{i},y_{j-1})^{(n+1)}]+\frac{\rho(x_{i},y_{j})}{4\epsilon_0}\Delta^2 \tag{2} $

This is the Gauss-Seidel method. **Because $\phi(x_i,y_j)$ is updated in place, the code needs less memory than the Jacobi method**, and convergence can be expected to be somewhat faster (for a large mesh the number of iterations is about half that of the Jacobi method [1]). However, this method, too, converges slowly.
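The difference between equations (1) and (2) is just a bookkeeping change in code: the Jacobi method reads all four neighbors from a copy of the previous iterate, while the Gauss-Seidel method overwrites the array in place, so the $(i-1,j)$ and $(i,j-1)$ neighbors automatically carry step-$(n+1)$ values. A minimal sketch of one sweep of each, for the pure Laplace case ($\rho = 0$; the function names are illustrative):

```python
import numpy as np

def jacobi_sweep(phi):
    """One Jacobi sweep, eq. (1): all neighbors come from step n."""
    phi_old = phi.copy()  # the extra full copy that Gauss-Seidel avoids
    for i in range(1, phi.shape[0] - 1):
        for j in range(1, phi.shape[1] - 1):
            phi[i, j] = (phi_old[i+1, j] + phi_old[i-1, j]
                         + phi_old[i, j+1] + phi_old[i, j-1]) / 4

def gauss_seidel_sweep(phi):
    """One Gauss-Seidel sweep, eq. (2): in-place update, so the (i-1, j)
    and (i, j-1) neighbors are already at step n+1 when (i, j) is visited."""
    for i in range(1, phi.shape[0] - 1):
        for j in range(1, phi.shape[1] - 1):
            phi[i, j] = (phi[i+1, j] + phi[i-1, j]
                         + phi[i, j+1] + phi[i, j-1]) / 4
```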

Addendum (2): SOR method -accelerating the Gauss-Seidel method-

One practical algorithm is the **successive over-relaxation method (SOR method [2])**. As shown below, it accelerates the Gauss-Seidel update of $\phi(x_i,y_j)^{(n)}$ by applying an acceleration parameter $\omega$.

The iterative process of the Gauss-Seidel method can be rewritten as follows [1].

$ \phi(x_i,y_j)^{(n+1)} = \phi(x_i,y_j)^{(n)}+\Delta \phi(x_i,y_j)^{(n)} \tag{3} $

$ \Delta \phi(x_i,y_j)^{(n)}=\frac{1}{4} [\phi(x_{i+1},y_{j})^{(n)}+\phi(x_{i-1},y_{j})^{(n)}+\phi(x_{i},y_{j+1})^{(n)}+\phi(x_{i},y_{j-1})^{(n)}]+\frac{\rho(x_{i},y_{j})}{4\epsilon_0}\Delta^2-\phi(x_i,y_j)^{(n)} \tag{4} $

Here, $\Delta \phi(x_i,y_j)^{(n)}$ is the amount by which the Gauss-Seidel method updates $\phi(x_i,y_j)^{(n)}$. Notably, **in the Gauss-Seidel method, $\phi(x_i,y_j)^{(n)}$ often changes monotonically during the iteration**. As an example, the figure below shows how several values of $\phi$ change over the iterations when solving the Laplace equation on a 10x10 mesh; the change is clearly monotonic.

[tt.png: changes in $\phi$ during the iterative process]

**The SOR method actively exploits this monotonic behavior: since $\phi$ changes monotonically, the update $\Delta \phi(x_i,y_j)$ predicted by the Gauss-Seidel method (equation (4)) can be amplified to increase the convergence speed.** This idea is the essence of the SOR method.
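The effect of amplifying a monotone update can be seen already in a one-variable fixed-point iteration. The following toy illustration (not from the original article) over-relaxes $g(x) = (x+1)/2$, whose fixed point is $x = 1$ and whose error shrinks by the factor $|1-\omega/2|$ per step:

```python
def relax(omega, x=0.0, n=10):
    """Over-relaxed fixed-point iteration x <- x + omega*(g(x) - x)
    for g(x) = (x + 1)/2; returns the remaining error |x - 1|."""
    for _ in range(n):
        x = x + omega * ((x + 1) / 2 - x)
    return abs(x - 1)

print(relax(1.0))  # plain iteration: error 0.5**10, about 1e-3
print(relax(1.5))  # over-relaxed:    error 0.25**10, about 1e-6
```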

We therefore consider an iterative process in which $\Delta \phi(x_i,y_j)$ of equation (4) is multiplied by $\omega_{SOR}$:

$ \phi(x_i,y_j)^{(n+1)} = \phi(x_i,y_j)^{(n)}+\omega_{SOR}[\frac{\phi(x_{i+1},y_{j})^{(n)}+\phi(x_{i-1},y_{j})^{(n)}+\phi(x_{i},y_{j+1})^{(n)}+\phi(x_{i},y_{j-1})^{(n)}}{4} +\frac{\rho(x_{i},y_{j})}{4\epsilon_0}\Delta^2-\phi(x_i,y_j)^{(n)}]\tag{5} $

**This is the SOR method. $\omega_{SOR}$ is called the over-relaxation parameter; for $\omega_{SOR} = 1$ the method reduces to the Gauss-Seidel method.**

There is freedom in how to choose $\omega_{SOR}$. Choosing the optimal value (the one with the best convergence rate) can yield much faster convergence than the Gauss-Seidel method, and various ways to find the optimal $\omega_{SOR}$ have been investigated [2].

**The optimal $\omega_{SOR}$ for the Laplace (Poisson) equation in a rectangular region is known:**

$ \omega_{SOR} = \frac{2}{1+\sqrt{1-\lambda^2}} \tag{6} $

$ \lambda=\frac{1}{2}\left[\cos\frac{\pi}{N_x}+\cos\frac{\pi}{N_y}\right] \tag{7} $

For a sufficiently large grid, $N_x \times N_y \gg 1$, this gives $\omega_{SOR} \approx 2$.
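As a quick sanity check of equations (6)-(7), the following snippet evaluates $\omega_{SOR}$ for square grids of increasing size and shows it approaching 2:

```python
import numpy as np

for N in (10, 33, 100, 333):
    lam = np.cos(np.pi / N)                # eq. (7) with Nx = Ny = N
    omega = 2 / (1 + np.sqrt(1 - lam**2))  # eq. (6)
    print(f"N = {N:4d}: omega_SOR = {omega:.4f}")
# omega_SOR rises toward 2 (about 1.53 at N = 10, about 1.98 at N = 333)
```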

**Suppose $N_x = N_y = N$. Then the number of iterations needed for convergence with the SOR method is proportional to $N^1$, whereas for the Jacobi and Gauss-Seidel methods it is proportional to $N^2$. For sufficiently large $N$, the SOR method therefore converges dramatically faster than the Gauss-Seidel and Jacobi methods [2].** The results examined in this article strongly support this.

Note also that **implementation is easy**: with $\omega_{SOR}$, equation (5) simply becomes

 phi[i,j] = phi[i,j]+omega_SOR*((phi[i+1,j] + phi[i-1,j] + phi[i,j+1] + phi[i,j-1])/4-phi[i,j])

in code [2].


References

[1] My Qiita article, [[Scientific/technical computation with Python] Numerical solution of the two-dimensional Laplace-Poisson equation for the electrostatic potential by the Jacobi method; elliptic partial differential equation, boundary value problem](http://qiita.com/sci_Haru/items/6b80c7eb8d4754fb5c2d)

[2] ["Numerical Recipe in Sea Japanese Version-Recipe for Numerical Calculation in C Language"](https://www.amazon.co.jp/%E3%83%8B%E3%83%A5 % E3% 83% BC% E3% 83% A1% E3% 83% AA% E3% 82% AB% E3% 83% AB% E3% 83% AC% E3% 82% B7% E3% 83% 94-% E3% 82% A4% E3% 83% B3-% E3% 82% B7% E3% 83% BC-% E6% 97% A5% E6% 9C% AC% E8% AA% 9E% E7% 89% 88- C% E8% A8% 80% E8% AA% 9E% E3% 81% AB% E3% 82% 88% E3% 82% 8B% E6% 95% B0% E5% 80% A4% E8% A8% 88% E7% AE% 97% E3% 81% AE% E3% 83% AC% E3% 82% B7% E3% 83% 94-Press-William-H / dp / 4874085601 / ref = dp_return_1? _ Encoding = UTF8 & n = 465392 & s = books) Technical Review, 1993.
