Find the general term of the extended Fibonacci sequence (k-Bonacci sequence: sum of the previous k terms) using linear algebra and Python

(The formulas may not render on a smartphone; in that case, please view this page on a PC. Sorry for the inconvenience.)

What is the extended Fibonacci sequence?

The Fibonacci sequence is a sequence in which each term is the sum of the previous two terms. Expressed as a formula,

\begin{cases}
\begin{align}
a_{n+2} &= a_{n+1} + a_{n}\\ 
a_0 &= 0\\
a_1 &= 1
\end{align}
\end{cases}

The extended Fibonacci sequence generalizes this so that each term is the sum of the previous $k$ terms. I searched for an established name but could not find one, so here I will call it the $k$-Bonacci sequence. Expressed as a formula,

\begin{cases}
\begin{align}
a_{n+k} &= a_{n+k-1}+a_{n+k-2}+\dots+a_{n}\\
&=\displaystyle \sum_{i=n}^{n+k-1}a_i
\\ a_0 &= a_1 = \dots = a_{k-2} = 0\\a_{k-1} &= 1
\end{align}
\end{cases}

In the previous article (Finding the general terms of the Tribonacci sequence with linear algebra and Python), we dealt with the Tribonacci sequence, i.e. the case $k = 3$. This time we generalize it to arbitrary $k$.
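
For example, for $k = 3$ (the Tribonacci sequence) the definition becomes

\begin{cases}
\begin{align}
a_{n+3} &= a_{n+2} + a_{n+1} + a_{n}\\
a_0 &= a_1 = 0\\
a_2 &= 1
\end{align}
\end{cases}

and the first few terms are $0, 0, 1, 1, 2, 4, 7, 13, 24, \dots$.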

Preparation

(It is almost the same as Calculate the general term of the Tribonacci sequence with linear algebra and Python, so you can skip it.)

Let $V$ be the space of (complex-valued) sequences that satisfy the recurrence formula. Given the first $k$ terms $a_0, a_1, \dots, a_{k-1}$ of a sequence $\{a_n\}$, the remaining terms $a_n$ for $n \geq k$ are uniquely determined. We define the sequences

\begin{align}
\boldsymbol{v_0}&=\{\overbrace{1,0,0,\dots,0}^{k \text{ terms}},1,\dots \}\\ 
\boldsymbol{v_1}&=\{0,1,0,\dots,0,1,\dots \}\\
& \vdots \\
\boldsymbol{v_{k-1}}&=\{0,0,0,\dots,1,1,\dots\}
\end{align}

For $c_0, c_1, \dots, c_{k-1} \in \mathbb{C}$, if $c_0\boldsymbol{v_0} + c_1\boldsymbol{v_1} + \dots + c_{k-1}\boldsymbol{v_{k-1}} = \boldsymbol{0}$ holds, then

c_0\{1,0,0,\dots \}+c_1\{0,1,0,\dots \}+\dots+c_{k-1}\{0,\dots,1,\dots \}\\=\{c_0,c_1,\dots,c_{k-1},\dots \}=\{0,0,\dots,0,\dots \}

so $c_0 = c_1 = \dots = c_{k-1} = 0$. Therefore $\boldsymbol{v_0}, \boldsymbol{v_1}, \dots, \boldsymbol{v_{k-1}}$ are linearly independent.

Next, let

\boldsymbol{a}= \{ a_0,a_1,\dots,a_{k-1},\dots \}

be an arbitrary element of $V$. Then

\begin{align}
\boldsymbol{a}&=\{ a_0,0,0,\dots \}+\{0,a_1,0,\dots \} +\dots+\{0,\dots,0,a_{k-1},\dots \}  \\
 &=a_0\{1,0,0,\dots \}+a_1\{0,1,0,\dots \}+\dots+a_{k-1}\{0,\dots,0,1,\dots \}  \\
&=a_0\boldsymbol{v_0}+a_1\boldsymbol{v_1}+\dots+a_{k-1}\boldsymbol{v_{k-1}}
\end{align}

so it can be represented as a linear combination of $\boldsymbol{v_0}, \boldsymbol{v_1}, \dots, \boldsymbol{v_{k-1}}$. Hence $\boldsymbol{v_0}, \boldsymbol{v_1}, \dots, \boldsymbol{v_{k-1}}$ span $V$.

From the above, $\boldsymbol{v_0}, \boldsymbol{v_1}, \dots, \boldsymbol{v_{k-1}}$ are linearly independent and span $V$, so they form a basis of $V$.

Now, define the map $f: V \rightarrow V$ by

f(\{ a_n\}_{n=0}^{\infty})=\{ a_n\}_{n=1}^{\infty}

For $\boldsymbol{a} = \{a_0, a_1, a_2, \dots\} \in V$, the shifted sequence $f(\boldsymbol{a}) = \{a_1, a_2, a_3, \dots\}$ also satisfies the recurrence formula, so $f(\boldsymbol{a}) \in V$. Furthermore, for

\boldsymbol{a}=\{ a_n\}_{n=0}^{\infty}\in V \\
\boldsymbol{b}=\{ b_n\}_{n=0}^{\infty}\in V \\
c\in \mathbb{C}

we have

f(\boldsymbol{a}+\boldsymbol{b})=f(\{ a_n+b_n\}_{n=0}^{\infty})=\{ a_n+b_n\}_{n=1}^{\infty}=\{ a_n\}_{n=1}^{\infty}+\{ b_n\}_{n=1}^{\infty}=f(\boldsymbol{a})+f(\boldsymbol{b})\\
f(c\boldsymbol{a})=f(c\{ a_n\}_{n=0}^{\infty})=c\{ a_n\}_{n=1}^{\infty}=cf(\boldsymbol{a})

so $f$ is a linear transformation of $V$.

Representing the recurrence formula as a matrix

For the basis vectors $\boldsymbol{v_0}, \boldsymbol{v_1}, \dots, \boldsymbol{v_{k-1}}$ we have

\begin{align}
f(\boldsymbol{v_0})&=\{0,0,\dots,0,1,\dots \}=\boldsymbol{v_{k-1}}\\
f(\boldsymbol{v_1})&=\{1,0,\dots,0,1,\dots \}=\boldsymbol{v_0}+\boldsymbol{v_{k-1}}\\
f(\boldsymbol{v_2})&=\{0,1,\dots,0,1,\dots \}=\boldsymbol{v_1}+\boldsymbol{v_{k-1}}\\
\vdots \\
f(\boldsymbol{v_{k-1}})&=\{0,0,\dots,1,1,\dots \}=\boldsymbol{v_{k-2}}+\boldsymbol{v_{k-1}}
\end{align}

So

[f(\boldsymbol{v_0})\quad f(\boldsymbol{v_1})\quad f(\boldsymbol{v_2}) \quad \cdots \quad f(\boldsymbol{v_{k-1}})]=
[\boldsymbol{v_0}\ \boldsymbol{v_1}\ \boldsymbol{v_2} \cdots \boldsymbol{v_{k-1}}]
\begin{bmatrix}
0 & 1 & 0 & 0 & \cdots & 0\\
0 & 0 & 1 & 0 & \cdots & 0\\
0 & 0 & 0 & 1 & \cdots & 0\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & 0 & \cdots & 1\\
1 & 1 & 1 & 1 & \cdots & 1
\end{bmatrix}

Therefore, the representation matrix of $f$ with respect to the basis $\boldsymbol{v_0}, \boldsymbol{v_1}, \dots, \boldsymbol{v_{k-1}}$ is

\overbrace{
\begin{bmatrix}
0 & 1 & 0 & 0 & \cdots & 0\\
0 & 0 & 1 & 0 & \cdots & 0\\
0 & 0 & 0 & 1 & \cdots & 0\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & 0 & \cdots & 1\\
1 & 1 & 1 & 1 & \cdots & 1
\end{bmatrix}
}^{k \times k}

is. Let this representation matrix be $ A $.
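
As a quick numerical check (a sketch I am adding here, not part of the derivation), note that the same matrix also sends the vector of $k$ consecutive terms $(a_n, a_{n+1}, \dots, a_{n+k-1})$ to the next such vector, so repeatedly multiplying by $A$ reproduces the $k$-Bonacci sequence:

import numpy as np

def companion_matrix(k):
    # k x k matrix A: 1s on the superdiagonal, 1s in the last row
    A = np.zeros((k, k), dtype=np.int64)
    A[np.arange(k - 1), np.arange(1, k)] = 1    # superdiagonal
    A[k - 1, :] = 1                             # last row: next term = sum of previous k terms
    return A

k = 4
A = companion_matrix(k)
x = np.zeros(k, dtype=np.int64)
x[k - 1] = 1                  # state vector (a_0, ..., a_{k-1}) = (0, ..., 0, 1)

terms = list(x)
for _ in range(10):
    x = A @ x                 # (a_n, ..., a_{n+k-1}) -> (a_{n+1}, ..., a_{n+k})
    terms.append(x[-1])
print(terms)                  # 0, 0, 0, 1, 1, 2, 4, 8, 15, 29, ... (the 4-Bonacci sequence)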

Relationship between eigenvalues and common ratios

(It is the same as Calculate the general term of the Tribonacci sequence with linear algebra and Python, so you can skip it.)

Suppose the geometric progression

\boldsymbol{p}=\{ r^{n} \}_{n=0}^{\infty}=\{1,r,r^2,\dots \}

belongs to $V$. Since

f(\boldsymbol{p})=f(\{ r^{n} \}_{n=0}^{\infty})=\{ r^{n} \}_{n=1}^{\infty}=\{ r^{n+1} \}_{n=0}^{\infty}=r\{ r^{n} \}_{n=0}^{\infty}=r\boldsymbol{p}

holds, $\boldsymbol{p}$ is an eigenvector of $f$ with eigenvalue $r$. Conversely, the same equation shows that an eigenvector of $f$ with eigenvalue $r$ is a geometric progression with common ratio $r$. Hence the common ratio and the eigenvalue coincide.

Find the eigenvalue

We now find the eigenvalues of $f$. With $I$ denoting the identity matrix, the characteristic polynomial of $A$ is

\begin{align}
\varphi_k(\lambda)&=|A-\lambda I|\\
&=\begin{vmatrix}
-\lambda & 1 & 0 & 0 & \cdots & 0\\
0 & -\lambda & 1 & 0 & \cdots & 0\\
0 & 0 & -\lambda & 1 & \cdots & 0\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & 0 & \cdots & 1\\
1 & 1 & 1 & 1 & \cdots & 1-\lambda
\end{vmatrix}
\end{align}

When the cofactor expansion is performed along the first column,

\begin{align}
\varphi_k&=(-1)^{1+1}(-\lambda) \begin{vmatrix}
-\lambda & 1 & 0 & \cdots & 0\\
0 & -\lambda & 1 & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1\\
1 & 1 & 1 & \cdots & 1-\lambda
\end{vmatrix} + (-1)^{k+1}\cdot 1 \begin{vmatrix}
1 & 0 & 0 & \cdots & 0\\
-\lambda & 1 & 0 & \cdots & 0\\
0 & -\lambda & 1 & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1\\
\end{vmatrix} \\
&=-\lambda \varphi_{k-1} - (-1)^{k}\cdot 1\cdot 1\\
&=-\lambda \varphi_{k-1} - (-1)^{k}
\end{align}

This is a recurrence formula for $\varphi_k$. Dividing both sides by $(-1)^k$ gives

\begin{align}
\frac{\varphi_k}{(-1)^k} &= -\lambda \frac{\varphi_{k-1}}{(-1)^k} - 1\\
&=\lambda \frac{\varphi_{k-1}}{(-1)^{k-1}} - 1
\end{align}

If we set $b_k = \displaystyle \frac{\varphi_k}{(-1)^k}$, then

\begin{align}
b_k &= \lambda b_{k-1} - 1\\
b_k - \frac{1}{\lambda - 1} &= \lambda \left(b_{k-1} - \frac{1}{\lambda - 1}\right)\\
&=\cdots \\
&= \lambda^{k-2}\left(b_{2} - \frac{1}{\lambda - 1}\right)
\end{align}

and the initial value is

\begin{align}
b_2- \frac{1}{\lambda - 1} &= \displaystyle \frac{\varphi_2}{(-1)^2} - \frac{1}{\lambda - 1}\\
&=\varphi_2 - \frac{1}{\lambda - 1}\\
&=\begin{vmatrix}
-\lambda & 1 \\
1 & 1-\lambda
\end{vmatrix}- \frac{1}{\lambda - 1}\\
&=\lambda^2 - \lambda - 1- \frac{1}{\lambda - 1}\\
&=\frac{\lambda^3-2\lambda^2}{\lambda-1}
\end{align}

is. Therefore,

\begin{align}
b_k &- \frac{1}{\lambda - 1}= \lambda^{k-2} \frac{\lambda^3-2\lambda^2}{\lambda-1}\\
b_k &= \frac{\lambda^{k+1}-2\lambda^k}{\lambda-1} + \frac{1}{\lambda - 1} \\
&= \frac{\lambda^{k+1}-2\lambda^k+1}{\lambda-1}\\
&=\lambda^k - \lambda^{k-1} - \lambda^{k-2} - \dots - \lambda^2 - \lambda - 1\quad (\because \text{synthetic division})
\end{align}

Thus we have derived a closed form for $b_k$. Since $\varphi_k(\lambda) = (-1)^k b_k$, the characteristic equation $\varphi_k(\lambda) = 0$ is equivalent to $b_k = 0$.
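
As a check, setting $k = 3$ gives

b_3 = \lambda^3 - \lambda^2 - \lambda - 1,

which matches the characteristic polynomial of the Tribonacci sequence treated in the previous article.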

However, $b_k = 0$ cannot be solved in closed form in general. In a later section we will compute its roots numerically.
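
For example (a minimal sketch I am adding, separate from the implementation examples below), the roots can be computed numerically with np.roots and cross-checked against the eigenvalues of the representation matrix $A$:

import numpy as np

k = 5
coeffs = [1] + [-1] * k            # lambda^k - lambda^{k-1} - ... - lambda - 1
roots = np.roots(coeffs)

# The same values should appear as the eigenvalues of the k x k matrix A
A = np.zeros((k, k))
A[np.arange(k - 1), np.arange(1, k)] = 1
A[k - 1, :] = 1
eigvals = np.linalg.eigvals(A)

print(np.sort_complex(roots))
print(np.sort_complex(eigvals))    # the same multiset, up to numerical error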

If the $k$ roots are $\lambda_0, \lambda_1, \dots, \lambda_{k-1}$, then the geometric progressions that are the eigenvectors corresponding to these roots,

\boldsymbol{u_0}=\{ \lambda_0^{n} \}_{n=0}^{\infty}\\
 \boldsymbol{u_1}=\{ \lambda_1^{n} \}_{n=0}^{\infty}\\
\vdots \\
 \boldsymbol{u_{k-1}}=\{ \lambda_{k-1}^{n} \}_{n=0}^{\infty}

belong to $V$. Assuming $\lambda_0, \lambda_1, \dots, \lambda_{k-1}$ are all distinct, these are linearly independent because they are eigenvectors corresponding to distinct eigenvalues of $f$. Moreover, $\dim V = k$. Therefore $\boldsymbol{u_0}, \boldsymbol{u_1}, \dots, \boldsymbol{u_{k-1}}$ form a basis of $V$.

Determining the coefficient

From the above, there exist constants $c_0, c_1, \dots, c_{k-1}$ such that

\boldsymbol{a}=c_0\boldsymbol{u_0}+c_1\boldsymbol{u_1}+\dots+c_{k-1}\boldsymbol{u_{k-1}}

Writing this out term by term,

\begin{align}
\boldsymbol{a}&=c_0\boldsymbol{u_0}+c_1\boldsymbol{u_1}+\dots+c_{k-1}\boldsymbol{u_{k-1}}\\
\Leftrightarrow \{a_0,a_1,\dots,a_{k-1},\dots \}&=c_0\{ \lambda_0^{n} \}_{n=0}^{\infty}+c_1\{ \lambda_1^{n} \}_{n=0}^{\infty}+\dots+c_{k-1}\{ \lambda_{k-1}^{n} \}_{n=0}^{\infty}\\
\Leftrightarrow \{0,0,\dots,1,\dots \}&=c_0\{ 1,\lambda_0,\dots,\lambda_0^{k-1},\dots \}+c_1\{1,\lambda_1,\dots,\lambda_1^{k-1},\dots \}+\dots+c_{k-1}\{ 1,\lambda_{k-1},\dots,\lambda_{k-1}^{k-1},\dots\} \\
\Leftrightarrow \{0,0,\dots,1,\dots \}&=\{ c_0+c_1+\dots+c_{k-1},\ c_0\lambda_0+c_1\lambda_1+\dots+c_{k-1}\lambda_{k-1},\dots,\ c_0\lambda_0^{k-1}+c_1\lambda_1^{k-1}+\dots+c_{k-1}\lambda_{k-1}^{k-1}, \dots\}
\end{align}

Expressing this as a matrix,

\begin{bmatrix}
1 & 1 & 1 & \cdots & 1\\
\lambda_0 & \lambda_1 & \lambda_2 & \cdots & \lambda_{k-1} \\
\lambda_0^2 & \lambda_1^2 & \lambda_2^2 & \cdots & \lambda_{k-1}^2\\
\vdots & \vdots & \vdots & \ddots & \vdots  \\
\lambda_0^{k-1} & \lambda_1^{k-1} & \lambda_2^{k-1} & \cdots & \lambda_{k-1}^{k-1}
\end{bmatrix}
\begin{bmatrix}
c_0 \\ c_1 \\ c_2 \\ \vdots \\ c_{k-1}
\end{bmatrix}
=
\begin{bmatrix}
0 \\ 0 \\ 0 \\ \vdots \\ 1
\end{bmatrix}

The determinant of the matrix on the left-hand side is the Vandermonde determinant $V_k$, for which the formula

V_k = \begin{vmatrix}
1 & 1 & 1 & \cdots & 1\\
\lambda_0 & \lambda_1 & \lambda_2 & \cdots & \lambda_{k-1} \\
\lambda_0^2 & \lambda_1^2 & \lambda_2^2 & \cdots & \lambda_{k-1}^2\\
\vdots & \vdots & \vdots & \ddots & \vdots  \\
\lambda_0^{k-1} & \lambda_1^{k-1} & \lambda_2^{k-1} & \cdots & \lambda_{k-1}^{k-1}
\end{vmatrix}
 =\prod_{0\leq i < j \leq k-1}\big(\lambda_j - \lambda_i\big)

is known. Using Cramer's rule, for $0 \leq m \leq k-1$ we obtain

\begin{align}
c_m &= \frac{\begin{vmatrix}
1 & 1 & \cdots & 1 & 0 & 1 & \cdots & 1\\
\lambda_0 & \lambda_1 & \cdots & \lambda_{m-1} & 0 & \lambda_{m+1} & \cdots & \lambda_{k-1} \\
\lambda_0^2 & \lambda_1^2 & \cdots & \lambda_{m-1}^2 & 0 & \lambda_{m+1}^2 & \cdots & \lambda_{k-1}^2\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots  \\
\lambda_0^{k-1} & \lambda_1^{k-1} & \cdots & \lambda_{m-1}^{k-1} & 1 & \lambda_{m+1}^{k-1} & \cdots & \lambda_{k-1}^{k-1}
\end{vmatrix}}{V_k}\\
&=\frac{(-1)^{m}\begin{vmatrix}
0 & 1 & 1 & \cdots & 1 & 1 & \cdots & 1\\
0 & \lambda_0 & \lambda_1 & \cdots & \lambda_{m-1}  & \lambda_{m+1} & \cdots & \lambda_{k-1} \\
0 & \lambda_0^2 & \lambda_1^2 & \cdots & \lambda_{m-1}^2  & \lambda_{m+1}^2 & \cdots & \lambda_{k-1}^2\\
0 & \vdots & \vdots & \vdots & \vdots  & \vdots & \ddots & \vdots  \\
1 & \lambda_0^{k-1} & \lambda_1^{k-1} & \cdots & \lambda_{m-1}^{k-1}  & \lambda_{m+1}^{k-1} & \cdots & \lambda_{k-1}^{k-1}
\end{vmatrix}}{V_k}\quad (\because \text{$m$ column exchanges})\\
&=\frac{(-1)^{m+k-1}\begin{vmatrix}
1 & \lambda_0^{k-1} & \lambda_1^{k-1} & \cdots & \lambda_{m-1}^{k-1}  & \lambda_{m+1}^{k-1} & \cdots & \lambda_{k-1}^{k-1}\\
0 & 1 & 1 & \cdots & 1 & 1 & \cdots & 1\\
0 & \lambda_0 & \lambda_1 & \cdots & \lambda_{m-1}  & \lambda_{m+1} & \cdots & \lambda_{k-1} \\
0 & \lambda_0^2 & \lambda_1^2 & \cdots & \lambda_{m-1}^2  & \lambda_{m+1}^2 & \cdots & \lambda_{k-1}^2\\
0 & \vdots & \vdots & \vdots & \vdots  & \vdots & \ddots & \vdots  \\
0 & \lambda_0^{k-2} & \lambda_1^{k-2} & \cdots & \lambda_{m-1}^{k-2}  & \lambda_{m+1}^{k-2} & \cdots & \lambda_{k-1}^{k-2}
\end{vmatrix}}{V_k}\quad (\because \text{$k-1$ row exchanges})\\
&=\frac{(-1)^{m+k-1}\begin{vmatrix}
1 & 1 & \cdots & 1 & 1 & \cdots & 1\\
\lambda_0 & \lambda_1 & \cdots & \lambda_{m-1}  & \lambda_{m+1} & \cdots & \lambda_{k-1} \\
\lambda_0^2 & \lambda_1^2 & \cdots & \lambda_{m-1}^2  & \lambda_{m+1}^2 & \cdots & \lambda_{k-1}^2\\
\vdots & \vdots & \vdots & \vdots  & \vdots & \ddots & \vdots  \\
\lambda_0^{k-2} & \lambda_1^{k-2} & \cdots & \lambda_{m-1}^{k-2}  & \lambda_{m+1}^{k-2} & \cdots & \lambda_{k-1}^{k-2}
\end{vmatrix}}{V_k}\\
&=\frac{(-1)^{m+k-1}\displaystyle \prod_{\substack{0\leq i < j \leq k-1 \\ i\neq m,\ j\neq m}}\big(\lambda_j - \lambda_i\big)}{\displaystyle \prod_{0\leq i < j \leq k-1}\big(\lambda_j - \lambda_i\big)}\quad (\because \text{Vandermonde determinant})\\
&=\frac{(-1)^{m+k-1}}{\displaystyle \prod_{\substack{0\leq i < j \leq k-1 \\ i=m \text{ or } j=m}}\big(\lambda_j - \lambda_i\big)}\\
&=\frac{(-1)^{m+k-1}}{\displaystyle \prod_{0\leq i < m}\big(\lambda_m - \lambda_i\big)\prod_{m < j \leq k-1}\big(\lambda_j - \lambda_m\big)}\\
&=\frac{(-1)^{m+k-1}}{\displaystyle \prod_{0\leq i < m}\big(\lambda_m - \lambda_i\big)\prod_{m < j \leq k-1}(-1)\big(\lambda_m - \lambda_j\big)}\\
&=\frac{(-1)^{m+k-1}}{\displaystyle \prod_{0\leq i < m}\big(\lambda_m - \lambda_i\big)(-1)^{k-m-1}\prod_{m < j \leq k-1}\big(\lambda_m - \lambda_j\big)}\\
&=\frac{(-1)^{(m+k-1)-(k-m-1)}}{\displaystyle \prod_{0\leq i < m}\big(\lambda_m - \lambda_i\big)\prod_{m < j \leq k-1}\big(\lambda_m - \lambda_j\big)}\\
&=\frac{(-1)^{2m}}{\displaystyle \prod_{0\leq i < m}\big(\lambda_m - \lambda_i\big)\prod_{m < j \leq k-1}\big(\lambda_m - \lambda_j\big)}\\
&=\frac{1}{\displaystyle \prod_{\substack{0\leq i \leq k-1 \\ i\neq m}}\big(\lambda_m - \lambda_i\big)}
\end{align}

Thus the coefficients $c_m$ are determined.
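
As a numerical cross-check (again a sketch of my own, assuming the roots are obtained with np.roots as in the implementation examples below), the closed-form coefficients $c_m$ agree with the solution of the Vandermonde system:

import numpy as np

k = 5
lam = np.roots([1] + [-1] * k)             # lambda_0, ..., lambda_{k-1}

# Vandermonde system: row p contains lambda_m^p, right-hand side is (0, ..., 0, 1)
M = np.vander(lam, increasing=True).T
rhs = np.zeros(k)
rhs[k - 1] = 1
c_solve = np.linalg.solve(M, rhs)

# Closed form: c_m = 1 / prod_{i != m} (lambda_m - lambda_i)
c_formula = np.array([1 / np.prod([lam[m] - lam[i] for i in range(k) if i != m])
                      for m in range(k)])

print(np.allclose(c_solve, c_formula))     # True (up to floating-point error)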

From the above,

\begin{align}
\boldsymbol{a_n} &= c_0 \boldsymbol{u_0} + c_1 \boldsymbol{u_1} +\dots+ c_{k-1} \boldsymbol{u_{k-1}}\\
&=\left\{ \frac{\lambda_0^n}{\displaystyle \prod_{1\leq i \leq k-1}\big(\lambda_0 - \lambda_i\big)} + \frac{\lambda_1^n}{\displaystyle \prod_{\substack{0\leq i \leq k-1 \\ i\neq 1}}\big(\lambda_1 - \lambda_i\big)} +\dots+ \frac{\lambda_{k-1}^n}{\displaystyle \prod_{0\leq i \leq k-2}\big(\lambda_{k-1} - \lambda_i\big)}\right\}_{n=0}^{\infty}\\
&=\left\{\sum_{i=0}^{k-1}\frac{\lambda_i^n}{\displaystyle \prod_{\substack{0\leq j \leq k-1 \\ j\neq i}}\big(\lambda_i - \lambda_j\big)}\right\}_{n=0}^{\infty}
\end{align}

So

a_n = \sum_{i=0}^{k-1}\frac{\lambda_i^n}{\displaystyle \prod_{\substack{0\leq j \leq k-1 \\ j\neq i}}\big(\lambda_i - \lambda_j\big)}

Thus we have obtained the general term of the $k$-Bonacci sequence.
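
As a sanity check, for $k = 2$ the roots of $\lambda^2 - \lambda - 1 = 0$ are $\lambda_0 = \frac{1+\sqrt{5}}{2}$ and $\lambda_1 = \frac{1-\sqrt{5}}{2}$, and the formula reduces to Binet's formula for the Fibonacci numbers:

a_n = \frac{\lambda_0^n}{\lambda_0-\lambda_1} + \frac{\lambda_1^n}{\lambda_1-\lambda_0} = \frac{1}{\sqrt{5}}\left\{ \left(\frac{1+\sqrt{5}}{2}\right)^n - \left(\frac{1-\sqrt{5}}{2}\right)^n \right\}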

Implementation example and computational complexity comparison

So far we know the form of the general term, but not the specific values of $\lambda_0, \lambda_1, \dots, \lambda_{k-1}$. Since equations of degree 5 or higher do not always have algebraic (closed-form) solutions, we compute the roots numerically. As before, we use Python's numpy library. Consider the following problem.

Given natural numbers $K$ and $N$, print the $N$-th term of the $K$-Bonacci sequence.

Implementation example 1: Calculate as defined

teigi.py


k, n = map(int, input().split())

a = [0] * max(n+1, k)
a[k-1] = 1          # a_0 = ... = a_{k-2} = 0, a_{k-1} = 1

for i in range(k, n+1):
    for j in range(1, k+1):
        a[i] += a[i-j]      # each term is the sum of the previous k terms

print(a[n])

Computing directly from the definition requires $K$ additions for each $i$ ($K \leq i \leq N$), so the complexity is $O(K(N-K)) = O(KN)$.

Implementation example 2: Use general terms

ippankou.py


import numpy as np
import warnings 
warnings.filterwarnings('ignore')       # suppress the warnings that arise from complex-number calculations

k, n = map(int, input().split())

equation = [1]
for i in range(k):
    equation.append(-1)     # equation = [1, -1, -1, ... , -1] = lambda^k - lambda^{k-1} - ... - lambda -1 = 0

lam = np.roots(equation)     # lambda_i

a = 0

for i in range(k):
    prod = 1
    for j in range(k):
        if j != i:
            prod *= lam[i] - lam[j]
    a += np.power(lam[i],n) / prod

print(int(round(a.real)))    # the imaginary parts cancel, so round the real part to the nearest integer

Using the general term, for each $i$ ($0 \leq i \leq K-1$) we take a product over $j$ ($0 \leq j \leq K-1$, $j \neq i$) and raise $\lambda_i$ to the $N$-th power, which costs $O(\log N)$, so the total complexity is $O(K(K + \log N)) = O(K^2 + K\log N)$. As in the previous article for $k = 3$, this method carries a relative error of about $1/10^{14}$ due to floating-point arithmetic.
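
For reference, here is a small sketch (my addition, not one of the numbered implementations) that measures this error by comparing the floating-point value of the general term with the exact integer computed from the definition:

import numpy as np

def kbonacci_exact(k, n):
    # n-th term from the definition, using exact Python integers
    a = [0] * max(n + 1, k)
    a[k - 1] = 1
    for i in range(k, n + 1):
        a[i] = sum(a[i - k:i])
    return a[n]

def kbonacci_float(k, n):
    # n-th term from the general term, using floating-point roots
    lam = np.roots([1] + [-1] * k)
    total = sum(lam[i] ** n / np.prod([lam[i] - lam[j] for j in range(k) if j != i])
                for i in range(k))
    return total.real

k, n = 5, 60
exact = kbonacci_exact(k, n)
approx = kbonacci_float(k, n)
print(exact, approx, abs(approx - exact) / exact)   # the last value is the relative error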

Implementation example 3: Use the following recurrence formula (keep the running sum of the $k$ terms and update only the first and last terms)

\begin{align}
a_{n+k} &= \sum_{i=n}^{n+k-1}a_i\\
a_{n+k+1} &= \sum_{i=n+1}^{n+k}a_i \\
&= a_{n+k}+\sum_{i=n}^{n+k-1}a_i-a_n\\
&= a_{n+k}+a_{n+k}-a_n\\
&= 2a_{n+k} - a_n
\end{align}

zenkashiki.py


k, n = map(int, input().split())

a = [0] * max(n+1, k+1)
a[k-1] = 1
a[k] = 1        # a_k = a_0 + a_1 + ... + a_{k-1} = 1

for i in range(0, n-k):
    a[i+k+1] = 2 * a[i+k] - a[i]    # a_{n+k+1} = 2*a_{n+k} - a_n

print(a[n])

In this way, by using a running-sum idea that keeps the sum of the previous $k$ terms instead of computing the general term, the answer can be obtained in $O(N)$ time.

Execution examples (each pair shows the input K N followed by the output)

2 5
5

3 50
3122171529233

4 25
1055026

10 50  
1082392024832

50 30
0

100 150
1125899906842618

1000 1010
1024

Computational complexity comparison

             Implementation 1   Implementation 2       Implementation 3
Method       As defined         General term           Recurrence formula
Complexity   O(NK)              O(K^2 + K log N)       O(N)

Comparing the complexities of Implementation 2 and Implementation 3, it is worth comparing $N$ with $K^2 + K\log N$ beforehand and choosing whichever implementation is expected to be faster.
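
For example, a simple dispatcher along these lines could pick the implementation based on that comparison (a sketch of mine; the function names are my own, and the two helpers just wrap Implementations 2 and 3 from above):

import math
import numpy as np

def by_general_term(k, n):          # Implementation 2 (ippankou.py), as a function
    lam = np.roots([1] + [-1] * k)
    a = sum(lam[i] ** n / np.prod([lam[i] - lam[j] for j in range(k) if j != i])
            for i in range(k))
    return int(round(a.real))

def by_recurrence(k, n):            # Implementation 3 (zenkashiki.py), as a function
    a = [0] * max(n + 1, k + 1)
    a[k - 1] = a[k] = 1
    for i in range(n - k):
        a[i + k + 1] = 2 * a[i + k] - a[i]
    return a[n]

def kbonacci(k, n):
    # Pick the implementation that the cost model above predicts to be faster
    if k * k + k * math.log2(max(n, 2)) < n:
        return by_general_term(k, n)     # O(K^2 + K log N)
    return by_recurrence(k, n)           # O(N)

print(kbonacci(3, 50))    # 3122171529233, matching the execution example above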

Comparison of execution time

measure_time.py


import time
import numpy as np
import warnings 
warnings.filterwarnings('ignore')       # suppress the warnings that arise from complex-number calculations

k, n = map(int, input().split())
m = 100        # loop count

"""
#Implementation 1
start = time.time()
for loop in range(m):

    a = [0] * max(n+1,k)
    a[k-1] = 1

    for i in range(k,n+1):
        for j in range(1,k+1):
            a[i] += a[i-j]

elapsed_time = time.time() - start
elapsed_time /= m
print ("Implementation 1: {0}".format(elapsed_time) + "[sec]")
"""


#Implementation 2
start = time.time()
for loop in range(m):

    equation = [1]
    for i in range(k):
        equation.append(-1)     # equation = [1, -1, -1, ... , -1] = lambda^k - lambda^{k-1} - ... - lambda -1 = 0

    lam = np.roots(equation)     # lambda_i

    a = 0

    for i in range(k):
        prod = 1
        for j in range(k):
            if j != i:
                prod *= lam[i] - lam[j]
        a += np.power(lam[i],n) / prod

elapsed_time = time.time() - start
elapsed_time /= m
print ("Implementation 2: {0}".format(elapsed_time) + "[sec]")


#Implementation 3
start = time.time()
for loop in range(m):

    a = [0] * max(n+1, k+1)
    a[k-1] = 1
    a[k] = 1

    for i in range(0,n-k):
        a[i+k+1] = 2 * a[i+k] - a[i] 

elapsed_time = time.time() - start
elapsed_time /= m
print ("Implementation 3: {0}".format(elapsed_time) + "[sec]")

Example of measurement result

# N ~ K^2+Klog N
10 500 
Implementation 2: 0.00041536092758178713[sec]
Implementation 3: 0.0003914594650268555[sec]

# N > K^2+Klog N
3 10000
Implementation 2: 0.0002478790283203125[sec]
Implementation 3: 0.012493629455566407[sec]

# N < K^2+Klog N
100 500
Implementation 2: 0.01716961145401001[sec]
Implementation 3: 0.0002404618263244629[sec]

As expected from the complexity comparison, Implementation 2 is faster when $N > K^2 + K\log N$, and Implementation 3 is faster when $N < K^2 + K\log N$.
