This is a Python implementation of regression with the relevance vector machine (RVM) from PRML Chapter 7.
A Jupyter notebook summarizes the code and experimental results.
The conditional distribution of the real-valued target variable $t$ given an input vector $\mathbf{x}$ is

$$
p(t \mid \mathbf{x}, \mathbf{w}, \beta) = \mathcal{N}\left(t \mid y(\mathbf{x}), \beta^{-1}\right)
$$
Here $\beta = \sigma^{-2}$ is the noise precision parameter (the reciprocal of the noise variance), and the mean is given by the following linear model:

$$
y(\mathbf{x}) = \sum_{n=1}^{N} w_n \phi_n(\mathbf{x}) + b = \sum_{n=1}^{N} w_n k(\mathbf{x}, \mathbf{x}_n) + b
$$

Kernel functions, each taking one of the individual training data points as one of its arguments, are used as the basis functions, i.e. $\phi_n(\mathbf{x}) = k(\mathbf{x}, \mathbf{x}_n)$.
The total number of parameters is $M = N + 1$. The likelihood function is given by the following equation:

$$
p(\mathbf{t} \mid \mathbf{X}, \mathbf{w}, \beta) = \prod_{n=1}^{N} p(t_n \mid \mathbf{x}_n, \mathbf{w}, \beta)
$$
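As a concrete illustration, here is a minimal sketch of how the kernel basis functions might be evaluated at the training points and arranged into the $N \times M$ design matrix $\mathbf{\Phi}$ described below. The Gaussian kernel, the helper names `gaussian_kernel` and `design_matrix`, and the width parameter `s` are assumptions made for this sketch, not necessarily what the notebook uses.

```python
import numpy as np

def gaussian_kernel(x, y, s=1.0):
    """Gaussian (RBF) kernel with width s (an assumed choice of basis function)."""
    return np.exp(-0.5 * np.sum((x - y) ** 2) / s ** 2)

def design_matrix(X, s=1.0):
    """N x (N + 1) design matrix: Phi[n, i] = k(x_n, x_i) for i < N, last column is the bias."""
    N = len(X)
    Phi = np.ones((N, N + 1))
    for n in range(N):
        for i in range(N):
            Phi[n, i] = gaussian_kernel(X[n], X[i], s)
    return Phi
```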
A zero-mean Gaussian distribution is used as the prior over the parameter vector $\mathbf{w}$. The RVM uses a separate hyperparameter $\alpha_{i}$ for each weight parameter $w_{i}$. That is, the prior distribution over the weights is as follows:

$$
p(\mathbf{w} \mid \mathbf{\alpha}) = \prod_{i=1}^{M} \mathcal{N}\left(w_i \mid 0, \alpha_i^{-1}\right)
$$
Here $\alpha_{i}$ is the precision of the corresponding weight parameter $w_{i}$, and $\mathbf{\alpha} = (\alpha_{1}, \dots, \alpha_{M})^{T}$. Maximizing the evidence with respect to these hyperparameters drives most of them to infinity, concentrating the posterior distribution of the corresponding weight parameters at zero. The basis functions associated with those parameters (kernel functions centered on the corresponding data points) then play no role in prediction, so they can be removed, yielding a sparse model.
The posterior distribution of the weight vector is again Gaussian and takes the following form:

$$
p(\mathbf{w} \mid \mathbf{t}, \mathbf{X}, \mathbf{\alpha}, \beta) = \mathcal{N}(\mathbf{w} \mid \mathbf{m}, \mathbf{\Sigma})
$$

Here, the mean and covariance are given by

$$
\mathbf{m} = \beta \mathbf{\Sigma} \mathbf{\Phi}^{T} \mathbf{t}, \qquad
\mathbf{\Sigma} = \left( \mathbf{A} + \beta \mathbf{\Phi}^{T} \mathbf{\Phi} \right)^{-1}
$$

where $\mathbf{\Phi}$ is the $N \times M$ design matrix with elements $\Phi_{ni} = \phi_{i}(\mathbf{x}_{n})$ for $i = 1, \dots, N$ and $\Phi_{nM} = 1$ for $n = 1, \dots, N$, and $\mathbf{A} = \mathrm{diag}(\alpha_{i})$.
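A minimal sketch of this posterior computation, assuming `Phi` comes from the `design_matrix` helper sketched earlier; the function and variable names are my own choices, not necessarily those of the notebook.

```python
import numpy as np

def posterior(Phi, t, alpha, beta):
    """Posterior mean m and covariance Sigma of the weights for fixed alpha and beta."""
    A = np.diag(alpha)                               # A = diag(alpha_i)
    Sigma = np.linalg.inv(A + beta * Phi.T @ Phi)    # Sigma = (A + beta * Phi^T Phi)^-1
    m = beta * Sigma @ Phi.T @ t                     # m = beta * Sigma * Phi^T * t
    return m, Sigma
```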
The values of $\mathbf{\alpha}$ and $\beta$ are obtained by type-2 maximum likelihood estimation, also known as the __evidence approximation__. To perform it, we first integrate out the weight parameters:

$$
p(\mathbf{t} \mid \mathbf{X}, \mathbf{\alpha}, \beta) = \int p(\mathbf{t} \mid \mathbf{X}, \mathbf{w}, \beta) \, p(\mathbf{w} \mid \mathbf{\alpha}) \, d\mathbf{w}
$$
Since this is a convolution of two Gaussian distributions, the integral can be carried out analytically, and the log marginal likelihood is obtained as

$$
\ln p(\mathbf{t} \mid \mathbf{X}, \mathbf{\alpha}, \beta) = \ln \mathcal{N}(\mathbf{t} \mid \mathbf{0}, \mathbf{C})
= -\frac{1}{2} \left\{ N \ln(2\pi) + \ln |\mathbf{C}| + \mathbf{t}^{T} \mathbf{C}^{-1} \mathbf{t} \right\}
$$

Here $\mathbf{t} = (t_{1}, \dots, t_{N})^{T}$, and the $N \times N$ matrix $\mathbf{C}$ is defined as

$$
\mathbf{C} = \beta^{-1} \mathbf{I} + \mathbf{\Phi} \mathbf{A}^{-1} \mathbf{\Phi}^{T}
$$
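It can be convenient to evaluate this log evidence directly, for example to monitor convergence of the iterative scheme below. A sketch under the same assumptions (NumPy, with `Phi`, `t`, `alpha`, `beta` as introduced above):

```python
import numpy as np

def log_evidence(Phi, t, alpha, beta):
    """Log marginal likelihood ln N(t | 0, C) with C = beta^-1 I + Phi A^-1 Phi^T."""
    N = len(t)
    C = np.eye(N) / beta + Phi @ np.diag(1.0 / alpha) @ Phi.T
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (N * np.log(2 * np.pi) + logdet + t @ np.linalg.solve(C, t))
```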
Setting the derivatives of this log-likelihood to zero gives the hyperparameter update equations

$$
\alpha_i^{\text{new}} = \frac{\gamma_i}{m_i^2}, \qquad
\left( \beta^{\text{new}} \right)^{-1} = \frac{\| \mathbf{t} - \mathbf{\Phi} \mathbf{m} \|^2}{N - \sum_i \gamma_i}
$$

Here $\gamma_{i}$ measures how well the corresponding weight parameter $w_{i}$ is determined by the data, and is defined by

$$
\gamma_i = 1 - \alpha_i \Sigma_{ii}
$$

where $\Sigma_{ii}$ is the $i$-th diagonal element of the posterior covariance $\mathbf{\Sigma}$.
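A single re-estimation step, written straight from these update equations; a sketch, where the function name and the treatment of diverging $\alpha_i$ are my own assumptions.

```python
import numpy as np

def update_hyperparameters(Phi, t, m, Sigma, alpha):
    """Re-estimate alpha and beta: gamma_i = 1 - alpha_i * Sigma_ii, alpha_i = gamma_i / m_i^2."""
    N = len(t)
    gamma = 1.0 - alpha * np.diag(Sigma)
    alpha_new = gamma / m ** 2                          # diverges as m_i -> 0, i.e. the basis is pruned
    beta_new = (N - gamma.sum()) / np.sum((t - Phi @ m) ** 2)
    return alpha_new, beta_new
```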
Using the above results, learning of the hyperparameters proceeds as follows. First, the posterior mean $\mathbf{m}$ and covariance $\mathbf{\Sigma}$ are computed from suitably chosen initial values of $\mathbf{\alpha}$ and $\beta$. Next, the hyperparameters are re-estimated from the obtained values. These two steps are repeated alternately until a suitable convergence criterion is met.
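Putting the pieces together, one possible form of the alternating loop, reusing the helpers sketched above; the initial values, the cap on $\alpha$, the iteration limit, and the convergence tolerance are all assumptions made for this sketch.

```python
import numpy as np

def fit_rvm(X, t, s=1.0, max_iter=1000, tol=1e-6, alpha_cap=1e10):
    """Alternate between the posterior over w and re-estimation of alpha and beta."""
    Phi = design_matrix(X, s)
    alpha = np.ones(Phi.shape[1])      # initial precisions (assumed)
    beta = 1.0                         # initial noise precision (assumed)
    for _ in range(max_iter):
        m, Sigma = posterior(Phi, t, alpha, beta)
        alpha_new, beta_new = update_hyperparameters(Phi, t, m, Sigma, alpha)
        alpha_new = np.minimum(alpha_new, alpha_cap)   # keep diverging alphas finite
        converged = np.max(np.abs(alpha_new - alpha)) < tol and abs(beta_new - beta) < tol
        alpha, beta = alpha_new, beta_new
        if converged:
            break
    m, Sigma = posterior(Phi, t, alpha, beta)
    return m, Sigma, alpha, beta
```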
For basis vectors unrelated to the output $\mathbf{t}$, the corresponding $\alpha$ goes to $\infty$. In other words, the columns for which $\alpha^{-1}$ remains large correspond to the relevance vectors.
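Below is a sketch of how predictions might be made and the relevance vectors picked out once training has converged; the threshold used to decide that an $\alpha_i$ is effectively infinite is an arbitrary choice, and the usage example assumes hypothetical `X_train`, `t_train` arrays.

```python
import numpy as np

def predict(x_new, X, m, Sigma, beta, s=1.0):
    """Predictive mean and variance at a new input x_new (reusing gaussian_kernel)."""
    phi = np.ones(len(X) + 1)                 # last element is the bias basis
    for i in range(len(X)):
        phi[i] = gaussian_kernel(x_new, X[i], s)
    mean = phi @ m                            # predictive mean
    var = 1.0 / beta + phi @ Sigma @ phi      # predictive variance
    return mean, var

# Example usage (X_train and t_train are assumed 1-D numpy arrays):
# m, Sigma, alpha, beta = fit_rvm(X_train, t_train, s=0.5)
# relevance_idx = np.nonzero(alpha[:-1] < 1e9)[0]   # weights whose alpha stayed finite
# mean, var = predict(0.5, X_train, m, Sigma, beta, s=0.5)
```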