[PYTHON] Naive Bayes (multiclass classification)

What is Naive Bayes?

Naive Bayes is a method for multi-class classification. It is famously used for spam email filtering.

Calculating class membership probabilities with Bayes' theorem

If $y$ is the target variable and $x_{1}, \dots, x_{n}$ are the explanatory variables, Bayes' theorem gives the probability that a given sample belongs to class $y$:

P(y \mid x_{1}, \dots, x_{n}) = \frac{P(y) \, P(x_{1}, \dots, x_{n} \mid y)}{P(x_{1}, \dots, x_{n})}
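
For intuition, here is a worked example with made-up numbers: one binary feature $x_{1}$, priors $P(\text{spam}) = 0.3$ and $P(\text{ham}) = 0.7$, and likelihoods $P(x_{1} = 1 \mid \text{spam}) = 0.8$ and $P(x_{1} = 1 \mid \text{ham}) = 0.1$. Then

P(\text{spam} \mid x_{1} = 1) = \frac{0.3 \times 0.8}{0.3 \times 0.8 + 0.7 \times 0.1} = \frac{0.24}{0.31} \approx 0.77

where the denominator expands $P(x_{1} = 1)$ over both classes.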

By the way, the "naive" in the name comes from the assumption that the explanatory variables are independent of each other given the class. Formally, this conditional independence means:

P(x_{i} \mid y, x_{1}, \dots, x_{i-1}, x_{i+1}, \dots, x_{n}) = P(x_{i} \mid y)

"$ X_ {i} $ when y happens is not affected by other x states." The reason for doing this is that the formula for the molecule on the right side of Bayesian definition becomes simpler.

Under the independence assumption, the likelihood factorizes as

P(x_{1}, \dots, x_{n} \mid y) = \prod_{i=1}^n P(x_{i} \mid y)

so Bayes' theorem takes its final form:

P(y \mid x_{1}, \dots, x_{n}) = \frac{P(y) \prod_{i=1}^n P(x_{i} \mid y)}{P(x_{1}, \dots, x_{n})}

At this point the denominator $P(x_{1}, \dots, x_{n})$ does not depend on the class, so it is the same constant for every class and can be ignored when comparing them:

P(y \mid x_{1}, \dots, x_{n}) \propto P(y) \prod_{i=1}^n P(x_{i} \mid y)

($\propto$ is the symbol for "proportional to".)

This gives a score proportional to the probability of each class, so we can estimate the class of a sample by choosing the one with the highest score:

\hat{y} = {\arg \max}_{y} \, P(y) \prod_{i=1}^n P(x_{i} \mid y)
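
As a minimal sketch of this decision rule (the class names and probability values below are made up for illustration), the argmax can be computed directly. Working with log probabilities is the usual trick: the log is monotone, so the argmax is unchanged, and it avoids underflow when many small likelihoods are multiplied.

import numpy as np

# Made-up toy numbers: two classes, two binary features, both observed as 1
log_prior = {"spam": np.log(0.3), "ham": np.log(0.7)}
log_likelihood = {                       # log P(x_i = 1 | y) per feature
    "spam": np.log([0.8, 0.6]),
    "ham":  np.log([0.1, 0.4]),
}

# y_hat = argmax_y [ log P(y) + sum_i log P(x_i | y) ]
scores = {c: log_prior[c] + log_likelihood[c].sum() for c in log_prior}
print(max(scores, key=scores.get))
> spam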

Gaussian Naive Bayes

When the explanatory variables are continuous, the variant that assumes each variable follows a normal distribution within each class is called Gaussian Naive Bayes.

In that case, the likelihood of observing the value $v$ for a feature in class $c$ can be expressed as follows.

p(x = v \mid c) = \frac{1}{\sqrt{2 \pi \sigma_c^{2}}} \exp \left( -\frac{(v - \mu_{c})^{2}}{2 \sigma_c^{2}} \right)
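
To make the mechanics concrete, here is a from-scratch sketch that plugs this density into the decision rule above (the function names and toy data are mine, and unlike a production implementation it adds no smoothing term to the variances):

import numpy as np

def fit_gaussian_nb(X, y):
    # Estimate the prior P(y) and the per-class mean/variance of each feature
    classes = np.unique(y)
    priors = {c: np.mean(y == c) for c in classes}
    stats = {c: (X[y == c].mean(axis=0), X[y == c].var(axis=0)) for c in classes}
    return priors, stats

def log_density(v, mu, var):
    # Log of the normal density p(x = v | c) defined above
    return -0.5 * np.log(2 * np.pi * var) - (v - mu) ** 2 / (2 * var)

def predict_one(x, priors, stats):
    # argmax over classes of log P(y) + sum_i log p(x_i | y)
    scores = {c: np.log(p) + log_density(x, *stats[c]).sum()
              for c, p in priors.items()}
    return max(scores, key=scores.get)

# Usage on tiny made-up data: two features, two classes
X = np.array([[1.0, 2.0], [1.2, 1.9], [3.0, 4.1], [2.9, 4.0]])
y = np.array([0, 0, 1, 1])
priors, stats = fit_gaussian_nb(X, y)
print(predict_one(np.array([1.1, 2.1]), priors, stats))
> 0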

Try it (sklearn)

Try Gaussian Naive Bayes using the iris dataset.

{do_gaussian_naive_bayes.py}


from sklearn import datasets                           # for loading the dataset
from sklearn.model_selection import train_test_split   # for train/test splitting
from sklearn import metrics                            # for accuracy evaluation
from sklearn.naive_bayes import GaussianNB             # Gaussian Naive Bayes

# Prepare the data
iris = datasets.load_iris()
X = iris.data
Y = iris.target
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)

# Fit Gaussian Naive Bayes
model = GaussianNB()              # create the estimator
model.fit(X_train, Y_train)       # fit on the training data

# Predict and evaluate
predicted = model.predict(X_test)                 # predict on the test set
print(metrics.accuracy_score(Y_test, predicted))  # compute the accuracy
> 1.0

The accuracy was 100%.
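
Because the model is computing the class posteriors $P(y \mid x)$ internally, they can also be inspected directly; continuing from the script above, sklearn's predict_proba returns them:

# Posterior P(y | x) for the first three test samples; each row sums to 1
print(model.predict_proba(X_test[:3]))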
