・About SVM (Support Vector Machine)
・Finally, as a personal hobby, its application to computer shogi
I have put these two topics together in this post.
SVM:Understanding Support Vector Machine algorithm from examples (along with code) http://www.analyticsvidhya.com/blog/2015/10/understaing-support-vector-machine-example-code/?utm_content=buffer4951f&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer
Although it is an English article, it is my personal recommendation. It explains the concept of SVM graphically, without relying on mathematical formulas, so I think it is very approachable.
・Classification is performed by drawing a boundary surface.
・Since no mean or variance is used, adding or changing data does not require recomputing everything.
You can see points like these from the article.
For a summary of PRML (Pattern Recognition and Machine Learning), there are already excellent articles, so see those:
http://aidiary.hatenablog.com/entry/20100501/1272712699
The article above does not strictly define how to draw the boundary surface. This one shows that drawing it amounts to an optimization problem that maximizes the margin (the shortest distance between the classification boundary and the training data), and that we should solve the dual problem of this convex optimization problem.
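As a minimal sketch of the idea (not from the linked articles), scikit-learn's `SVC` with a linear kernel solves exactly this margin-maximization problem; a very large `C` approximates the hard-margin case, and the resulting boundary is determined only by the support vectors:

```python
# Linear SVM sketch: maximize the margin between two separable clusters.
# A very large C approximates the hard-margin formulation.
import numpy as np
from sklearn.svm import SVC

# Two linearly separable clusters (toy data, chosen for illustration)
X = np.array([[0, 0], [1, 1], [1, 0], [3, 3], [4, 4], [3, 4]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6)  # huge C ~ hard margin
clf.fit(X, y)

# The boundary depends only on the support vectors:
# the training points closest to the separating hyperplane.
print("support vectors:\n", clf.support_vectors_)
print("prediction for [0.5, 0.5]:", clf.predict([[0.5, 0.5]])[0])
```

Note that the learned hyperplane would not change if points far from the boundary were added or moved, which matches the "no overall recalculation" property mentioned above.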
http://aidiary.hatenablog.com/entry/20100502/1272804952
By making the discriminant function non-linear, more complex classification boundaries become possible. However, if the classes still overlap, classification remains difficult.
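As a hedged illustration (not from the linked article), a non-linear kernel such as RBF lets the SVM separate data that no straight line can, e.g. XOR-patterned labels; the parameter values below are arbitrary choices for the sketch:

```python
# Non-linear SVM sketch: an RBF kernel separates XOR-like data
# that is impossible for a linear boundary.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], dtype=float)
y = np.array([0, 0, 1, 1])  # XOR labels: not linearly separable

clf = SVC(kernel="rbf", gamma=1.0, C=100.0)  # illustrative parameters
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```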
http://aidiary.hatenablog.com/entry/20100503/1272889097
The techniques so far are called hard-margin SVMs: they assume that the data can be completely separated in the input space x.
The soft-margin SVM, on the other hand, was introduced to handle classification when the classes overlap as described above. It is a method that penalizes misclassification.
The more misclassifications there are, the larger the penalty added to the objective being minimized. As a result, the optimizer adjusts the parameters so as to keep misclassification as small as possible.
The larger the penalty, the harder the model tries to classify every training point correctly, even if that means drawing extremely fine classification boundaries (in other words, overfitting).
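The trade-off above corresponds to the `C` parameter in scikit-learn. As a sketch under assumed toy data (overlapping Gaussian clusters, not from the article), a small `C` tolerates misclassification with a wide margin, while a large `C` punishes every error:

```python
# Soft-margin sketch: C controls the misclassification penalty.
# Small C -> wide margin, many points inside it (many support vectors).
# Large C -> the optimizer chases every training point.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two overlapping Gaussian clusters: perfect separation is impossible
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
               rng.normal(1.5, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

for C in (0.01, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    print(f"C={C}: support vectors={len(clf.support_)}, "
          f"train accuracy={clf.score(X, y):.2f}")
```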
"Checkmate prediction by SVM and its application"
Makoto Miwa, Department of Fundamental Informatics, Graduate School of Frontier Sciences, The University of Tokyo (at the time)
http://repository.dl.itc.u-tokyo.ac.jp/dspace/bitstream/2261/187/1/K-00177.pdf
SVMs are used to predict checkmate in shogi.
In shogi, judging whether a checkmate exists is very important. The point of this paper is that "by using checkmate prediction with an SVM, the search time was reduced to 62%."
For computer shogi, it also describes in detail:
・the application of machine learning to the evaluation function
・search algorithms for exploring positions
so I think it is very instructive.
In "2. About the method called SVM", I explained the method step by step. In practice, considering computational cost and simplicity, I think the linear SVM of step 1 is mostly sufficient.
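For the practical linear case, a sketch (my own illustration, not code from the paper) would use scikit-learn's `LinearSVC`, a liblinear-based solver that trains a linear SVM much faster than a kernelized `SVC` on larger datasets:

```python
# Practical linear SVM sketch: LinearSVC scales well when only a
# linear boundary is needed, which is often enough in practice.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
# Synthetic 5-dimensional data: two well-separated clusters
X = np.vstack([rng.normal(-1.0, 1.0, (200, 5)),
               rng.normal(1.0, 1.0, (200, 5))])
y = np.array([0] * 200 + [1] * 200)

clf = LinearSVC(C=1.0).fit(X, y)
print("train accuracy:", round(clf.score(X, y), 2))
```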
Some studies apply non-linear SVMs, but I personally think this is counterproductive: the accuracy gained is very small relative to the computational cost.
Supplement: Utilization of machine learning in computer shogi and go http://www.slideshare.net/TakashiKato2/ss-57966067
These are slides I put together myself; see them for how machine learning is used in computer shogi and Go.