Aidemy 2020/10/28
Hello, it's Yope! I'm a liberal arts student, but I was interested in the possibilities of AI, so I enrolled at the AI-specialized school "Aidemy" to study. I'd like to share the knowledge I gained there, so I'm summarizing it on Qiita. I'm very happy that so many people read my previous summary article. Thank you! This is the second post on supervised learning. Nice to meet you.
What to learn this time
・About model generalization
・Although predictions in regression analysis are based on a function, actual prices fluctuate within a range, so even identical input data can produce different results.
・Under these circumstances, if the model relies too heavily on the training data, prediction will fail. This is called __overfitting__, and preventing overfitting is called __generalization__.
・As a means of generalization in linear regression, __regularization__ is used. Regularization attempts to generalize the model by penalizing its complexity.
・There are two types of regularization: __L1 regularization__ and __L2 regularization__.
・L1 regularization removes unnecessary information by driving the coefficients of features that contribute little to the prediction toward 0.
・L2 regularization prevents overfitting by putting a limit on the size of the __coefficients__.
・__Lasso regression__ refers to a regression model that uses L1 regularization.
・L1 regularization is highly effective when there is a lot of redundant information, so Lasso regression is used, for example, when the number of parameters (number of columns) is large relative to the number of data points (number of rows).
・Lasso regression is used like `model = Lasso()`.
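The article only shows the call `model = Lasso()`. As a runnable sketch (the synthetic data and all parameter choices below are my assumptions, not the author's original code), fitting a Lasso model with scikit-learn could look like this:

```python
# Minimal Lasso sketch (assumed setup): many columns, few informative ones,
# which is the situation where L1 regularization shines.
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=100, n_features=50, n_informative=5,
                       noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Lasso()  # alpha=1.0 by default; controls the penalty strength
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # R^2 on held-out data

# The L1 penalty drives many coefficients exactly to 0,
# effectively discarding uninformative columns:
print((model.coef_ == 0).sum(), "of", len(model.coef_), "coefficients are 0")
```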
・__Ridge regression__ refers to a regression model that uses L2 regularization.
・Because L2 regularization puts an upper limit on the range of the coefficients, the model generalizes easily.
・Ridge regression is used like `model = Ridge()`.
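As with Lasso, the article only gives `model = Ridge()`. A minimal sketch (data and parameters are my assumptions) that also shows how Ridge differs from Lasso in its treatment of coefficients:

```python
# Minimal Ridge sketch (assumed setup).
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=100, n_features=30, noise=10.0,
                       random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Ridge()  # alpha=1.0 by default
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # R^2 on held-out data

# Unlike Lasso, the L2 penalty shrinks coefficients toward 0
# but almost never makes them exactly 0:
print((model.coef_ == 0).sum(), "coefficients are exactly 0")
```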
・__ElasticNet regression__ refers to a regression model that uses a combination of L1 regularization and L2 regularization.
・It has the great merit of combining the information selection of L1 regularization with the ease of generalization of L2 regularization.
・ElasticNet regression is used like `model = ElasticNet()`.
・If you pass an argument such as `l1_ratio=0.3`, you can specify the ratio of L1 to L2 regularization.
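A minimal sketch of the `l1_ratio` argument mentioned above (the data and other parameters are my assumptions, not the author's):

```python
# Minimal ElasticNet sketch (assumed setup) showing the L1/L2 mixing ratio.
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=100, n_features=40, n_informative=10,
                       noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# l1_ratio=0.3 means 30% of the penalty is L1 and 70% is L2;
# l1_ratio=1.0 would reduce to Lasso, l1_ratio=0.0 to a Ridge-like penalty.
model = ElasticNet(l1_ratio=0.3)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # R^2 on held-out data
```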
・Execute the three regression models above
・Result
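The original post's code and result are not reproduced in this text. A minimal sketch of running the three models side by side (synthetic data and all parameters are my assumptions, not the author's original experiment) might look like this:

```python
# Fit Lasso, Ridge, and ElasticNet on the same data and compare
# held-out R^2 scores (assumed setup, not the article's original code).
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet, Lasso, Ridge
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=100, n_features=50, n_informative=5,
                       noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scores = {}
for name, model in [("Lasso", Lasso()),
                    ("Ridge", Ridge()),
                    ("ElasticNet", ElasticNet(l1_ratio=0.3))]:
    model.fit(X_train, y_train)
    scores[name] = model.score(X_test, y_test)  # R^2 on held-out data
    print(f"{name}: {scores[name]:.3f}")
```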
Summary
・As a means of generalization in linear regression, there is __regularization__.
・Regularization includes __L1 regularization__ and __L2 regularization__; regression using the former is called __Lasso regression__, regression using the latter is called __Ridge regression__, and regression using both is called __ElasticNet regression__.
That's all for this time. Thank you for reading to the end.