In this article, we will run a simple grid search using scikit-learn (Python). Looking up the details every time is a hassle, so I put together a template I can reuse.
What is grid search?
This time, we will do a grid search using scikit-learn's GridSearchCV. Official page: https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html
Please refer to the following page.
Cross-validation and grid search: https://qiita.com/Takayoshi_Makabe/items/d35eed0c3064b495a08b
This time, we will perform a grid search assuming a regression problem.
import numpy as np  # needed below for np.sqrt and np.logspace
from sklearn.metrics import mean_absolute_error  # MAE
from sklearn.metrics import mean_squared_error  # MSE
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import KFold
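The rest of this article assumes X_trainval and y_trainval already exist. As a minimal setup sketch (the synthetic dataset, the make_regression call, and the 80/20 split are my assumptions, not part of the original), you could prepare them like this:

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

# Synthetic regression data, purely for illustration
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
# Hold out a test set; the grid searches below use only the train+validation part
X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.2, random_state=0)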
RMSE
RMSE is not provided as a ready-made function in scikit-learn, so we define it ourselves.
def rmse(y_true, y_pred):
    # Calculate RMSE as the square root of MSE
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    print('rmse', rmse)
    return rmse
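A quick sanity check of the function (the numbers are made up for illustration):

# errors are [0.5, -0.5, 0.0, -1.0], so RMSE = sqrt(0.375) ≈ 0.612
rmse(np.array([3.0, -0.5, 2.0, 7.0]), np.array([2.5, 0.0, 2.0, 8.0]))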
K Fold
kf = KFold(n_splits=5,shuffle=True,random_state=0)
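GridSearchCV will consume this kf object internally, but you can also inspect the five train/validation splits yourself (X_trainval here comes from the setup sketch above):

# Each iteration yields the row indices of one train/validation split
for fold, (train_idx, val_idx) in enumerate(kf.split(X_trainval)):
    print('fold', fold, 'train size', len(train_idx), 'val size', len(val_idx))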
Linear SVR
For linear support vector regression, LinearSVR is said to be faster than SVR with a linear kernel.
from sklearn.svm import LinearSVR
params_cnt = 10
max_iter = 1000
params = {"C":np.logspace(0,1,params_cnt), "epsilon":np.logspace(-1,1,params_cnt)}
'''
epsilon : Epsilon parameter in the epsilon-insensitive loss function.
Note that the value of this parameter depends on the scale of the target variable y.
If unsure, set epsilon=0.
C : Regularization parameter.
The strength of the regularization is inversely proportional to C.
Must be strictly positive.
https://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVR.html
'''
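# For reference: np.logspace(0, 1, params_cnt) returns params_cnt values evenly
# spaced on a log scale from 10**0 = 1 to 10**1 = 10, e.g.
# np.logspace(0, 1, 3) -> array([ 1.        ,  3.16227766, 10.        ]),
# so both C and epsilon are searched over multiplicative grids.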
gridsearch = GridSearchCV(
    LinearSVR(max_iter=max_iter, random_state=0),
    params,
    cv=kf,
    scoring=make_scorer(rmse, greater_is_better=False),
    return_train_score=True,
    n_jobs=-1
)
gridsearch.fit(X_trainval, y_trainval)
print('The best parameter = ',gridsearch.best_params_)
print('RMSE = ',-gridsearch.best_score_)
LSVR = LinearSVR(max_iter=max_iter, random_state=0,
                 C=gridsearch.best_params_["C"],
                 epsilon=gridsearch.best_params_["epsilon"])
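Two shortcuts worth knowing. Since refit=True by default, GridSearchCV has already refit the best model on all of X_trainval, so the line above can be replaced by best_estimator_; and because we passed return_train_score=True, the per-fold train scores are stored in cv_results_. A minimal sketch (using pandas, which the original code does not import, just for readability):

# Same model as the hand-built LSVR above, already refit on X_trainval
LSVR = gridsearch.best_estimator_

# Compare train vs. validation scores across the grid (scores are negated RMSE)
import pandas as pd
results = pd.DataFrame(gridsearch.cv_results_)
print(results[['param_C', 'param_epsilon', 'mean_train_score', 'mean_test_score']].head())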
Kernel SVR
from sklearn.svm import SVR
params_cnt = 10
params = {"kernel":['rbf'],
"C":np.logspace(0,1,params_cnt),
"epsilon":np.logspace(-1,1,params_cnt)}
gridsearch = GridSearchCV(
    SVR(gamma='auto'),
    params,
    cv=kf,
    scoring=make_scorer(rmse, greater_is_better=False),
    n_jobs=-1
)
'''
epsilon, C : same meaning as for LinearSVR above.
https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html
'''
gridsearch.fit(X_trainval, y_trainval)
print('The best parameter = ',gridsearch.best_params_)
print('RMSE = ',-gridsearch.best_score_)
print()
KSVR = SVR(
    kernel=gridsearch.best_params_['kernel'],
    gamma='auto',  # keep the same setting that was used during the grid search
    C=gridsearch.best_params_["C"],
    epsilon=gridsearch.best_params_["epsilon"]
)
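If you held out a test set as in the setup sketch at the top (X_test and y_test are assumptions from that sketch, not in the original), the tuned model can be fit and evaluated like this:

KSVR.fit(X_trainval, y_trainval)
print('test RMSE', rmse(y_test, KSVR.predict(X_test)))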
Random Forest
Random forest doesn't need much hyperparameter tuning, so this section may not add much, but I built it anyway, so here it is.
from sklearn.ensemble import RandomForestRegressor
params = {
    "max_depth": [2, 5, 10],
    "n_estimators": [10, 20, 30, 40, 50]  # larger n_estimators generally gives higher accuracy, so increase it when you have time, but training takes longer
}
gridsearch = GridSearchCV(
    RandomForestRegressor(random_state=0),
    params,
    cv=kf,
    scoring=make_scorer(rmse, greater_is_better=False),
    n_jobs=-1
)
'''
n_estimators : The number of trees in the forest.
max_depth : The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.
https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html
'''
gridsearch.fit(X_trainval, y_trainval)
print('The best parameter = ',gridsearch.best_params_)
print('RMSE = ',-gridsearch.best_score_)
print()
RF = RandomForestRegressor(random_state=0,
                           n_estimators=gridsearch.best_params_["n_estimators"],
                           max_depth=gridsearch.best_params_["max_depth"])
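A small bonus with random forests: after fitting, feature_importances_ shows how much each input contributes. A short sketch (again assuming the synthetic data from the setup sketch above):

RF.fit(X_trainval, y_trainval)
# One value per input column; the importances sum to 1
for i, imp in enumerate(RF.feature_importances_):
    print('feature', i, 'importance', round(imp, 3))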
GridSearchCV is convenient because tuning takes only a few lines. I covered three models here, but the same pattern of course works for other models as well.