[PYTHON] Trying fine-tuning (transfer learning), the mainstream approach for images, with Keras on ordinary data

When running a model in an online service, I would like to update it every day with newly accumulated data, but retraining on the entire dataset from scratch every day costs time and money. In image training, it is common to take a pre-trained model such as VGG16 and fine-tune it on the images you want to classify. So this time, I saved a model built on ordinary (non-image) data and tried fine-tuning it.

Only the key points are covered here; working sample code is available at the following link. https://github.com/tizuo/keras/blob/master/%E8%BB%A2%E7%A7%BB%E5%AD%A6%E7%BF%92%E3%83%86%E3%82%B9%E3%83%88.ipynb

Build a base model

This time, we split the iris data roughly in half and train in two stages. First, define the base model. The only point is to give a name parameter to each layer whose weights you want to carry over.

```python
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout

model_b = Sequential()
model_b.add(Dense(4, input_shape=(4, ), name='l1'))
model_b.add(Activation('relu'))
model_b.add(Dense(4, name='l2'))  # input_shape is only needed on the first layer
model_b.add(Activation('relu'))
model_b.add(Dense(3, name='cls'))
model_b.add(Activation('softmax'))
```
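As a minimal sketch of the "divide the iris data and train the base model on one half" step, assuming scikit-learn's bundled iris dataset and illustrative variable names (the split indices and epoch count are my choices, not from the original):

```python
import numpy as np
from sklearn.datasets import load_iris
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.utils import to_categorical

# Shuffle the 150 iris samples and keep half for the base model,
# half for the later fine-tuning step
X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
idx = rng.permutation(len(X))
base_idx, new_idx = idx[:75], idx[75:]

# Same architecture as above, with named layers
model_b = Sequential([
    Input(shape=(4,)),
    Dense(4, name='l1'), Activation('relu'),
    Dense(4, name='l2'), Activation('relu'),
    Dense(3, name='cls'), Activation('softmax'),
])
model_b.compile(optimizer='adam', loss='categorical_crossentropy',
                metrics=['accuracy'])
model_b.fit(X[base_idx], to_categorical(y[base_idx], 3),
            epochs=10, batch_size=8, verbose=0)
```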

Save model weights

After fitting the base model on its portion of the data, save its weights.

```python
model_b.save_weights('my_model_weights.h5')
```

Prepare the model to load the weights into

Give the layers names matching the base model. This example adds a Dropout layer to keep the model from over-fitting to the new data.

```python
model_n = Sequential()
model_n.add(Dense(4, input_shape=(4, ), name='l1'))
model_n.add(Activation('relu'))
model_n.add(Dense(4, name='l2'))
model_n.add(Activation('relu'))
model_n.add(Dropout(0.5))
model_n.add(Dense(3, name='cls'))
model_n.add(Activation('softmax'))
```

Load the weights and train

Load the saved weights into the newly created model and continue training on the new data.

```python
# Load weights from the base model, matching layers by name
model_n.load_weights('my_model_weights.h5', by_name=True)

# Compile & fit on the new data
model_n.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model_n.fit(new_X, new_Y, epochs=50, batch_size=1, verbose=1)
```
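A common variant of this step, not shown in the original post but worth noting, is to freeze the inherited layers so that only the new classifier head is updated during fine-tuning. A sketch of that idea (layer names follow the example above; the `Input` form is my substitution):

```python
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout

model_n = Sequential([
    Input(shape=(4,)),
    Dense(4, name='l1'), Activation('relu'),
    Dense(4, name='l2'), Activation('relu'),
    Dropout(0.5),
    Dense(3, name='cls'), Activation('softmax'),
])

# Freeze the layers whose weights were transferred from the base model
for layer in model_n.layers:
    if layer.name in ('l1', 'l2'):
        layer.trainable = False

# Compile after changing `trainable` so the setting takes effect
model_n.compile(optimizer='adam', loss='categorical_crossentropy',
                metrics=['accuracy'])
```

Whether freezing helps depends on how similar the new data is to the base data; with very similar distributions, letting all layers train (as in the original example) is also reasonable.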

This technique also looks useful when the dataset is too large to fit in memory and you want to train on it in separate chunks.
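That chunked-training idea might be sketched as follows (a hypothetical setup with random stand-in data; in practice each chunk would be loaded from disk, and the filename is illustrative):

```python
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation

def build_model():
    # Rebuild the same named architecture each time so saved weights line up
    m = Sequential([
        Input(shape=(4,)),
        Dense(4, name='l1'), Activation('relu'),
        Dense(3, name='cls'), Activation('softmax'),
    ])
    m.compile(optimizer='adam', loss='categorical_crossentropy')
    return m

# Stand-in chunks: three batches of 20 samples with one-hot labels
rng = np.random.default_rng(0)
chunks = [(rng.random((20, 4)), np.eye(3)[rng.integers(0, 3, 20)])
          for _ in range(3)]

weights_path = 'chunk.weights.h5'
for i, (cx, cy) in enumerate(chunks):
    model = build_model()
    if i > 0:
        model.load_weights(weights_path)  # resume from the previous chunk
    model.fit(cx, cy, epochs=3, batch_size=4, verbose=0)
    model.save_weights(weights_path)
```

Each pass picks up the weights left by the previous one, so only one chunk needs to be in memory at a time.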
