[PYTHON] "Garbage classification by image!" App creation diary day2-Fine-tuning with VGG16-

Introduction

"Classify garbage by image!" Today, the second day of the application creation diary, I would like to finally create a model. I would like to fine-tun the model using VGG16. Let's do it now.


Article list

-"Trash classification by image!" App creation diary day1 ~ Data set creation ~ -"Trash classification by image!" App creation diary day2-Fine-tuning with VGG16- ← Imakoko -"Trash classification by image!" App creation diary day3 ~ Web application with Django ~ -"Trash classification by image!" App creation diary day4 ~ Prepare the front end with Bootstrap ~ -"Trash classification by image!" App creation diary day5-Prepare front end with Bootstrap 2- -"Trash classification by image!" App creation diary day6 ~ Correction of directory structure ~

Recap of last time

Last time was dataset creation, so I took various pictures to build the dataset. The folder structure is as follows.

train
├ Combustible waste
│ └ Images (same below)
├ Resources
├ Non-burnable garbage
├ Packaging container plastics
└ Hazardous waste
val
├ Combustible waste
│ └ Images (same below)
├ Resources
├ Non-burnable garbage
├ Packaging container plastics
└ Hazardous waste

We will create a model based on this.

Library import

Load the required libraries.


from keras.applications.vgg16 import VGG16
from keras.models import Sequential, Model
from keras.layers import Input, Dropout, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
import numpy as np
import matplotlib.pyplot as plt
from glob import glob

Also, let's set the parameters. First, specify the classes to classify. You could write each one out individually, but typos would be a pain, so we fetch them all at once from the directory names.


# Classes to classify
classes = glob("train/*")
classes = [c.split("\\", 1)[-1] for c in classes]
nb_classes = len(classes)

If the script is placed in the same directory as the train folder, the first line fetches classes as ['train\\Non-burnable garbage', 'train\\Packaging container plastics', ...]. Splitting each element inside the list comprehension then extracts only the part we need, e.g. ['Non-burnable garbage', 'Packaging container plastics', ...]. (Depending on the OS, the directory separator is / or \\ (\ is an escape character, so it has to be doubled), so if you are not on Windows, double-check what you actually get.)
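
If you want to avoid worrying about the separator altogether, here is a minimal OS-independent sketch using pathlib from the standard library instead of glob:

from pathlib import Path

# Path.name returns the last path component regardless of the OS separator
classes = [p.name for p in Path("train").iterdir() if p.is_dir()]
nb_classes = len(classes)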

Next, specify the image-related parameters.


#Set image size
img_width, img_height = 150, 150

#Specifying the image folder
train_dir = 'train'
val_dir = 'val'

#Batch size
batch_size = 16

Create data

This time I will use ImageDataGenerator because I want to augment the data. It lets you specify how the augmentation is performed.


# Augmentation settings
train_datagen = ImageDataGenerator(
    rotation_range=90, # rotation range in degrees (here up to ±90)
    width_shift_range=0.1, # random horizontal shift, as a fraction of the width
    height_shift_range=0.1, # random vertical shift, as a fraction of the height
    rescale=1.0 / 255, # normalize pixel values to the 0-1 range
    zoom_range=0.2, # random zoom range
    horizontal_flip=True, # randomly flip horizontally
    vertical_flip=True # randomly flip vertically
)

val_datagen = ImageDataGenerator(rescale=1.0 / 255)

The point is that augmentation is applied only to the training data, while the validation data is only rescaled. The parameters are as described in the comments; for a more detailed discussion of each parameter, see the references below.
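
To see what the augmentation actually produces, a quick preview like the following can help (a sketch; the file name is a hypothetical placeholder, and flow() already applies the rescale, so the pixel values are in the 0-1 range):

from keras.preprocessing import image

# Load one training image as a (1, height, width, 3) array
img = image.load_img("train/Combustible waste/example.jpg", # hypothetical file
                     target_size=(img_width, img_height))
x = np.expand_dims(image.img_to_array(img), axis=0)

# Show four random augmented versions of the same image
fig, axes = plt.subplots(1, 4, figsize=(12, 3))
for ax, batch in zip(axes, train_datagen.flow(x, batch_size=1)):
    ax.imshow(batch[0]) # values already rescaled to 0-1
    ax.axis("off")
plt.show()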

Now let's apply the above processing to the actual images. The function flow_from_directory builds the data directly from the directory. * For this to work, the folder structure must follow the expected layout, with one subdirectory per class.


# Create the generators
train_generator = train_datagen.flow_from_directory(
    train_dir, #Path to directory
    target_size=(img_width, img_height), #Image size after resizing
    color_mode='rgb', #Specifying the image channel
    classes=classes, #List of classes (images must be in the subdirectories specified here)
    class_mode='categorical', #"categorical","binary","sparse"Such
    batch_size=batch_size,
    shuffle=True)

val_generator = val_datagen.flow_from_directory(
    val_dir,
    target_size=(img_width, img_height),
    color_mode='rgb',
    classes=classes,
    class_mode='categorical',
    batch_size=batch_size,
    shuffle=True)
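
Before building the model, it may be worth pulling one batch to confirm the generator output (a quick sanity check, assuming at least 16 training images and the five classes above):

# Fetch one batch and confirm the shapes
x_batch, y_batch = next(train_generator)
print(x_batch.shape) # should print (16, 150, 150, 3)
print(y_batch.shape) # should print (16, 5)
print(train_generator.class_indices) # mapping from class name to index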

Model building

It's finally time to build the model. For the structure and parameters, I referred to the following article.

-Fine tuning VGG16 using GPU to make face recognition AI

For the architecture, VGG16 is used for the convolutional layers, and the fully connected layers are designed by hand. The weights up to layer 15 are frozen; only the last convolutional block and the fully connected layers are trained.


# VGG16
input_tensor = Input(shape=(img_width, img_height, 3))
vgg16 = VGG16(include_top=False, weights='imagenet', input_tensor=input_tensor)

First, load VGG16. The parameters are:

- include_top: whether to include the fully connected layers at the top of the network
- weights: which weights to use; at the moment you can apparently only choose None (random initialization) or 'imagenet'

Next, define the fully connected layer.


#Fully connected layer
top_model = Sequential()
top_model.add(Flatten(input_shape=vgg16.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(nb_classes, activation='softmax'))
top_model.summary()

The summary of the fully connected part looks like this:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten_3 (Flatten)          (None, 8192)              0         
_________________________________________________________________
dense_6 (Dense)              (None, 256)               2097408   
_________________________________________________________________
dropout_3 (Dropout)          (None, 256)               0         
_________________________________________________________________
dense_7 (Dense)              (None, 5)                 1285      
=================================================================
Total params: 2,098,693
Trainable params: 2,098,693
Non-trainable params: 0

Now that we have an output head that classifies into the desired number of classes, let's combine it with VGG16.


vgg_model = Model(inputs=vgg16.input, outputs=top_model(vgg16.output))

# Freeze the weights up to layer 15
for layer in vgg_model.layers[:15]:
    layer.trainable = False

vgg_model.compile(loss='categorical_crossentropy',
          optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
          metrics=['acc'])
vgg_model.summary()

Combining the two is written in the same way as the Functional API. Since this is fine-tuning, SGD with a low learning rate is used as the optimizer. The model looks like this:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_4 (InputLayer)         [(None, 150, 150, 3)]     0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 150, 150, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 150, 150, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 75, 75, 64)        0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 75, 75, 128)       73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 75, 75, 128)       147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 37, 37, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 37, 37, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 37, 37, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 37, 37, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 18, 18, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 18, 18, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 18, 18, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 18, 18, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 9, 9, 512)         0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 9, 9, 512)         2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 9, 9, 512)         2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 9, 9, 512)         2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 4, 4, 512)         0         
_________________________________________________________________
sequential_3 (Sequential)    (None, 5)                 2098693   
=================================================================
Total params: 16,813,381
Trainable params: 9,178,117
Non-trainable params: 7,635,264
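
To double-check that the freeze worked as intended, you can list each layer's trainable flag (a small sketch):

# Check which layers are frozen (trainable=False) and which will be trained
for i, layer in enumerate(vgg_model.layers):
    print(i, layer.name, layer.trainable)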

Now let's train.


history = vgg_model.fit(
    train_generator, #Training generator
    steps_per_epoch=len(train_generator), #Number of batches per epoch
    epochs=30,
    validation_data=val_generator,
    validation_steps=len(val_generator))


#acc, val_acc plot
plt.plot(history.history["acc"], label="acc", ls="-", marker="o")
plt.plot(history.history["val_acc"], label="val_acc", ls="-", marker="x")
plt.ylabel("acc")
plt.xlabel("epoch")
plt.legend(loc="best")
plt.savefig("acc")
plt.close()

plt.plot(history.history["loss"], label="loss", ls="-", marker="o")
plt.plot(history.history["val_loss"], label="val_loss", ls="-", marker="x")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(loc="best")
plt.savefig("loss")
plt.close()

(Figure: loss.png, training and validation loss per epoch)

The training seems to have gone reasonably well.

Finally, save this model. Loading it back (see the prediction section below) completes the model creation.


# Save the architecture and the weights
with open("model.json", 'w') as f:
    f.write(vgg_model.to_json())
vgg_model.save_weights('param.hdf5')

Prediction

Next, let's make predictions using the model saved above.


import numpy as np
import matplotlib.pyplot as plt
from keras.preprocessing import image
from keras.models import model_from_json
model = model_from_json(open("model.json").read())
model.load_weights('param.hdf5')

img_width, img_height = 150, 150
classes = ['Non-burnable garbage', 'Packaging container plastics', 'Combustible waste', 'Hazardous waste', 'Resources']

The classes are specified directly by name here, with deployment in mind.
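
One caveat: this hard-coded list must be in the same order as the class indices the generator used during training. If in doubt, you can print the mapping in the training script (class_indices is a standard attribute of the generator; the example output below is hypothetical):

# In the training script: mapping from class name to output index
print(train_generator.class_indices)
# e.g. {'Combustible waste': 0, 'Resources': 1, ...} (order follows the classes list)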

The images do not need to go through a generator, so they are loaded directly.


filename = "val/Resources/IMG_20201108_105804.jpg "
img = image.load_img(filename, target_size=(img_height, img_width))
x = image.img_to_array(img)
x = x / 255.0 #Normalization
x = np.expand_dims(x, axis=0)

# Predict the class of the image
pred = model.predict(x)[0]
#View results
result = {c:s for (c, s) in zip(classes, pred*100)}
result = sorted(result.items(), key=lambda x:x[1], reverse=True)
print(result)

The result looks like this.

IMG_20201108_110533.jpg                        IMG_20201108_114503.jpg
'Resources', 99.783165                         'Non-burnable garbage', 99.97801
'Non-burnable garbage', 0.1700096              'Resources', 0.014258962
'Packaging container plastics', 0.04342786     'Packaging container plastics', 0.007412854
'Combustible waste', 0.00205229                'Combustible waste', 0.0002818475
'Hazardous waste', 0.0013515248                'Hazardous waste', 3.024669e-05

Since I couldn't prepare many kinds of garbage, the training set may have contained items similar to these test images, but the model does seem to tell the classes apart properly.

Next time, I'd like to incorporate this model into Django. Look forward to it!


Article list

-"Trash classification by image!" App creation diary day1 ~ Data set creation ~ -"Trash classification by image!" App creation diary day2-Fine-tuning with VGG16- ← Imakoko -"Trash classification by image!" App creation diary day3 ~ Web application with Django ~ -"Trash classification by image!" App creation diary day4 ~ Prepare the front end with Bootstrap ~ -"Trash classification by image!" App creation diary day5-Prepare front end with Bootstrap 2- -"Trash classification by image!" App creation diary day6 ~ Correction of directory structure ~

Question

I personally have one question. When calling vgg_model.fit, the number of batches per epoch is specified as steps_per_epoch=len(train_generator) (= number of images / batch size), but then the amount of data used per epoch equals the original number of images, so I don't think any augmentation-based increase is happening within an epoch. Of course, by stacking epochs the model is still trained on a variety of augmented images, but I thought the common approach was to inflate the data within a single epoch, so if my method is off, please let me know.

By the way, if you set steps_per_epoch to a value larger than len(train_generator), the following warning occurs and training is interrupted.

WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 900 batches). You may need to use the repeat() function when building your dataset.

When I call train_generator.repeat(), I get an error saying 'DirectoryIterator' object has no attribute 'repeat'. Where should repeat() be specified? If anyone knows, I would be grateful if you could teach me.
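
For reference, here is an untested sketch of one possible workaround: repeat() is a method of tf.data.Dataset, not of Keras' DirectoryIterator, so the iterator would first need to be wrapped in a Dataset (from_generator with the output_signature argument requires TF 2.4+; I have not verified this end to end):

import tensorflow as tf

# Wrap the Keras iterator in a tf.data.Dataset so that repeat() is available
train_ds = tf.data.Dataset.from_generator(
    lambda: train_generator,
    output_signature=(
        tf.TensorSpec(shape=(None, img_width, img_height, 3), dtype=tf.float32),
        tf.TensorSpec(shape=(None, nb_classes), dtype=tf.float32),
    ),
).repeat() # repeat indefinitely; fit() stops after steps_per_epoch * epochs

history = vgg_model.fit(
    train_ds,
    steps_per_epoch=2 * len(train_generator), # e.g. twice as many batches per epoch
    epochs=30,
    validation_data=val_generator,
    validation_steps=len(val_generator))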

References

-Fine tuning VGG16 using GPU to make face recognition AI

-Dog and cat recognition by Fine-tuning of VGG16 (2)
