[PYTHON] Classify MNIST digits by unsupervised learning with Keras [Autoencoder]

Introduction

Unsupervised learning is generally less accurate than supervised learning, but in exchange it offers many benefits. Specifically, unsupervised learning is useful for:

- Data whose patterns are not well understood
- Data that changes over time
- Unlabeled data

Unsupervised learning learns the structure behind the data from the data itself. This lets you take advantage of the abundance of unlabeled data, which may pave the way for new applications.

Last time, we classified these digits by unsupervised learning using PCA and t-SNE: https://qiita.com/nakanakana12/items/af08b9f605a48cad3e4e

But I still want to use trendy deep learning, so in this article we do **unsupervised learning with an autoencoder**. A detailed explanation of autoencoders themselves is omitted; please refer to the references.

Library import

python


import keras
import random
import matplotlib.pyplot as plt
from matplotlib import cm
import seaborn as sns
import pandas as pd
import numpy as np
import plotly.express as px
import os

from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import confusion_matrix
from sklearn.manifold import TSNE


from keras import backend as K
from keras.models import Sequential, Model, clone_model
from keras.layers import Activation, Dense, Dropout, Conv2D,MaxPooling2D,UpSampling2D
from keras import callbacks

from keras.layers import BatchNormalization, Input, Lambda
from keras import regularizers
from keras.losses import mse, binary_crossentropy

sns.set("talk")


Data preparation

Download the MNIST data and preprocess it. Preprocessing consists of normalization and flattening each image into a vector.

python


mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

#Normalization
train_images = (train_images - train_images.min()) / (train_images.max() - train_images.min())
test_images = (test_images - test_images.min()) / (test_images.max() - test_images.min())

print(train_images.shape,test_images.shape)

#Flatten each image into a 784-dimensional vector
image_height, image_width = 28,28
train_images = train_images.reshape(train_images.shape[0],28*28)
test_images = test_images.reshape(test_images.shape[0],28*28)
print(train_images.shape, test_images.shape)
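The printed shapes change from (60000, 28, 28) and (10000, 28, 28) to (60000, 784) and (10000, 784).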

Creating an autoencoder

Create the autoencoder model. I was struck by how few lines of code it takes.

Here, we create an autoencoder that **compresses the input down to 36 dimensions**. It simply connects two fully connected layers: the first layer compresses to 36 dimensions, and the second layer restores the original size. In other words, the first layer is the encoder and the second layer is the decoder.

The code here is written exactly like ordinary supervised learning.

python


model = Sequential()

#Encoder
model.add(Dense(36, activation="relu", input_shape=(28*28,)))
#decoder
model.add(Dense(28*28,activation="sigmoid"))

model.compile(optimizer="adam",loss="binary_crossentropy")
model.summary()
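As a sanity check, `model.summary()` should report 784 × 36 + 36 = 28,260 parameters for the encoder layer and 36 × 784 + 784 = 29,008 for the decoder layer.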

Learning autoencoder

Next, let's train the autoencoder. **The point here is that the target data is the images themselves, not the labels.** In my environment, training stopped after 156 epochs.

python


fit_callbacks = [
    callbacks.EarlyStopping(monitor='val_loss',
                            patience=5,
                            mode='min')
]

#Train the model
#The targets are the images themselves, not the labels
model.fit(train_images, train_images,
          epochs=200,
          batch_size=2024,
          shuffle=True,
          validation_data=(test_images, test_images),
          callbacks=fit_callbacks,
          )

Let's check the training result. You can see that the loss has converged to a certain value.

python


#Check the loss on the test data
score = model.evaluate(test_images, test_images, verbose=0)
print('test xentropy:', score)

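If you also want the loss curve itself, here is a minimal sketch (my own addition; it assumes the hypothetical change of keeping the `History` object returned by `model.fit` above as `history`):

python


#Assumes the fit call above was written as: history = model.fit(...)
#Plot training and validation loss per epoch
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.xlabel('epoch')
plt.ylabel('binary cross-entropy')
plt.legend()
plt.show()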


Creating the encoder model

Next, we take only the encoder part out of the trained model and turn it into a model of its own.

python


#Dimensionality-reduction (encoder) model
encoder = clone_model(model)
encoder.compile(optimizer="adam", loss="binary_crossentropy")
encoder.set_weights(model.get_weights())
#Remove the last layer (the decoder), leaving only the encoder
encoder.pop()
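As an aside, the same encoder can also be obtained without cloning; a minimal sketch using Keras's functional `Model` API (my own addition):

python


from keras.models import Model

#Wrap the trained model's first-layer output as a new model
encoder_alt = Model(inputs=model.input, outputs=model.layers[0].output)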

Let's visualize the 36-dimensional data using the extracted encoder. **Even though the middle layer is only 36-dimensional, you can see that the output layer restores the original data.** It feels almost uncanny.

python


#Visualize by selecting 10 points from test data

p = np.random.randint(0, len(test_images), 10)
x_test_sampled = test_images[p]

#Apply the selected sample to AutoEncoder
x_test_sampled_pred = model.predict(x_test_sampled,verbose=0)
#Call only encoder
x_test_sampled_enc = encoder.predict(x_test_sampled,verbose=0)

#Visualize the processing result
fig, ax = plt.subplots(3, 10,figsize=[20,10])
for i, label in enumerate(test_labels[p]):
    #The original image
    img = x_test_sampled[i].reshape(image_height, image_width)
    ax[0][i].imshow(img, cmap=cm.gray_r)
    ax[0][i].axis('off')
    #Image compressed by AutoEncoder
    enc_img = x_test_sampled_enc[i].reshape(6, 6)
    ax[1][i].imshow(enc_img, cmap=cm.gray_r)
    ax[1][i].axis('off')
    #Image restored by AutoEncoder
    pred_img = x_test_sampled_pred[i].reshape(image_height, image_width)
    ax[2][i].imshow(pred_img, cmap=cm.gray_r)
    ax[2][i].axis('off')

(Figure: original images (top), their 36-dimensional codes shown as 6×6 images (middle), and the reconstructions (bottom))

Classification of images by k-means

Finally, we classify the 36-dimensional data with k-means. The data is grouped into 10 clusters, and the most frequent true label in each cluster is used as that cluster's predicted label.

python


#Create the dimensionally reduced data
x_train_enc = encoder.predict(train_images)
print(x_train_enc.shape)

#Classification by k-means
KM = KMeans(n_clusters = 10)
result = KM.fit(x_train_enc)

#Evaluation by confusion matrix
df_eval = pd.DataFrame(confusion_matrix(train_labels,result.labels_))
df_eval.columns = df_eval.idxmax()
df_eval = df_eval.sort_index(axis=1)
df_eval

(Table: confusion matrix of true labels versus clusters)
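To make the matrix easier to scan, one option is a heatmap; a minimal sketch using the seaborn import from above (my own addition):

python


#Visualize the confusion matrix as a heatmap
plt.figure(figsize=[8, 6])
sns.heatmap(df_eval, annot=True, fmt="d", cmap="Blues")
plt.xlabel("cluster (labeled by majority digit)")
plt.ylabel("true label")
plt.show()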

Looking at the confusion matrix, the clusters do not map cleanly onto the 10 digit classes.
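To put a rough number on this, here is a minimal sketch (my own addition) that assigns each cluster its majority label and scores the result:

python


#Map each cluster to its most frequent true label
cluster_to_label = {c: np.bincount(train_labels[result.labels_ == c]).argmax()
                    for c in range(10)}
pred_labels = np.array([cluster_to_label[c] for c in result.labels_])
print("majority-label accuracy:", (pred_labels == train_labels).mean())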

Now let's visualize the image of each cluster.

python


#Display 5 images from each cluster

fig, ax = plt.subplots(5,10,figsize=[15,8])
for col_i in range(10):
  idx_list = random.sample(list(np.where(result.labels_ == col_i)[0]), 5)
  ax[0][col_i].set_title("cluster:" + str(col_i), fontsize=12)
  for row_i, idx_i in enumerate(idx_list):
      ax[row_i][col_i].imshow((train_images[idx_i].reshape(image_height, image_width)), cmap=cm.gray_r)
      ax[row_i][col_i].axis('off')

(Figure: five sample images from each of the 10 clusters)

**Looking at each cluster, you can see that the images share similar visual features even when the digits differ.** For example, cluster 0 is drawn with thick strokes and cluster 4 with thin ones. This shows that **the data is characterized by the images themselves, not just by their labels**.

It's interesting to be able to classify by information outside the labels; it means you could even create new kinds of labels.
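One more way to see this is to project the 36-dimensional codes to 2D and color them by true label; a minimal sketch using the TSNE import from above (my own addition):

python


#Project a subsample of the codes to 2D with t-SNE
n = 5000  #subsample for speed (arbitrary choice)
emb = TSNE(n_components=2).fit_transform(x_train_enc[:n])
plt.figure(figsize=[8, 8])
plt.scatter(emb[:, 0], emb[:, 1], c=train_labels[:n], cmap="tab10", s=5)
plt.colorbar()
plt.show()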

In closing

This time, I used an autoencoder to classify MNIST digits without supervision. **The clusters did not match the label classes exactly, but visualizing them brought information beyond the labels to light.**

The great thing about unsupervised learning is that it can extract information that lies outside the labels.

If you found this helpful, an LGTM would be encouraging.

References

What is an autoencoder? Explaining how pre-training works and how it is used today! https://it-trend.jp/development_tools/article/32-0024

AutoEncoder with Keras https://qiita.com/fukuit/items/2f8bdbd36979fff96b07

Python: Write AutoEncoder in Keras https://blog.amedama.jp/entry/keras-auto-encoder
