[PYTHON] [Anomaly detection] Detect image distortion by deep metric learning

There is a lot of research on image anomaly detection using deep learning.

As an approach to detecting abnormal industrial products using only normal data, @daisukelab's self-supervised learning has been the most influential so far. One issue noted in that study was that the accuracy stalled for some images, and there was a comment that creating the abnormal-class data took a lot of trial and error.

Therefore, in this article, I would like to explore **various methods of automatically generating abnormal images**.


Conclusions first

Examining the images

The prior research succeeded in catching many defects with a very simple method. In addition, @daisukelab's fast.ai code is very clean and readable. If you are interested in image anomaly detection, I recommend reading it.

In that previous study, the AUC stayed below 0.9 for four of the MVTec AD datasets (Screw, Transistor, Grid, and Pill), and as a result the work involved a fair amount of trial and error.

This time, I target the data other than Pill. The three datasets other than Pill are shown below.

(Figure: example images from the Screw, Grid, and Transistor datasets)

Looking at the abnormal images, many of them show **distortion** rather than scratches or dirt. Since such distortion is difficult to detect, this is presumably why the AUC struggles to improve.

Therefore, we will look for a way to **automatically generate "distortion data" from normal data** and boost the anomaly detection performance.

Automatic generation of "distortion data"

There seem to be various ways to create distortion data, but in this article I adopted a method that rotates part of the image.

(Figure: rotating part of an image to create a distorted sample)

This is easy to do with Keras's ImageDataGenerator.
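The snippet below operates on an image array `fig` and a patch `fig[begin1:end1, begin2:end2]` chosen in advance. A minimal, hypothetical setup (the file name and the patch coordinates are made up for illustration) might look like this:

    import numpy as np
    from PIL import Image

    # Hypothetical input: one normal image and the patch region to be distorted
    fig = np.array(Image.open("normal_000.png").convert("RGB")).astype("float32")  # made-up file name
    begin1, end1 = 60, 120   # rows of the patch (made up)
    begin2, end2 = 60, 120   # columns of the patch (made up)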

    import numpy as np
    from keras.preprocessing.image import ImageDataGenerator

    # Cut out the patch that will be distorted
    small_fig = fig[begin1:end1, begin2:end2]

    datagen = ImageDataGenerator(
        rotation_range=30,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=False,
        vertical_flip=False)

    # datagen.flow loops infinitely, so take the first batch and exit the loop
    for d in datagen.flow(np.expand_dims(small_fig, axis=0), batch_size=1):
        break

    # Paste the transformed patch back into the original image
    fig[begin1:end1, begin2:end2] = d[0]

Applying it to meaningful locations

However, rotating part of the **background** of the image is pointless.

(Figure: example of rotating part of the background)

Therefore, we implemented an algorithm that automatically detects the object.

(Figure: overview of the automatic object-detection procedure)

With Screw and similar datasets, the orientation and position of the test piece differ slightly across the training data. Because of this, averaging the training data yields an image in which the test piece disappears (2. Training data average).

After that, the difference between this averaged image and the target image is taken and binarized (Binary map). This automatically detects the region of the test piece. If part of the image is rotated while restricting it to this automatically detected region, we obtain the image below.
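A rough sketch of this step, assuming the normal training images are stacked in a NumPy array (the function name and the threshold value are my own illustration, not the article's actual code):

    import numpy as np

    def object_mask(train_imgs, target_img, threshold=0.1):
        # Binary map of the test piece, following the averaging idea described above
        mean_img = np.mean(train_imgs, axis=0)      # average of the training data
        diff = np.abs(target_img - mean_img)        # difference to the target image
        if diff.ndim == 3:
            diff = diff.mean(axis=-1)               # collapse colour channels
        return diff > threshold                     # binarize into the object mask

    # The patch chosen for partial rotation can then be restricted to pixels where the mask is True.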

(Figure: the resulting image with a distortion generated inside the detected region)

I got a distorted image.

Procedure of this method

Experiment

Experiments were conducted on MVTec AD. The full code is available [here](https://github.com/shinmura0/AutoEncoder_vs_MetricLearning/blob/master/AnomalyDetection_self_supervised.ipynb) and can be run on Colab.

Conditions

AUC results

In the experimental results, class2 (two classes: "normal" and "abnormal with a line") is the baseline, and class3 (the class2 classes plus "distortion anomaly") is the proposed method.
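As a rough illustration of these class definitions, a hypothetical assembly of the three-class training set might look like the following (the random data and the two augmentation functions are placeholders, not the article's actual code):

    import numpy as np

    rng = np.random.default_rng(0)
    x_normal = rng.random((100, 224, 224, 3)).astype("float32")  # stand-in for the normal images

    def draw_line(img):
        return img   # placeholder for the thin-line augmentation of the prior work

    def distort(img):
        return img   # placeholder for the partial rotation described above

    x_line = np.array([draw_line(img.copy()) for img in x_normal])
    x_dist = np.array([distort(img.copy()) for img in x_normal])

    x_train = np.concatenate([x_normal, x_line, x_dist])
    y_train = np.concatenate([np.zeros(len(x_normal), dtype=int),    # 0 = normal
                              np.ones(len(x_line), dtype=int),       # 1 = line anomaly (class2 baseline)
                              np.full(len(x_dist), 2, dtype=int)])   # 2 = distortion anomaly (class3)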

(Figure: AUC results for Screw)

In Screw, applying the proposed method (class3) lowered the AUC compared with class2. The reason for this decline is discussed later.

(Figure: AUC results for Grid)

In Grid, as expected, the proposed method (class3) raised the AUC above the baseline (class2). I think the distortion was detected as intended. We will verify through visualization that the abnormal parts are indeed being detected.

(Figure: AUC results for Transistor)

In Transistor, the AUC also increased when the proposed method (class3) was applied, just as in Grid. For ArcFace, an improvement of 0.1 or more can be confirmed.

Visualization results

As in the previous research, I also visualized the abnormal parts with Grad-CAM. However, the model here is based on AdaCos, so there are two abnormal classes, and Grad-CAM is run separately for each of them.
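A minimal Grad-CAM sketch in tf.keras, under the assumptions that the trained model exposes per-class scores at inference and that `conv_layer_name` names its last convolutional layer (this is an illustration, not the article's exact code):

    import numpy as np
    import tensorflow as tf

    def grad_cam(model, img, class_index, conv_layer_name):
        # Model that returns both the last conv feature map and the class scores
        grad_model = tf.keras.models.Model(
            model.inputs,
            [model.get_layer(conv_layer_name).output, model.output])

        with tf.GradientTape() as tape:
            conv_out, preds = grad_model(img[np.newaxis])
            score = preds[:, class_index]             # score of the abnormal class of interest

        grads = tape.gradient(score, conv_out)                    # d(score)/d(feature map)
        weights = tf.reduce_mean(grads, axis=(1, 2))              # global-average-pool the gradients
        cam = tf.reduce_sum(weights[:, None, None, :] * conv_out, axis=-1)
        cam = tf.nn.relu(cam)[0]                                  # keep only positive contributions
        return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

    # Run it once per abnormal class, e.g. 1 = "line" and 2 = "distortion":
    # heat_line = grad_cam(model, x, 1, "last_conv_layer")
    # heat_dist = grad_cam(model, x, 2, "last_conv_layer")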

First is the Grid data.

(Figure: Grad-CAM visualizations on Grid)

The visualization of "Distortion" fits nicely on all of the data. However, when there are multiple abnormalities, it seems that only one of them is detected.

Next is Transistor.

(Figure: Grad-CAM visualizations on Transistor)

In Transistor, the visualization of "Slight line" fits well for most of the data, the opposite of Grid. Which abnormal class should be examined may therefore change depending on the type of data.

Experiment (extra edition)

Finally, let's change the base model (MobileNetV2 → EfficientNetB0).
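A minimal sketch of what the backbone swap might look like in tf.keras (the ImageNet weights, global pooling, and simple softmax head are my assumptions; the article's actual training code is linked above):

    import tensorflow as tf

    def build_model(n_classes=3, backbone="efficientnet"):
        # Swap the feature extractor while keeping the rest of the model unchanged
        if backbone == "efficientnet":
            base = tf.keras.applications.EfficientNetB0(
                include_top=False, weights="imagenet", input_shape=(224, 224, 3))
        else:
            base = tf.keras.applications.MobileNetV2(
                include_top=False, weights="imagenet", input_shape=(224, 224, 3))

        x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
        out = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
        return tf.keras.Model(base.input, out)

    # model = build_model(backbone="efficientnet")   # EfficientNetB0 instead of MobileNetV2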

Conditions

The conditions are almost the same as in the previous experiment. The differences are as follows.

AUC results

(Figure: AUC results for Screw with EfficientNetB0)

With Screw, the AUC is below 0.5 even for class2.

(Figure: AUC results for Grid with EfficientNetB0)

With Grid, the result is similar to MobileNetV2: **class3 again gives a better score**.

(Figure: AUC results for Transistor with EfficientNetB0)

With Transistor, the result is also similar to MobileNetV2: class3 again gives a better score.

Summary of experimental results

Let's compare the results so far by the median.

(Figure: median AUC comparison across all experiments)

Excluding Screw, we can confirm that **the AUC increases when the proposed method (class3) is applied instead of the baseline (class2)**. As for the type of metric-learning loss, no clear difference is visible, although ArcFace shows relatively good scores.

As for the difference in AUC between CNN models, EfficientNetB0 looks better, achieving an AUC above 0.9. Applying EfficientNetB7 or the like might raise the AUC even further (although its minimum input size seems to be 600 x 600, which is heavy; B0, by the way, is 224 x 224).

Discussion (why did Screw's AUC decrease?)

Consider the cause of the decrease in AUC of Screw.

Simply put, by adding the "distortion anomaly" class, **"large" distortions became detectable as anomalies**, but **"small" distortions came to be recognized as nearly normal**.

In the two-class classification of the previous research, drawing thin lines simulated not only scratches and discoloration but also, by drawing a line on the tip of the screw, "small" distortions. However, with the introduction of the third class, the CNN's task becomes more difficult.

Until now, in the two-class setting, "small" distortions were treated as fairly close to the thin-line class. With the three-class setting, however, **the CNN's feature extraction specializes in line detection and rotation detection**, and "small" distortions end up close to the normal class. This is presumed to have caused the drop in AUC.

(Figure: Grad-CAM applied to AdaCos (class3) trained on Screw)

The figure above shows Grad-CAM applied to AdaCos (class3) trained on Screw. All of the images are abnormal. As you can see, the small distortions at the tip of the screw are not detected at all.

Furthermore, when switching from MobileNetV2 to EfficientNetB0, the AUC dropped below 0.5 even for class2. Because EfficientNetB0 can extract features more sharply, the extraction of "thin lines" works too well, and "small" distortions end up even closer to normal.

Then, could "small" distortions be simulated by shrinking the range of the partial rotation? Unfortunately, if the range is made too small, the CNN cannot detect it and learning does not proceed at all.

To improve this, it is necessary to increase the image resolution or to prepare images in which the orientation of the test pieces is aligned. As mentioned in the prior research, if abnormal data is available, looking closely at the images and analyzing them is also effective.

As a recommended procedure: first perform the two-class classification with self-supervised learning; then try the three-class classification, and if the AUC worsens, analyze the images carefully.

Summary
