[PYTHON] Rapidly build an image classification model with Azure Custom Vision and implement it with Flask

Overview

At a study session, I built a supervised learning web application that works with image data, referring to the following book.

Machine learning utilization guide to learn in practice

The finished application is here. URL: https://pokemonclassfication.herokuapp.com/

(Demo GIF: sample run of the Pokemon app)

When you upload a Pokemon image, the app classifies it and displays information about the estimated Pokemon. The analysis model uses Azure Custom Vision, and the web framework is Flask.

This article mainly describes the following.

  1. Create an image data collection script with Google Image Crawler
  2. Build an analytical model with Custom Vision
  3. Model validation
  4. Implementation utilizing Custom Vision API

For a detailed explanation of the Flask implementation, I recommend reading the book "Machine learning utilization guide to learn in practice" and the official Flask reference.

Click here for the web app code: Pokemon image classification app code

1. Create an image data collection script with Google Image Crawler

Building an image classification model with supervised learning requires a large number of images. Collecting them by hand wastes man-hours, so I wanted to automate the process to some extent.

The image collection script uses the Python package icrawler. Rather than just collecting images, it sets aside 20% of them as test data in a separate folder. The code is as follows.

collect_img_poke.py


import os
import glob
import random
import shutil
from icrawler.builtin import GoogleImageCrawler

#Root directory to save images
root_dir = 'pokemon/'
#Pokemon image search keyword list
pokemonnames = ['Pikachu','Squirtle','Charmander','Bulbasaur','Snorlax']

#Number of collected image data
data_count = 100

for pokemonname in pokemonnames:
    crawler = GoogleImageCrawler(storage={'root_dir':root_dir + pokemonname + '/train'})
    
    filters = dict(
        size = 'large',
        type = 'photo'
    )
    
    #Performing crawling
    crawler.crawl(
        keyword=pokemonname,
        filters=filters,
        max_num=data_count
    )
    
    #Delete all files if the test directory from the last run exists
    
    if os.path.isdir(root_dir + pokemonname + '/test'):
        shutil.rmtree(root_dir + pokemonname + '/test')
    os.makedirs(root_dir + pokemonname + '/test')
    
    #Get a list of download files
    filelist = glob.glob(root_dir + pokemonname + '/train/*')
    #Extract 20% of downloads as test data
    test_ratio = 0.2
    testfiles = random.sample(filelist, int(len(filelist) * test_ratio))
    
    for testfile in testfiles:
        shutil.move(testfile, root_dir + pokemonname + '/test/')

The code above is almost the same as the version described in "Machine learning utilization guide to learn in practice". However, as of September 13, 2020, this code alone does not work.

The cause seems to be a change in the Google image search API. A workaround is shown below. Reference: Google Crawler is down #65

It seems that the parse method in /icrawler/builtin/google.py should be changed as follows. The commented-out lines show the code before the change.

def parse(self, response):
    soup = BeautifulSoup(
        response.content.decode('utf-8', 'ignore'), 'lxml')
    #image_divs = soup.find_all('script')
    image_divs = soup.find_all(name='script')
    for div in image_divs:
        #txt = div.text
        txt = str(div)
        #if not txt.startswith('AF_initDataCallback'):
        if 'AF_initDataCallback' not in txt:
            continue
        if 'ds:0' in txt or 'ds:1' not in txt:
            continue
        #txt = re.sub(r"^AF_initDataCallback\({.*key: 'ds:(\d)'.+data:function\(\){return (.+)}}\);?$",
        #             "\\2", txt, 0, re.DOTALL)
        #meta = json.loads(txt)
        #data = meta[31][0][12][2]
        #uris = [img[1][3][0] for img in data if img[0] == 1]
        
        uris = re.findall(r'http.*?\.(?:jpg|png|bmp)', txt)
        return [{'file_url': uri} for uri in uris]
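The core of the workaround is the regex that pulls image URLs straight out of the inlined script data instead of parsing Google's JSON structure. Just that extraction step can be sketched on its own with the standard library (the helper name and sample string below are my own illustration, not part of icrawler):

```python
import re

def extract_image_urls(script_text):
    """Pull image URLs (jpg/png/bmp) out of a Google results <script> blob."""
    return re.findall(r'http.*?\.(?:jpg|png|bmp)', script_text)

# Hypothetical snippet resembling the AF_initDataCallback payload
sample = ('AF_initDataCallback({data: ["https://example.com/a.jpg",'
          ' "https://example.com/b.png", "not-a-url"]});')
print(extract_image_urls(sample))  # ['https://example.com/a.jpg', 'https://example.com/b.png']
```

The non-greedy `.*?` keeps each match from swallowing the rest of the blob, so every URL ending in a known image extension is captured separately.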

You can find the directory where icrawler is actually installed as follows:

>>> import icrawler
>>> icrawler.__path__
['/~~~~~~~~/python3.7/site-packages/icrawler']

After running the script, the directory hierarchy should look like this:

├── pokemon
│   ├── Pikachu
│   │   ├── test
│   │   └── train
│   ├── Charmander
│   │   ├── test
│   │   └── train
│   ├── Squirtle
│   │   ├── test
│   │   └── train
│   ├── Bulbasaur
│   │   ├── test
│   │   └── train
│   └── Snorlax
│       ├── test
│       └── train
│
└── collect_img_poke.py

2. Build an analytical model with Custom Vision

Custom Vision is an Azure service that lets you build image classification models without programming. It also makes it easy to publish a model as an API. To use it, you need an Azure subscription; you can try it free for a month. See: Azure registration, Custom Vision Portal

For the GUI procedure for building a model in Custom Vision, refer to: Quick Start: How to Build Classifiers in Custom Vision

Below are the simple steps after signing in to the Custom Vision portal.

  1. Select New Project to create your first project. The Create New Project dialog box is displayed. Enter any values in the text boxes and use the settings below for the check boxes.

(screenshot: Create New Project settings)

  2. Click Add Image.

  3. Tag the images and upload them. This time, I uploaded images of Pikachu, Charmander, Squirtle, Bulbasaur, and Snorlax.

(screenshot: uploading tagged images)

  4. Click Train in the upper right corner, select Quick Training, and then click Train in the modal window.

The image classification model is built by the above procedure. The trained model's information is as follows.

(screenshot: training results of the model)

The accuracy is rough this time because the automatically collected images were uploaded without screening. If you want to improve accuracy, screen the collected images first; for example, images of people cosplaying as Pikachu may be mixed in.

  5. Then simply click the Publish button in the upper left, and the analysis model is published on the web as an API.

  6. Click Prediction URL for information on connecting to the API.

This time, in order to upload and analyze images, we use the URL and Prediction-Key shown in that dialog.

As the procedure above shows, the strength of Azure Custom Vision is that you can easily build an image classification model and publish it on the web as an API.

3. Model validation

Let the trained model analyze the images downloaded for testing and check the accuracy rate. Create the following file in the same directory as collect_img_poke.py, set the published URL and Key of the model, and run it.

predictions_poke_test.py


import glob
import requests
import json

base_url = '<API URL>'
prediction_key = '<Key>'

poke_root_dir = 'pokemon/'
#List of Pokemon names to be verified
pokemonnames = ['Pikachu','Squirtle','Charmander','Bulbasaur','Snorlax']

for pokename in pokemonnames:
    testfiles = glob.glob(poke_root_dir + pokename + '/test/*')
    data_count = len(testfiles)
    true_count = 0
    
    for testfile in testfiles:
        headers = {
            'Content-Type': 'application/json',    
            'Prediction-Key': prediction_key
        }
        
        params = {}
        predicts = {}
        data = open(testfile, 'rb').read()
        response = requests.post(base_url, headers=headers, params=params, data=data)
        results = json.loads(response.text)
        
        try:
            #Loop by the number of tags in the prediction result
            for prediction in results['predictions']:
                #Predicted Pokemon and its probability are linked and stored
                predicts[prediction['tagName']] = prediction['probability']
            #Select the Pokemon with the highest probability as the prediction result
            prediction_result = max(predicts, key=predicts.get)
        
            #Increase the number of correct answers if the prediction results match
            if pokename == prediction_result:
                true_count += 1

        #Custom Vision returns an error (no 'predictions' key) for images over its 6MB limit, so skip them
        except KeyError:
            data_count -= 1
            continue
    
    #Calculation of correct answer rate
    accuracy = (true_count / data_count) * 100
    print('Pokemon name:' + pokename)
    print('Correct answer rate:' + str(accuracy) + '%')
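The step that picks the final label, max(predicts, key=predicts.get), simply takes the tag with the highest probability. With made-up numbers (not real API output):

```python
#Hypothetical prediction result in the shape the script builds up
predicts = {'Pikachu': 0.93, 'Charmander': 0.04, 'Squirtle': 0.02}

#max() over the dict keys, ranked by their probability values
prediction_result = max(predicts, key=predicts.get)
print(prediction_result)  # Pikachu
```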

If output like the following appears, it is working. The model's accuracy is low; it probably makes rough judgments based mainly on color.

$ python predictions_poke_test.py
Pokemon name:Pikachu
Correct answer rate:95.45454545454545%
Pokemon name:Squirtle
Correct answer rate:95.23809523809523%
Pokemon name:Charmander
Correct answer rate:81.81818181818183%
Pokemon name:Bulbasaur
Correct answer rate:85.0%
Pokemon name:Snorlax
Correct answer rate:95.83333333333334%

4. Implementation utilizing Custom Vision API

This app is implemented in Flask. The outline of the app is shown below.

(diagram: application overview, steps ① to ⑤)

The web application sends the uploaded image to Custom Vision for analysis and acquires the result. Based on that result, it fetches detailed information from the DB, embeds it in HTML, and displays it on the screen. Flask and Django both use this MTV (Model-Template-View) pattern. For more on Flask, see "Machine learning utilization guide to learn in practice" and the official reference. Here, I describe steps ② to ⑤ in the figure: the integration with Custom Vision through the retrieval of DB information.

Click here for the full code: Pokemon image classification app code

Below is the corresponding code.

models.py


import flaski.database
import requests
import json
import os


base_url = '<API URL>'
prediction_key = '<Key>'
POKEMON_FOLDER =  './static/images/pokemon/'

#Prediction probability threshold (percentage)
threshold = 60

#Get Pokemon information from DB and return it as dictionary type
def get_pokemon_data(pokemonname):
    ses = flaski.database.db_session()
    pokemon = flaski.database.Pokemon
    pokemon_data = ses.query(pokemon).filter(pokemon.pokemon_name == pokemonname).first()

    pokemon_data_dict = {}
    if not pokemon_data is None:
        pokemon_data_dict['pokemon_name'] = pokemon_data.pokemon_name
        pokemon_data_dict['wiki_url']        = pokemon_data.wiki_url
        pokemon_data_dict['picture_path']    = os.path.join(POKEMON_FOLDER, pokemon_data.pokemon_name + '.png')

    return pokemon_data_dict

#Model API call
def callAPI(uploadFile):
    #Predictive execution
    headers = {
        'Content-Type': 'application/json',
        'Prediction-Key': prediction_key
    }
    params = {}
    predicts = {}
    data = open(uploadFile, 'rb').read()
    response = requests.post(base_url, headers=headers, params=params, data=data)
    response_list = json.loads(response.text)
    result = []

    try:
        #Loop by the number of tags in the prediction result
        for prediction in response_list['predictions']:
            if len(get_pokemon_data(prediction['tagName'])) != 0:
                #Adopt predictions whose probability exceeds the threshold
                if prediction['probability'] * 100 > threshold:
                    result.append(get_pokemon_data(prediction['tagName']))
        return result

    #Custom Vision returns an error (no 'predictions' key) for images over its 6MB limit
    except KeyError:
        return result


callAPI(uploadFile) uploads an image to Custom Vision and retrieves the result, almost the same as the script described in 3. Model validation. The difference is the prediction probability threshold (threshold), so that only sufficiently confident predictions are returned. In the code above, we only fetch information for Pokemon predicted with a probability of 60% or more.
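The threshold filter can be seen in isolation with dummy data (the tag names and probabilities below are illustrative, not real API output):

```python
threshold = 60  # percent, as in models.py

#Hypothetical Custom Vision 'predictions' list
predictions = [
    {'tagName': 'Pikachu', 'probability': 0.93},
    {'tagName': 'Charmander', 'probability': 0.05},
    {'tagName': 'Snorlax', 'probability': 0.61},
]

#Keep only tags whose probability (as a percentage) exceeds the threshold
kept = [p['tagName'] for p in predictions if p['probability'] * 100 > threshold]
print(kept)  # ['Pikachu', 'Snorlax']
```

Note that, unlike the validation script which picks a single best label, this keeps every tag above the threshold, so the app can show more than one candidate.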

get_pokemon_data(pokemonname) connects to the DB table created via Flask with the following code, and then retrieves a specific Pokemon's information with the filter call.

ses = flaski.database.db_session()
pokemon = flaski.database.Pokemon
pokemon_data = ses.query(pokemon).filter(pokemon.pokemon_name == pokemonname).first()

In this way, Flask (via SQLAlchemy) makes it easy to get DB information without writing raw SQL. You can also create DB tables in code.
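As a sketch of what such a table definition and query might look like: this assumes SQLAlchemy, which the ses.query(...).filter(...).first() calls suggest; the column names mirror models.py, but the engine setup and sample row are purely illustrative, not the app's actual flaski.database module.

```python
from sqlalchemy import Column, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

#Table defined in code, analogous to flaski.database.Pokemon
class Pokemon(Base):
    __tablename__ = 'pokemon'
    pokemon_name = Column(String, primary_key=True)
    wiki_url = Column(String)

#Create the table in an in-memory SQLite DB and query it without raw SQL
engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)
ses = sessionmaker(bind=engine)()
ses.add(Pokemon(pokemon_name='Pikachu', wiki_url='https://example.com/pikachu'))
ses.commit()

row = ses.query(Pokemon).filter(Pokemon.pokemon_name == 'Pikachu').first()
print(row.wiki_url)  # https://example.com/pikachu
```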

The above code is used to acquire the analysis results from Custom Vision. As you can see, developing apps using Custom Vision is very easy.

Finally

Azure Custom Vision makes it easy to build image classification models, so it was a great starting point for developing machine learning apps. Although the book I referenced explains things in detail, it was published last year and no longer reflects various specification changes (the crawler, etc.), which is why I wrote this article describing my updated implementation. I don't say much about the Flask implementation because the book covers it well.

I hope this article gives you a rough overview of creating machine learning apps.
