[PYTHON] Create an AI that identifies Zuckerberg's face by deep learning ④ (WEB construction)

Part 1: Create an AI that identifies Zuckerberg's face by deep learning ① (Learning data preparation) / Part 2: Create an AI that identifies Zuckerberg's face by deep learning ② (AI model construction) / Part 3: Create an AI that identifies Zuckerberg's face by deep learning ③ (Data learning)

This article is Part 4. We are building an "AI that identifies Zuckerberg's face" using TensorFlow, Google's deep learning library. Sample videos of what I made this time are here.

スクリーンショット 2017-05-07 12.33.41.png zgif2.gif

In Parts 1 through 3, we collected face data and trained on it with TensorFlow. This time we use that training result to detect the face in an arbitrary image and judge how close the extracted face is to each of the learned faces. Finally, we use Flask, a Python web application framework, so that the whole identification flow can be run from a web interface. If you follow along to the end, I think you'll feel, "Oh, this really is like AI!" (lol)

First, consider the working directory structure (preparation)

With the later web integration in mind, the directory structure looks like the following.

Directory structure


/tensorflow
  main.py
  model.ckpt (file where the training results are saved)
  eval.py (process that returns the judgment result for an arbitrary image)
  web.py (web-related processing using Flask)
  /opencv (cloned from the official opencv GitHub repository)
  /static (images used by Flask)
    /images
      /default
      /cut_face
      /face_detect
  /data
    /train
      /zuckerbuerg
      /elonmusk
      /billgates
      data.txt
    /test
      /zuckerbuerg
      /elonmusk
      /billgates
      data.txt
Plus the files created when installing TensorFlow, etc.

This time I will mainly edit the files `eval.py` and `web.py`. To save the face-detection result of the input image and the cut-out face image, I create a directory called `/static/images` in preparation for the later web construction, and save the input and processed images there. I also use OpenCV, which was used in Part 1, for face detection and cropping, so I placed the folder cloned from the official repository (opencv/opencv) in the project as well.
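As a side note, if you want to create these output directories from Python rather than by hand, a minimal sketch would look like the following (the paths simply mirror the structure above; `mkdir -p` from the shell works just as well):

#Minimal sketch: create the image output directories used later
#(the paths just mirror the directory structure described above)
import os

for path in ['./static/images/default',
             './static/images/cut_face',
             './static/images/face_detect']:
    if not os.path.isdir(path):
        os.makedirs(path)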

④ Make it possible to judge an arbitrary image using the learning result

Since we actually trained the face data with TensorFlow in Part 3, the `model.ckpt` file specified in the last line of `main.py`, `save_path = saver.save(sess, "model.ckpt")`, should now exist. The parameters adjusted during training are saved in this file, so by loading `model.ckpt` you can easily use the model that TensorFlow actually trained.
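For reference, here is a minimal sketch of how saving and restoring a checkpoint works with the old `tf.train.Saver` API used throughout this series (the variable and the file name `./model_example.ckpt` are just placeholders for illustration):

#Minimal sketch of writing and reading back a checkpoint with tf.train.Saver
#(variable and file name are placeholders; the old TF API of this series is assumed)
import tensorflow as tf

w = tf.Variable(tf.zeros([2, 2]), name='w')
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    saver.save(sess, './model_example.ckpt')     #save the parameters to a checkpoint file
    saver.restore(sess, './model_example.ckpt')  #load the saved parameters back into the graph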

Let's first write the TensorFlow-related processing so that a face can be judged from an arbitrary image. (I referred to the article "Momokuro member face recognition with TensorFlow (Part 2)". Thank you m(_ _)m)

eval.py


#!/usr/bin/env python
#! -*- coding: utf-8 -*-

import sys
import numpy as np
import cv2
import tensorflow as tf
import os
import random
import main

#OpenCV default face classifier path
cascade_path = './opencv/data/haarcascades/haarcascade_frontalface_default.xml'
faceCascade = cv2.CascadeClassifier(cascade_path)

#Identification label and the name corresponding to each label number
HUMAN_NAMES = {
  0: u"Zuckerberg",
  1: u"Elon Musk",
  2: u"Bill Gates"
}

#Judge the given image (img_path) using the learning results (ckpt_path)
def evaluation(img_path, ckpt_path):
  #Reset the default graph (honestly, I'm not entirely sure what this does...)
  tf.reset_default_graph()
  #Open image
  f = open(img_path, 'r')
  #Image loading
  img = cv2.imread(img_path, cv2.IMREAD_COLOR)
  #Convert to a grayscale image
  gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
  face = faceCascade.detectMultiScale(gray, 1.1, 3)
  if len(face) > 0:
    for rect in face:
      #Give the processed image a random name (the name itself doesn't matter; a timestamp would also work)
      random_str = str(random.random())
      #Draw a red rectangle around the detected face
      cv2.rectangle(img, tuple(rect[0:2]), tuple(rect[0:2]+rect[2:4]), (0, 0, 255), thickness=2)
      #Where to save the image with the face surrounded by a red line
      face_detect_img_path = './static/images/face_detect/' + random_str + '.jpg'
      #Saving the image with the face surrounded by a red line
      cv2.imwrite(face_detect_img_path, img)
      x = rect[0]
      y = rect[1]
      w = rect[2]
      h = rect[3]
      #Save the image of the detected face cut out
      cv2.imwrite('./static/images/cut_face/' + random_str + '.jpg', img[y:y+h, x:x+w])
      #Path of the cut-out face image to pass to TensorFlow
      target_image_path = './static/images/cut_face/' + random_str + '.jpg'
  else:
    #If no face is found, the process ends
    print 'image:NoFace'
    return
  f.close()

  f = open(target_image_path, 'r')
  #Array to put data
  image = []
  #Image loading
  img = cv2.imread(target_image_path)
  #Resize to 28px * 28px
  img = cv2.resize(img, (28, 28))
  #Flatten the image into a single row and scale pixel values to floats in the range 0-1
  image.append(img.flatten().astype(np.float32)/255.0)
  #Convert to numpy format so that it can be processed by TensorFlow
  image = np.asarray(image)
  #Compute the probability of each label for the input image (calls inference() defined in main.py)
  logits = main.inference(image, 1.0)
  #With InteractiveSession we can call eval() without passing the session explicitly
  sess = tf.InteractiveSession()
  #Prepare the Saver for restore (loading the learned parameters)
  saver = tf.train.Saver()
  #Variable initialization
  sess.run(tf.initialize_all_variables())
  if ckpt_path:
    #Reading parameters after learning
    saver.restore(sess, ckpt_path)
  #Same as sess.run(logits)
  softmax = logits.eval()
  #judgment result
  result = softmax[0]
  #Convert the results to percentages and round to one decimal place
  rates = [round(n * 100.0, 1) for n in result]
  humans = []
  #Create a dict of label number, name, and percentage for each person
  for index, rate in enumerate(rates):
    name = HUMAN_NAMES[index]
    humans.append({
      'label': index,
      'name': name,
      'rate': rate
    })
  #Sort in descending order of percentage
  rank = sorted(humans, key=lambda x: x['rate'], reverse=True)

  #Returns the judgment result and the path of the processed image
  return [rank, face_detect_img_path, target_image_path]

#For testing from the command line
if __name__ == '__main__':
  evaluation('testimage.jpg', './model2.ckpt')

The above process should return a list of **[judgment result percentages, path of the image with the face outlined in red, path of the cut-out face image]**. All that remains is to enable the whole flow from a web interface: (1) upload an image → (2) cut out the face and have the AI judge it → (3) display the judgment result.
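For example, building on the command-line test at the bottom of eval.py, a minimal sketch for printing the returned ranking looks like this ('testimage.jpg' is a placeholder; use any image containing a face, and run it from the project root so eval.py and the checkpoint file can be found):

#Minimal usage sketch of evaluation() ('testimage.jpg' is a placeholder path)
import eval

result = eval.evaluation('testimage.jpg', './model2.ckpt')
if result:
    rank, face_detect_img_path, cut_face_img_path = result
    #rank is sorted in descending order of percentage
    for entry in rank:
        print('%s: %s%%' % (entry['name'], entry['rate']))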

The deep learning part with TensorFlow ends here; what follows is the construction of the web interface.

⑤ Make it possible to judge images on the WEB

This time we will use Flask, a web application framework for Python that makes it very easy to create web applications. I will omit the details of how to use Flask here. (Reference: "Try using the web application framework Flask")
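For readers who have never touched Flask, here is a minimal, self-contained sketch of how routing works (the route and message are just placeholders; the real application below does essentially the same thing, with an upload form added):

#Minimal Flask sketch (route name and message are placeholders)
from flask import Flask

app = Flask(__name__)

@app.route('/hello')
def hello():
    return 'Hello, Flask!'

if __name__ == '__main__':
    app.run()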

The flow itself is really simple: upload an image → pass the image to eval.py → display the result.

web.py


# -*- coding: utf-8 -*-

import tensorflow as tf
import multiprocessing as mp

from flask import Flask, render_template, request, redirect, url_for
import numpy as np
from werkzeug import secure_filename
import os
import eval

#Create the Flask application instance named app
app = Flask(__name__)
app.config['DEBUG'] = True
#Where to save the posted image
UPLOAD_FOLDER = './static/images/default'

#Routing: when '/' is accessed
@app.route('/')
def index():
  return render_template('index.html')

#Action when posting an image
@app.route('/post', methods=['GET','POST'])
def post():
  if request.method == 'POST':
    if not request.files['file'].filename == u'':
      #Save uploaded file
      f = request.files['file']
      img_path = os.path.join(UPLOAD_FOLDER, secure_filename(f.filename))
      f.save(img_path)
      #Pass the uploaded image to eval.py
      result = eval.evaluation(img_path, './model2.ckpt')
    else:
      result = []
    return render_template('index.html', result=result)
  else:
    #Redirect back to the index page (e.g., when accessed directly or on an error)
    return redirect(url_for('index'))

if __name__ == '__main__':
  app.debug = True
  app.run(host='0.0.0.0')
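To try it locally, run `python web.py` from the project root (so the relative paths like `./static/images` resolve) and open http://localhost:5000/ in a browser; since `app.run()` is called without a port argument, Flask listens on its default port 5000.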

Here are the view and CSS files:
**index.html** (main view): https://github.com/AkiyoshiOkano/zuckerberg-detect-ai/blob/First_commit/templates/index.html
**layout.html** (common layout part): https://github.com/AkiyoshiOkano/zuckerberg-detect-ai/blob/First_commit/templates/layout.html
**style.css** (CSS file): https://github.com/AkiyoshiOkano/zuckerberg-detect-ai/blob/First_commit/static/style.css

The final directory structure looks like this.

Directory structure


/tensorflow
  main.py (TensorFlow model definition and training)
  model.ckpt (file where the training results are saved)
  eval.py (process that returns the judgment result for an arbitrary image)
  web.py (web-related processing using Flask)
  /opencv (cloned from the official opencv GitHub repository)
  /static (images and CSS used by Flask)
    /images
      /default (uploaded images)
      /cut_face (cut-out face images)
      /face_detect (images with the face outlined in red)
    style.css
  /templates (Flask view templates)
    index.html
    layout.html
  /data
    /train
      /zuckerbuerg
      /elonmusk
      /billgates
      data.txt
    /test
      /zuckerbuerg
      /elonmusk
      /billgates
      data.txt
Plus the files created when installing TensorFlow, etc.

It seems that TensorFlow can be run on Heroku, so I'll try it when I have time.

This completes the procedure for creating a Zuckerberg detector with TensorFlow! Thanks to everyone who helped. I hope this article will be helpful to someone m(_ _)m zgif2.gif

Development postscript

This was my first attempt at machine learning (deep learning), and I was impressed that I could build something like this, and come to understand deep learning this much, using only information on the web, without any books. The information available online really is great. That said, I feel this was only the very first step in machine learning (deep learning). (Perhaps I've gone from "inexperienced" to "beginner"?)

As for the AI that was actually trained, **images of women tend to be judged as Bill Gates or Elon Musk with a probability above 95%**. Since the training data consisted entirely of male faces, it cannot judge women's faces well. Because the probabilities of the three labels have to add up to 100%, I suppose a result like this was inevitable, but it is still a difficult problem.

スクリーンショット 2017-05-07 1.05.31.png

That said, I was quite surprised at the accuracy of deep learning: **Zuckerberg, Elon Musk, and Bill Gates, whose face data were used for training, are identified almost without a miss**. It really does get them right. (Although it does occasionally miss only Elon Musk.)

zuck5.png elon5.png gates3.png

I was also impressed by how convenient TensorFlow is: even without understanding the complicated mathematics and calculus going on behind deep learning, you can get something running with its built-in functions. Studying further would greatly expand what I can do, so I would like to take this opportunity to catch up on machine learning. There were surely many points I did not fully grasp in my first deep learning project, but I thank everyone who read to the end. (I would appreciate it if you could point out anything I have misunderstood m(_ _)m)

I also made a video of the development process as my **2017 Golden Week independent study project**, so if you have time, please take a look (laughs). You can also see the judgment results for images of various people. The development video is here.

** "Creating an AI that identifies Zuckerberg's face with deep learning" ** Part 1: Create AI to identify Zuckerberg's face by deep learning ① (Learning data preparation) Part 2: Making AI that identifies Zuckerberg's face by deep learning ② (AI model construction) Part 3: Create AI to identify Zuckerberg's face by deep learning ③ (Data learning) Part 4: Create AI to identify Zuckerberg's face by deep learning ④ (WEB construction)

GitHub:https://github.com/AkiyoshiOkano/zuckerberg-detect-ai
