Deploy Python face recognition model on Heroku and use it from Flutter ②

In the previous article, I covered deploying a Python face recognition model on Heroku.

Deploy Python face recognition model to Heroku and use it from Flutter ①

This time, I would like to show how to call that model from a mobile application and actually perform face comparison. I used Flutter to build a sample mobile app.

About the author

I tweet about application development using face recognition on Twitter. https://twitter.com/studiothere2

A diary of the app's development is serialized on note. https://note.com/there2

App overview

[Screenshot: app overview]

It is a simple app that judges whether two images show the same person: you select two images from the gallery and press the compare button at the bottom right. The app gets the embedding of each image from the web service built in the previous article and calculates the L2 norm between the two embeddings. If the L2 norm is less than 0.6, the images are judged to be the same person; at 0.6 or more, they are judged to be different people.

For example, if you select two different people as shown below, the L2 norm will be 0.6 or more and they will be judged to be different people.

[Screenshot: two different people judged as different]

If the L2 norm is 0.5 or less, the two images are very likely the same person; 0.6 is the threshold that judges same/different with an accuracy of about 99%.
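For reference, the L2 norm between two embeddings is just the square root of the sum of squared element-wise differences. Below is a minimal standalone Dart sketch of this judgment using the same 0.6 threshold (the function names here are mine, not part of the app's code):

import 'dart:math';

/// Euclidean (L2) distance between two embeddings of equal length.
double l2Norm(List<double> a, List<double> b) {
  var sum = 0.0;
  for (var i = 0; i < a.length; i++) {
    final diff = a[i] - b[i];
    sum += diff * diff;
  }
  return sqrt(sum);
}

/// Same-person judgment with the 0.6 threshold used in this article.
bool isSamePerson(List<double> a, List<double> b) => l2Norm(a, b) < 0.6;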

Code commentary

Usage package

pubspec.yaml


dependencies:
  flutter:
    sdk: flutter
  image_picker: ^0.6.6+1
  ml_linalg: ^12.7.1
  http: ^0.12.1
  http_parser: ^3.1.4

- Images are fetched from the gallery with `image_picker`. It can get images from both the gallery and the camera.
- The L2 norm is calculated with `ml_linalg`, a Dart library for vector operations.
- The web service is called with `http` and `http_parser`.

Flutter source code

First, the import section: importing the required libraries.

main.dart


import 'dart:convert';
import 'dart:typed_data';
import 'package:flutter/material.dart';
import 'package:image_picker/image_picker.dart';
import 'package:ml_linalg/linalg.dart';
import 'package:http/http.dart' as http;
import 'package:http_parser/http_parser.dart';
import './secret.dart';

secret.dart holds the information for accessing the web service. It is not checked into git, so substitute your own values as appropriate.
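For reference, a minimal secret.dart might look like the sketch below. Only `yourRemoteUrl` is referenced later in this article; the URL here is a placeholder, not the actual service endpoint:

// secret.dart (kept out of git).
// Placeholder value -- replace with your own Heroku app's endpoint.
const String yourRemoteUrl = 'https://your-app-name.herokuapp.com/';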

main.dart


void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: MyHomePage(title: 'Flutter Demo Home Page'),
    );
  }
}

class MyHomePage extends StatefulWidget {
  MyHomePage({Key key, this.title}) : super(key: key);

  final String title;

  @override
  _MyHomePageState createState() => _MyHomePageState();
}

So far, nothing has been changed from the defaults generated for a new project. Next is the main class.

main.dart


class _MyHomePageState extends State<MyHomePage> {
  ///Image for comparison 1
  Uint8List _cmpImage1;

  ///Image for comparison 2
  Uint8List _cmpImage2;

  //Euclidean distance between two faces.
  double _distance = 0;

  @override
  void initState() {
    super.initState();
  }

Member variables hold the byte data (`Uint8List`) of the two images to be compared, and a `_distance` variable holds the L2 norm (Euclidean distance) between the embeddings of the two images.

Loading images

main.dart


  Future<Uint8List> _readImage() async {
    // Open the gallery picker; the result is a File (null if the user cancels).
    var imageFile = await ImagePicker.pickImage(source: ImageSource.gallery);
    return imageFile.readAsBytesSync();
  }

The `ImagePicker` library is used to get an image from the gallery. The return value is of type `File`, so it is converted to `Uint8List` with the `readAsBytesSync()` method. `Uint8List` is an array of `int` values that can be handled as byte data.

A function that returns the comparison result from the distance of the L2 norm

main.dart


  ///Returns similarity depending on the Euclidean distance between the two images
  String _getCompareResultString() {
    if (_distance == 0) {
      return "";
    } else if (_distance < 0) {
      return "processing....";
    } else if (_distance < 0.6) {
      return "Same person";
    } else {
      return "Another person";
    }
  }

This function returns the image comparison result as text according to the value of `_distance` (the L2 norm): -1 means processing, less than 0.6 is judged to be the same person, and 0.6 or more a different person. It is called from the widget and displayed on the screen.

WEB service caller

It's a little long, so I'll split it up and go through it in order.

main.dart


  void uploadFile() async {
    setState(() {
      _distance = -1;
    });
    var response;
    var postUri = Uri.parse(yourRemoteUrl);
    var request1 = http.MultipartRequest("POST", postUri);

First, `_distance` is set to -1 so the screen shows that processing is in progress. Replace `postUri` with your own URL. An http request is prepared here.

main.dart


    //First file
    debugPrint("start: " + DateTime.now().toIso8601String());
    request1.files.add(http.MultipartFile.fromBytes('file', _cmpImage1.toList(),
        filename: "upload.jpeg", contentType: MediaType('image', 'jpeg')));
    response = await request1.send();
    if (response.statusCode == 200) print("Uploaded1!");

The image obtained from the gallery is attached to the http request and the request is sent. A status code of 200 means the upload succeeded.

main.dart


    var featureString1 = await response.stream.bytesToString();
    List<double> embeddings1 =
        (jsonDecode(featureString1) as List<dynamic>).cast<double>();
    debugPrint("end: " + DateTime.now().toIso8601String());

The web service's return value is read from the byte stream into a string (`featureString1`), decoded with `jsonDecode`, and cast to `double`, yielding an array of doubles. This is the embedding of the image; comparing two embeddings tells you whether they belong to the same person.
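For reference, the response body is assumed here to be a flat JSON array of doubles (128 values if the backend uses the face_recognition library's encodings, which is consistent with the 0.6 threshold). A standalone, illustrative parsing example with made-up, truncated values:

import 'dart:convert';

void main() {
  // Made-up, truncated embedding purely for illustration.
  const sampleBody = '[-0.0934, 0.1127, 0.0521, -0.0408]';
  final embedding = (jsonDecode(sampleBody) as List<dynamic>).cast<double>();
  print(embedding.length); // 4 here; a real embedding would be much longer
}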

main.dart


    //Second file
    var request2 = http.MultipartRequest("POST", postUri);
    request2.files.add(http.MultipartFile.fromBytes('file', _cmpImage2.toList(),
        filename: "upload.jpeg", contentType: MediaType('image', 'jpeg')));
    response = await request2.send();
    if (response.statusCode == 200) print("Uploaded2!");
    var featureString2 = await response.stream.bytesToString();
    List<double> embeddings2 =
        (jsonDecode(featureString2) as List<dynamic>).cast<double>();

The same is done for the second image. Next is the L2 norm calculation.

main.dart


    var distance = Vector.fromList(embeddings1)
        .distanceTo(Vector.fromList(embeddings2), distance: Distance.euclidean);

    setState(() {
      _distance = distance;
    });
  }

This is very easy thanks to the `ml_linalg` library. Each embedding (an array of doubles) is converted to a `Vector` with `Vector.fromList`, and the distance is obtained with `distanceTo`, specifying the Euclidean distance (L2 norm) as the distance metric.
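Here is a tiny standalone usage example of the same ml_linalg call, with made-up three-dimensional vectors (real embeddings are much longer):

import 'package:ml_linalg/linalg.dart';

void main() {
  final a = Vector.fromList([1.0, 2.0, 3.0]);
  final b = Vector.fromList([1.0, 2.0, 4.0]);
  // Only the last element differs by 1, so the Euclidean distance is 1.0.
  final d = a.distanceTo(b, distance: Distance.euclidean);
  print(d); // 1.0
}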

Finally, the distance is stored in the member variable `_distance`, and we're done.

Screen drawing part

main.dart


  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text(widget.title),
      ),
      body: Column(
        mainAxisAlignment: MainAxisAlignment.start,
        children: <Widget>[
          Row(
            crossAxisAlignment: CrossAxisAlignment.start,
            children: <Widget>[
              Expanded(
                flex: 1,
                child: Column(
                  mainAxisAlignment: MainAxisAlignment.start,
                  children: <Widget>[
                    RaisedButton(
                      onPressed: () async {
                        var cmpImage = await _readImage();
                        setState(() {
                          _cmpImage1 = cmpImage;
                        });
                      },
                      child: Text("Loading the first image"),
                    ),
                    Text("First image"),
                    Container(
                      child:
                          _cmpImage1 == null ? null : Image.memory(_cmpImage1),
                    ),
                  ],
                ),
              ),
              Expanded(
                flex: 1,
                child: Column(
                  children: <Widget>[
                    RaisedButton(
                      onPressed: () async {
                        var cmpImage = await _readImage();
                        setState(() {
                          _cmpImage2 = cmpImage;
                        });
                      },
                      child: Text("Loading the second image"),
                    ),
                    Text("Second image"),
                    Container(
                      child:
                          _cmpImage2 == null ? null : Image.memory(_cmpImage2),
                    ),
                  ],
                ),
              )
            ],
          ),
          SizedBox(height: 10),
          Text("Face similarity comparison results"),
          Text(_getCompareResultString()),
          Text("The L2 norm is$_distance"),
        ],
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: uploadFile,
        tooltip: 'Image comparison',
        child: Icon(Icons.compare),
      ),
    );
  }

It's a little long but simple. Each of the two images is loaded with its own load button. The loaded `Uint8List` data can be displayed on screen with `Image.memory`. The comparison result is rendered as text by `_getCompareResultString()`. The distance between the image embeddings is calculated by calling the web service from `onPressed` of the `FloatingActionButton`.

Finally

As long as the faces are clearly visible in the photos, the app can judge whether they show the same person, which is quite impressive. Recently, some models can reportedly recognize faces even when they are masked. Face recognition raises privacy concerns, so you need to be careful how you use it, but it would be fun to develop a service that makes good use of it.
