[PYTHON] Let's recognize emotions with Azure Face

I used Azure Face recently, so I'll write a little about it. Since this is my first post, there may be mistakes and missing parts; I'd appreciate it if you could kindly point them out. :girl_tone1:

My impression of Azure Face was that it was for face recognition, so I thought it was interesting that it can also recognize emotions, and I decided to try it for emotion recognition.

Azure Face

What you can do: face verification, face detection, and emotion recognition. It can apparently also find similar faces, group photos by the same face, and so on. In this article, we will use face detection and emotion recognition.
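For reference, face verification in the SDK looks roughly like this. This is only a sketch from my reading of the SDK (face_id_1 and face_id_2 are assumed to come from earlier detect calls, and face_client is the client created in the next section); this article uses only detection.

# Rough sketch (not used below): check whether two detected faces
# belong to the same person
verify = face_client.face.verify_face_to_face(face_id_1, face_id_2)
print(verify.is_identical, verify.confidence)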

1. Create variables for the Azure endpoint and key

Write this and you'll be able to use it. :ok_woman_tone1:

from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

KEY = 'your_key'
ENDPOINT = 'your_endpoint'
face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))
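By the way, hardcoding the key is fine for a quick experiment, but a slightly safer variant (my own suggestion; the environment variable names are hypothetical) is to read the credentials from the environment:

import os

# Hypothetical variable names; set them in your shell beforehand
KEY = os.environ['AZURE_FACE_KEY']
ENDPOINT = os.environ['AZURE_FACE_ENDPOINT']
face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))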

2. Call the face detection method

params = ['age', 'emotion']
result = face_client.face.detect_with_stream(
    image,
    return_face_id=True,
    return_face_landmarks=False,
    return_face_attributes=params,
)

I basically kept the defaults and changed only return_face_attributes.

image: the image to analyze; this time we pass a stream
return_face_id: if set to True, a face ID is returned
return_face_landmarks: if set to True, face landmarks are returned
return_face_attributes: returns the attribute information passed in this parameter

This time I passed age and emotion to return_face_attributes, so it returns the subject's age and emotion scores.
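The call returns a list of DetectedFace objects, one per face found. A minimal sketch of reading the results, assuming at least one face was detected:

# result is a list of DetectedFace objects
face = result[0]
print(face.face_id)                              # present because return_face_id=True
print(face.face_attributes.age)                  # estimated age
print(face.face_attributes.emotion.happiness)    # each emotion score is in [0, 1]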

A UI of sorts

I want a flow like: press a button → the camera takes a picture → emotion recognition runs → the result is displayed… :upside_down: I'll build it with Kivy.

1. Create a simple Kivy file

[Emotion.py]

# -*- coding: utf-8 -*-
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.lang import Builder

Builder.load_string("""
<MainWidget>:

""")

class MainWidget(BoxLayout):
    pass

class EmotionApp(App):
    title = "E.A."
    def build(self):
        return MainWidget()

if __name__ == "__main__":
    EmotionApp().run()

Screen ↓ E.A1.png (just a black window at this point)

2. Attach the camera and a button

[Emotion.py]

(snip)

Builder.load_string("""
<MainWidget>:
    orientation: 'vertical'
    Camera:
        id: camera
        resolution: (640, 480)
        play: True
    Button:
        text: 'Capture'
        size_hint_y: None
        height: '48dp'
        on_press: root.capture()
""")

class MainWidget(BoxLayout):
    def capture(self):
        print('captured!')
(snip)

↓ Screen (I blurred it heavily so that my face and the background aren't visible, so it's for reference only…) 無題.png

3. Add the face detection method

[Emotion.py]

KEY = 'your_key'
ENDPOINT = 'your_endpoint'
face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

    (snip)
    def capture(self):
        # Grab the current camera frame as raw RGBA bytes
        texture = self.ids['camera'].texture
        nparr = np.frombuffer(texture.pixels, dtype=np.uint8)
        reshaped = np.reshape(nparr, (480, 640, 4))
        # Encode the frame as JPEG and wrap it in a stream for the API
        ret, buf = cv2.imencode('.jpg', reshaped)
        stream = io.BytesIO(buf)

        params = ['age', 'emotion']
        result = face_client.face.detect_with_stream(
            stream,
            return_face_id=True,
            return_face_landmarks=False,
            return_face_attributes=params,
        )
    (snip)
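One caveat I noticed: as far as I know, Kivy texture pixels are RGBA and stored bottom-up, while OpenCV assumes BGR, top-down images. Detection worked for me anyway, but if you want the encoded image upright with correct colors, something like this (my own untested tweak) should do it:

# Optional cleanup before encoding: flip the frame right side up and
# convert RGBA -> BGR. ascontiguousarray is needed because flipud
# returns a view that cv2 may not accept directly.
frame = np.ascontiguousarray(np.flipud(reshaped))
frame = cv2.cvtColor(frame, cv2.COLOR_RGBA2BGR)
ret, buf = cv2.imencode('.jpg', frame)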

4. Show the results in a popup

[Emotion.py]

    (snip)
        Layout = BoxLayout(orientation='vertical')
        faceatts = result[0].face_attributes
        Layout.add_widget(Label(text=("age: " + str(int(faceatts.age)))))
        Layout.add_widget(Label(text="emotion: "))
        Layout.add_widget(Label(text="anger: " + str(faceatts.emotion.anger)))
        Layout.add_widget(Label(text="contempt: " + str(faceatts.emotion.contempt)))
        Layout.add_widget(Label(text="disgust: " + str(faceatts.emotion.disgust)))
        Layout.add_widget(Label(text="fear: " + str(faceatts.emotion.fear)))
        Layout.add_widget(Label(text="happiness: " + str(faceatts.emotion.happiness)))
        Layout.add_widget(Label(text="neutral: " + str(faceatts.emotion.neutral)))
        Layout.add_widget(Label(text="sadness: " + str(faceatts.emotion.sadness)))
        Layout.add_widget(Label(text="surprise: " + str(faceatts.emotion.surprise)))

        popupWindow = Popup(title="Results", content=Layout, size_hint=(None, None), size=(400, 400))
        popupWindow.open()
    (snip)
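As a small extra (my own addition, not in the app above): the emotion scores come back as an msrest model, so you can use its as_dict() method to pick out the strongest emotion instead of listing all eight:

# Pick the emotion with the highest score; as_dict() is inherited
# from the msrest model base class used by the SDK's Emotion type
scores = faceatts.emotion.as_dict()
top = max(scores, key=scores.get)
Layout.add_widget(Label(text="top emotion: %s (%.2f)" % (top, scores[top])))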

Done!

Let's try it out. 無題2.png Click the button! 無題3.png Oh, it correctly picked up that I was smiling! And it guessed me younger than I am… (laughs) :relaxed: Eventually I'd like to polish the look a bit more and dig into how it actually works. :robot: Thank you for reading!

The full code:

[Emotion.py]

# -*- coding: utf-8 -*-
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.lang import Builder
import cv2
import numpy as np
import io
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials
from kivy.uix.label import Label
from kivy.uix.popup import Popup
KEY = 'your_key'
ENDPOINT = 'your_endpoint'
face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))
Builder.load_string("""
<MainWidget>:
    orientation: 'vertical'
    Camera:
        id: camera
        resolution: (640, 480)
        play: True
    Button:
        text: 'Capture'
        size_hint_y: None
        height: '48dp'
        on_press: root.capture()
""")

class MainWidget(BoxLayout):
    def capture(self):
        # Grab the current camera frame as raw RGBA bytes
        texture = self.ids['camera'].texture
        nparr = np.frombuffer(texture.pixels, dtype=np.uint8)
        reshaped = np.reshape(nparr, (480, 640, 4))
        # Encode the frame as JPEG and wrap it in a stream for the API
        ret, buf = cv2.imencode('.jpg', reshaped)
        stream = io.BytesIO(buf)
        params = ['age', 'emotion']
        result = face_client.face.detect_with_stream(
            stream,
            return_face_id=True,
            return_face_landmarks=False,
            return_face_attributes=params,
        )
        if not result:
            return  # no face was detected in the frame

        Layout = BoxLayout(orientation='vertical')
        faceatts = result[0].face_attributes
        Layout.add_widget(Label(text=("age: " + str(int(faceatts.age)))))
        Layout.add_widget(Label(text="emotion: "))
        Layout.add_widget(Label(text="anger: " + str(faceatts.emotion.anger)))
        Layout.add_widget(Label(text="contempt: " + str(faceatts.emotion.contempt)))
        Layout.add_widget(Label(text="disgust: " + str(faceatts.emotion.disgust)))
        Layout.add_widget(Label(text="fear: " + str(faceatts.emotion.fear)))
        Layout.add_widget(Label(text="happiness: " + str(faceatts.emotion.happiness)))
        Layout.add_widget(Label(text="neutral: " + str(faceatts.emotion.neutral)))
        Layout.add_widget(Label(text="sadness: " + str(faceatts.emotion.sadness)))
        Layout.add_widget(Label(text="surprise: " + str(faceatts.emotion.surprise)))

        popupWindow = Popup(title="Results", content=Layout, size_hint=(None, None), size=(400, 400))
        popupWindow.open()
        
class EmotionApp(App):
    title = "E.A."
    def build(self):
        return MainWidget()

if __name__ == "__main__":
    EmotionApp().run()
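For completeness, the packages this script needs. The PyPI names below are to the best of my knowledge; versions may differ by the time you read this.

# Install the dependencies:
#   pip install kivy opencv-python numpy
#   pip install azure-cognitiveservices-vision-face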

