I used Azure Face recently, so I'll write a little about it. Since this is my first post, there may be many mistakes and missing parts. I'd appreciate any kind corrections. :girl_tone1:
My impression of Azure Face was that it's mainly a face recognition service. I thought it was interesting that it can also recognize emotions, so I used it for emotion recognition.
What you can do: face verification, face detection, emotion recognition. It also seems to be able to find similar faces, group photos by the same face, and so on. In this article, we'll use face detection and emotion recognition.
Write the following and you can use it: :ok_woman_tone1:
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

KEY = 'your_key'
ENDPOINT = 'your_endpoint'
face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

# attributes to request along with each detected face
params = ['age', 'emotion']
result = face_client.face.detect_with_stream(
    image,  # a stream (file-like object) containing the image
    return_face_id=True,
    return_face_landmarks=False,
    return_face_attributes=params,
)
I basically kept the defaults and changed only return_face_attributes.
image: the image to analyze; this time we pass it as a stream
return_face_id: if True, a face ID is returned
return_face_landmarks: if True, face landmarks are returned
return_face_attributes: returns attribute information for the attributes passed in this parameter
This time I passed age and emotion to return_face_attributes, so it returns the subject's age and emotions.
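For reference, detect_with_stream returns a list of DetectedFace objects (empty if no face was found), so the result can be inspected roughly like this (a minimal sketch):
# Minimal sketch: inspecting the detection result
if not result:
    print('no face detected')
else:
    face = result[0]
    print('face_id:', face.face_id)
    print('age:', face.face_attributes.age)
    # each emotion score is a confidence between 0 and 1
    print('happiness:', face.face_attributes.emotion.happiness)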
I want it to work like this: press a button → the camera takes a picture → emotion recognition runs → the result is displayed... :upside_down: I'll build it with Kivy.
[Emotion.py]
# -*- coding: utf-8 -*-
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.lang import Builder

Builder.load_string("""
<MainWidget>:
""")

class MainWidget(BoxLayout):
    pass

class EmotionApp(App):
    title = "E.A."

    def build(self):
        return MainWidget()

if __name__ == "__main__":
    EmotionApp().run()
Screen ↓ (just a black window at this point)
[Emotion.py]
(snip)
Builder.load_string("""
<MainWidget>:
    orientation: 'vertical'
    Camera:
        id: camera
        resolution: (640, 480)
        play: True
    Button:
        text: 'Capture'
        size_hint_y: None
        height: '48dp'
        on_press: root.capture()
""")

class MainWidget(BoxLayout):
    def capture(self):
        print('captured!')
(snip)
↓ Screen (I edited the screenshot very roughly so that my face and the background aren't visible, so it's for reference only...)
[Emotion.py]
KEY = 'your_key'
ENDPOINT = 'your_endpoint'
face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))
(snip)
    def capture(self):
        # grab the current frame from the camera widget's texture
        texture = self.ids['camera'].texture
        nparr = np.frombuffer(texture.pixels, dtype=np.uint8)
        reshaped = np.reshape(nparr, (480, 640, 4))
        # encode the frame as JPEG and wrap it in a stream for the API
        ret, buf = cv2.imencode('.jpg', reshaped)
        stream = io.BytesIO(buf)
        params = ['age', 'emotion']
        result = face_client.face.detect_with_stream(
            stream,
            return_face_id=True,
            return_face_landmarks=False,
            return_face_attributes=params,
        )
(snip)
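One caveat (my assumption from the Kivy and OpenCV docs, not something the article verified): texture.pixels is in RGBA order, while cv2.imencode assumes BGR(A), so the encoded JPEG ends up with red and blue swapped. Detection may still work on a channel-swapped image, but converting first is safer:
# Sketch: convert Kivy's RGBA pixels to OpenCV's BGR order before encoding
bgr = cv2.cvtColor(reshaped, cv2.COLOR_RGBA2BGR)
ret, buf = cv2.imencode('.jpg', bgr)
stream = io.BytesIO(buf)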
[Emotion.py]
(snip)
        # show the detected age and emotion scores in a popup
        Layout = BoxLayout(orientation='vertical')
        faceatts = result[0].face_attributes
        Layout.add_widget(Label(text="age: " + str(int(faceatts.age))))
        Layout.add_widget(Label(text="emotion: "))
        Layout.add_widget(Label(text="anger: " + str(faceatts.emotion.anger)))
        Layout.add_widget(Label(text="contempt: " + str(faceatts.emotion.contempt)))
        Layout.add_widget(Label(text="disgust: " + str(faceatts.emotion.disgust)))
        Layout.add_widget(Label(text="fear: " + str(faceatts.emotion.fear)))
        Layout.add_widget(Label(text="happiness: " + str(faceatts.emotion.happiness)))
        Layout.add_widget(Label(text="neutral: " + str(faceatts.emotion.neutral)))
        Layout.add_widget(Label(text="sadness: " + str(faceatts.emotion.sadness)))
        Layout.add_widget(Label(text="surprise: " + str(faceatts.emotion.surprise)))
        popupWindow = Popup(title="Results", content=Layout, size_hint=(None, None), size=(400, 400))
        popupWindow.open()
(snip)
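Each emotion comes back as a confidence score between 0 and 1, so you could also display only the strongest one. A small sketch that would slot into the end of capture (building the dict by hand to avoid assuming any SDK helper):
        # Sketch: pick the emotion with the highest confidence score
        emo = faceatts.emotion
        emotions = {'anger': emo.anger, 'contempt': emo.contempt,
                    'disgust': emo.disgust, 'fear': emo.fear,
                    'happiness': emo.happiness, 'neutral': emo.neutral,
                    'sadness': emo.sadness, 'surprise': emo.surprise}
        top = max(emotions, key=emotions.get)
        Layout.add_widget(Label(text="strongest: %s (%.2f)" % (top, emotions[top])))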
Let's try it. Click the button! Oh~ it could tell that I was smiling properly! And it made me younger... (laughs) :relaxed: Eventually I'd like to polish the look a bit more and learn about how it works under the hood. :robot: Thank you for reading!

Here is the full code:
[Emotion.py]
# -*- coding: utf-8 -*-
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.lang import Builder
from kivy.uix.label import Label
from kivy.uix.popup import Popup
import cv2
import numpy as np
import io
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

KEY = 'your_key'
ENDPOINT = 'your_end_point'
face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

Builder.load_string("""
<MainWidget>:
    orientation: 'vertical'
    Camera:
        id: camera
        resolution: (640, 480)
        play: True
    Button:
        text: 'Capture'
        size_hint_y: None
        height: '48dp'
        on_press: root.capture()
""")

class MainWidget(BoxLayout):
    def capture(self):
        # grab the current frame from the camera widget's texture
        texture = self.ids['camera'].texture
        nparr = np.frombuffer(texture.pixels, dtype=np.uint8)
        reshaped = np.reshape(nparr, (480, 640, 4))
        # encode the frame as JPEG and wrap it in a stream for the API
        ret, buf = cv2.imencode('.jpg', reshaped)
        stream = io.BytesIO(buf)
        params = ['age', 'emotion']
        result = face_client.face.detect_with_stream(
            stream,
            return_face_id=True,
            return_face_landmarks=False,
            return_face_attributes=params,
        )
        # show the detected age and emotion scores in a popup
        Layout = BoxLayout(orientation='vertical')
        faceatts = result[0].face_attributes
        Layout.add_widget(Label(text="age: " + str(int(faceatts.age))))
        Layout.add_widget(Label(text="emotion: "))
        Layout.add_widget(Label(text="anger: " + str(faceatts.emotion.anger)))
        Layout.add_widget(Label(text="contempt: " + str(faceatts.emotion.contempt)))
        Layout.add_widget(Label(text="disgust: " + str(faceatts.emotion.disgust)))
        Layout.add_widget(Label(text="fear: " + str(faceatts.emotion.fear)))
        Layout.add_widget(Label(text="happiness: " + str(faceatts.emotion.happiness)))
        Layout.add_widget(Label(text="neutral: " + str(faceatts.emotion.neutral)))
        Layout.add_widget(Label(text="sadness: " + str(faceatts.emotion.sadness)))
        Layout.add_widget(Label(text="surprise: " + str(faceatts.emotion.surprise)))
        popupWindow = Popup(title="Results", content=Layout, size_hint=(None, None), size=(400, 400))
        popupWindow.open()

class EmotionApp(App):
    title = "E.A."

    def build(self):
        return MainWidget()

if __name__ == "__main__":
    EmotionApp().run()
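To run it, you'll need a webcam and (going by the imports, with the usual PyPI package names) kivy, opencv-python, numpy, azure-cognitiveservices-vision-face, and msrest installed; then save the file as Emotion.py and start it with python Emotion.py.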