[PYTHON] I tried using AWS Rekognition's Detect Labels API

Introduction

I tried the DetectLabels API of AWS Rekognition, one of AWS's machine learning services. Since it can easily identify objects and scenes, I built and used a simple image extraction application with it.

What is AWS Rekognition?

AWS Rekognition is a machine learning service provided by AWS that makes it easy to perform image recognition tasks such as image analysis and video analysis. Several APIs are provided for this; the one covered in this post is DetectLabels.

What is the DetectLabels API?

The DetectLabels API can label thousands of objects identified in an image, such as cars, pets, and furniture, and returns a confidence score for each. The confidence score is a value between 0 and 100 that indicates how likely the identification result is to be correct. (Quoted from the AWS Rekognition Black Belt materials)

As quoted above, it is an API that labels what it finds in an input image, and you can also check the labeling result from the management console, as shown below. You can see that the three cats are identified nicely! demo.png

Using the cat image above as an example input, the request to the DetectLabels API looks like this:

{
    "Image": {
        "Bytes": "(Input image byte sequence)"
    }
}
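
For reference, here is a minimal sketch of issuing this request with boto3 (the Python SDK used later in this post). The file name cat.jpg and the MaxLabels value are assumptions for illustration only.

# Minimal sketch: call DetectLabels with a local image file.
# Assumes AWS credentials are configured and that "cat.jpg" exists (hypothetical file name).
import boto3

rekognition = boto3.client('rekognition')

with open('cat.jpg', 'rb') as f:
    image_bytes = f.read()

response = rekognition.detect_labels(
    Image={'Bytes': image_bytes},
    MaxLabels=10  # assumed limit on the number of labels returned
)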

As a response, the following JSON is returned. The structure is an array of label entries, and each entry contains the items shown below.

{
    "Labels": [
        {
            "Name": "Cat",
            "Confidence": 99.57831573486328,
            "Instances": [
                {
                    "BoundingBox": {
                        "Width": 0.369978129863739,
                        "Height": 0.7246906161308289,
                        "Left": 0.17922087013721466,
                        "Top": 0.06359343975782394
                    },
                    "Confidence": 92.53639221191406
                },
                {
                    "BoundingBox": {
                        "Width": 0.3405080735683441,
                        "Height": 0.7218159437179565,
                        "Left": 0.31681257486343384,
                        "Top": 0.14111439883708954
                    },
                    "Confidence": 90.89508056640625
                },
                {
                    "BoundingBox": {
                        "Width": 0.27936506271362305,
                        "Height": 0.7497209906578064,
                        "Left": 0.5879912376403809,
                        "Top": 0.10250711441040039
                    },
                    "Confidence": 90.0565414428711
                }
            ],
            "Parents": [
                {
                    "Name": "Mammal"
                },
                {
                    "Name": "Animal"
                },
                {
                    "Name": "Pet"
                }
            ]
        },
        {
            "Name": "Pet",
            "Confidence": 99.57831573486328,
            "Instances": [],
            "Parents": [
                {
                    "Name": "Animal"
                }
            ]
        },
        {
            "Name": "Kitten",
            "Confidence": 99.57831573486328,
            "Instances": [],
            "Parents": [
                {
                    "Name": "Mammal"
                },
                {
                    "Name": "Cat"
                },
                {
                    "Name": "Animal"
                },
                {
                    "Name": "Pet"
                }
            ]
        },
        {
            "Name": "Animal",
            "Confidence": 99.57831573486328,
            "Instances": [],
            "Parents": []
        },
        {
            "Name": "Mammal",
            "Confidence": 99.57831573486328,
            "Instances": [],
            "Parents": [
                {
                    "Name": "Animal"
                }
            ]
        }
    ],
    "LabelModelVersion": "2.0"
}
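
The BoundingBox values in Instances are ratios relative to the image size (0 to 1), so they have to be multiplied by the image width and height to get pixel coordinates. A rough sketch of walking the response, continuing from the boto3 sketch shown earlier:

# Continuing the earlier sketch: `response` is the detect_labels result and
# "cat.jpg" is the same (hypothetical) input file.
from PIL import Image

# The real image size is needed because BoundingBox values are ratios (0 to 1)
img_width, img_height = Image.open('cat.jpg').size

for label in response['Labels']:
    print(label['Name'], label['Confidence'])
    for instance in label['Instances']:
        box = instance['BoundingBox']
        # Convert the normalized BoundingBox into pixel coordinates
        left = box['Left'] * img_width
        top = box['Top'] * img_height
        right = (box['Left'] + box['Width']) * img_width
        bottom = (box['Top'] + box['Height']) * img_height
        print('  instance at', (left, top, right, bottom))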

Building the application

What to make

The application labels an image uploaded to an S3 bucket, crops each labeled region out of the image, and outputs the cropped images to another S3 bucket with an S3 object tag of "label name" = "confidence". That's all it does.

Diagram

flow.jpg

  1. Upload an image file to the bucket "rekognition-test-20200530"
  2. Lambda "Recognition Test" is started, triggered by the bucket's file creation event
  3. Lambda "Recognition Test" calls the DetectLabels API with the image file uploaded to S3 as input
  4. Based on the DetectLabels API response, Lambda "Recognition Test":
     1. For each label that has a specified target range, crops the target range from the uploaded image
     2. Adds the S3 tag "label name" = "label confidence" to the cropped image
     3. Outputs it to the bucket "rekognition-test-20200530-output"

Settings for each AWS resource

S3

Bucket name: rekognition-test-20200530
- Create the bucket
- Set a trigger that invokes Lambda "Recognition Test" on the bucket's file creation event

Bucket name: rekognition-test-20200530-output
- Create the bucket
- Add a bucket policy that grants write permission to the IAM role of Lambda "Recognition Test" (a sketch follows below)
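
A minimal sketch of attaching such a bucket policy with boto3 is shown below. The IAM role ARN is a placeholder, not the actual role used in this post.

# Sketch: attach a bucket policy that allows the Lambda's IAM role to write
# (and tag) objects in the output bucket. The role ARN below is a placeholder.
import json
import boto3

s3 = boto3.client('s3')

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/recognition-test-lambda-role"},
            "Action": ["s3:PutObject", "s3:PutObjectTagging"],
            "Resource": "arn:aws:s3:::rekognition-test-20200530-output/*"
        }
    ]
}

s3.put_bucket_policy(
    Bucket='rekognition-test-20200530-output',
    Policy=json.dumps(policy)
)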

Lambda Layer

Since Pillow is used for image extraction, I registered a Lambda Layer by referring to this blog, following these steps:

  1. Launch EC2 on Amazon Linux
  2. Install Pillow on EC2
  3. Zip the folder where Pillow is installed
  4. Download the zip file and register the zip file with Lambda Layer

Lambda

Runtime: python2.7

lambda_function.py


# coding: utf-8
import json
import boto3
from PIL import Image
import uuid
from io import BytesIO


def lambda_handler(event, context):
    # Get the S3 bucket and object key from the event
    s3 = boto3.client('s3')
    #The name of the bucket where the event occurred
    bucket = event['Records'][0]['s3']['bucket']['name']
    #Object key where the event occurred
    photo = event['Records'][0]['s3']['object']['key']
    try:
        # Get the image file that triggered the S3 event
        target_file_byte_string = s3.get_object(Bucket=bucket, Key=photo)['Body'].read()
        target_img = Image.open(BytesIO(target_file_byte_string))
        #Get width and height of image file
        img_width, img_height = target_img.size
        #Rekognition client
        rekognition_client=boto3.client('rekognition')
        # Call the DetectLabels API and get the labeling result
        response = rekognition_client.detect_labels(Image={'S3Object':{'Bucket':bucket,'Name':photo}}, MaxLabels=10)
        for label in response['Labels']:
            # For labels that have a bounding box, crop the image and output it to S3
            for bounds in label['Instances']:
                box = bounds['BoundingBox']
                #Determine the image extraction range
                target_bounds = (box['Left'] * img_width, 
                                box['Top'] * img_height,
                                (box['Left'] + box['Width']) * img_width,
                                (box['Top'] + box['Height']) * img_height)
                #Image extraction
                img_crop = target_img.crop(target_bounds)
                imgByteArr = BytesIO()
                img_crop.save(imgByteArr, format=target_img.format)
                # Build the S3 object tag string ("label name" = "confidence")
                tag = '{0}={1}'.format(label['Name'], str(label['Confidence']))
                #Output to S3
                s3.put_object(Key='{0}.jpg'.format(uuid.uuid1()), 
                            Bucket='rekognition-test-20200530-output', 
                            Body=imgByteArr.getvalue(),
                            Tagging=tag)
    except Exception as e:
        print(e)
    return True
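
To smoke-test the handler outside of the S3 trigger, you can feed it a hand-built event that mimics the S3 notification structure the code reads. This is only a sketch; it needs AWS credentials with access to the bucket and to Rekognition, and the object must actually exist in S3.

# Sketch: invoke the handler locally with a hand-built S3 event.
from lambda_function import lambda_handler

test_event = {
    'Records': [
        {
            's3': {
                'bucket': {'name': 'rekognition-test-20200530'},
                'object': {'key': 'animal5.jpg'}
            }
        }
    ]
}

lambda_handler(test_event, None)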

I tried using it!

animal5.jpg

Try uploading the above input image (file name: animal5.jpg) to S3. demo2.png
The upload is complete. demo3.png
After waiting for a while, the cropped images were output to S3! image.png
Let's check one of the image's tags. image.png
It is tagged as expected. Let's check the contents of each image output to S3.

I was able to extract the images without any problems!

Reference material

AWS Rekognition Overview
AWS Rekognition Black Belt
