- **This is a public release of the "AWS Hands-on" materials that were run outside of work for colleagues at my company.**
- **Using a serverless architecture, we will build a celebrity image-analysis API service in about an hour.**
- Python is used as the language, but the hands-on is structured so it can be enjoyed without prior knowledge.
- The following resources are required for the hands-on:
  - A PC that can connect to the Internet (Windows or Mac)
  - An AWS IAM user account
- Every effort has been made in preparing this material, but its accuracy is not guaranteed. The author accepts no responsibility for any damage caused by this material.
- **A slide version of this hands-on material has been released on Speaker Deck.**
  - The slide version has a better design and is easier to read.
  - On the other hand, this Qiita article has the advantage that you can copy and paste the source code.
  - Feel free to use the slide version and the Qiita version together.
  - https://speakerdeck.com/hayate_h/awshanzuon-sabaresuakitekutiyade-you-ming-ren-shi-bie-sabisuwozuo-rou
- We will use Amazon Rekognition (an image-analysis AI) to create a celebrity identification API.
- The system configuration diagram is as follows.
- Because there is no explicit server to manage, this "serverless architecture" is easy to build.
- It also reduces the labor and cost of operations management.
- For example, prepare a photo of Japan's world-famous actor "Ken Watanabe" and feed it into the AI we build this time.
- *In the screenshots below, the images are blurred to protect intellectual property rights.*
- Select the image of "Ken Watanabe" from the "Select File" button.
- After selecting the file, click the "Send" button...
- "He/She is Ken Watanabe with 100% confidence." is displayed.
- **In this hands-on, we will create the backend for this service.**
- Here is a brief introduction to the three services used this time.
- Let's proceed with the hands-on in the spirit of "getting used to it rather than studying it".
- A serverless service or architecture is one in which users do not need to be aware of servers.
  - Note: this does not mean that no server exists at all; the servers are managed by AWS.
  - It works just by deploying your program code, so operations management becomes easier.
- A managed service is one in which AWS takes charge of (part of) operations management.
  - The scope of what AWS manages varies by service, but in many cases it includes not only "take backups regularly" but also "automatically scale when access increases".
  - When the scope of managed operations is wide and almost no effort is required from users, the service is often called a "fully managed service".
  - Even if a service is "managed", it is not necessarily "serverless".
- Access the AWS Management Console and sign in with your IAM user information.
- **Supplementary information on signing in**
  - This hands-on assumes that **the "AdministratorAccess" IAM policy is attached to the IAM user performing the hands-on**.
  - If "AdministratorAccess" is not granted, errors such as being unable to create resources may occur in the following steps.
  - In that case, attach an appropriate policy and try again.
  - The IAM settings described on the following site were very helpful, and it has been confirmed that all hands-on steps can be performed with them:
    - "Thinking about IAM settings when adding external development members to an AWS account" (kmiya_bbm's blog)
- Confirm that the upper right corner of the screen shows "Tokyo" (region) and the lower left corner shows "Japanese".
- If you need to change either, refer to the next page and switch them.
- Press the place name in the upper right corner of the screen and select "Asia Pacific (Tokyo) ap-northeast-1".
- Press the language at the bottom left of the screen and select "Japanese".
- First, let's create a Lambda function and learn the basic usage.
- On the Management Console screen, enter "Lambda" in the search window and select "Lambda" from the results.
- When the Lambda screen opens, click the "Create function" button.
- On the function creation screen, select and enter the following:
  - Select "Author from scratch"
  - Function name: enter "<<name>>-rekognition-handson" (example: higuchi-rekognition-handson)
  - Runtime: select "Python 3.8"
  - Open "Permissions" → "Change default execution role" and select "Create a new role with basic Lambda permissions"
- After completing the above settings, click the "Create function" button at the bottom right of the screen.
- The Lambda function is created.
- Now let's write some working code.
- Scroll down to "Function code". This is the editor where you write the logic.
- First, let's write simple Python code to learn how Lambda works.
- For example, write the following code, which concatenates strings and outputs the result.
  - It sets strings in variables and returns the concatenation.
  - Specifically: 'Pen' + 'Pineapple' + 'Apple' + 'Pen' → 'PenPineappleApplePen'.
```python:lambda_function.py
def lambda_handler(event, context):
    a = 'Pen'
    b = 'Pineapple'
    c = 'Apple'
    x = a + b + c + a
    return {
        'statusCode': 200,
        'body': x
    }
```
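Because the handler is an ordinary Python function, you can also sanity-check the logic locally before testing it in the console. This is just a local sketch, not one of the hands-on steps; on Lambda, the runtime invokes the function with a real event and context.

```python
# Local sanity check of the handler logic (not part of the hands-on steps).
def lambda_handler(event, context):
    a = 'Pen'
    b = 'Pineapple'
    c = 'Apple'
    x = a + b + c + a
    return {
        'statusCode': 200,
        'body': x
    }

# The event content does not matter here, since this handler ignores it.
result = lambda_handler({}, None)
print(result['body'])  # → PenPineappleApplePen
```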
- After writing the code, click the "Deploy" button, then click "Test" at the upper right.
- The "Configure test event" screen is displayed. Make the following settings, then click the "Create" button:
  - Select "Create new test event"
  - Event name: enter "SimpleEvent"
- Click "Test" at the upper right. The test will run with the test event you just created.
- Lambda has now been tested. If "Succeeded" is shown at the top of the screen, that is proof the code worked.
- Click "Details" to expand the function's execution result. Here, "body" is "PenPineappleApplePen", the output you expected.
- Detailed logs from running Lambda can be found in CloudWatch Logs. Click "Monitoring" → "View CloudWatch logs" to move to the CloudWatch Logs page.
- You can check the logs by selecting "20xx/xx/xx [$LATEST] xxxxxx" on the "Log streams" screen. Use this when debugging.
- If you configure a logger, its output appears here.
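As a minimal sketch of the "configure a logger" point: anything written through Python's standard `logging` module at INFO level or above shows up in the function's CloudWatch Logs stream. The handler below is illustrative only, not part of the hands-on steps.

```python
import logging

# Lambda pre-configures the root logger; setting the level to INFO makes
# logger.info() calls appear in the function's CloudWatch Logs stream.
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    logger.info(f'Received event = {event}')  # written to the log stream
    return {'statusCode': 200, 'body': 'ok'}
```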
- From here, we will link the **Lambda function with the image-analysis AI (Rekognition)**.
- You need the proper permissions to call Rekognition from your Lambda function.
- Specifically, you need to attach an IAM policy that allows access to Rekognition to the IAM role granted to Lambda.
- First, let's start by granting that permission.
- Select "Configuration" at the top of the Lambda screen.
- Select "Edit" in "Basic settings" at the bottom of the "Configuration" screen.
- Go to the bottom of the "Edit basic settings" screen and click "View the xxx role on the IAM console".
- The IAM role screen opens in a separate tab. Click the "Attach policies" button.
- Enter "Rekognition" in the search window, check the "AmazonRekognitionReadOnlyAccess" policy, and click the "Attach policy" button.
- On the IAM console screen, confirm that "AmazonRekognitionReadOnlyAccess" is attached. Lambda now has permission to call Rekognition.
- Return to the Lambda "Edit basic settings" screen. Change the timeout to "10 sec", then click the "Save" button.
- Rewrite the Lambda function with the celebrity identification code. Overwrite the function code with the following contents and press the "Deploy" button.
```python:lambda_function.py
import boto3
import base64
import logging
import traceback

# Logger settings
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Create a Rekognition client
rekognition = boto3.client('rekognition')

# lambda_handler function definition (main logic)
def lambda_handler(event, context):
    logger.info(f'Received event = {event}')

    # The binary data arrives Base64-encoded, so decode it into bytes
    received_body = base64.b64decode(event['body-json'])

    # Strip the multipart file headers added in front of the upload:
    # the body of the uploaded file starts after the 4th b'\r\n'
    images = received_body.split(b'\r\n', 4)
    image = images[4]

    # Pass the image bytes (Blob) to Rekognition to identify the celebrity
    response = rekognition.recognize_celebrities(
        Image={'Bytes': image}
    )
    logger.info(f'Rekognition response = {response}')

    try:
        # Extract the celebrity's name and confidence from the Rekognition
        # response and return them to the API caller
        label = response['CelebrityFaces'][0]
        name = label['Name']
        conf = round(label['Face']['Confidence'])
        output = f'He/She is {name} with {conf}% confidence.'
        logger.info(f'API response = {output}')
        return output
    except IndexError as e:
        # If no celebrity was found in the response, ask for another photo
        logger.info(f"Couldn't detect celebrities in the photo. Exception = {e}")
        logger.info(traceback.format_exc())
        return "Couldn't detect celebrities in the uploaded photo. Please upload another photo."
```
- First line: `import boto3`
  - This imports the module needed to work with AWS services.
  - boto3 is the SDK used to operate AWS resources from Python.
  - https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/index.html
- The `rekognition.recognize_celebrities(Image={'Bytes': image})` call
  - How to use Rekognition is described in the boto3 documentation.
  - https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rekognition.html#Rekognition.Client.recognize_celebrities
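To see what the `split(b'\r\n', 4)` trick in the handler is doing, here is a toy multipart body (the boundary string and filename are made up for illustration). The four `\r\n`-delimited pieces before the file content are the boundary line, two header lines, and a blank line; element 4 is the file body itself. In a real request the trailing boundary also remains attached to the extracted bytes; the hands-on code keeps this simplification.

```python
# A simplified multipart/form-data body (boundary and filename are made up).
received_body = (
    b'--boundary123\r\n'
    b'Content-Disposition: form-data; name="file"; filename="photo.jpg"\r\n'
    b'Content-Type: image/jpeg\r\n'
    b'\r\n'
    b'\xff\xd8\xff...raw JPEG bytes...'
)

# Split at the first four b'\r\n' occurrences; index 4 is the file body.
images = received_body.split(b'\r\n', 4)
image = images[4]
print(image)  # → b'\xff\xd8\xff...raw JPEG bytes...'
```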
- The `return` statements
  - These are actually an anti-pattern.
  - Ideally, you should use a response format that matches API Gateway's Lambda proxy integration.
  - This time, to make it easy to check the result in a browser, proxy integration is deliberately not used and the output format is intentionally ignored.
  - For details, search for "API Gateway integration response".
  - https://docs.aws.amazon.com/ja_jp/apigateway/latest/developerguide/api-gateway-integration-settings-integration-response.html
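For reference, a proxy-integration-style response would look something like the sketch below. This is a hypothetical rewrite of the `return` statements, not the code used in this hands-on.

```python
import json

def lambda_handler(event, context):
    # Shape expected by API Gateway's Lambda proxy integration:
    # a statusCode, optional headers, and a *string* body.
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps({'message': 'He/She is Ken Watanabe with 100% confidence.'}),
    }
```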
- From here, we will link API Gateway and Lambda.
- Enter "API" in the search window under "Services" at the top of the screen, and select "API Gateway" from the candidates.
- Click "Create API" at the top right of the screen.
- Click the "Build" button under REST API.
- Configure the following on the "Create new API" screen, then click the "Create API" button:
  - Select "New API"
  - API name: enter "<<name>>-api-handson" (example: higuchi-api-handson)
  - Endpoint type: select "Regional"
- An API is designed as "resources" × "methods".
  - For example, "GET to /users" or "POST to /users/12345".
- First, create a resource. Select "Create Resource" from "Actions".
- Enter "Rekognition" in the resource name and click the "Create Resource" button.
- Next, create the method. With the resource selected, choose "Actions" → "Create Method".
- Select "POST" from the pull-down menu and press the check button.
- Select "Lambda Function" as the integration type, choose the "<<name>>-rekognition-handson" function created earlier, and click the "Save" button.
- A confirmation dialog for "Add Permission to Lambda Function" is displayed. Click the "OK" button.
- Select "Integration Request".
- Expand "Mapping Templates" at the bottom of the screen and select "When there are no templates defined (recommended)" under "Request body passthrough".
- Under "Content-Type", click "Add mapping template", enter `multipart/form-data`, and press the check button.
- A template editor appears at the bottom of the screen. From the "Generate template" pull-down, select "Method Request passthrough" and press the "Save" button.
- Select "Settings" in the left pane, and click "Add Binary Media Type" under "Binary Media Types" at the bottom of the "Settings" screen.
- Enter `multipart/form-data` and press the "Save Changes" button.
- With "Resources" selected in the left pane, select "Deploy API" from "Actions" at the top of the screen.
- On the "Deploy API" screen, make the following settings and click the "Deploy" button:
  - Deployment stage: select "[New Stage]"
  - Stage name: enter "dev"
- From the left pane, select "Stages", then "dev" → "rekognition" → "POST". The URL of the created API is displayed as the "Invoke URL" on the right side of the screen. Copy it.
- Paste the API Gateway URL you just copied into the `****Paste API Gateway URL****` part of the HTML below, and save it with the file name "index.html".
- This is a simple HTML file that just submits the selected file using a form.
```html:index.html
<!DOCTYPE html>
<html lang="ja">
<head>
    <meta charset="UTF-8">
    <title>Celebrity recognition AI hands-on</title>
</head>
<body>
    <p>Celebrity recognition is performed using Amazon Rekognition, an image identification AI!</p>
    <form action="****Paste API Gateway URL****" enctype="multipart/form-data" method="POST">
        <input type="file" name="Select a photo file" />
        <input type="submit" name="upload"/>
    </form>
</body>
</html>
```
- Open the created index.html in your browser (select the file and drag and drop it onto the browser to open it).
- For example, let's try it with Japan's world-class comedian (?) "Watabe of the World".
- *In the screenshot below, the image is blurred to protect intellectual property rights.*
- Select a file and press the "Send" button...
- "He/She is Ken Watabe with 100% confidence." is displayed!
- The celebrity identification service is complete!
- As expected of "Watabe of the World".
- The following resources will be deleted:
  - API Gateway
  - Lambda
  - CloudWatch log group (Lambda execution logs)
  - IAM role for Lambda
- From the API Gateway screen, select the "<<name>>-api-handson" you created.
- With "Resources" selected in the left pane, select "Delete API" from "Actions".
- A confirmation screen is displayed before deletion. Enter the API name, then click "Delete API".
- Search for and select "Lambda" under "Services" at the top of the screen to move to the Lambda console.
- On the list screen, select "<<name>>-rekognition-handson".
- Select "Monitoring" at the top of the screen and press the "View CloudWatch logs" button.
- You are taken to the CloudWatch Logs screen. Select "Delete log group" from "Actions".
- A confirmation dialog is displayed; click the "Delete" button.
- Return to the Lambda screen and open "Configuration".
- At the bottom of the Lambda screen, press "Edit" under "Basic settings".
- Go to the bottom of the "Edit basic settings" screen and select "View the xxx role on the IAM console".
- You are taken to the IAM role screen. Select "Delete role" at the top right of the screen.
- A confirmation dialog appears; click "Yes, delete".
- Return to the Lambda screen and select "Delete function" from "Actions" at the top of the screen.
- A confirmation dialog is displayed; select "Delete".
- The message "Deleted successfully." is displayed.
- **This completes the cleanup. Thank you for your hard work.**
- Various AWS services have free tiers.
- This configuration is expected to fit within the AWS free tier unless you call the API a very large number of times.
- For reference, here is an estimate of the charges incurred once the free tier is exceeded (as of October 13, 2020, Tokyo region, per month).
| Service | Item | Fee | Notes |
|---|---|---|---|
| Data transfer | In to AWS | 0.000 USD/GB | |
| 〃 | Out from AWS | 0.114 USD/GB | First 1 GB–10 TB |
| CloudWatch | Log collection | 0.760 USD/GB | |
| 〃 | Log storage | 0.033 USD/GB | |
| Lambda | Requests | 0.20 USD / 1 million requests | |
| 〃 | Execution time | 0.0000002083 USD / 128 MB, 100 ms | |
| API Gateway | REST API | 4.25 USD / 1 million calls | First 333 million calls |
| Rekognition | Image analysis | 0.0013 USD / image | First 1 million images |
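As a rough illustration of how the per-call charges add up, the sketch below uses the unit prices from the table (Tokyo region, October 2020) and ignores data transfer, log volume, and Lambda execution time. Rekognition's per-image charge dominates the total.

```python
# Per-call unit prices from the table above (Tokyo region, Oct 2020).
LAMBDA_PER_REQUEST = 0.20 / 1_000_000   # USD per Lambda request
APIGW_PER_REQUEST = 4.25 / 1_000_000    # USD per REST API call
REKOGNITION_PER_IMAGE = 0.0013          # USD per analyzed image

def estimate_monthly_cost(api_calls: int) -> float:
    """Rough monthly estimate; ignores data transfer, logs, and Lambda duration."""
    per_call = LAMBDA_PER_REQUEST + APIGW_PER_REQUEST + REKOGNITION_PER_IMAGE
    return api_calls * per_call

# Example: 10,000 API calls in a month cost roughly 13 USD.
print(round(estimate_monthly_cost(10_000), 2))  # → 13.04
```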
- AWS Hands-on for Beginners Serverless #1: Build a Translation Web API with a Serverless Architecture | AWS
- PUT binary data (wav) from AWS API Gateway + Lambda to S3 using multipart/form-data | Qiita