This is a hands-on article for reviewing and consolidating the knowledge gained while developing the serverless web app Mosaic. It is part of the series at w2or3w/items/87b57dfdbcf218de91e2, and I recommend reading that overview first.
S3 is fine for simply storing files, but it is not well suited for people to view them. Even an image cannot be viewed as-is in a web browser; it has to be downloaded first, which is a bit of a hassle. In that respect, Google Drive is nice: images can be browsed comfortably in a browser or app, on both PC and smartphone.
Go to Google's Developers Console. https://console.developers.google.com
If you do not have a project, create one.
On the console's "APIs & Services" screen, click "+ Enable APIs and Services" at the top and enable the Google Drive API.
After enabling the Google Drive API, create credentials from "Credentials" in the left pane of the same screen: click "+ Create Credentials" at the top and choose "Service Account" from the drop-down menu.
Enter a service account name and create the account. The service account ID shown under the name is sensitive information, so manage it carefully so that it does not leak.
After creating the service account, open its details from "Service Accounts" in the left pane of the console's "IAM & Admin" screen.
From the service account details, click "Edit" and then "+ Create Key" to create a private key in JSON format, and download the JSON file.
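For reference, the downloaded key is a plain JSON file whose fields look roughly like this (values elided here):

```json
{
  "type": "service_account",
  "project_id": "...",
  "private_key_id": "...",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "your-account@your-project.iam.gserviceaccount.com",
  "client_id": "...",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```

Treat this file like a password: anyone who holds it can act as the service account.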
Create a folder in Google Drive to upload files into; here the folder is named sample-drive.
Then share that folder with the service account (its email address) created earlier.
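Sharing is done in the Drive UI here, but for completeness it can also be done with the Drive API's permissions.create. A minimal sketch, assuming a Drive v3 client authorized as the folder's owner (the `service` argument) and a placeholder email address:

```python
def buildPermission(serviceAccountEmail):
    # Grant the service account write access ("writer" role)
    return {"type": "user", "role": "writer", "emailAddress": serviceAccountEmail}

def shareFolder(service, folderId, serviceAccountEmail):
    # service: a googleapiclient Drive v3 client authorized as the folder's owner
    # folderId: the string at the end of the folder's URL
    service.permissions().create(
        fileId=folderId,
        body=buildPermission(serviceAccountEmail),
    ).execute()

print(buildPermission("my-account@my-project.iam.gserviceaccount.com")["role"])
```

Note that the service account's own credentials cannot grant access to your personal folder; the owner has to do the sharing, which is why the UI route in the article is the simpler one.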
The string at the end of this folder's URL will be used later in the program.
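That "string after the URL" is the folder ID, the last path segment of the folder's address. A small sketch with a hypothetical URL:

```python
# Hypothetical folder URL copied from the browser's address bar
url = "https://drive.google.com/drive/folders/1AbCdEfGhIjKlMnOpQrStUvWx?usp=sharing"

# The folder ID is the last path segment, minus any query string
folder_id = url.rstrip("/").split("/")[-1].split("?")[0]
print(folder_id)  # -> 1AbCdEfGhIjKlMnOpQrStUvWx
```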
Well, this completes the settings on the Google side. Next, let's upload the file programmatically using the Google API.
First, rename the service account private key JSON file you downloaded earlier to a suitable name and place it in the same directory as lambda_function.py. Here I named it service-account-key.json.
Then install the required libraries into the same directory (the -t . puts them alongside lambda_function.py so they are included in the Lambda deployment package).
$ pip install google-api-python-client -t .
$ pip install oauth2client -t .
Now import what you need.
lambda_function.py
:
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload
from oauth2client.service_account import ServiceAccountCredentials
:
Then upload the file with the following implementation.
lambda_function.py
:
def uploadFileToGoogleDrive(fileName, localFilePath):
    try:
        ext = os.path.splitext(localFilePath.lower())[1][1:]
        if ext == "jpg":
            ext = "jpeg"
        mimeType = "image/" + ext

        service = getGoogleService()
        file_metadata = {"name": fileName, "mimeType": mimeType, "parents": ["*********************************"]}
        media = MediaFileUpload(localFilePath, mimetype=mimeType, resumable=True)
        file = service.files().create(body=file_metadata, media_body=media, fields='id').execute()
    except Exception as e:
        logger.exception(e)

def getGoogleService():
    scope = ['https://www.googleapis.com/auth/drive.file']
    keyFile = 'service-account-key.json'
    credentials = ServiceAccountCredentials.from_json_keyfile_name(keyFile, scopes=scope)
    return build("drive", "v3", credentials=credentials, cache_discovery=False)
The "parents": ["*********************************"] part must be replaced with the ID of the folder you created in Google Drive, i.e. the string at the end of its URL.
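To check that uploads are landing in the right folder, you can list its contents with files().list. A sketch that reuses the same key file as above (the Google imports are deferred inside the function so the snippet can be read without the libraries installed; the folder ID is a placeholder):

```python
def buildParentsQuery(folderId):
    # Drive search query: files whose parent is folderId, excluding trashed ones
    return "'{0}' in parents and trashed = false".format(folderId)

def listUploadedFiles(folderId):
    from googleapiclient.discovery import build
    from oauth2client.service_account import ServiceAccountCredentials
    scope = ['https://www.googleapis.com/auth/drive.file']
    credentials = ServiceAccountCredentials.from_json_keyfile_name(
        'service-account-key.json', scopes=scope)
    service = build("drive", "v3", credentials=credentials, cache_discovery=False)
    response = service.files().list(
        q=buildParentsQuery(folderId), fields="files(id, name)").execute()
    return response.get("files", [])

# for f in listUploadedFiles("your-folder-id"):  # replace with your folder ID
#     print(f["name"], f["id"])
```

With the drive.file scope, the listing only shows files this application created, which is exactly the uploads we care about here.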
Finally, just in case, here is the entire program.
lambda_function.py
# coding: UTF-8
import boto3
import os
import json
from urllib.parse import unquote_plus
import numpy as np
import cv2
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

s3 = boto3.client("s3")
rekognition = boto3.client('rekognition')

from gql import gql, Client
from gql.transport.requests import RequestsHTTPTransport
ENDPOINT = "https://**************************.appsync-api.ap-northeast-1.amazonaws.com/graphql"
API_KEY = "da2-**************************"
_headers = {
    "Content-Type": "application/graphql",
    "x-api-key": API_KEY,
}
_transport = RequestsHTTPTransport(
    headers = _headers,
    url = ENDPOINT,
    use_json = True,
)
_client = Client(
    transport = _transport,
    fetch_schema_from_transport = True,
)

from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload
from oauth2client.service_account import ServiceAccountCredentials

def lambda_handler(event, context):
    bucket = event["Records"][0]["s3"]["bucket"]["name"]
    key = unquote_plus(event["Records"][0]["s3"]["object"]["key"], encoding="utf-8")
    logger.info("Function Start (deploy from S3) : Bucket={0}, Key={1}".format(bucket, key))

    fileName = os.path.basename(key)
    dirPath = os.path.dirname(key)
    dirName = os.path.basename(dirPath)
    orgFilePath = "/tmp/" + fileName

    if (not key.startswith("public") or key.startswith("public/processed/")):
        logger.info("don't process.")
        return

    apiCreateTable(dirName, key)

    keyOut = key.replace("public", "public/processed", 1)
    dirPathOut = os.path.dirname(keyOut)

    try:
        s3.download_file(Bucket=bucket, Key=key, Filename=orgFilePath)

        orgImage = cv2.imread(orgFilePath)
        grayImage = cv2.cvtColor(orgImage, cv2.COLOR_RGB2GRAY)
        processedFileName = "gray-" + fileName
        processedFilePath = "/tmp/" + processedFileName
        uploadImage(grayImage, processedFilePath, bucket, os.path.join(dirPathOut, processedFileName), dirName, False)

        uploadFileToGoogleDrive(key, orgFilePath)

        detectFaces(bucket, key, fileName, orgImage, dirName, dirPathOut)
    except Exception as e:
        logger.exception(e)
        raise e
    finally:
        if os.path.exists(orgFilePath):
            os.remove(orgFilePath)

def uploadImage(image, localFilePath, bucket, s3Key, group, isUploadGoogleDrive):
    logger.info("start uploadImage({0}, {1}, {2}, {3})".format(localFilePath, bucket, s3Key, group))
    try:
        cv2.imwrite(localFilePath, image)
        s3.upload_file(Filename=localFilePath, Bucket=bucket, Key=s3Key)
        apiCreateTable(group, s3Key)
        if isUploadGoogleDrive:
            uploadFileToGoogleDrive(s3Key, localFilePath)
    except Exception as e:
        logger.exception(e)
        raise e
    finally:
        if os.path.exists(localFilePath):
            os.remove(localFilePath)

def apiCreateTable(group, path):
    logger.info("start apiCreateTable({0}, {1})".format(group, path))
    try:
        query = gql("""
            mutation create {{
                createSampleAppsyncTable(input:{{
                    group: \"{0}\"
                    path: \"{1}\"
                }}){{
                    group path
                }}
            }}
        """.format(group, path))
        _client.execute(query)
    except Exception as e:
        logger.exception(e)
        raise e

def detectFaces(bucket, key, fileName, image, group, dirPathOut):
    logger.info("start detectFaces ({0}, {1}, {2}, {3}, {4})".format(bucket, key, fileName, group, dirPathOut))
    try:
        response = rekognition.detect_faces(
            Image={
                "S3Object": {
                    "Bucket": bucket,
                    "Name": key,
                }
            },
            Attributes=[
                "ALL",
            ]
        )

        name, ext = os.path.splitext(fileName)
        jsonFileName = name + ".json"
        localPathJSON = "/tmp/" + jsonFileName
        with open(localPathJSON, 'w') as f:
            json.dump(response, f, ensure_ascii=False)
        s3.upload_file(Filename=localPathJSON, Bucket=bucket, Key=os.path.join(dirPathOut, jsonFileName))
        if os.path.exists(localPathJSON):
            os.remove(localPathJSON)

        imgHeight = image.shape[0]
        imgWidth = image.shape[1]
        index = 0
        for faceDetail in response["FaceDetails"]:
            index += 1
            faceFileName = "face_{0:03d}".format(index) + ext
            box = faceDetail["BoundingBox"]
            x = max(int(imgWidth * box["Left"]), 0)
            y = max(int(imgHeight * box["Top"]), 0)
            w = int(imgWidth * box["Width"])
            h = int(imgHeight * box["Height"])
            logger.info("BoundingBox({0},{1},{2},{3})".format(x, y, w, h))

            faceImage = image[y:min(y+h, imgHeight-1), x:min(x+w, imgWidth)]
            localFaceFilePath = os.path.join("/tmp/", faceFileName)
            uploadImage(faceImage, localFaceFilePath, bucket, os.path.join(dirPathOut, faceFileName), group, False)
            cv2.rectangle(image, (x, y), (x+w, y+h), (0, 0, 255), 3)

        processedFileName = "faces-" + fileName
        processedFilePath = "/tmp/" + processedFileName
        uploadImage(image, processedFilePath, bucket, os.path.join(dirPathOut, processedFileName), group, True)
    except Exception as e:
        logger.exception(e)
        raise e

def uploadFileToGoogleDrive(fileName, localFilePath):
    try:
        ext = os.path.splitext(localFilePath.lower())[1][1:]
        if ext == "jpg":
            ext = "jpeg"
        mimeType = "image/" + ext

        service = getGoogleService()
        file_metadata = {"name": fileName, "mimeType": mimeType, "parents": ["*********************************"]}
        media = MediaFileUpload(localFilePath, mimetype=mimeType, resumable=True)
        file = service.files().create(body=file_metadata, media_body=media, fields='id').execute()
    except Exception as e:
        logger.exception(e)

def getGoogleService():
    scope = ['https://www.googleapis.com/auth/drive.file']
    keyFile = 'service-account-key.json'
    credentials = ServiceAccountCredentials.from_json_keyfile_name(keyFile, scopes=scope)
    return build("drive", "v3", credentials=credentials, cache_discovery=False)
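To exercise lambda_handler locally, you can hand it a dict shaped like the S3 put event that triggers it. A sketch with placeholder bucket and key names (actually running the handler also needs AWS and Google credentials in place):

```python
def makeS3Event(bucket, key):
    # Minimal shape of the S3 notification event that lambda_handler reads
    return {"Records": [{"s3": {"bucket": {"name": bucket},
                                "object": {"key": key}}}]}

event = makeS3Event("my-sample-bucket", "public/album1/photo.jpg")
print(event["Records"][0]["s3"]["object"]["key"])  # -> public/album1/photo.jpg
# lambda_handler(event, None)  # uncomment to run with real credentials configured
```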
Now, when you upload an image from the web app, the Lambda Python function processes it, and both the original image and the image with the detected faces' ROIs marked are also uploaded to Google Drive. Unlike S3, it is nice to be able to view them right away as pictures.
I would like to try Google's cloud services more as well. I like Google; this is one of my goals for 2020.
Incidentally, there are many Google products around me, such as a Pixel 3a, a Chromebook, a Google Home Mini, a Chromecast, and Google Wifi. As for services, besides Gmail, Calendar, Drive, and Photos, I use Google Fit with a Mi Band daily, and I enjoy contributing to Google Maps as a Local Guide. I have also upgraded Google One to 2 TB (¥13,000 per year).
OK Google, I love you!