[PYTHON] Display the results of video analysis using the Cloud Video Intelligence API from Colaboratory.

Final notebook

Video to analyze: image.png

Preparing to use the Cloud Video Intelligence API

Before you start, work through the preparation steps in the quickstart. Steps 1 to 3 are project settings and follow the quickstart as written, but steps 4 and 5 are slightly different here.

1. Create a GCP project

Screen Shot 2019-12-27 at 17.46.14.png
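
If you prefer the command line, the project can also be created from Cloud Shell or any machine with the Cloud SDK installed. A minimal sketch; the project ID below is just an example:

gcloud projects create my-video-intelligence-demo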

2. Make sure billing is enabled for your project

Make sure the project you created in step 1 is linked to your billing account. (Enabling billing is at your own risk.)

3. Enable the Cloud Video Intelligence API

When you select the Enable button, a screen for selecting a project is displayed. Select the project created in step 1 and choose Continue. (At one point the API could not be enabled, but it worked fine after I recreated the project. I don't know the cause.)

image.png
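
The API can also be enabled from the command line instead of the console. A sketch, assuming the project above is already set as the active gcloud project:

gcloud services enable videointelligence.googleapis.com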

4. Create a service account

First of all, what is a service account?

A service account is a special Google account that belongs to an application or virtual machine (VM) rather than an individual end user. Applications can use their service account to call Google's service API without the need for user involvement.

For example, if a service account runs a Compute Engine VM, you can give that account access to the resources you need. The service account thus becomes the identity of the service, and the privileges of the service account control the resources that the service can access.

From Service accounts | Cloud IAM documentation | Google Cloud

Because the permissions of the service account control which resources the service can access, here we create a service account that has only the viewer permission on Storage. (The video is uploaded to Google Cloud Storage once and loaded from there.)

When I created it from the quickstart link it didn't work for some reason, so I created it as follows instead:

- Select + Create Service Account from Service Accounts – IAM & Admin – Google Cloud Platform

image.png

- Enter a service account name and select Create

image.png

- Select Storage > Storage Object Viewer under Role, then select Continue

image.png

- Select Create under Create key

image.png

- Download the JSON key file from the pop-up

image.png
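
For reference, the same service account, role binding, and key can also be created with gcloud from Cloud Shell. This is only a sketch; the service account name, project ID, and key file name are examples:

# Create the service account
gcloud iam service-accounts create video-viewer --display-name="video-viewer"

# Grant it only the Storage Object Viewer role
gcloud projects add-iam-policy-binding my-project-id \
    --member="serviceAccount:video-viewer@my-project-id.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"

# Download a JSON key for the account
gcloud iam service-accounts keys create key.json \
    --iam-account="video-viewer@my-project-id.iam.gserviceaccount.com"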

5. Set environment variables

In the quickstart, an environment variable is set because the file path of the JSON key must be specified. Here the path is given directly from Colaboratory, so no environment variable is required.
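
For reference, this is roughly what the quickstart's environment-variable approach would look like in a notebook cell. It is not needed in this article's flow, and the key file path is only an example:

import os

# The quickstart points GOOGLE_APPLICATION_CREDENTIALS at the key file
# instead of passing a credentials object explicitly (not required here).
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/content/key.json"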

Upload video to Google Cloud Storage

Go to Storage Browser – Storage – Google Cloud Platform (https://console.cloud.google.com/storage/browser?hl=ja) and select Create Bucket.

image.png

Enter a name for your bucket and select Create. (I'm not particular about the region here.)

image.png

Once you've created your bucket, upload your video.

image.png

I uploaded dog.mp4 directly under the bucket.
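
The bucket creation and upload can also be done with gsutil from Cloud Shell or a local terminal instead of the browser. A sketch; the bucket name is an example:

# Create the bucket, then copy the video into it
gsutil mb gs://my-video-bucket
gsutil cp dog.mp4 gs://my-video-bucket/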

About video size etc.

Be careful about the free tier limits for storage capacity and download (network egress) volume.

image.png

Cloud Storage pricing | Cloud Storage | Google Cloud

Make a notebook on Colaboratory

On Colaboratory, create a new notebook from:

- File > New Python 3 notebook

Analyze video with Cloud Video Intelligence API

Install the Cloud Video Intelligence API Python package

Install the Cloud Video Intelligence API Python package.

!pip install -U google-cloud-videointelligence

Upload service account credentials

From File in the left pane of Colaboratory, upload the JSON key file that you downloaded when you created the service account.

image.png
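
Instead of the file pane, the key file can also be uploaded from a code cell with Colaboratory's files helper. A small sketch:

from google.colab import files

# Opens a file picker in the browser and saves the selected file
# into the notebook's working directory.
uploaded = files.upload()
print(list(uploaded.keys()))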

Create credentials from the key file

import json
from google.oauth2 import service_account

# Path to the JSON key file uploaded in the previous step
service_account_key_name = "<JSON file name>"
info = json.load(open(service_account_key_name))
creds = service_account.Credentials.from_service_account_info(info)

image.png

Create a client

Create the client, passing in the credentials created above.

from google.cloud import videointelligence

video_client = videointelligence.VideoIntelligenceServiceClient(credentials=creds)

Specify the location of the video

This time the video file was uploaded directly under the bucket, so the URL is as follows.

video_url = "gs://<Bucket name>/<Video file name>"

image.png

Analyze the video

From here, it's the same as the quick start.

# Request label detection on the video stored in Cloud Storage
features = [videointelligence.enums.Feature.LABEL_DETECTION]
operation = video_client.annotate_video(
    video_url, features=features)
print('\nProcessing video for label annotations:')

# annotate_video returns a long-running operation; wait up to 120 seconds
result = operation.result(timeout=120)
print('\nFinished processing.')

# first result is retrieved because a single video was processed
segment_labels = result.annotation_results[0].segment_label_annotations
for i, segment_label in enumerate(segment_labels):
    print('Video label description: {}'.format(
        segment_label.entity.description))
    for category_entity in segment_label.category_entities:
        print('\tLabel category description: {}'.format(
            category_entity.description))

    for i, segment in enumerate(segment_label.segments):
        start_time = (segment.segment.start_time_offset.seconds +
                      segment.segment.start_time_offset.nanos / 1e9)
        end_time = (segment.segment.end_time_offset.seconds +
                    segment.segment.end_time_offset.nanos / 1e9)
        positions = '{}s to {}s'.format(start_time, end_time)
        confidence = segment.confidence
        print('\tSegment {}: {}'.format(i, positions))
        print('\tConfidence: {}'.format(confidence))
    print('\n')

The analysis results are displayed successfully.

image.png

Error encountered

PermissionDenied: 403 The caller does not have permission

I removed the permissions granted to the member (the service account) from the IAM – IAM & Admin – Google Cloud Platform (https://console.cloud.google.com/iam-admin/iam?hl=ja) page, then recreated the service account, and it worked. The cause is still unknown.
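
If you hit the same 403, one thing worth checking before recreating anything is which roles the service account actually holds. A sketch with gcloud; the project ID and service account name are examples:

gcloud projects get-iam-policy my-project-id \
    --flatten="bindings[].members" \
    --filter="bindings.members:video-viewer@my-project-id.iam.gserviceaccount.com" \
    --format="table(bindings.role)"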
