[Introduction to RasPi4] Environment construction; OpenCV / Tensorflow, Japanese input ♪

I finally got a RasPi4, so I built up its environment for the first time in a while. It went much the same way as on Jetson_nano. This article is a continuation of the previous one, and I will explain up to the point where the applications work.

【Reference】
  0. [Jetson_nano] Tensorflow, Chainer, and Keras environments can be built from installation ♬

  1. How to build a deep learning image recognition environment on Raspberry Pi 4 from zero in one hour
  2. OpenCV 4 on Raspberry Pi
  3. Japanese localization of keyboard input for Raspberry Pi

What I did

・ Environment construction; up to OpenCV / Tensorflow
・ OpenCV / Tensorflow operation verification
・ Environment construction; up to Japanese input
・ Writing this article

・ Environment construction; up to OpenCV / Tensorflow

This was done according to Karaage's environment construction procedure (Reference 1).

Karaage's environment construction procedure.


$ git clone https://github.com/karaage0703/raspberry-pi-setup
$ cd raspberry-pi-setup
$ ./setup-opencv-raspbian-buster.sh
$ ./setup-tensorflow-raspbian-buster.sh
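
If you want a quick sanity check right after the scripts finish (my own addition, not part of Karaage's procedure), both libraries should import and report their versions:


$ python3 -c "import cv2; print(cv2.__version__)"
$ python3 -c "import tensorflow as tf; print(tf.__version__)"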

So OpenCV and Tensorflow can be installed easily. In addition, I installed the following for development.

jupyter-notebook.


$ sudo apt install jupyter-notebook
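
To launch it after installation (standard usage; the Debian package provides the jupyter-notebook command):


$ jupyter-notebook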

Also, this time I installed everything at once with Karaage's script, but since the script may lag behind the latest package versions, the items below can also be installed one by one following Reference 2.

Latest OS environment.


$ sudo apt-get update
$ sudo apt-get upgrade

CMake environment.


$ sudo apt-get install build-essential cmake unzip pkg-config

Image file library.


$ sudo apt-get install libjpeg-dev libpng-dev libtiff-dev

Library for video streams.


$ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev 
$ sudo apt-get install libxvidcore-dev libx264-dev

Image display library.


$ sudo apt-get install libgtk-3-dev 
$ sudo apt-get install libcanberra-gtk*

Arithmetic library.


$ sudo apt-get install libatlas-base-dev gfortran

・ OpenCV / Tensorflow operation verification

I used the following code to verify the operation. I have not installed Keras itself, but as in Reference 0 above, the code uses tf.keras..., so the Keras libraries bundled with Tensorflow are available.
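
You can confirm that directly with a one-liner (my addition):


$ python3 -c "from tensorflow import keras; print(keras.__version__)"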

import tensorflow as tf
mnist = tf.keras.datasets.mnist

(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(512, activation=tf.nn.relu),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)

The result is as follows:

$ python3 tensorflow_ex.py
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 41s 685us/sample - loss: 0.2220 - acc: 0.9342
Epoch 2/5
60000/60000 [==============================] - 41s 676us/sample - loss: 0.0962 - acc: 0.9700
Epoch 3/5
60000/60000 [==============================] - 41s 678us/sample - loss: 0.0688 - acc: 0.9788
Epoch 4/5
60000/60000 [==============================] - 41s 678us/sample - loss: 0.0539 - acc: 0.9830
Epoch 5/5
60000/60000 [==============================] - 41s 678us/sample - loss: 0.0435 - acc: 0.9857
10000/10000 [==============================] - 3s 295us/sample - loss: 0.0652 - acc: 0.9812
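
If you want to keep the trained model for later use, the standard tf.keras API can save and restore it (my addition; the filename is arbitrary):

model.save('mnist_model.h5')                          # save weights + architecture
model = tf.keras.models.load_model('mnist_model.h5')  # restore later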

・ OpenCV operation verification

Here I run the following code, which I previously used in the reference below. It runs slowly, but I was able to save the camera footage as a movie.

【Reference】
・ RasPi: I played with OpenCV ♬

import numpy as np
import cv2

# Build a FOURCC code from four characters (equivalent of the old cv2.cv.CV_FOURCC)
def cv_fourcc(c1, c2, c3, c4):
    return (ord(c1) & 255) + ((ord(c2) & 255) << 8) + \
        ((ord(c3) & 255) << 16) + ((ord(c4) & 255) << 24)

cap = cv2.VideoCapture(0)  # or a file: cv2.VideoCapture('dougasozai_car.mp4')
GRAY_FILE_NAME='douga_camera_5s.avi'
FRAME_RATE=30
ret, frame = cap.read()

# Define the codec and create VideoWriter object
height, width, channels = frame.shape
out = cv2.VideoWriter(GRAY_FILE_NAME, \
                      cv_fourcc('X', 'V', 'I', 'D'), \
                      FRAME_RATE, \
                      (width, height), \
                      True)  #isColor=True for color

#Window preparation
cv2.namedWindow('frame')
cv2.namedWindow('gray')
cv2.namedWindow('hsv')
cv2.namedWindow('mask')
cv2.namedWindow('res')
cv2.namedWindow('gaussian')

while ret:
    #ret, frame = cap.read()

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Gaussian smoothing
    # (15, 15) is the kernel size (degree of blur); the third argument is sigmaX,
    # and 0 means OpenCV derives the standard deviation from the kernel size
    g_frame = cv2.GaussianBlur(frame, (15, 15), 0)
    gg_frame = cv2.cvtColor(g_frame, cv2.COLOR_BGR2GRAY)

    # define range of blue color in HSV
    lower_blue = np.array([110,50,50])
    upper_blue = np.array([130,255,255])

    # Threshold the HSV image to get only blue colors
    mask = cv2.inRange(hsv, lower_blue, upper_blue)

    # Bitwise-AND: mask and original image
    res = cv2.bitwise_and(frame,frame, mask= mask)
    cv2.imshow('frame',frame)
    cv2.imshow('gray',gray)
    cv2.imshow('hsv',hsv)
    cv2.imshow('mask',mask)
    cv2.imshow('res',res)
    cv2.imshow('gaussian',g_frame)
    # Write the blurred (still color) frame to the video file.
    # For color output create VideoWriter with isColor=True; use False for grayscale frames.
    out.write(g_frame)  # works with cv_fourcc('X', 'V', 'I', 'D')
    # Stop if any key is pressed
    if cv2.waitKey(1000*5) >= 0:  
        break
    ret, frame = cap.read()
cap.release()
out.release()
cv2.destroyAllWindows()

An execution example is shown below (as a still image):

2020-02-08-215532_1920x1080_scrot.png

At least this worked, so OpenCV is usable.
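
Incidentally, newer OpenCV versions ship a built-in helper that replaces the hand-rolled cv_fourcc above; a minimal sketch of the same writer setup (my addition, behavior unchanged):

import cv2

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
height, width = frame.shape[:2]
# VideoWriter_fourcc packs the four characters exactly like cv_fourcc above
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('douga_camera.avi', fourcc, 30, (width, height), True)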

・ Environment construction; Japanese input

Trying object detection would be the natural next step here, but this time I want to install a conversation app, so first I want to enable Japanese input. Assuming the Japanese localization itself was already completed the night before, follow Reference 3: "By default, the keyboard of the Raspberry Pi is in US layout, so we will switch it to JIS layout." Although, if anything, the layout already felt like JIS to me...

$ sudo raspi-config

In the raspi-config menu, proceed as follows:

1. Select "4 Internationalization Options" (move with the ↑/↓ keys) and press Enter
2. Select "I3 Change Keyboard Layout" and press Enter
3. Select "Generic 105-key (Intl) PC" and press Enter
4. Select "Other" and press Enter
5. Select "Japanese" and press Enter
6. Select "Japanese - Japanese (OADG 109A)" and press Enter
7. Select "The default for the keyboard layout" and press Enter
8. Select "No compose key" and press Enter

You will return to the first screen; select Finish and press Enter to exit (move the cursor with the ←/→ keys). Then update once here:

$ sudo apt-get update
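
Alternatively (my addition, not part of Reference 3), the same JIS layout can be set non-interactively by editing /etc/default/keyboard and reapplying the keyboard configuration:


$ sudo sed -i 's/XKBLAYOUT=.*/XKBLAYOUT="jp"/' /etc/default/keyboard
$ sudo dpkg-reconfigure -f noninteractive keyboard-configuration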

Then, once you can type commands again, install Japanese keyboard input:

$ sudo apt-get install -y uim uim-anthy

After a restart, the layout and other settings will take effect.

$ sudo reboot

After restarting, the keyboard should now type in JIS layout → but isn't the key layout still US?

Japanese fonts and IME installation

Install the Japanese fonts and IME according to the following reference procedure.

【Reference】
Raspberry Pi Japanese Input Settings

Since the display and so on are already in Japanese, installing only the IME might have been enough for this step, but I installed the whole set just in case.

$ sudo apt-get update

Installation of Japanese fonts

$ sudo apt-get install fonts-vlgothic

The following is the essential Japanese IME installation (an IME different from the one above).

$ sudo apt-get install ibus-anthy

When asked (y/n) along the way, type y and press Enter.
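
As with the uim-anthy command earlier, adding the -y flag skips this prompt:


$ sudo apt-get install -y ibus-anthy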

Finally reboot to enable Japanese display.

$ sudo reboot

The result looks like the image below. You can choose English, Japanese, etc. at the right end of the upper bar, so for the time being Japanese input now works. However, left as it is, you have to select the switch from that bar every time.

Enlarged view of the right end (cropped with Pinta): pinta_selectArea.jpg

2020-02-09-221858_1920x1080_scrot.png

One-shot switching key

As on Jetson-nano, I found the one-shot switching key with the following procedure. That is, with the settings below, you can switch between direct (alphanumeric) input and Japanese automatically with the Hankaku/Zenkaku (half-width/full-width) key. The input has also returned to the Japanese keyboard layout.

2020-02-09-223602_896x434_scrot.png

Hmm... With this, the key input settings and the IME above may not have been necessary.

Summary

・ I was able to install OpenCV / Tensorflow successfully.
・ Japanese input (105-key keyboard layout) is now possible.

・ Install the conversation app
・ I want to add voice input and complete a voice conversation system
・ Maybe I will make a surveillance camera... "You there, put on a mask"
