Last time, I streamed over HTTP and watched the video in a web browser. This time, we will send and receive the data at a lower layer than HTTP and display the image in a window of a local application.
For the technical background, see these earlier write-ups:

Relearning the basics of socket communication https://qiita.com/megadreams14/items/32a3eed4661e55419e1c
Stream camera footage over the network with Python https://qiita.com/tokoroten-lab/items/bb27351b393f087650a9
However, copying that code as-is does not work. First, rewrite the values in connection.ini for your own environment, then add the settings the sample may have left out. With that done, start the streaming server. As before, the server uses a RealSense camera; its IP address is acquired automatically on the server side and printed to the console (displayed on the terminal).
Here are the completed files.
connection.ini
[server]
ip = AUTO
port = 12345
[packet]
# [bytes]
header_size = 4
[camera]
id = 0
fps = 15
image_width = 1280
image_height = 960
[pixels]
image_width = 640
image_height = 480
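The server reads these values with Python's standard configparser. As a minimal sketch of how the sections above map to typed values, the ini text is inlined here for illustration; the real script reads the file from disk instead:

```python
import configparser

# Inline copy of connection.ini for illustration; the actual script
# loads it with config.read('./connection.ini', 'UTF-8')
INI_TEXT = """
[server]
ip = AUTO
port = 12345

[packet]
header_size = 4

[camera]
id = 0
fps = 15
image_width = 1280
image_height = 960

[pixels]
image_width = 640
image_height = 480
"""

config = configparser.ConfigParser()
config.read_string(INI_TEXT)

# configparser stores everything as strings, so numeric values
# must be converted explicitly (getint does the int() for you)
SERVER_PORT = config.getint('server', 'port')
HEADER_SIZE = config.getint('packet', 'header_size')
CAMERA_SIZE = (config.getint('camera', 'image_width'),
               config.getint('camera', 'image_height'))
SEND_SIZE = (config.getint('pixels', 'image_width'),
             config.getint('pixels', 'image_height'))

print(SERVER_PORT, HEADER_SIZE, CAMERA_SIZE, SEND_SIZE)
# → 12345 4 (1280, 960) (640, 480)
```

Note that the camera captures at 1280x960 but the image is resized down to 640x480 before transmission, which keeps each JPEG packet small.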
streaming_server.py
# coding:utf-8
import socket
import numpy as np
import cv2
import time
import configparser
from realsensecv import RealsenseCapture
import getIpAddress as gIA

config = configparser.ConfigParser()
config.read('./connection.ini', 'UTF-8')

# Overall settings
FPS = int(config.get('camera', 'fps'))
INDENT = '    '

# Camera settings
CAMERA_ID = int(config.get('camera', 'id'))
CAMERA_FPS = FPS
CAMERA_WIDTH = int(config.get('camera', 'image_width'))
CAMERA_HEIGHT = int(config.get('camera', 'image_height'))

# Image settings
IMAGE_WIDTH = int(config.get('pixels', 'image_width'))
IMAGE_HEIGHT = int(config.get('pixels', 'image_height'))
IMAGE_QUALITY = 30

# Apply camera settings
try:
    cam = RealsenseCapture()
    # Property settings
    cam.WIDTH = CAMERA_WIDTH
    cam.HEIGHT = CAMERA_HEIGHT
    cam.FPS = CAMERA_FPS
    # Unlike cv2.VideoCapture(), don't forget to call cam.start()
    cam.start()
    CAMERA_ID = "RealsenseCapture"
except Exception:
    # Fall back to an ordinary camera if the RealSense is unavailable.
    # Note: the main loop below expects RealsenseCapture's read() format
    # (a tuple of color and depth frames), so this path needs adjusting
    # before it can be used as-is.
    cam = cv2.VideoCapture(CAMERA_ID, cv2.CAP_V4L)
    cam.set(cv2.CAP_PROP_FRAME_WIDTH, CAMERA_WIDTH)
    cam.set(cv2.CAP_PROP_FRAME_HEIGHT, CAMERA_HEIGHT)
    cam.set(cv2.CAP_PROP_FPS, CAMERA_FPS)
    if not cam.isOpened():  # Determine if video capture is possible
        print("Not Opened Video Camera")
        exit()

# Camera information display
print('Camera {')
print(INDENT + 'ID    : {},'.format(CAMERA_ID))
print(INDENT + 'FPS   : {},'.format(CAMERA_FPS))
print(INDENT + 'WIDTH : {},'.format(CAMERA_WIDTH))
print(INDENT + 'HEIGHT: {}'.format(CAMERA_HEIGHT))
print('}')

# Server settings
SERVER_IP = str(config.get('server', 'ip'))
SERVER_PORT = int(config.get('server', 'port'))
if SERVER_IP == "AUTO":
    # Prefer the wireless interface, fall back to wired
    HOST_wlan0 = None
    HOST_eth0 = None
    print([d.get('name') for d in gIA.get_ip()])
    for k in gIA.get_ip():
        if k.get('name') == 'wlan0':
            HOST_wlan0 = k['address']
        elif k.get('name') == 'eth0':
            HOST_eth0 = k['address']
    if HOST_wlan0 is not None:
        SERVER_IP = HOST_wlan0
    else:
        SERVER_IP = HOST_eth0
if SERVER_IP == "AUTO" or SERVER_IP is None:
    print("SERVER_IP is {}".format(SERVER_IP))
    cam.release()
    exit()

# Packet settings
HEADER_SIZE = int(config.get('packet', 'header_size'))

# Client connection listen
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((SERVER_IP, SERVER_PORT))
s.listen(1)
soc, addr = s.accept()

print('Server {')
print(INDENT + 'IP   : {},'.format(SERVER_IP))
print(INDENT + 'PORT : {}'.format(SERVER_PORT))
print('}')

# Client information display
print('Client {')
print(INDENT + 'IP   : {},'.format(addr[0]))
print(INDENT + 'PORT : {}'.format(addr[1]))
print('}')

# Main loop
while True:
    loop_start_time = time.time()
    # Image data creation for transmission
    flag, frames = cam.read()
    color_frame = frames[0]
    depth_frame = frames[1]
    resized_img = cv2.resize(color_frame, (IMAGE_WIDTH, IMAGE_HEIGHT))
    (status, encoded_img) = cv2.imencode('.jpg', resized_img,
                                         [int(cv2.IMWRITE_JPEG_QUALITY), IMAGE_QUALITY])
    # Packet construction: fixed-size length header + JPEG body
    packet_body = encoded_img.tobytes()  # tostring() is deprecated in NumPy
    packet_header = len(packet_body).to_bytes(HEADER_SIZE, 'big')
    packet = packet_header + packet_body
    # Packet transmission
    try:
        soc.sendall(packet)
    except socket.error:
        print('Connection closed.')
        break
    # FPS control
    time.sleep(max(0, 1 / FPS - (time.time() - loop_start_time)))

soc.close()
s.close()
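The wire format of this server is simple length-prefix framing: every JPEG frame is preceded by a header_size-byte (here 4-byte) big-endian length, as configured in connection.ini. A minimal sketch of the encode/decode pair, independent of the camera code:

```python
HEADER_SIZE = 4  # must match header_size in connection.ini

def frame_packet(body: bytes) -> bytes:
    """Prefix a payload with its length as a big-endian header."""
    return len(body).to_bytes(HEADER_SIZE, 'big') + body

def parse_packet(packet: bytes) -> bytes:
    """Recover the payload; raise if the length header disagrees."""
    body_len = int.from_bytes(packet[:HEADER_SIZE], 'big')
    body = packet[HEADER_SIZE:]
    if len(body) != body_len:
        raise ValueError('truncated packet')
    return body

# Round-trip check with a stand-in for a JPEG body
jpeg = b'\xff\xd8 fake jpeg bytes \xff\xd9'
assert parse_packet(frame_packet(jpeg)) == jpeg
```

Because TCP is a byte stream with no message boundaries, this header is what lets the receiver know where one JPEG frame ends and the next begins.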
Place these files in any folder along with the realsensecv.py and getIpAddress.py files used last time, and run the server with python3.
$ python3 streaming_server.py
pipline start
Camera {
ID : RealsenseCapture,
FPS : 15,
WIDTH : 1280,
HEIGHT: 960
}
['eth0', 'l4tbr0', 'wlan0']
With this output, the server waits for a client to connect.
On the client side, run the script on another PC or Mac. The connection.ini is identical to the one on the server. The client script is the same as last time except that img.tostring() on line 83 was changed to img.tobytes(), as follows.
streaming_client.py changes
82: #Convert image to binary
83: img = img.tobytes()
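On the receiving side, the client must read exactly HEADER_SIZE bytes for the length header and then exactly that many body bytes, because a single recv() call may return less data than requested. The function names below are mine, not from the original client script; this is a sketch of that receive loop, exercised over an in-process socket pair:

```python
import socket

HEADER_SIZE = 4  # must match header_size in connection.ini

def recv_exact(soc: socket.socket, n: int) -> bytes:
    """Read exactly n bytes; recv() may return partial data per call."""
    buf = b''
    while len(buf) < n:
        chunk = soc.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('socket closed mid-frame')
        buf += chunk
    return buf

def recv_frame(soc: socket.socket) -> bytes:
    """Read one length-prefixed frame (header + JPEG body)."""
    header = recv_exact(soc, HEADER_SIZE)
    body_len = int.from_bytes(header, 'big')
    return recv_exact(soc, body_len)

if __name__ == '__main__':
    # Exercise the framing locally without a camera or network
    a, b = socket.socketpair()
    payload = b'\xff\xd8 fake jpeg \xff\xd9'
    a.sendall(len(payload).to_bytes(HEADER_SIZE, 'big') + payload)
    print(recv_frame(b) == payload)  # → True
    a.close()
    b.close()
```

In the real client, each frame returned by recv_frame would then be decoded with cv2.imdecode(np.frombuffer(body, dtype=np.uint8), cv2.IMREAD_COLOR) before being handed to the GUI.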
If it runs successfully on the client side, an application window appears as shown below.
If you close the application window on the client side, the program on the server side also stops. Also, unlike watching video in a browser, the socket communication in this script does not appear to support simultaneous access from many clients.
This shows that with socket programming, an application can receive the data and display it in its own window instead of a browser. That is fine for one-to-one communication, but how should socket programming be structured for one-to-many or many-to-one streaming? Incidentally, the client uses Kivy for the GUI display. This was my first time using Kivy, and the window opened faster than with the Tcl/Tk library, so I would like to consider Kivy for future GUI development as well.

Following the socket programming references, I succeeded with one-to-one communication. Next, I would like to investigate ways to realize one-to-many and many-to-one communication and extend the functionality.