[PYTHON] Stop sign detection: development of the visualization part (part 5) - displaying what was detected when an object is detected

Up to last time

The functions implemented so far are as follows: when an object is detected by YOLOv5 ➡ a message is sent over a socket ➡ a sound is played when the message is received (multiple sounds are supported).

Functions to be implemented this time

With the implementation so far, a detection could be signalled by sound, but the detected object itself was not shown. You could argue that simply watching the camera image already tells you what was detected, so this feature may seem somewhat pointless, but this time I wrote a program that displays an image of whatever was recognized.

Why implement this function

I want to visualize the results on the web later, so I implemented this function now so that the images can be sent over a socket at that point.

About the implementation

This time I turned the socket-communication code into a function, so I will only touch on it briefly here. The details are described in part 3 and part 4.

Socket part

Client part

detect.py


import socket

def socket1():
    #Open a TCP connection to the local socket server and send the label code '1'
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s2:
        s2.connect(('127.0.0.1', 50007))
        BUFFER_SIZE = 1024
        data1 = '1'
        s2.send(data1.encode())
        #Print the server's acknowledgement
        print(s2.recv(BUFFER_SIZE).decode())

If the number of objects to recognize increases, this can be handled by adding more of these functions, incrementing the number after "socket" and changing the value in data1 = '1' accordingly. An alternative sketch is shown below.
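Instead of adding socket2(), socket3(), and so on, the label code could also be passed as an argument. The following is only a sketch of that idea; send_detection is a hypothetical helper name, not part of the original code, and it assumes the same address and port as above.


import socket

def send_detection(code):
    #Hypothetical generalization of socket1()/socket2(): send any label code
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s2:
        s2.connect(('127.0.0.1', 50007))
        s2.send(code.encode())
        #Print the server's "Received: ..." acknowledgement
        print(s2.recv(1024).decode())

# send_detection('1')  # same effect as socket1()
# send_detection('2')  # same effect as socket2()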

server.py


#Create socket server
from playsound import playsound
import socket
# AF_INET means IPv4
# For TCP/IP, use SOCK_STREAM
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    #Specify IP address and port
    s.bind(('', 50007))
    #Accept one queued connection
    s.listen(1)
    #Wait for a client to connect
    while True:
        #When a client connects, get the connection and its address
        conn, addr = s.accept()
        with conn:
            while True:
                #Receive data
                data = conn.recv(1024)
                if not data:
                    break
                #Decode the received bytes into a string such as '1' or '2'
                data3 = data.decode()
                conn.sendall(b'Received: ' + data)
                #Play a different sound depending on which label code was sent
                if data3 == '1':
                    playsound('2.wav')
                if data3 == '2':
                    playsound('3.wav')
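Before wiring the client into detect.py, the server can be checked on its own from a second terminal. The snippet below is my own quick test, assuming server.py above is already running on the same machine and 2.wav exists next to it.


#Quick manual test for server.py (run in a separate terminal)
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect(('127.0.0.1', 50007))
    s.send(b'1')                      #should make the server play 2.wav
    print(s.recv(1024).decode())      #expect "Received: 1"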

Recognition part

detect.py


                    if label1=="cell phone":
                     print("I found a smartphone")
                     cv2.imwrite('sumaho.jpg',im0)
                     img = cv2.imread('sumaho.jpg')
                     cv2.imshow('sumaho', img)

                     socket1()
                     with open('daystext/'+str(d_today)+'.txt', 'a') as f:
                         dt_now = datetime.datetime.now()
                         f.write(str(dt_now)+"I found a smartphone"+"\n")
                    if label1=="book":
                     print("I found a book")
                     cv2.imwrite('book.jpg',im0)
                     img = cv2.imread('book.jpg')
                     cv2.imshow('book', img)
                     socket2()
                     with open('daystext/'+str(d_today)+'.txt', 'a') as f:
                         dt_now = datetime.datetime.now()
                         f.write(str(dt_now)+"I found a book"+"\n")

On the server side, additional labels can be handled by adding more if statements and changing the number compared against in data3 == '1'.
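If the number of labels keeps growing, a lookup table is tidier than a chain of if statements. This is only a sketch of an alternative for the receive loop above; the mapping and any sound files beyond 2.wav and 3.wav are placeholders.


#Alternative to the chain of if statements inside the receive loop
SOUNDS = {
    '1': '2.wav',   #cell phone
    '2': '3.wav',   #book
    # '3': '4.wav', #placeholder for the next label you add
}

sound = SOUNDS.get(data3)
if sound is not None:
    playsound(sound)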

Visualization part

img = cv2.imread('sumaho.jpg')
cv2.imshow('sumaho', img)
cv2.waitKey(1)  #refresh the window

This can be achieved by embedding this code right after the corresponding if label1 == "...": branch, changing the image file name and window name for each label.
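In other words, the pattern for any new label looks like the sketch below; the "stop sign" label and file name are placeholders, and label1 and im0 come from the detection loop in detect.py.


if label1 == "stop sign":               #placeholder label
    cv2.imwrite('stop_sign.jpg', im0)   #save the current frame
    img = cv2.imread('stop_sign.jpg')   #read it back
    cv2.imshow('stop_sign', img)        #show it in its own window
    cv2.waitKey(1)                      #let OpenCV refresh the window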

Finally

I have now written up to part 5, and apart from the recording function I have implemented almost everything I wanted to do with YOLOv5, so I will wrap up the YOLOv5 side for the time being. Next, I would like to build the web visualization part. If I come up with another function that looks feasible, I will write about it in a new article, so this concludes the continuous run of posts.
