- Pimoroni's Pan-Tilt HAT Full Kit w/ Micro Servos was purchased
  - https://shop.pimoroni.com/products/pan-tilt-hat
  - https://www.adafruit.com/products/3353 - Pimoroni Pan-Tilt HAT for Raspberry Pi
  - https://www.adafruit.com/products/1967 - Mini Pan-Tilt Kit - Assembled with Micro Servos
- Attach the Raspberry Pi Camera Module V2.1 and run the sample face tracking demo.
  - https://github.com/pimoroni/PanTiltFacetracker
- (I didn't buy the NeoPixel stick.)
- Easy to control from Python
```shell-session:Execution example: use pan() and tilt(), specifying -90 to +90 degrees
pi@raspberrypi:~ $ python
Python 2.7.9 (default, Sep 17 2016, 20:26:04)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from pantilthat import *
>>> angle_pan = 15    # angle: -90 to +90
>>> angle_tilt = -30  # angle: -90 to +90
>>> pan(angle_pan)
>>> tilt(angle_tilt)
>>> quit()
pi@raspberrypi:~ $
```
## Environment
- Raspberry Pi 2 / 3
- Camera Module V2.1 (V1.3 also worked)
- Raspbian: `2017-01-11-raspbian-jessie.img` (not the Lite image, since OpenCV displays the camera image in a window)
## Procedure
1. Assemble the Pan-Tilt Kit: [Assembling Pan-Tilt HAT](https://learn.pimoroni.com/tutorial/sandyj/assembling-pan-tilt-hat) <br> You should be able to manage just by following the pictures.
- Fix the Pan-Tilt module to the HAT board
  - The black nylon screws are long, so it's a good idea to trim the excess off the nylon bolts with a pair of scissors or tin snips.
- Insert the servo motor wires into the connectors on the back (GND = brown, 5V = red, PWM = yellow)
  - Pan (horizontal movement) on channel 1 (SERVO1)
  - Tilt (vertical movement) on channel 2 (SERVO2)
- Assemble the camera mounting plate
  - The notch on the front edge differs between camera V1.3 and V2.1.
  - The white screws are also on the long side, so again trim off the excess nylon bolt with scissors or tin snips.
- Attach the camera mount to the Pan-Tilt module
  - The camera cable comes out at the top
- NeoPixel stick: I didn't buy it, so skip this step.
- Insert the camera cable all the way
  - Lift the latch (the brown or black part) of the camera connector on the Raspberry Pi by about 1 mm. It will break if you force it.
- Plug in the camera cable
  - Lift the latch, push the cable in, and it is held in place.
- Attach the HAT to the Raspberry Pi's 40-pin header
2. Prepare the system for first boot (the usual steps)
- Raspbian
- `sudo apt-get update && sudo apt-get upgrade -y` # update packages
3. Install the Pan-Tilt related packages <br> `curl -sS https://get.pimoroni.com/pantilthat | bash` <br> When prompted, answer `y [Enter]`.
4. Enable the camera module <br> `sudo raspi-config nonint do_camera 0` # 0: enable / 1: disable
5. `sudo reboot` # reboot
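After the reboot, it's worth confirming that the camera module itself works before moving on. This is just a quick sanity check of my own (not part of the original steps), using the picamera module:

```py
# Quick camera check (my addition): capture one still image with picamera.
# If the camera is enabled and the cable is seated, /tmp/test.jpg should appear.
from picamera import PiCamera
import time

camera = PiCamera()
try:
    time.sleep(2)                    # give the sensor a moment to warm up
    camera.capture('/tmp/test.jpg')  # writes a JPEG to /tmp/test.jpg
    print('Captured /tmp/test.jpg')
finally:
    camera.close()
```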
6. Check that the Pan-Tilt module works
- Open a terminal
- Start Python with `python [Enter]`
- Then paste in the following:
```py
from pantilthat import *
angle_pan = 15    # angle: -90 to +90
angle_tilt = -30  # angle: -90 to +90
pan(angle_pan)
tilt(angle_tilt)
quit()
```
- Does it move? Changing the numbers changes the direction. With angle_pan = 0 and angle_tilt = 0 it should face roughly straight ahead. (A small sweep example follows below.)
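To exercise both channels a bit more, something like the sweep below can be pasted into the same Python session. This is my own sketch and only uses the pan()/tilt() calls already shown above; the step size and delay are arbitrary.

```py
# Sweep the pan servo across its range, then centre both axes.
import time
from pantilthat import pan, tilt

for angle in list(range(-90, 91, 10)) + list(range(90, -91, -10)):
    pan(angle)        # SERVO1 = pan (horizontal)
    time.sleep(0.05)

pan(0)    # back to centre
tilt(0)   # SERVO2 = tilt (vertical)
```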
### Face tracking demo program
https://github.com/pimoroni/PanTiltFacetracker
1. Install required packages <br> `sudo apt-get install python-smbus python-opencv opencv-data`
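The demo expects the LBP face cascade from `opencv-data` under `/usr/share/opencv/lbpcascades/`. A quick check that it is actually there (my addition, not part of the original instructions):

```py
# Confirm OpenCV can load the LBP frontal-face cascade installed by opencv-data.
import cv2

cascPath = '/usr/share/opencv/lbpcascades/lbpcascade_frontalface.xml'
cascade = cv2.CascadeClassifier(cascPath)
print('cascade loaded:', not cascade.empty())   # should print: cascade loaded: True
```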
2. Clone the repository and run the demo:
```bash
git clone https://github.com/pimoroni/PanTiltFacetracker
cd PanTiltFacetracker
./facetracker_lbp.py
```
- A window opens and the camera image is displayed.
- When a face is detected, the Pan-Tilt module moves to bring the face to the center.
- Log of the install script:
```shell-session:
pi@raspberrypi:~ $ curl -sS https://get.pimoroni.com/pantilthat | bash
This script will install everything needed to use
Pan-Tilt HAT
Always be careful when running scripts and commands
copied from the internet. Ensure they are from a
trusted source.
If you want to see what this script does before
running it, you should run:
\curl -sS https://get.pimoroni.com/pantilthat
Note: Pan-Tilt HAT requires I2C communication
Do you wish to continue? [y/N] y
Checking environment...
Updating apt indexes...
.........
Reading package lists...
.........
Checking hardware requirements...
I2C must be enabled for Pan-Tilt HAT to work
I2C is now enabled
Checking packages required by I2C interface...
Pan-Tilt HAT comes with examples and documentation that you may wish to install.
Performing a full install will ensure those resources are installed,
along with all required dependencies. It may however take a while!
Do you wish to perform a full install? [y/N] y
Checking install requirements...
Checking for dependencies...
Installing python-pantilthat...
install ok installed
Installing python3-pantilthat...
install ok installed
Checking for additional software...
python-picamera is already installed
python3-picamera is already installed
Downloading examples and documentation...
Resources for your Pan-Tilt HAT were copied to
/home/pi/Pimoroni/pantilthat
All done!
Enjoy your Pan-Tilt HAT!
pi@raspberrypi:~ $ sudo reboot
```
- The colors look wrong?
  - Red objects show up blue, and blue objects show up red. Is the RGB channel order wrong? Can the format be set through v4l2 from OpenCV (CV_CAP_PROP_FORMAT)? I couldn't work out how to fix it, so I changed the code to capture with Python's picamera instead. (An alternative channel-swap idea is sketched below.)
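For reference, if you stay with the original OpenCV capture, swapping the R and B channels on each frame with `cv2.cvtColor` should in principle also fix the colours. I haven't verified this on the Pan-Tilt setup; it's just the standard conversion call:

```py
# Untested alternative to switching to picamera: swap R and B on each frame.
import cv2
import numpy as np

frame = np.zeros((112, 192, 3), dtype=np.uint8)  # stand-in for a captured frame
frame[:, :, 0] = 255                             # "red" if the data is RGB-ordered
fixed = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)   # reorder to the BGR that OpenCV expects
print(fixed[0, 0])                               # [  0   0 255] = red in BGR
```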
- `sudo apt-get install python-picamera`
- The code, hacked up to use picamera:
```py
#!/usr/bin/env python
## http://qiita.com/mt08/items/97d89a09e5129e10f88c
from picamera.array import PiRGBArray
from picamera import PiCamera
import cv2, sys, time, os
from pantilthat import *

# Frame Size. Smaller is faster, but less accurate.
# Wide and short is better, since moving your head
# vertically is kinda hard!
FRAME_W = 192
FRAME_H = 112

# Default Pan/Tilt for the camera in degrees.
# Camera range is from -90 to 90
cam_pan = 90
cam_tilt = 60

# Set up the CascadeClassifier for face tracking
#cascPath = 'haarcascade_frontalface_default.xml' # sys.argv[1]
cascPath = '/usr/share/opencv/lbpcascades/lbpcascade_frontalface.xml'
faceCascade = cv2.CascadeClassifier(cascPath)

# Set up the capture with our frame size
camera = PiCamera()
camera.resolution = (FRAME_W, FRAME_H)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(FRAME_W, FRAME_H))
time.sleep(0.1)

# Turn the camera to the default position
pan(cam_pan-90)
tilt(cam_tilt-90)
light_mode(WS2812)

def lights(r, g, b, w):
    for x in range(18):
        set_pixel_rgbw(x, r if x in [3, 4] else 0, g if x in [3, 4] else 0, b, w if x in [0, 1, 6, 7] else 0)
    show()

lights(0, 0, 0, 50)

for image in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    frame = image.array

    # This line lets you mount the camera the "right" way up, with neopixels above
    frame = cv2.flip(frame, -1)

    # Convert to greyscale for detection
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)

    # Do face detection
    faces = faceCascade.detectMultiScale(gray, 1.1, 3, 0, (10, 10))

    # Slower method
    '''faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.1,
        minNeighbors=4,
        minSize=(20, 20),
        flags=cv2.cv.CV_HAAR_SCALE_IMAGE | cv2.cv.CV_HAAR_FIND_BIGGEST_OBJECT | cv2.cv.CV_HAAR_DO_ROUGH_SEARCH
    )'''

    lights(50 if len(faces) == 0 else 0, 50 if len(faces) > 0 else 0, 0, 50)

    for (x, y, w, h) in faces:
        # Draw a green rectangle around the face
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)

        # Track first face: get the center of the face
        x = x + (w/2)
        y = y + (h/2)

        # Correct relative to center of image
        turn_x = float(x - (FRAME_W/2))
        turn_y = float(y - (FRAME_H/2))

        # Convert to percentage offset
        turn_x /= float(FRAME_W/2)
        turn_y /= float(FRAME_H/2)

        # Scale offset to degrees
        turn_x *= 2.5 # VFOV
        turn_y *= 2.5 # HFOV
        cam_pan += -turn_x
        cam_tilt += turn_y

        print(cam_pan-90, cam_tilt-90)

        # Clamp Pan/Tilt to 0 to 180 degrees
        cam_pan = max(0, min(180, cam_pan))
        cam_tilt = max(0, min(180, cam_tilt))

        # Update the servos
        pan(int(cam_pan-90))
        tilt(int(cam_tilt-90))

        break

    frame = cv2.resize(frame, (540, 300))
    frame = cv2.flip(frame, 1)

    # Display the image, with rectangle, on the Pi desktop
    cv2.imshow('Video', frame)

    key = cv2.waitKey(1) & 0xFF

    # Clear the stream in preparation for the next frame
    rawCapture.truncate(0)

    # If the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

# When everything is done, release the capture
cv2.destroyAllWindows()
```
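As a quick sanity check of the tracking math in the loop above: with FRAME_W = 192, a face centred at x = 144 sits 48 px right of the frame centre, an offset of 48 / 96 = 0.5 of the half-width, which the 2.5 scale factor turns into a 1.25 degree pan correction. The same arithmetic in isolation:

```py
# The offset-to-degrees conversion from the loop above, for one example face position.
FRAME_W = 192
x = 144                            # face centre, 48 px right of the frame centre
turn_x = float(x - (FRAME_W / 2))  # 48.0 px offset from centre
turn_x /= float(FRAME_W / 2)       # 0.5: fraction of the half-width
turn_x *= 2.5                      # 1.25 degrees of pan correction for this frame
print(turn_x)                      # 1.25
```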
## TODO
- Post a screenshot or a video of it moving?
- Support for the Laughing Man mark?
- Detect multiple faces and track the largest one? (One possible approach is sketched below.)
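For that last item, one possible approach (a sketch only, not wired into the demo) is to sort the rectangles returned by `detectMultiScale()` by area and track only the biggest one:

```py
# Possible approach to the last TODO: pick the largest detected face by bounding-box area.
def largest_face(faces):
    """faces: iterable of (x, y, w, h) rectangles from detectMultiScale()."""
    if len(faces) == 0:
        return None
    return max(faces, key=lambda f: f[2] * f[3])  # w * h

print(largest_face([(10, 10, 20, 20), (5, 5, 50, 40)]))  # -> (5, 5, 50, 40)
```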