DeepLabCut is a markerless pose-tracking tool for biological subjects that uses deep learning and image processing. It is mainly used for observing animal behavior.
DeepLabCut Paper / DeepLabCut GitHub
In this post, I built an environment so that DeepLabCut can be used locally with Docker, and confirmed that it works with an existing pretrained model.
There is also a guide for running it in Google Colab, so if a cloud environment is an option for you, that is the easier route.
I built the environment with Docker, using an NVIDIA image that includes OpenGL (nvidia/cudagl) as the base image. cuDNN is not included in this image, so download the cuDNN deb file from the official NVIDIA page in advance.
Dockerfile
FROM nvidia/cudagl:10.0-devel-ubuntu18.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get -y upgrade &&\
apt-get install -y git vim byobu tree wget \
zlib1g-dev libssl-dev libffi-dev build-essential \
checkinstall libreadline-gplv2-dev libncursesw5-dev \
libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev \
python3-dev python3-pip libgtk-3-dev libnotify-dev ffmpeg &&\
rm -rf /var/lib/apt/lists/* /var/cache/apt/archives/*
RUN mkdir workspace
WORKDIR /workspace
RUN pip3 install -U pip
RUN pip3 install -U setuptools
RUN pip3 install -U -f https://extras.wxpython.org/wxPython4/extras/linux/gtk3/ubuntu-18.04 wxPython
COPY libcudnn7_7.6.4.38-1+cuda10.0_amd64.deb /workspace
RUN dpkg -i libcudnn7_7.6.4.38-1+cuda10.0_amd64.deb && rm libcudnn7_7.6.4.38-1+cuda10.0_amd64.deb
RUN pip3 install tensorflow-gpu==1.13.1
RUN pip3 install deeplabcut
In the directory containing the Dockerfile and the cuDNN deb file, run the command below to build the image:
$ docker build . -t yourname/deeplabcut:latest
After building, start a container with the following command. It is recommended to first create a folder on the host for exchanging data with the container. (To use the GUI tools, you may also need to allow local X server connections on the host, e.g. with xhost +local: .)
$ mkdir ~/docker
$ docker run -it --gpus all --net host -e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix -v ~/docker:/data \
yourname/deeplabcut:latest
Run the full_human model from the Model Zoo.
test.py
import deeplabcut
deeplabcut.create_pretrained_project('test_project1',                # project name
                                     'yourname',                     # experimenter name
                                     ['/data/output_origin.avi'],    # videos to process, passed as a list
                                     model='full_human',             # Model Zoo model to use (the default)
                                     videotype='avi',                # video format
                                     copy_videos=True)               # copy the source video into the project directory
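Incidentally, the videos argument just needs to be a list of file paths, so gathering every video in the shared /data folder can be sketched like this (collect_videos is a hypothetical helper, not part of DeepLabCut):

```python
from pathlib import Path

# Hypothetical helper: gather all videos with a given extension in a folder
# into the list format that create_pretrained_project expects.
def collect_videos(folder, ext="avi"):
    return sorted(str(p) for p in Path(folder).glob(f"*.{ext}"))
```

Then collect_videos('/data') would return something like ['/data/output_origin.avi'].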
When executed, a directory named [project name]-[experimenter name]-[execution date] is created in the directory where the script was run. Its contents are as follows.
.
├── config.yaml #settings file for training and analysis
├── dlc-models #the model used
│ └── iteration-0
│ └── test_project1Jan9-trainset95shuffle1
│ ├── test
│ │ └── pose_cfg.yaml
│ └── train
│ ├── pose_cfg.yaml
│ ├── snapshot-103000.data-00000-of-00001
│ ├── snapshot-103000.index
│ ├── snapshot-103000.meta
│ ├── snapshot-103000.pb
│ └── snapshot-103000.pbtxt
├── labeled-data
│ └── output_origin
├── training-datasets
└── videos #output data
├── output_origin.avi
├── output_originDLC_resnet101_test_project1Jan9shuffle1_103000.csv
├── output_originDLC_resnet101_test_project1Jan9shuffle1_103000_filtered.csv
├── output_originDLC_resnet101_test_project1Jan9shuffle1_103000_filtered.h5
├── output_originDLC_resnet101_test_project1Jan9shuffle1_103000_filtered_labeled.mp4
├── output_originDLC_resnet101_test_project1Jan9shuffle1_103000.h5
├── output_originDLC_resnet101_test_project1Jan9shuffle1_103000_meta.pickle
└── plot-poses
└── output_origin
├── hist_filtered.png
├── plot_filtered.png
├── plot-likelihood_filtered.png
└── trajectory_filtered.png
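The per-frame coordinates end up in the CSV/h5 files listed above. A DeepLabCut CSV uses a three-row header (scorer / bodyparts / coords), which can be read with pandas; here is a sketch using a tiny synthetic CSV, since the real file names depend on the run:

```python
import io

import pandas as pd

# Synthetic stand-in for a DeepLabCut output CSV, with one bodypart ("nose").
# The three header rows are: scorer, bodyparts, coords.
csv_text = (
    "scorer,DLC_demo,DLC_demo,DLC_demo\n"
    "bodyparts,nose,nose,nose\n"
    "coords,x,y,likelihood\n"
    "0,102.5,240.1,0.98\n"
    "1,103.0,241.0,0.42\n"
)

df = pd.read_csv(io.StringIO(csv_text), header=[0, 1, 2], index_col=0)
scorer = df.columns[0][0]

# Keep only frames where the detection is confident enough.
confident = df[df[(scorer, "nose", "likelihood")] > 0.9]
print(confident[(scorer, "nose", "x")].tolist())  # -> [102.5]
```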
I haven't tried the training side yet, so I'd like to try that as well (the annotation work looks tedious).