[PYTHON] I tried running DeepLabCut for biological tracking

Introduction

DeepLabCut is a markerless tracking tool for biological subjects that combines deep learning and image processing. It is mainly used for observing animal behavior.

DeepLabCut Paper / DeepLabCut GitHub

In this post, I built an environment so that DeepLabCut can be run locally in Docker, and confirmed that it works using an existing pretrained model.

There is also a guide on how to run it in Google Colab, so if a cloud environment works for you, that is the easier option.

Environment

I built the environment using Docker. I use an NVIDIA image that includes OpenGL as the base image. cuDNN is not included in this image, so download the .deb file from the official page in advance.

Dockerfile


FROM nvidia/cudagl:10.0-devel-ubuntu18.04

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && apt-get -y upgrade &&\
    apt-get install -y git vim byobu tree wget \
    zlib1g-dev libssl-dev libffi-dev build-essential \
    checkinstall libreadline-gplv2-dev libncursesw5-dev \
    libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev \
    python3-dev python3-pip libgtk-3-dev libnotify-dev ffmpeg &&\
    rm -rf /var/lib/apt/lists/* /var/cache/apt/archives/*

RUN mkdir workspace
WORKDIR /workspace

RUN pip3 install -U pip
RUN pip3 install -U setuptools
RUN pip3 install -U -f https://extras.wxpython.org/wxPython4/extras/linux/gtk3/ubuntu-18.04 wxPython
COPY libcudnn7_7.6.4.38-1+cuda10.0_amd64.deb /workspace
RUN dpkg -i libcudnn7_7.6.4.38-1+cuda10.0_amd64.deb && rm libcudnn7_7.6.4.38-1+cuda10.0_amd64.deb
RUN pip3 install tensorflow-gpu==1.13.1
RUN pip3 install deeplabcut

In the directory containing the Dockerfile above and the cuDNN .deb file, run the command below to build the image.

$ docker build . -t yourname/deeplabcut:latest

After the build finishes, enter the container with the following command. I recommend first creating a folder on the host for exchanging data with the container.

$ mkdir ~/docker
$ docker run -it --gpus all --net host -e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix -v ~/docker:/data \
yourname/deeplabcut:latest

Execution method

Run the full_human model from the Model Zoo.

test.py


import deeplabcut

deeplabcut.create_pretrained_project('test_project1',             # project name
                                     'yourname',                  # experimenter name
                                     ['/data/output_origin.avi'], # list of videos to process
                                     videotype='avi',             # video format
                                     model='full_human',          # Model Zoo model (the default)
                                     copy_videos=True)            # copy the source video into the project directory

When executed, a directory named [Project name]-[My name]-[Execution date] is created in the directory where the script was run. Its contents are as follows.
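As a side note, the directory naming above can be reproduced with a small sketch. The YYYY-MM-DD date format is an assumption based on DeepLabCut's usual naming, and `project_dir_name` is a hypothetical helper, not part of the DeepLabCut API:

```python
from datetime import date

def project_dir_name(project: str, experimenter: str, d: date) -> str:
    # Mirror the [Project name]-[My name]-[Execution date] pattern.
    return f"{project}-{experimenter}-{d:%Y-%m-%d}"

print(project_dir_name("test_project1", "yourname", date(2021, 1, 9)))
# → test_project1-yourname-2021-01-09
```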

.
├── config.yaml # training / analysis settings file
├── dlc-models # pretrained model files
│   └── iteration-0
│       └── test_project1Jan9-trainset95shuffle1
│           ├── test
│           │   └── pose_cfg.yaml
│           └── train
│               ├── pose_cfg.yaml
│               ├── snapshot-103000.data-00000-of-00001
│               ├── snapshot-103000.index
│               ├── snapshot-103000.meta
│               ├── snapshot-103000.pb
│               └── snapshot-103000.pbtxt
├── labeled-data
│   └── output_origin
├── training-datasets
└── videos  # output data
    ├── output_origin.avi
    ├── output_originDLC_resnet101_test_project1Jan9shuffle1_103000.csv
    ├── output_originDLC_resnet101_test_project1Jan9shuffle1_103000_filtered.csv
    ├── output_originDLC_resnet101_test_project1Jan9shuffle1_103000_filtered.h5
    ├── output_originDLC_resnet101_test_project1Jan9shuffle1_103000_filtered_labeled.mp4
    ├── output_originDLC_resnet101_test_project1Jan9shuffle1_103000.h5
    ├── output_originDLC_resnet101_test_project1Jan9shuffle1_103000_meta.pickle
    └── plot-poses
        └── output_origin
            ├── hist_filtered.png
            ├── plot_filtered.png
            ├── plot-likelihood_filtered.png
            └── trajectory_filtered.png
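The `.csv` files above hold the per-frame keypoint coordinates under a three-row header (scorer / bodyparts / coords). As a minimal sketch of how to read that layout with only the standard library, here is a hypothetical `read_dlc_csv` helper run against an inlined stand-in for the real file (the numeric values are made up for illustration):

```python
import csv
import io

# Tiny stand-in for the real output CSV: same three-row header
# (scorer / bodyparts / coords), then one row per video frame.
sample = """\
scorer,DLC_resnet101_test,DLC_resnet101_test,DLC_resnet101_test
bodyparts,nose,nose,nose
coords,x,y,likelihood
0,120.5,88.2,0.99
1,121.0,88.9,0.97
"""

def read_dlc_csv(fh):
    rows = list(csv.reader(fh))
    # Column 0 is the frame index; the rest are (bodypart, coord) pairs.
    bodyparts, coords = rows[1][1:], rows[2][1:]
    frames = []
    for row in rows[3:]:
        frame = {}
        for part, coord, value in zip(bodyparts, coords, row[1:]):
            frame.setdefault(part, {})[coord] = float(value)
        frames.append((int(row[0]), frame))
    return frames

frames = read_dlc_csv(io.StringIO(sample))
print(frames[0])  # → (0, {'nose': {'x': 120.5, 'y': 88.2, 'likelihood': 0.99}})
```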

Finally

I haven't tried the training side yet, so I'd like to try training a model as well (annotation is a lot of work).
