**To keep the environment clean.**
Without Docker, Python 3.7 and Python 3.8 were mixed together in my local environment, and it was already a mess. It is also great that the development environment and the production environment can be matched: errors often occur because the environment differs each time you deploy, and Docker prevents this. Docker is also better than VirtualBox because it starts up much faster. Since I started using Docker, my peace of mind during development has improved considerably. It feels like studying at a tidy desk.
This part covers just enough to play around in a sandbox, without worrying about a production environment. At first, I think this is all you need. Once you get used to it, you can move on to the advanced version below.
# Pull the base image. This is the foundation of the environment; packages are installed on top of it.
$ docker pull <image_name>
# anaconda -> most of the packages required for machine learning, such as pandas, are already installed, but the image is heavy. Keeping things light matters in production, but this is fine for local development.
$ docker pull continuumio/anaconda3
# The -it options keep the container interactive; without them, the container exits right after starting and you cannot work inside it. Appending /bin/bash at the end gives you a bash shell, i.e. a usable command line. Otherwise you end up in Python's interactive mode.
$ docker run -it continuumio/anaconda3 /bin/bash
# If you make changes inside the container, such as installing a package, save them after exiting the container. Otherwise only the initial environment will remain.
# List all containers, including ones that have exited
$ docker ps -a
#Save the container as an image
$ docker commit container_id image_name
#Check the list of saved images
$ docker images
# Start a container from the saved image
$ docker run -it image_name /bin/bash
# Mount a local directory (shared with the container). After -v, specify "local_directory:container_directory".
$ docker run -it -v /c/Users/user/Python:/home image_name /bin/bash
It may be a little difficult, but once you get used to it, you can develop more comfortably and efficiently. I remember this part taking me a long time.
The basic flow of use is: pull an image, run a container, work inside it, then commit your changes as a new image.
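As a sketch, the whole loop with the anaconda image used above looks like this (the container ID will differ on your machine, and `my-anaconda` is a hypothetical image name):

```shell
# Pull the base image
docker pull continuumio/anaconda3
# Work interactively in bash; install packages, then exit
docker run -it continuumio/anaconda3 /bin/bash
# Find the ID of the container you just exited
docker ps -a
# Save that container as a new image
docker commit <container_id> my-anaconda
# Next time, start directly from the saved image
docker run -it my-anaconda /bin/bash
```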
Here I cover a standalone Dockerfile, development with docker-compose, and a Dockerfile for the production environment. Eventually, I want to get used to developing with docker-compose. It would also be nice to be able to write a Dockerfile for the production environment.
# Instructions near the top are easier to cache, so put the ones that change frequently near the bottom.
# python:3.8-buster -> a lean image for running Python. The OS is Debian. I develop with this.
FROM python:3.8-buster
# Use an absolute path, because the directory structure differs between local and the container.
WORKDIR /app
# Update the package index and install packages in one step. apt is nicer than apt-get.
RUN apt update -y && apt upgrade -y && \
apt install -y ffmpeg
# COPY is better than ADD. This copies the file from local into the /app directory in the container.
COPY requirements.txt /app
# pip install from requirements.txt. Use pip3 with Python 3.
RUN pip3 install --upgrade pip && \
pip3 install -r requirements.txt
# The specified directory can be shared across environments.
# VOLUME /data
# You can explicitly specify the directories to copy so that only the changed parts are rebuilt. That is tedious, so I just use ".".
# COPY src /app/src
# COPY data /app/data
COPY . /app
# CMD ["/bin/bash"]
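To try the Dockerfile above, a typical build-and-run pair looks like the following sketch (the tag `vad-dev` is a hypothetical name):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t vad-dev .
# Open a bash shell in it, mounting the current directory as /app
docker run -it -v "$(pwd)":/app vad-dev /bin/bash
```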
By using docker-compose, you can run multiple services at the same time: not only Python execution, but also a database such as postgres, or jupyter. Being able to share resources between them is also nice. A code example is below.
docker-compose.yml
version: '3'
services:
notebook:
# image: jupyter/datascience-notebook -> not required, because the base image is specified in the Dockerfile.
# If you do not specify container_name, one is generated automatically, which makes the container harder to identify later.
container_name: vad-sandbox
ports:
- "8888:8888"
# "." builds from the Dockerfile in the same directory.
build: .
#Specify the directory to mount.
volumes:
- .:/app
working_dir: /app
# command runs automatically at startup. I run it by hand instead, because it is easier to follow.
# command: jupyter lab --ip=0.0.0.0 --allow-root
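With the compose file above, day-to-day usage would be something like the following sketch; since the command: line is commented out, jupyter is started by hand inside the container:

```shell
# Build the image and start the service in the background
docker-compose up -d --build
# Open a shell in the running notebook service
docker-compose exec notebook /bin/bash
# Inside the container, start jupyter manually:
#   jupyter lab --ip=0.0.0.0 --allow-root
# Stop and remove the containers when done
docker-compose down
```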
And here is the Dockerfile for use with docker-compose.
FROM python:3.8-buster
WORKDIR /app
RUN apt update -y && apt upgrade -y && \
apt install -y ffmpeg
COPY requirements.txt /app
RUN pip3 install --upgrade pip && \
pip3 install -r requirements.txt
# Enable ipywidgets so that jupyter can be used interactively.
RUN jupyter nbextension enable --py widgetsnbextension --sys-prefix
COPY . /app
# CMD ["/bin/bash"]
In a production environment, it is important to keep the image as small as possible. For that, a multi-stage build is used (the build environment and the runtime environment are separated).
# For Python, buster is better than alpine. C extensions cannot be built with slim.
# A multi-stage build separates the build and runtime images to minimize the image size.
# Put the layers you want cached at the top; if a layer changes, everything after it is rebuilt.
# Build stage. buster includes gcc, which is required to install webrtcvad.
FROM python:3.8-buster as builder
# Use an absolute path
WORKDIR /app
# pytorch takes a long time to install, so give it its own layer to reuse the cache.
RUN pip3 install --upgrade pip && \
pip3 install torch
# COPY is better than ADD. Copy the file to the /app directory.
COPY requirements.txt /app
RUN pip3 install --upgrade pip && \
pip3 install -r requirements.txt
# Runtime stage
FROM python:3.8-slim-buster as runner
# Copy the installed Python packages into the runtime image. This cut the image size by about 1 GB.
COPY --from=builder /usr/local/lib/python3.8/site-packages /usr/local/lib/python3.8/site-packages
# Update the package index and install packages in one step. apt is nicer than apt-get.
RUN apt update -y && apt upgrade -y && \
apt install -y ffmpeg
COPY . /app
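The runner stage above sets no WORKDIR and no startup command. A sketch of a completed runtime stage, assuming a hypothetical `main.py` entry point, might look like this:

```dockerfile
FROM python:3.8-slim-buster as runner
# Reuse the packages installed in the build stage
COPY --from=builder /usr/local/lib/python3.8/site-packages /usr/local/lib/python3.8/site-packages
RUN apt update -y && apt upgrade -y && \
    apt install -y ffmpeg
# Set the working directory before copying the source
WORKDIR /app
COPY . /app
# Hypothetical entry point
CMD ["python3", "main.py"]
```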
Because Docker is highly abstract, it is bound to be confusing at first glance, but once you get used to it, it becomes routine. You may feel stressed at the beginning, but hang in there. It should also be a good skill to show off when job hunting.