I built a web API with Flask, but when I tried to share the environment as a Docker image (about 1.4 GB), I got some cold looks. So in this article I'll write about how to share a Python environment as a Dockerfile instead.
Docker is a convenient service that lets you easily share a Linux OS and install whatever environment you want on top of it. Think of it as software that handles OS-level virtual environments. See the official documentation for installation.

Running docker run ~ creates a docker container from a docker image, which is an OS snapshot of roughly 1-2 GB. A container is like a virtual environment (OS). Once the container is created, you've basically won.

So how do you share this environment? You could simply share the docker image itself, and the other party could run a container from it. But depending on the packages in the environment, the image can be huge, which is inconvenient. ***Therefore, use a Dockerfile.*** A Dockerfile is a text file that acts as a blueprint for a docker image, and it is tiny. Here's an example:
FROM ubuntu:18.04
# install python
RUN apt-get update
RUN apt-get install -y python3 python3-pip
# setup directory
RUN mkdir /api
ADD . /api
WORKDIR /api
# install py
RUN pip3 install -r requirements.txt
This creates an Ubuntu 18.04 environment with Python 3 and the necessary packages, in just 8 lines! You share the Dockerfile with the other party, they create a Docker image with docker build ~, then launch a container from that image with docker run ~ and enter the virtual OS. Nice!
At this point beginners, myself included, tend to think "Docker is amazing!" Everyone does. But even with the OS environment in place, what actually matters are the API files (and, for machine learning, things like the network model files); if those aren't placed inside Docker, the environment is useless. So let's check the directory structure we want to share.

kyoyu/
├ data/
├ ganbari/
├ api_server.py
├ gomi/
├ .dockerignore
├ requirements.txt
└ Dockerfile

As an example, consider this directory structure, and suppose we share the kyoyu directory with the other party via GitHub or similar. Place the Dockerfile inside the shared directory. data, ganbari and api_server.py are the data files and code you want to use in the environment. The gomi directory holds things the Docker environment doesn't need (specs in .md files and so on).
As explained later, the Dockerfile contains a command that copies the contents of the kyoyu directory into Docker. At that point, you can tell Docker not to bother copying unnecessary directories. That's what .dockerignore is for. Write it like this in a text file:
.dockerignore
gomi/*
Now nothing under the gomi directory will be placed in Docker.
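As an aside, .dockerignore supports glob patterns and comments, much like .gitignore. A slightly fuller sketch (the extra entries are hypothetical examples of common exclusions, not part of this project):

```text
# exclude everything under gomi/ from the build context
gomi/*
# hypothetical extras often worth excluding:
__pycache__/
*.pyc
.git/
```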
List the Python packages you want to use in the environment. (Example)
requirements.txt
absl-py==0.9.0
alembic==1.4.2
astor==0.8.1
attrs==19.3.0
backcall==0.2.0
bcolz==1.2.1
bleach==3.1.5
Bottleneck==1.3.2
certifi==2020.6.20
chardet==3.0.4
click==7.1.2
It is a text file listing each package and its version, like this. Creating it by hand sounds tedious, but
/kyoyu$ pip freeze > requirements.txt
generates the file in one shot.
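To make the pinned "package==version" format above concrete, here is a small Python sketch that parses such lines. This is just for illustrating the format (it is not an official pip API); the sample lines mirror the example above.

```python
def parse_requirement(line):
    """Split one 'name==version' line into a (name, version) pair."""
    line = line.strip()
    if not line or line.startswith("#"):
        return None  # skip blank lines and comments
    name, _, version = line.partition("==")
    return (name, version)

sample = """\
absl-py==0.9.0
click==7.1.2
"""

pairs = [p for p in (parse_requirement(l) for l in sample.splitlines()) if p]
print(pairs)  # [('absl-py', '0.9.0'), ('click', '7.1.2')]
```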
For full details on how to write a Dockerfile, please look it up online; this time I'll just explain the general idea. A Dockerfile is a text file with no extension (on Windows it is apparently recognized as .dockerfile).
FROM ubuntu:18.04
# install python
RUN apt-get update
RUN apt-get install -y python3 python3-pip
# setup directory
RUN mkdir /api
ADD . /api
WORKDIR /api
# install py
RUN pip3 install -r requirements.txt
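The Dockerfile above only prepares the environment; if you also want the API to start automatically when the container runs, you could append something like the following. (This is a hedged sketch: it assumes api_server.py, the example file from this article, listens on port 5000 — adjust both to your actual entry point.)

```dockerfile
# document the port the API listens on (assumption: 5000)
EXPOSE 5000
# start the API when the container starts (api_server.py is this article's example file)
CMD ["python3", "api_server.py"]
```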
The general idea is: decide the base OS with FROM, then set up the environment with ordinary Linux commands.
Specify the base Docker image with the FROM instruction. You might think, "Wait, wasn't the Dockerfile the blueprint for the Docker image?!" — but here you can build on Docker images registered on a service called Docker Hub. All sorts of images are registered there, and you can usually find what you need without writing your own Dockerfile from scratch. (In fact, official Python 3 images already exist, but this time I install Python with commands for the sake of explanation.)
With RUN you can write commands you would normally type in a Linux terminal. The one exception is cd, which is handled by WORKDIR, explained next.
WORKDIR changes the working directory, like the Linux cd command.
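One pitfall worth noting: each RUN line executes in a fresh shell, so a plain cd inside a RUN does not carry over to later lines. A minimal sketch of the difference:

```dockerfile
# this cd only lasts for this single RUN line...
RUN cd /api && ls
# ...so to change the working directory for all later lines, use WORKDIR
WORKDIR /api
RUN ls   # now runs inside /api
```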
Building this Dockerfile with docker build ~ creates a Docker image, and ADD lets you send files from the build environment into it.
ADD [The path of the file in the environment you are running] [The path of the directory in docker you want to send]
This time, assuming you build inside the kyoyu directory, "." (everything in the kyoyu directory) is sent to /api. ***At this point, the gomi directory specified in .dockerignore is not placed in Docker.***
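As an aside, for plain file copying like this, the official Dockerfile best practices generally recommend COPY over ADD, since ADD has extra behavior (it unpacks local tar archives and can fetch URLs). An equivalent line using COPY would be:

```dockerfile
# COPY does the same job here, without ADD's extra tar/URL behavior
COPY . /api
```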
The contents of the kyoyu directory have now been placed in the /api directory inside Docker, so the directory structure in Docker looks like this:

api/
├ data/
├ ganbari/
├ api_server.py
└ requirements.txt

Now WORKDIR /api moves into the api directory, and requirements.txt is read to install the libraries:
RUN pip3 install -r requirements.txt
That's the gist of writing a Dockerfile. There are many other useful instructions, so please look them up.
Now that we have a Dockerfile, let's create an image right away. Creating a Docker image from a Dockerfile is called a build.
/kyoyu$ docker build -t [image name] [path of the directory containing the Dockerfile]
(-t tags the image with a name.) This time we create an image named test, and since the Dockerfile is in "." (the current directory, kyoyu), the command to run is as follows.
/kyoyu$ docker build -t test .
Creating a container from a Docker image is called a run.
$ docker run -p [port accessed from outside]:[port on the container side] -it --name [container name] [image name]
(-i keeps standard input open and -t allocates a terminal, so you can work inside the container interactively; for now you can treat -it as boilerplate.) For example, suppose the API will run on port 5000 inside Docker, and you want to reach it on port 8888 of your actual machine. With the container named testcon, the command is:

$ docker run -p 8888:5000 -it --name testcon test

Note that options like --name must come before the image name, or they are passed to the container as arguments instead.
Finally, go inside the Docker container you created.
$ docker ps -a
This command lists the containers; first, make sure testcon is there.
$ docker start testcon
Now the stopped testcon container is running. The mental model: a Docker container normally sits in a stopped state, and start sets it in motion. You enter the virtual OS by attaching to the running container.
$ docker attach testcon
root@hogehoge:/api#
When the prompt turns into a string like the above, you are inside the virtual OS. It starts in /api because the Dockerfile specified WORKDIR /api.
Did you get a feel for how Docker works? This article was aimed at roughly the "I've heard of Docker!" level, so please look up the details yourself.