I studied based on "Introduction to Docker/Kubernetes Practical Container Development". The following is a summary of the minimum points worth remembering.
A container is a logical partition created on the host OS that bundles the libraries and applications an application needs to run, so that it can be used as if it were an individual server.
● Understand that "the virtualized target is different"
Container management software (Docker, etc.) virtualizes at the container level, whereas a virtual machine virtualizes the OS. A container environment needs only the host OS, while a virtual environment needs a guest OS in addition to the host OS. As a result, the container environment has less overhead than the virtual environment.
Overhead: The CPU resources, disk capacity, and memory usage required for virtualization.
Hypervisor: Software specialized in virtualization on top of hardware (on Windows: Hyper-V, etc.)
● Different usage of system resources
In a virtual environment, multiple applications usually run on the same host OS (or guest OS), so the versions of middleware and libraries must be kept consistent across that environment. In a container environment, on the other hand, each application can easily be isolated, so it is enough to manage versions within each container.
In application development, the basic flow is the waterfall of "development" → "test" → "staging" → "production release", and the environment must be kept consistent across each stage. Even if the application works well in "development" and "test", the service cannot be released unless it also matches the environment of the provider.
With container management software, "unification of the environment" is easily achieved by using a container image as a template, and changes to the environment can be handled comfortably. It gives strong continuous delivery: the same image can be deployed consistently from development through to production release.
It is also very attractive that you can focus on development without spending time on library version management and building up resources.
Staging: Before a system is released, it is deployed to an environment that is almost identical to the one that actually provides the service (the production environment) for a final check of operation and display; the term also refers to that environment itself.
Reference: https://codezine.jp/article/detail/11336
Docker
Roughly speaking, Docker by itself provides the following three functions.
Build: A function to create an image, built from infrastructure configuration written as code (a Dockerfile).
Ship: A function to share images.
Run: A function to run containers.
The following tools and engines are the components that provide the above functions.
DockerEngine: The core function for creating images, starting containers, and so on.
DockerRegistry: The registry function for publishing and sharing Docker images.
DockerCompose: A tool for centrally managing multiple container environments.
DockerMachine: A tool for automatically creating Docker execution environments, for example in cloud environments, with commands.
DockerSwarm: A tool for clustering multiple Docker hosts.
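As a rough illustration of how Build, Ship, and Run fit together, here is a minimal docker-compose.yml sketch; the image name myrepo/sample-app and the Dockerfile it builds from are assumptions for this example only.

version: '3.3'
services:
  app:
    # Build: "docker-compose build" creates an image from the Dockerfile in the current directory
    build: .
    # Ship: "docker-compose push" uploads the built image under this (hypothetical) name
    image: myrepo/sample-app:1.0
    # Run: "docker-compose up" starts a container from the image and publishes port 80
    ports:
      - "80:80"

With a file like this, docker-compose build, docker-compose push, and docker-compose up roughly correspond to Build, Ship, and Run.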
・ A virtual NIC is attached to each container.
・ When Docker is installed, the physical NIC of the host server is connected to the docker0 virtual bridge.
・ Communication between containers on a single host goes through the virtual bridge.
・ When a container is started, a private IP address is automatically assigned to the container's eth0.
・ When communicating with an external network, NAPT is used.
If you keep this picture in mind, it is easy to imagine how communication works inside and outside the container.
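As a minimal sketch of this setup (the image names and the network name backnet are arbitrary examples), a compose file that places two containers on one virtual bridge looks like the following.

version: '3.3'
services:
  app:
    image: nginx:1.17        # hypothetical web server image
    networks:
      - backnet
  cache:
    image: redis:4.0
    networks:
      - backnet
networks:
  backnet:
    driver: bridge           # a Linux bridge on the host, analogous to docker0

Each container receives a private IP address on its eth0 inside this bridge network and can reach the other by service name; traffic to external networks leaves through the host's physical NIC via NAPT.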
Namespace: Technology for partitioning containers → each container gets its own isolated view of resources such as process IDs, networks, and file systems.
Cgroups: Resource management → resources are divided into groups with parent-child relationships, and a child cannot be given settings that exceed the parent's limit. Example: by limiting resources for each group, a specific container can be prevented from exhausting resources and affecting other containers.
NIC: Network Interface Controller → Card-type expansion device for connecting devices such as computers to a communication network (LAN)
ethX: Ethernet port. Ethernet is a wired standard that supports the network interface layer of the TCP/IP protocol, which is a communication model between networks and computers.
Network bridge: A function that lets a single computer (one equipped with multiple wired LAN ports or wireless LAN adapters) be used like a hub
See here. https://qiita.com/kurkuru/items/127fa99ef5b2f0288b81
To run the web server, see, for example: https://qiita.com/mtakehara21/items/d7be42cf12772f3ada25
・ A tutorial is available at https://github.com/asashiho/dockertext2.git
docker-compose.yml
version: '3.3'   # Specify the version
services:        # List the services to configure
  # WebServer config
  webserver:
    build: .     # Build from the Dockerfile in the current directory
    ports:
      - "80:80"
    depends_on:
      - redis
  # Redis config
  redis:
    image: redis:4.0
Run the following to start the containers according to docker-compose.yml:
docker-compose up
When you access port 80 on localhost from a browser, the sample application is displayed.
Run the following to stop the containers according to docker-compose.yml:
docker-compose stop
YAML: A data format for representing structured data
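Since docker-compose.yml (and the Kubernetes manifest sketches below) are written in YAML, here is a minimal sketch of the notation; the key names are arbitrary examples.

# A mapping (key: value pairs); nesting is expressed by indentation
server:
  name: web01      # a scalar value
  ports:           # a sequence (list) is written with leading hyphens
    - 80
    - 443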
So far the Docker environment has been built on a single host machine, but if a failure occurs on that host, the service stops, and availability and redundancy cannot be guaranteed.
Docker Machine is used to address this.
Docker Machine: A command-line tool for creating Docker execution environments on host machines, in the cloud, or in virtual environments.
Availability: The degree and ability of a system to continue operating.
Redundancy: Preparing spare systems by duplicating the entire system, including the network, to improve fault tolerance.
Kubernetes
Docker containers can be set up manually when running on a single machine, such as in a development environment. However, to operate a production environment composed of multiple hosts in a cluster configuration, you need not only operations such as starting and stopping containers, but also network connectivity between hosts, storage management, and scheduling that decides which host each container runs on. Kubernetes is what provides this.
Below is a list of the servers that make up a Kubernetes cluster.
● Master server: The server for operating the containers in the Kubernetes cluster. It receives requests issued with the kubectl command and executes the processing, so the user can work as if operating on a single server.
● etcd server: A distributed key-value store that manages the cluster configuration information; the settings that make up the cluster are written here.
● Node: A server that actually runs the Docker containers. A cluster is a collection of multiple nodes. Inside a node run pods, each of which is a group of Docker containers.
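As a minimal sketch of a pod definition that a node would run (the names and the nginx image are assumptions for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: sample-pod           # hypothetical name
  labels:
    app: webserver
spec:
  containers:
    - name: nginx
      image: nginx:1.17      # hypothetical image and tag
      ports:
        - containerPort: 80

Applying this with kubectl apply -f pod.yml sends the request to the master server, which schedules the pod onto one of the nodes.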
Below is a list of the elements that make up the application
●ReplicaSet: Creates and starts a pre-specified number of pods in Kubernetes. It monitors the pods, and if a container stops due to a failure, the pod is deleted and a new pod is started.
●Deployment: Manages the history of pods and ReplicaSets. When you want to upgrade the version of a container in a pod, it can roll the update out without stopping the system.
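A minimal Deployment sketch, again using the hypothetical nginx image; the Deployment creates a ReplicaSet that keeps the specified number of pods running, and changing the image tag triggers a rolling update without stopping the system.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver-deployment  # hypothetical name
spec:
  replicas: 3                 # the ReplicaSet keeps three pods running at all times
  selector:
    matchLabels:
      app: webserver
  template:                   # pod template used to create (and recreate) pods
    metadata:
      labels:
        app: webserver
    spec:
      containers:
        - name: nginx
          image: nginx:1.17   # bump this tag to roll out a new version
          ports:
            - containerPort: 80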
The following is a list of elements that manage the network
●Service: Defined when the pods deployed in the Kubernetes cluster need to be accessed from outside.
●Label: Attached to resources to make them easy to identify and manage. For example, a Service selects the pods it routes to by their labels.
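A minimal Service sketch that exposes the pods from the Deployment sketch above; the name is hypothetical, the selector picks pods by the app: webserver label, and type NodePort makes them reachable from outside the cluster.

apiVersion: v1
kind: Service
metadata:
  name: webserver-service    # hypothetical name
spec:
  type: NodePort             # expose the service on a port of every node
  selector:
    app: webserver           # route traffic to pods carrying this label
  ports:
    - port: 80               # port inside the cluster
      targetPort: 80         # container port on the pods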