The bridge network is a network configured within a single Docker host, and is used for building relatively small network configurations.
-# Create a Docker host
% docker-machine create nw-vm1
-# Log in to the virtual machine over ssh
% docker-machine ssh nw-vm1
-# Check the existing networks
$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
94ade5c5f1ae   bridge    bridge    local
3723c4561df4   host      host      local
105e9024e5d5   none      null      local
Network behavior differs depending on the "DRIVER" type. Three networks are created by default, and containers are attached to the "bridge" network unless otherwise specified.
-# docker network inspect network name
$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "94ade5c5f1aed16a3637a2c40bed82ac50d51b8814737c5e9942d0f8e0a1fd4c",
        "Created": "2020-09-04T11:31:43.470419246Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
You can check the network's subnet and default gateway IP by looking at the Config field. Containers connected to the bridge network are assigned IPs from this subnet. The gateway IP is the IP assigned to the "docker0" interface.
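If you only need the subnet and gateway, a --format filter can pull them out of the inspect output directly. This is a small sketch using Go template syntax; the values it prints are the same ones shown in the Config field above.
-# Print only the subnet and gateway of the bridge network
$ docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'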
-# On the Docker host (connected via ssh), check the network interfaces
$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:78:70:eb brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe78:70eb/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:ae:f1:12 brd ff:ff:ff:ff:ff:ff
inet 192.168.99.101/24 brd 192.168.99.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:feae:f112/64 scope link
valid_lft forever preferred_lft forever
4: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:77:b5:fe:32 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
Here you can confirm that the IP assigned to the docker0 interface (172.17.0.1) matches the gateway shown above.
$ docker run -itd --name alpine1 alpine /bin/sh
$ docker network inspect bridge
You can see that the alpine1 container now appears in the "Containers" field with an IP assigned. On the default bridge network, containers in the same network can communicate with each other by specifying their IP addresses.
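To look at just the "Containers" field, a --format filter like the following should work (a sketch using Go template syntax; it lists each attached container's name and IPv4 address).
-# Show only the containers attached to the bridge network and their IPs
$ docker network inspect bridge --format '{{range .Containers}}{{.Name}} {{.IPv4Address}} {{end}}'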
-# Launch a container named "alpine2" based on the alpine image
$ docker run -itd --name alpine2 alpine /bin/sh
-# Attach to the container and check its IP
$ docker attach alpine2
# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
alpine2 has been assigned the IP 172.17.0.3.
# ping -w 3 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.389 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.140 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.116 ms
--- 172.17.0.2 ping statistics ---
4 packets transmitted, 3 packets received, 25% packet loss
round-trip min/avg/max = 0.116/0.215/0.389 ms
This confirms that alpine2 can communicate with alpine1 (172.17.0.2).
On the default bridge network, the Docker daemon does not provide DNS, so containers cannot reach each other by container name. This can be resolved by using a user-defined bridge network.
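You can verify this on the default bridge before creating the user-defined network. This is only a sketch: the ping by name is expected to fail with a name-resolution error, since the embedded DNS is not available on the default bridge.
-# On the default bridge, name resolution between containers does not work
$ docker attach alpine2
# ping -w 3 alpine1
-# This fails because "alpine1" cannot be resolved; detach before continuing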
-# docker network create network name
$ docker network create my_nw
This creates a user-defined bridge network named "my_nw".
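If you want to control the address range yourself, docker network create also accepts an explicit subnet and gateway. The network name my_nw2 and the addresses below are made-up example values, not part of this walkthrough.
-# Create a user-defined bridge network with an explicit subnet (example values)
$ docker network create --driver bridge --subnet 172.20.0.0/16 --gateway 172.20.0.1 my_nw2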
-# docker network connect network name container name
$ docker network connect my_nw alpine1
$ docker network connect my_nw alpine2
The commands above connect the "alpine1" and "alpine2" containers to the "my_nw" network.
-# docker run -itd --name new container name --network network name base image
$ docker run -itd --name alpine3 --network my_nw alpine
By specifying the network to connect to with the --network option, the container starts already connected to that network. Note that a container started this way is not connected to the default bridge network.
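To confirm which networks a container is attached to, a --format filter like the following should do (a sketch using Go template syntax; for alpine3 it should list only my_nw).
-# List the networks alpine3 is connected to
$ docker inspect alpine3 --format '{{range $name, $conf := .NetworkSettings.Networks}}{{$name}} {{end}}'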
-# Attach to alpine2
$ docker attach alpine2
-# Output omitted
# ping -w 3 alpine1
# ping -w 3 alpine3
This confirms that containers can communicate using container names. On a user-defined bridge network, the Docker daemon's embedded DNS resolves container names to their IPs. A container can also be connected to multiple networks at the same time.
-# docker network disconnect network name container name
$ docker network disconnect bridge alpine2
-# Check the details of alpine2
$ docker inspect alpine2
The "bridge" network is disconnected and only the "my_nw" network is connected. The bridge network can go out to the Internet, but by default it is a network that is not open to the outside world. By releasing the port specified by -p, the specified port of the container can be accessed from the outside.
The "none" network is a network whose driver is "null". A container connected to the "none" network has no network interface other than the loopback interface. To connect a container to the "none" network, it must be disconnected from all other networks.
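A quick way to see this (a sketch; the container name alpine4 is made up for this example): start a container on the "none" network and check that only the loopback interface exists.
-# Launch a container connected only to the "none" network
$ docker run -itd --name alpine4 --network none alpine /bin/sh
-# Only the loopback interface should be listed
$ docker exec alpine4 ip addr show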
The "host" network uses the "host" driver. A container connected to the host network shares the network stack of the Docker host. If you start a web server in a container on the host network, it behaves as if it were listening on port 80 of the host machine itself. Simply starting the container, without using -p, lets you reach the container's port 80 by accessing port 80 on the Docker host's IP.
-# Check the IP of the Docker host
% docker-machine ip nw-vm1
-# Connect via ssh
% docker-machine ssh nw-vm1
-# Launch nginx using the "host" network
$ docker run -d --name web --network host nginx
You can access the Docker host's IP address and confirm that nginx is running.
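For example, from the local machine (outside the VM), a request like this should return the nginx welcome page:
-# Access port 80 on the Docker host directly (no -p needed with the host network)
% curl http://$(docker-machine ip nw-vm1)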