[LINUX] Building a Kubernetes environment with Ansible 2

Introduction

In the previous article, I got as far as initializing Kubernetes with Ansible.
This time I will add k8s workers and finish building the k8s environment.

Environment

MBP OS Sierra
MAAS server (192.168.100.152 Network for MAAS: 192.168.200.1)
k8s-master server (KVM: 192.168.100.191)
ansible server (KVM: 192.168.100.192)
k8s-worker (192.168.200.151)
k8s-worker (192.168.200.153)

Dashboard v1.8

Goal

Add machines whose OS was deployed by MAAS as k8s workers on the flannel network created by k8s.
Also, make the usage status of k8s visible on the Dashboard.

MAAS server network settings

This time, because of MAAS's DHCP constraints, k8s-master and the workers sit on different networks, so a bridge has to be configured on the MAAS server.

Ubuntu 18 changed the way networking is configured compared to 16 and earlier.
Writing /etc/netplan/50-cloud-init.yaml_bk alone should have been enough, but the bridge settings would not work there, so this time I split the configuration across two kinds of files.

$ sudo vi /etc/netplan/50-cloud-init.yaml_bk
network:
    ethernets:
        enp0s31f6:
            addresses:
            - 192.168.100.152/24
            gateway4: 192.168.100.1
            nameservers:
                addresses:
                - 8.8.8.8
                - 8.8.4.4
        enp2s0:
            addresses:
            - 192.168.200.1/24
            gateway4: 192.168.100.1
            nameservers:
                addresses:
                - 8.8.8.8
                - 8.8.4.4
    version: 2
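
Note that netplan only picks up files ending in .yaml under /etc/netplan, so the _bk file above is presumably kept as a memo and is not actually read. If you do go the netplan route, the settings are applied with:

$ sudo netplan apply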

$ sudo vi /etc/network/interfaces
auto lo
iface lo inet loopback

auto enp0s31f6
iface enp0s31f6 inet manual

auto br0
iface br0 inet static
address 192.168.100.152
netmask 255.255.255.0
gateway 192.168.100.1
dns-nameservers 8.8.8.8
bridge_ports enp0s31f6
bridge_maxwait 0
bridge_fd 0
bridge_stp off


auto enp2s0
iface enp2s0 inet static
address 192.168.200.1
netmask 255.255.255.0
gateway 192.168.100.1
dns-nameservers 8.8.8.8
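
A note on my assumption of a default Ubuntu 18.04 server install: ifupdown and bridge-utils are not shipped by default, so /etc/network/interfaces and the bridge_* options above only take effect once those packages are in place. A minimal sketch:

$ sudo apt update
$ sudo apt install -y ifupdown bridge-utils   # ifupdown reads /etc/network/interfaces, bridge-utils provides the bridge_* options
$ sudo ifup br0                               # or simply reboot so all interfaces come up with the new settings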

$ ifconfig
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.100.152  netmask 255.255.255.0  broadcast 192.168.100.255
        inet6 fe80::329c:23ff:feac:5570  prefixlen 64  scopeid 0x20<link>
        ether 30:9c:23:ac:55:70  txqueuelen 1000  (Ethernet)
        RX packets 9579059  bytes 16579553543 (16.5 GB)
        RX errors 0  dropped 657286  overruns 0  frame 0
        TX packets 6047022  bytes 936298283 (936.2 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s31f6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::329c:23ff:feac:5570  prefixlen 64  scopeid 0x20<link>
        ether 30:9c:23:ac:55:70  txqueuelen 1000  (Ethernet)
        RX packets 21689196  bytes 26237413396 (26.2 GB)
        RX errors 0  dropped 475  overruns 0  frame 0
        TX packets 6555651  bytes 4057603928 (4.0 GB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 16  memory 0xdf100000-df120000

enp2s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.200.1  netmask 255.255.255.0  broadcast 192.168.200.255
        inet6 fe80::6a05:caff:fe66:a834  prefixlen 64  scopeid 0x20<link>
        ether 68:05:ca:66:a8:34  txqueuelen 1000  (Ethernet)
        RX packets 6867754  bytes 970026556 (970.0 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13304857  bytes 15246678579 (15.2 GB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 17  memory 0xdf0c0000-df0e0000

Also, set up NAT so that the two networks can communicate with each other.

sudo iptables -t nat -A POSTROUTING -s 192.168.200.0/24 -j SNAT --to 192.168.100.152
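
The SNAT rule only helps if the MAAS server actually forwards packets between the two NICs. If IP forwarding is not already enabled (my assumption about a default install; check first), it can be turned on like this:

$ sudo sysctl -w net.ipv4.ip_forward=1   # allow routing between 192.168.100.0/24 and 192.168.200.0/24
$ sudo sysctl net.ipv4.ip_forward        # should now report 1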

Add worker

Adding a k8s worker is almost the same as setting up the master, except that at the end you need the command that joins it to the flannel network.

The command is printed in the output of kubeadm init, but you can also retrieve it later with the following command.

(k8s-master)$ kubeadm token create --print-join-command
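
For reference, the printed command has roughly the following shape in this kubeadm version; the token and hash below are placeholders, not values from my cluster:

kubeadm join 192.168.100.191:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>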

Create a playbook for joining the worker, using the command shown above.

(ansible)$ sudo vi mlp.yml

- hosts: mlp01
  remote_user: ubuntu
  become: yes
  tasks:
    - name: Install prerequisites, docker.io and nfs-common
      apt: name={{ item }} update_cache=yes
      with_items:
        - nfs-common
        - apt-transport-https
        - ca-certificates
        - curl
        - software-properties-common
        - docker.io
    - name: Add ubuntu user to docker group
      user: name=ubuntu groups=docker append=yes
    - name: Add K8S GPG key
      apt_key: url=https://packages.cloud.google.com/apt/doc/apt-key.gpg
    - name: Add K8S APT repository
      apt_repository: repo="deb http://apt.kubernetes.io/ kubernetes-xenial main"
    - name: Install K8S
      apt: name={{ item }} update_cache=yes
      with_items:
        - kubelet
        - kubeadm
        - kubectl
    - name: Remove swapfile from /etc/fstab
      mount: name=swap fstype=swap state=absent
    - name: Disable swap
      command: swapoff -a
      when: ansible_swaptotal_mb > 0
    - name: Set docker service to start on boot
      service: name=docker enabled=yes
    - name: Set kubelet service to start on boot
      service: name=kubelet enabled=yes
    - name: Join k8s-master
      shell: kubeadm join 192.168.100.191:6443 ~~ # write the rest of the command output by kubeadm token create above

Edit the hosts files just as you did for the master.

$ sudo vi /etc/ansible/hosts
[master]
k8s-master

[mlp]
mlp01 

$ sudo vi /etc/hosts 
192.168.100.191 k8s-master
192.168.200.151 mlp01
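
Before running the playbook, it can be worth checking that Ansible actually reaches the new host; this optional ad-hoc ping reuses the same key and interpreter options as the playbook run below:

~/ansible$ sudo ansible mlp -m ping -u ubuntu --private-key=id_rsa_common -e 'ansible_python_interpreter=/usr/bin/python3'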

Ansible playbook execution

If you want to use python3 on the target host, you need the option given after "-e".

~/ansible$ sudo ansible-playbook --private-key=id_rsa_common mlp.yml -e 'ansible_python_interpreter=/usr/bin/python3'
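
As an aside, the playbook can also be syntax-checked first without touching the host (optional):

~/ansible$ sudo ansible-playbook --syntax-check mlp.yml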

If the playbook succeeds, the cluster will be in the following state.

(k8s-master)$ kubectl get node
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    3d        v1.10.3
mlp01        Ready     <none>    3d        v1.10.2
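
It can also be worth confirming that the per-node system pods (kube-proxy and flannel) are Running on the new worker; this extra check is optional:

(k8s-master)$ kubectl get pods -n kube-system -o wide | grep mlp01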

Launch nginx image

After the node is added, run the nginx Docker image for the time being.
The reason will be described later.

$ sudo vi nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
    - name: nginx-container
      image: nginx
      ports:
      - containerPort: 80

$ kubectl apply -f nginx-pod.yaml
pod "nginx-pod" created
$ kubectl get pod
NAME           READY     STATUS    RESTARTS   AGE
nginx-pod      1/1       Running   0          3d

If nginx reaches Running, there is no problem.
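
To double-check that the pod really landed on the new worker and got an address from the flannel network, the wide output can be used (optional):

$ kubectl get pod nginx-pod -o wide   # the NODE and IP columns should show mlp01 and a flannel address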

Installing the Dashboard

You can get by with Kubernetes even without the Dashboard, but I want to use it,
so I will install it.

$ kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

If there is no error and "created" is displayed, it's OK.
Check that it is running.

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
default       nginx-pod                               1/1       Running   0          3d
kube-system   kubernetes-dashboard-7d5dcdb6d9-7hptz   1/1       Running   0          3d

Start kubectl proxy

$ kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
Starting to serve on [::]:8001
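
If you prefer to keep the terminal free, the proxy can also be left running in the background; one possible way:

$ nohup kubectl proxy --address 0.0.0.0 --accept-hosts '.*' > /dev/null 2>&1 &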

Once it is in this state, open the following URL in a browser (if the proxy is running on k8s-master rather than on your own machine, replace localhost with the master's address).
The login screen will appear.

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

Issue a token to log in. First, list the secrets:

$ kubectl -n kube-system get secret

Log in using the issued token.
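
Concretely, one way to print the token is to describe the Dashboard's token secret; this is a sketch that assumes the default kubernetes-dashboard-token-* secret name created by the deployment above:

$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-token | awk '{print $1}')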

The following settings are for those who find it tedious to issue a token every time.
After applying them, you can get in with SKIP on the login screen.
However, this loosens security, so it is recommended only where no outsiders have access, such as an in-house-only environment.

$ cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF
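
If you later want to tighten security again, removing the binding created above is enough:

$ kubectl delete clusterrolebinding kubernetes-dashboard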

This completes the construction of the Kubernetes environment.

Conclusion

Once this environment is in place, starting a Docker image from k8s-master automatically distributes it to the nodes.
It is very convenient: as soon as you have the join command, you can add a new node to the flannel network.

Where I stumbled

Network communication

I covered the network settings at the beginning of this article, but until I settled on them I kept getting errors because traffic would not get through.
My impression is that I was thrown off by the change in configuration method that came with Ubuntu 18.

kubeadm init fails

The cause was simply that I had forgotten to disable swap, but it is easy to forget it exists.

Dashboard is not displayed

After the node was added to the master, just installing the Dashboard and starting the proxy did not display the Dashboard; only a directory listing of files appeared in the browser.
When I first started a pod, as shown in the middle of this article, and then followed the same procedure, the Dashboard came to be displayed.
I did not see the same symptom on the pages I referenced, so I don't know whether this is a quirk of this Dashboard version or whether there is some setting that I alone am missing.

Reference page

I tried to build a NAT environment with iptables
kubernetes official page
access control
Addition of Kubernetes / Web UI (Dashboard)
Install Dashboard on Kubernetes and access without authentication
