This article is the day-13 entry of the Elasticsearch Advent Calendar 2020. A slot happened to be open, so I am posting a small tip.
When you work with Elasticsearch, new features ship with every version upgrade, and you may want to try them out.
In the past, I would set up VMs in an on-premises virtual environment or in the cloud, SSH into them, and run commands by hand. Especially when building a cluster of three or more nodes, it took real effort: setting host names, creating and distributing SSH keys, creating the yum repo file, running yum install, editing the Elasticsearch configuration files, starting the services, and so on.
Recently, WSL2 has become available on Windows 10, and by installing Docker in Ubuntu 18.04 on WSL2 you can reproduce exactly the same configuration, which makes setting up a cluster very easy, so I want to share the method. Some readers may wonder why I don't simply use the Docker images distributed by Elastic. When I use Elasticsearch in earnest, I design and configure it on a VM-based setup, so I deliberately follow the same procedure here: installing via yum/apt on a base OS and customizing the configuration files.
This is just my personal approach, not a best practice; if anyone uses a more convenient method, please let me know. Also, depending on the WSL2 version and environment, permission errors occur fairly often, so please understand that a Docker environment on a native Linux host may be the more trouble-free choice.
Basically, the procedure in this article works anywhere you can use Docker. My environment is Ubuntu 18.04 on WSL2 on Windows 10.
The work proceeds in the following order. The scripts are also available on GitHub (https://github.com/tetsuyasodo/esdocker).
The base OS is CentOS 7 (the centos:centos7 image). The only customization is that I want a setup script to run after the OS boots, so the Dockerfile copies it in as /etc/rc.local.
The setup script is shown below: it installs elasticsearch and kibana with yum install and modifies the configuration files so that the nodes can form a cluster. (The node names are hard-coded; there is room for generalization here.)
#!/bin/bash
# Register the Elastic 7.x yum repository
cat <<'EOF' >/etc/yum.repos.d/es.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
# Install Elasticsearch and Kibana from the repository
yum install -y elasticsearch kibana
# Append the cluster settings; the node list is the same on all three nodes
cat <<'EOF' >>/etc/elasticsearch/elasticsearch.yml
cluster.name: cluster01
network.host: 0.0.0.0
discovery.seed_hosts: ["es01","es02","es03"]
cluster.initial_master_nodes: ["es01","es02","es03"]
EOF
# Let Kibana listen on all interfaces so it is reachable from the host
cat <<'EOF' >>/etc/kibana/kibana.yml
server.host: "0.0.0.0"
EOF
# Start both services
systemctl daemon-reload
systemctl start elasticsearch
systemctl start kibana
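As noted, the node names in the script are hard-coded. One way to generalize them is to keep the node list in a single variable. A minimal sketch (not part of the article's script; for illustration it only echoes the config fragment rather than appending it to /etc/elasticsearch/elasticsearch.yml):

```shell
# Build the cluster-related config fragment from a variable instead of
# hard-coding it in the heredoc. In the real script this fragment would
# be appended to /etc/elasticsearch/elasticsearch.yml.
NODES='["es01","es02","es03"]'
EXTRA_CONFIG="cluster.name: cluster01
network.host: 0.0.0.0
discovery.seed_hosts: ${NODES}
cluster.initial_master_nodes: ${NODES}"
echo "${EXTRA_CONFIG}"
```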
To build a Docker image with this script baked in, create a Dockerfile like the one below.
FROM centos:centos7
# Install the setup script as rc.local so it runs at container boot
COPY essetup.sh /etc/rc.local
RUN chmod 755 /etc/rc.local
Make sure to create these two files in the same directory.
$ ls
Dockerfile essetup.sh
Build the image with docker build. The same image is used for all three nodes.
$ docker build -t escluster .
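The docker run commands in the next step attach each container to a user-defined bridge network named "elasticstack". The article does not show its creation, so create it once beforehand if it does not already exist:

```shell
# Create the user-defined bridge network the containers will share
# (containers on it can resolve each other by name: es01, es02, es03).
NETWORK=elasticstack
docker network inspect "${NETWORK}" >/dev/null 2>&1 \
  || docker network create "${NETWORK}" \
  || echo "could not create network ${NETWORK} (is docker running?)"
```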
Next, start three containers from the image with docker run. The -p option sets up port forwarding from the Docker host side; in practice only 9200 and 5601 of es01 are likely needed, and the cluster works without the rest, so those mappings are optional.
$ docker run -it -d --network elasticstack -p 9200:9200 -p 5601:5601 --hostname es01 --name es01 --privileged escluster /sbin/init
$ docker run -it -d --network elasticstack -p 9201:9200 -p 5602:5601 --hostname es02 --name es02 --privileged escluster /sbin/init
$ docker run -it -d --network elasticstack -p 9202:9200 -p 5603:5601 --hostname es03 --name es03 --privileged escluster /sbin/init
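The three docker run lines above differ only in the host-side ports and the node number, so they can also be generated in a loop. A sketch, assuming the escluster image and the elasticstack network already exist:

```shell
# Start es01..es03 in a loop; host ports are derived from the node number.
for i in 1 2 3; do
  http_port=$((9199 + i))    # 9200, 9201, 9202
  kibana_port=$((5600 + i))  # 5601, 5602, 5603
  docker run -it -d --network elasticstack \
    -p "${http_port}:9200" -p "${kibana_port}:5601" \
    --hostname "es0${i}" --name "es0${i}" \
    --privileged escluster /sbin/init \
    || echo "docker run failed for es0${i} (is docker available?)"
done
```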
With this method, the Docker image is still "plain" CentOS and yum install runs after every OS startup, so installation takes a little while. Take a four-to-five-minute coffee break, then use the systemctl command to check whether elasticsearch and kibana have been installed and started.
$ docker exec -it es01 systemctl status kibana
Alternatively, you can try entering the container directly with bash.
$ docker exec -it es01 /bin/bash
# tail /var/log/yum.log
# ps -ef
After a short wait, all three nodes start and automatically form a cluster. You can access the cluster with the curl command or from Kibana.
$ curl localhost:9200/_cat/nodes
For Kibana, access http://localhost:5601 from your browser.
When you are done, stop and remove the containers with stop/rm.
$ docker stop es0{1,2,3}
$ docker rm es0{1,2,3}
$ docker rmi escluster   ### run this too if you also want to delete the image
In this article I introduced an easy way to start and try out an Elasticsearch cluster in your Docker environment. It makes checking out the features of a new Elasticsearch version, something I do frequently, quite painless, so please give it a try.