[Linux] Construction of Ceph (Octopus) (hardware preparation)

__ Introduction __

As the title suggests, this article covers building __Ceph (Octopus)__, the latest release as of this writing (January 2021). It also doubles as a memorandum of the procedure I followed myself, so there may be mistakes in the details.

The official documentation covers essentially everything you need, so it is worth reading through. Some knowledge of Linux is assumed, so it is advisable to pick up the basics beforehand from other sites or books.

Related articles

Construction of Ceph (Octopus) (hardware preparation) (this article)
Construction of Ceph (Octopus) (software preparation)
Construction of Ceph (Octopus) (common settings)

__ Hardware (node) __

First of all, you need machines on which to build the Ceph cluster. What you need depends on the cluster configuration you are aiming for; this time I assembled the following.

| No. | Host | Memory | Storage |
|---|---|---|---|
| 1 | ceph-mon01 | 16GB | HDD1 (250GB) / HDD2 (250GB) |
| 2 | ceph-mds01 | 16GB | HDD1 (250GB) / HDD2 (250GB) |
| 3 | ceph-osd01 | 16GB | HDD1 (250GB) / HDD2 (250GB) |
| 4 | ceph-osd02 | 16GB | HDD1 (250GB) / HDD2 (250GB) |
| 5 | ceph-osd03 | 16GB | HDD1 (250GB) / HDD2 (250GB) |
| 6 | ceph-mon11 | 16GB | HDD1 (250GB) / HDD2 (250GB) |
| 7 | ceph-mds11 | 16GB | HDD1 (250GB) / HDD2 (250GB) |
| 8 | ceph-osd11 | 16GB | HDD1 (250GB) / HDD2 (250GB) |
| 9 | ceph-osd12 | 16GB | HDD1 (250GB) / HDD2 (250GB) |
| 10 | ceph-osd13 | 16GB | HDD1 (250GB) / HDD2 (250GB) |
| 11 | ceph-mon21 | 16GB | HDD1 (250GB) / HDD2 (250GB) |
| 12 | ceph-mds21 | 16GB | HDD1 (250GB) / HDD2 (250GB) |
| 13 | ceph-osd21 | 16GB | HDD1 (250GB) / HDD2 (250GB) |
| 14 | ceph-osd22 | 16GB | HDD1 (250GB) / HDD2 (250GB) |
| 15 | ceph-osd23 | 16GB | HDD1 (250GB) / HDD2 (250GB) |

The cluster consists of 15 machines in total. That said, even if you physically have only one (whether or not that makes for a meaningful cluster), you can still try out Ceph by virtualizing the nodes with Docker or similar. Also, since each OSD should in principle have a physical device to itself, the OSD device is kept separate from the OS device (this is what HDD2 is for).
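As an aside, if you only have a single machine, the easiest way to try Octopus is probably cephadm, which runs the daemons in containers (Docker or Podman). The following is only a rough sketch and not part of the procedure in this series; the IP address 192.168.0.10 and the spare device /dev/sdb are placeholders to replace with your own values.

```bash
# Rough single-node trial with cephadm (Octopus); daemons run in containers,
# so docker (or podman) and python3 must already be installed.
# 192.168.0.10 and /dev/sdb are placeholders for your own IP and spare disk.
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x cephadm
sudo ./cephadm bootstrap --mon-ip 192.168.0.10
# Hand the second, unused disk (the "HDD2" in the table above) to Ceph as an OSD.
sudo ./cephadm shell -- ceph orch daemon add osd "$(hostname)":/dev/sdb
```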

__ Hardware (network) __

What you need here depends on settings such as replication, the CRUSH map, and PGs. This time I wanted three groups of five hosts each, so I used the following configuration (a rough sketch of how that grouping might later appear in the CRUSH map follows the table).

| No. | Switch type | Port speed | Use |
|---|---|---|---|
| 1 | L3 | 1Gbps | Overall network management (default gateway) |
| 2 | L2 | 100Mbps | For group 0 |
| 3 | L2 | 100Mbps | For group 1 |
| 4 | L2 | 100Mbps | For group 2 |
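To make the "three groups" a little more concrete, here is a hedged sketch of how such a grouping could later be expressed in the CRUSH map. The bucket names (group0 to group2) and the use of the `rack` bucket type are my own assumptions for illustration; the actual settings are covered in the later articles of this series.

```bash
# Illustrative only: create one CRUSH bucket per group and move hosts into them.
# Bucket names (group0..group2) and the "rack" bucket type are assumptions.
ceph osd crush add-bucket group0 rack
ceph osd crush add-bucket group1 rack
ceph osd crush add-bucket group2 rack
ceph osd crush move group0 root=default
ceph osd crush move group1 root=default
ceph osd crush move group2 root=default
# Place each host under its group, e.g. the first group's OSD hosts:
ceph osd crush move ceph-osd01 rack=group0
ceph osd crush move ceph-osd02 rack=group0
ceph osd crush move ceph-osd03 rack=group0
```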

The L3 device can be a router, and the L2 devices can be plain switching hubs; choose equipment that matches your network layout and can handle the cluster's traffic. The machines procured this time were consumer PCs (not servers) with only one NIC each, so everything was built on a single segment, without separating the front (public) LAN from the back (cluster) LAN. If you have more than one NIC, separating the two networks is recommended.
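For reference, with two NICs per node this split is usually expressed with the `public_network` and `cluster_network` options in ceph.conf. The subnets below are placeholders made up for illustration; in the single-NIC setup used in this series only one network is actually defined.

```bash
# Hedged sketch: separating client-facing and replication traffic in ceph.conf.
# The two subnets below are placeholders; adjust them to your own NICs/segments.
sudo tee -a /etc/ceph/ceph.conf <<'EOF'
[global]
public_network  = 192.168.0.0/24     # front LAN: clients, mon, mgr, mds
cluster_network = 192.168.10.0/24    # back LAN: OSD replication and recovery traffic
EOF
```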

__ Conclusion __

That is all for the hardware prepared this time. You do not need exactly the same number of machines as I used; the capacity and redundancy of the resulting cluster will differ, but Ceph can basically be built on any hardware. Feel free to start with whatever you have on hand.

The software setup is covered in the next article (https://qiita.com/iriya_v00v/items/e3268fe75423ba0304b3).
