VDO provides deduplication and compression for file systems located on block devices, including block devices managed by LVM and LUNs on iSCSI storage. If thin provisioning is all you need, similar functionality is available through Stratis; if you want deduplication or compression, VDO is the choice.
~~If the SAN storage array already provides the same features, I doubt the two would be used together.~~
#Check the current state of each block device (we will apply vdo to sdb and sdc)
lsblk
#List configured vdo volumes (nothing is displayed because none are configured yet)
vdo list
#Command to configure vdo on the target device
vdo create --name "vdo volume name" --device "Target device" --vdoLogicalSize "Maximum size of vdo" --vdoSlabSize "Slab size" --force
Execution example: vdo create --name vdo1 --device /dev/sdb --vdoLogicalSize 100G --vdoSlabSize 1G --force
Execution example: vdo create --name vdo2 --device /dev/sdc --vdoLogicalSize 100G --vdoSlabSize 1G --force
The slab size is the unit into which VDO divides its physical data area. See "VDO Slab Size" in the Red Hat documentation for details.
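The slab size also caps how large the volume's physical storage can grow. A rough sketch of the limit, assuming the documented maximum of 8192 slabs per VDO volume:

```shell
# Rough estimate of the maximum physical size a VDO volume can reach.
# Assumption: a VDO volume can contain at most 8192 slabs (per Red Hat docs).
slab_size_gb=1      # the 1G slab size used in the examples above
max_slabs=8192
echo "Max physical size: $((slab_size_gb * max_slabs)) GB"
```

With the 1G slab size used here, the volume's physical storage is limited to 8192 GB; larger backing devices would need a larger slab size.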
Let's compare lsblk and vdo list once the configuration is complete. (Compare with the first image on this page.) In lsblk, sdb and sdc are 20GB block devices, but you can see that a 100GB vdo volume appears under each of them; in other words, a thin-provisioning configuration.
The vdo list command now also recognizes the two vdo devices.
#Format vdo1 with ext4
mkfs.ext4 /dev/mapper/vdo1
#Mount the ext4-formatted vdo device to a directory under /mnt
mount /dev/mapper/vdo1 /mnt/vdo1
The figure below shows the result of executing the commands above. Do the same for vdo2 (replace vdo1 with vdo2 in the commands). With both vdo devices mounted, I checked the mount status and capacity with lsblk and df. lsblk confirms that vdo1 and vdo2 are 100GB each and are mounted in directories under /mnt. df -h shows that roughly 2GB of each 100GB volume is consumed as overhead for storing the vdo device's configuration information.
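For reference, the vdo2 steps mentioned above look like this (a sketch: the mount point name /mnt/vdo2 is an assumption, mirroring the vdo1 naming, and the directory is created first):

```shell
# Repeat the vdo1 steps for the second vdo device
mkfs.ext4 /dev/mapper/vdo2          # format with ext4
mkdir -p /mnt/vdo2                  # create the mount point (assumed name)
mount /dev/mapper/vdo2 /mnt/vdo2    # mount it
```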
Finally, don't forget to add persistent mount entries to /etc/fstab. Reboot after making the settings below.
Ah... I thought I had set it up correctly, but the system came up in Emergency Mode. When I checked the online manual, I found the following description. Apparently the fstab entry needs to be written a little differently. I had to scroll down quite a bit to find it.
man vdo
So I copied the example entry from the man page and pasted it into /etc/fstab. That way the input is guaranteed to be correct. Here is the file with the actual entry. I rebooted after this, and CentOS started up without any problems. Incidentally, the following page also mentions this fstab entry: Mount VDO Volume - Red Hat
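For reference, the entry shown in the Red Hat documentation and the vdo man page looks roughly like the following. This is a sketch from memory of those docs, not the exact text; check your own man page, since the option list may differ between versions. The key point is the x-systemd.requires=vdo.service option, which delays the mount until the vdo service has started the volume (the missing piece that caused Emergency Mode above):

```
/dev/mapper/vdo1 /mnt/vdo1 ext4 defaults,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
```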
Both tasks continuously write successive versions of the Linux kernel source code to a 25GB volume. In the lower right, the vdostats line shows that all of this source code is compressed and stored using only 4.2GB of the 25GB area. The VDO volume is configured as 250GB and holds 29GB of data, but VDO's capacity-saving features consume only about 4.2GB of physical space. In other words, 29GB - 4.2GB, or roughly 25GB, is saved.
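The space saving quoted above is quick arithmetic on the two figures from the article:

```shell
# Space saved = logical data stored - physical space consumed
# (29 GB and 4.2 GB are the figures reported by vdostats in the article)
awk 'BEGIN { printf "Saved: %.1f GB\n", 29 - 4.2 }'
```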
RHEL 8 Beta - Using the Virtual Data Optimizer (VDO)