Linux LVM RAID

One way to realize RAID1 is to build the array with the traditional MD kernel driver (mdadm) and put LVM on top of it, so-called LVM on RAID (the LVM layer on top is optional).
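
For comparison, a minimal sketch of that LVM-on-MD layout, assuming the partitions /dev/sde1 and /dev/sdf1 that this article creates below (the array name /dev/md0 is just an example):

# Build an MD RAID1 array first, then layer LVM on top of it
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1
pvcreate /dev/md0
vgcreate vg1 /dev/md0

The rest of this article skips mdadm entirely and lets LVM manage the mirroring itself.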

This section instead describes how to create an LVM RAID volume whose mirroring is handled by LVM itself. LVM supports RAID levels 1/4/5/6/10 (internally it also uses the MD kernel driver).

Work as the root user.

The target disks are /dev/sde and /dev/sdf.

ls -F /dev/sde* /dev/sdf*
/dev/sde /dev/sdf

Creating partitions

It is also possible to skip partitioning and put the entire disk under LVM control, as sketched below.
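
A rough sketch of that variant with the same two disks (this article partitions them instead):

pvcreate /dev/sde /dev/sdf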

gdisk /dev/sde
Command (? for help): p
(snip)
Disk /dev/sde: 3907029168 sectors, 1.8 TiB
Number  Start (sector)    End (sector)  Size       Code  Name

Command (? for help): n
(snip)
Hex code or GUID (L to show codes, Enter = 8300): 8e00
Changed type of partition to 'Linux LVM'

Command (? for help): p
Disk /dev/sde: 3907029168 sectors, 1.8 TiB
(snip)
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048      3907029134   1.8 TiB     8E00  Linux LVM

Command (? for help): w
(snip)
Do you want to proceed? (Y/N): y
(snip)
The operation has completed successfully.
gdisk /dev/sdf
(snip: repeat the same steps for /dev/sdf)
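
If you prefer a non-interactive tool, sgdisk should be able to create the same partition in one shot (a sketch; -n takes partnum:start:end where 0 means the default, and -t sets the type code):

sgdisk -n 1:0:0 -t 1:8e00 /dev/sdf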

Verification

ls -F /dev/sde* /dev/sdf*
/dev/sde /dev/sde1 /dev/sdf /dev/sdf1

Creation of PV (Physical Volume)

pvcreate <devices>

pvcreate /dev/sde1 /dev/sdf1
  Physical volume "/dev/sde1" successfully created.
  Physical volume "/dev/sdf1" successfully created.

Verification: pvs, pvdisplay

pvs
  PV         VG Fmt  Attr PSize  PFree 
  /dev/sde1     lvm2 ---  <1.82t <1.82t
  /dev/sdf1     lvm2 ---  <1.82t <1.82t

pvdisplay
  "/dev/sde1" is a new physical volume of "<1.82 TiB"
  --- NEW Physical volume ---
  PV Name               /dev/sde1
  VG Name                              ← empty at first
  PV Size               <1.82 TiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               BTiUhz-TssW-6hFv-tr70-QO3Y-QbU3-mpNL8N
   
  "/dev/sdf1" is a new physical volume of "<1.82 TiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdf1
  VG Name                              ← empty at first
  PV Size               <1.82 TiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               6Il3bs-mr2f-46RS-rV5O-Tavu-PPcu-QNofe5

Deleting PVs

pvremove <devices>

pvremove /dev/sde1 /dev/sdf1

Creation of VG (Volume Group)

vgcreate <vg name> <devices>

vgcreate vg1 /dev/sde1 /dev/sdf1
  Volume group "vg1" successfully created

Verification: vgs, vgdisplay, vgscan, pvdisplay

vgs
  VG  #PV #LV #SN Attr   VSize  VFree 
  vg1   2   0   0 wz--n- <3.64t <3.64t

vgdisplay
  --- Volume group ---
  VG Name               vg1
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               <3.64 TiB
  PE Size               4.00 MiB
  Total PE              953862
  Alloc PE / Size       0 / 0   
  Free  PE / Size       953862 / <3.64 TiB
  VG UUID               eGU09g-QCWJ-pbCR-a9rf-YtKW-8b9e-BUqIPI

vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "vg1" using metadata type lvm2

pvdisplay
  --- Physical volume ---
  PV Name               /dev/sde1
  VG Name               vg1 ← the VG name is now set
  PV Size               <1.82 TiB / not usable <4.07 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              476931
  Free PE               476931
  Allocated PE          0
  PV UUID               BTiUhz-TssW-6hFv-tr70-QO3Y-QbU3-mpNL8N
   
  --- Physical volume ---
  PV Name               /dev/sdf1
  VG Name               vg1 ← the VG name is now set
  PV Size               <1.82 TiB / not usable <4.07 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              476931
  Free PE               476931
  Allocated PE          0
  PV UUID               6Il3bs-mr2f-46RS-rV5O-Tavu-PPcu-QNofe5

Renaming a VG

vgrename <old vg name> <new vg name>
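
For example, with a hypothetical new name:

vgrename vg1 vg2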

Deleting a VG

vgremove <vg name>

Creating a RAID1 LV (Logical Volume)

lvcreate --type raid1 -m <N> -L <S> -n <LV> <VG>

- <N>: number of mirrors to add; 1 for two disks (one mirror in addition to the original)
- <S>: size of the LV
- <LV>: name of the LV to create
- <VG>: name of the VG to use (created with vgcreate)

lvcreate --type raid1 -m 1 -L 1.8T -n lv1 vg1
  Rounding up size to full physical extent 1.80 TiB
  Logical volume "lv1" created.
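
If the LV should simply use all remaining space in the VG, an extent-based size should also work (-l accepts extent counts and percentages such as 100%FREE):

lvcreate --type raid1 -m 1 -l 100%FREE -n lv1 vg1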

Verification: lvs, lvdisplay

lvs
  LV   VG  Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv1  vg1 rwi-a-r--- 1.80t                                    0.00            
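
The Cpy%Sync column above shows that the initial mirror synchronization has just started. A sketch for watching it, including the hidden rimage/rmeta sub-LVs that back the mirror:

lvs -a -o name,copy_percent,devices vg1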

lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg1/lv1
  LV Name                lv1
  VG Name                vg1
  LV UUID                WBAnij-OHQm-8jwT-OV0X-iVAk-9FMT-DpYiXJ
  LV Write Access        read/write
  LV Creation host, time d1, 2020-05-04 21:05:59 +0900
  LV Status              available
  # open                 0
  LV Size                1.80 TiB
  Current LE             471860
  Mirrored volumes       2
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:4

Renaming an LV

lvrename /dev/<vg name>/<old lv name> /dev/<vg name>/<new lv name>
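
For example, with a hypothetical new name:

lvrename /dev/vg1/lv1 /dev/vg1/lv2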

LV removal

lvremove <lv path>

lvremove /dev/vg1/lv1

Creating a file system

mkfs.<fstype> <lv path>

mkfs.ext4 /dev/vg1/lv1

Mounting the LV

Here, the mount point is /disks/raid.
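
If the directory does not exist yet, create it first:

mkdir -p /disks/raid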

mount /dev/vg1/lv1 /disks/raid

df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/vg1-lv1  1.8T   77M  1.7T   1% /disks/raid

Adding an entry to /etc/fstab

blkid | grep mapper
/dev/mapper/vg1-lv1_rimage_0: UUID="896732c2-ce1b-4edc-889f-c749afcde18f" TYPE="ext4"
/dev/mapper/vg1-lv1_rimage_1: UUID="896732c2-ce1b-4edc-889f-c749afcde18f" TYPE="ext4"
/dev/mapper/vg1-lv1: UUID="896732c2-ce1b-4edc-889f-c749afcde18f" TYPE="ext4"

ls -l /dev/disk/by-uuid/
lrwxrwxrwx 1 root root 10 May  4 20:15 2f0035ec-eb69-4cb1-9a02-7774a7de8c87 -> ../../sdc1
lrwxrwxrwx 1 root root 10 May  4 20:15 4d0dc8c5-4640-4817-becc-966186982c06 -> ../../sda2
lrwxrwxrwx 1 root root 10 May  4 21:13 896732c2-ce1b-4edc-889f-c749afcde18f -> ../../dm-4
lrwxrwxrwx 1 root root 10 May  4 20:15 e3641bda-7386-4330-b325-e57d11f420bb -> ../../sdb1
lrwxrwxrwx 1 root root 10 May  4 20:15 f42dc771-696a-42e7-9f2b-9dbb6057c6b8 -> ../../sdd1
lrwxrwxrwx 1 root root 10 May  4 20:15 FA3B-8EA8 -> ../../sda1

vim /etc/fstab
UUID="896732c2-ce1b-4edc-889f-c749afcde18f"  /disks/raid  ext4  noatime,nodiratime,relatime 0  0

Reference

- How to operate LVM RAID: https://qiita.com/nigaky/items/03fe655af225324567d4
