One way to get RAID1 is to create the array with the classic MD kernel driver (mdadm) and put LVM on top of it, so-called LVM on RAID (the LVM layer is optional there).
This section instead describes how to create an LVM RAID volume that LVM itself mirrors. LVM supports RAID 1/4/5/6/10 (internally it uses the MD kernel driver).
Work as root.
ls -F /dev/sde* /dev/sdf*
/dev/sde /dev/sdf
Here a single partition of type Linux LVM is created on each disk with gdisk. (It is also possible to skip the partition and put the whole disk directly under LVM control.)
gdisk /dev/sde
Command (? for help): p
(snipped)
Disk /dev/sde: 3907029168 sectors, 1.8 TiB
Number Start (sector) End (sector) Size Code Name
Command (? for help): n
(snipped)
Hex code or GUID (L to show codes, Enter = 8300): 8e00
Changed type of partition to 'Linux LVM'
Command (? for help): p
Disk /dev/sde: 3907029168 sectors, 1.8 TiB
(snipped)
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048      3907029134    1.8 TiB    8E00  Linux LVM
Command (? for help): w
(snipped)
Do you want to proceed? (Y/N): y
(snipped)
The operation has completed successfully.
gdisk /dev/sdf
(snipped: same procedure as for /dev/sde)
Confirmation
ls -F /dev/sde* /dev/sdf*
/dev/sde /dev/sde1 /dev/sdf /dev/sdf1
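The interactive gdisk session above can also be scripted; a minimal sketch using sfdisk from util-linux instead of gdisk (assumptions: the disks are blank and may be overwritten, and E6D6D379-F507-44C2-A23C-238F2A3DF928 is the GPT type GUID corresponding to gdisk's 8e00 "Linux LVM"):

```shell
# Non-interactive alternative to gdisk: one full-disk "Linux LVM" partition
# per drive. A bare "type=" script line makes sfdisk use default start/size.
for disk in /dev/sde /dev/sdf; do
    echo 'type=E6D6D379-F507-44C2-A23C-238F2A3DF928' | sfdisk --label gpt "$disk"
done
```

Afterwards `sfdisk -d /dev/sde` dumps the resulting table in the same script format, which is handy for replicating the layout onto further disks.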
pvcreate <devices>
pvcreate /dev/sde1 /dev/sdf1
Physical volume "/dev/sde1" successfully created.
Physical volume "/dev/sdf1" successfully created.
Confirmation
pvs
pvdisplay
pvs
  PV         VG  Fmt   Attr  PSize   PFree
  /dev/sde1      lvm2  ---   <1.82t  <1.82t
  /dev/sdf1      lvm2  ---   <1.82t  <1.82t
pvdisplay
"/dev/sde1" is a new physical volume of "<1.82 TiB"
--- NEW Physical volume ---
PV Name /dev/sde1
VG Name               ← initially empty (PV not yet in a VG)
PV Size <1.82 TiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID BTiUhz-TssW-6hFv-tr70-QO3Y-QbU3-mpNL8N
"/dev/sdf1" is a new physical volume of "<1.82 TiB"
--- NEW Physical volume ---
PV Name /dev/sdf1
VG Name               ← initially empty (PV not yet in a VG)
PV Size <1.82 TiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID 6Il3bs-mr2f-46RS-rV5O-Tavu-PPcu-QNofe5
To delete:
pvremove <devices>
pvremove /dev/sde1 /dev/sdf1
vgcreate <volume name> <devices>
vgcreate vg1 /dev/sde1 /dev/sdf1
Volume group "vg1" successfully created
Confirmation
vgs
vgdisplay
vgscan
pvdisplay
vgs
  VG   #PV  #LV  #SN  Attr    VSize   VFree
  vg1    2    0    0  wz--n-  <3.64t  <3.64t
vgdisplay
--- Volume group ---
VG Name vg1
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size <3.64 TiB
PE Size 4.00 MiB
Total PE 953862
Alloc PE / Size 0 / 0
Free PE / Size 953862 / <3.64 TiB
VG UUID eGU09g-QCWJ-pbCR-a9rf-YtKW-8b9e-BUqIPI
vgscan
Reading all physical volumes. This may take a while...
Found volume group "vg1" using metadata type lvm2
pvdisplay
--- Physical volume ---
PV Name /dev/sde1
VG Name               vg1 ← VG name is now set
PV Size <1.82 TiB / not usable <4.07 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 476931
Free PE 476931
Allocated PE 0
PV UUID BTiUhz-TssW-6hFv-tr70-QO3Y-QbU3-mpNL8N
--- Physical volume ---
PV Name /dev/sdf1
VG Name               vg1 ← VG name is now set
PV Size <1.82 TiB / not usable <4.07 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 476931
Free PE 476931
Allocated PE 0
PV UUID 6Il3bs-mr2f-46RS-rV5O-Tavu-PPcu-QNofe5
To rename or delete:
vgrename <old vg name> <new vg name>
vgremove <vg name>
lvcreate --type raid1 -m <N> -L <S> -n <LV> <VG>
N: number of mirrors; 1 when there are 2 disks
S: size of the LV
LV: name of the LV to create
VG: name of the VG to use (created with vgcreate)
lvcreate --type raid1 -m 1 -L 1.8T -n lv1 vg1
Rounding up size to full physical extent 1.80 TiB
Logical volume "lv1" created.
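One point worth spelling out: -m counts additional mirrors, so the total number of data copies is m + 1. A trivial sketch of that relationship:

```shell
# -m counts ADDITIONAL mirrors: total copies = m + 1.
disks=2              # PVs available in the VG
m=$((disks - 1))     # one extra mirror -> two full copies on two disks
echo "lvcreate --type raid1 -m $m ..."    # prints: lvcreate --type raid1 -m 1 ...
```

If the LV should consume all remaining space in the VG, `-l 100%FREE` can be used in place of a fixed `-L` size.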
Confirmation
lvs
lvdisplay
lvs
  LV   VG   Attr        LSize  Pool  Origin  Data%  Meta%  Move  Log  Cpy%Sync  Convert
  lv1  vg1  rwi-a-r---  1.80t                                        0.00
Directly after creation Cpy%Sync starts near 0.00 and climbs to 100.00 as the two mirror legs synchronize in the background.
lvdisplay
--- Logical volume ---
LV Path /dev/vg1/lv1
LV Name lv1
VG Name vg1
LV UUID WBAnij-OHQm-8jwT-OV0X-iVAk-9FMT-DpYiXJ
LV Write Access read/write
LV Creation host, time d1, 2020-05-04 21:05:59 +0900
LV Status available
# open 0
LV Size 1.80 TiB
Current LE 471860
Mirrored volumes 2
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:4
lvrename /dev/<vg name>/<old lv name> /dev/<vg name>/<new lv name>
lvremove <lv path>
lvremove /dev/vg1/lv1
mkfs.<fstype> <lv path>
mkfs.ext4 /dev/vg1/lv1
Here the mount point is "/disks/raid" (create it first if it does not exist).
mkdir -p /disks/raid
mount /dev/vg1/lv1 /disks/raid
df -h
Filesystem           Size  Used  Avail  Use%  Mounted on
/dev/mapper/vg1-lv1  1.8T   77M   1.7T    1%  /disks/raid
blkid | grep mapper
/dev/mapper/vg1-lv1_rimage_0: UUID="896732c2-ce1b-4edc-889f-c749afcde18f" TYPE="ext4"
/dev/mapper/vg1-lv1_rimage_1: UUID="896732c2-ce1b-4edc-889f-c749afcde18f" TYPE="ext4"
/dev/mapper/vg1-lv1: UUID="896732c2-ce1b-4edc-889f-c749afcde18f" TYPE="ext4"
The two mirror legs (lv1_rimage_0 and lv1_rimage_1) show the same filesystem UUID as the top-level device because each leg holds a full copy of the data; always mount the top-level /dev/mapper/vg1-lv1 (or /dev/vg1/lv1), never a _rimage_ device.
ls -l /dev/disk/by-uuid/
lrwxrwxrwx 1 root root 10 May 4 20:15 2f0035ec-eb69-4cb1-9a02-7774a7de8c87 -> ../../sdc1
lrwxrwxrwx 1 root root 10 May 4 20:15 4d0dc8c5-4640-4817-becc-966186982c06 -> ../../sda2
lrwxrwxrwx 1 root root 10 May 4 21:13 896732c2-ce1b-4edc-889f-c749afcde18f -> ../../dm-4
lrwxrwxrwx 1 root root 10 May 4 20:15 e3641bda-7386-4330-b325-e57d11f420bb -> ../../sdb1
lrwxrwxrwx 1 root root 10 May 4 20:15 f42dc771-696a-42e7-9f2b-9dbb6057c6b8 -> ../../sdd1
lrwxrwxrwx 1 root root 10 May 4 20:15 FA3B-8EA8 -> ../../sda1
vim /etc/fstab
Add a line like the following (noatime alone is sufficient; it already implies nodiratime and overrides relatime):
UUID="896732c2-ce1b-4edc-889f-c749afcde18f" /disks/raid ext4 noatime 0 0
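Instead of copying the UUID into the editor by hand, it can be read with blkid and appended in one step; a minimal sketch assuming the /dev/vg1/lv1 volume from above (double-check /etc/fstab afterwards):

```shell
# Read just the filesystem UUID of the new logical volume.
uuid=$(blkid -s UUID -o value /dev/vg1/lv1)
# Append the mount entry ('noatime' already implies nodiratime).
echo "UUID=$uuid /disks/raid ext4 noatime 0 0" >> /etc/fstab
```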