[Linux] OS recovery with restore command

Notes on recovering the system area (OS area) using the dump / restore commands.

Since image templates are available in IBM Cloud, it is easy to back up and restore just the OS area with a few clicks in the portal. However, there are the following concerns (as of 08/03/2018):

- Storing templates costs $25/GB per month
- The OS password is changed to a new one
- Network-related settings such as the IP address are initialized
- There is also the option of reloading the OS, but the system area is completely rebuilt and the kernel version is forcibly updated

So, if it is difficult to use the cloud provider's own backup services and you have only one Linux backup server, what do you do when that backup server itself needs to be restored?

That is where this "OS-level recovery procedure using the dump / restore commands" comes in. The overall flow is as follows:

- ① Back up the OS area (/ and /boot) to the 2nd disk with the dump command (an NFS backup destination is also fine)
- ② Boot from media and start rescue mode (in the cloud, use rescue boot)
- ③ Create partitions matching the restore destination
- ④ Create the file systems for / and /boot
- ⑤ Mount and restore
- ⑥ Bind-mount /dev, /proc and /sys
- ⑦ chroot into the restored /
- ⑧ Rewrite /boot/grub2/grub.cfg
- ⑨ Rewrite /etc/fstab
- ⑩ Reinstall GRUB2 on the boot device (/dev/xvda)
- ⑪ Remove the media and reboot

~~Ugh, what a hassle.~~

# Advance preparation

- The OS area is the IBM Cloud VSI device /dev/xvda.
- The file system on the 2nd disk is assumed to be mounted on /bkup as the backup destination.

First, let's get the partition and LVM information in advance.

sfdisk -d /dev/xvda > /bkup/sfdis.dat
pvdisplay > /bkup/pvdis.dat
vgdisplay > /bkup/vgdis.dat
cat /etc/fstab > /bkup/fstab.dat
mount > /bkup/mount.dat
blkid > /bkup/blkid.dat
vgcfgbackup -f /bkup/vgcfg.dat ${VGNAME} (run this only when / is on LVM)
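Since this layout information needs to reflect the state of the system at the time of the last backup, it may be convenient to wrap the collection in a small script and run it (for example from cron) right before each backup. Below is a minimal sketch; the script name, the /bkup path and the file names are simply taken from this procedure.

```bash
#!/bin/bash
# collect_layout.sh -- minimal sketch: gather the partition/LVM/fstab
# information that the restore procedure below relies on.
set -u

BKUP=/bkup        # backup destination (2nd disk), as assumed in this article
DEV=/dev/xvda     # OS disk of the IBM Cloud VSI

sfdisk -d "${DEV}" > "${BKUP}/sfdis.dat"   # partition table, restored later with sfdisk --force
pvdisplay          > "${BKUP}/pvdis.dat"
vgdisplay          > "${BKUP}/vgdis.dat"
cat /etc/fstab     > "${BKUP}/fstab.dat"
mount              > "${BKUP}/mount.dat"
blkid              > "${BKUP}/blkid.dat"

# Only when / is on LVM (set VGNAME to your volume group name):
# vgcfgbackup -f "${BKUP}/vgcfg.dat" "${VGNAME}"
```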

If the dump command is not installed, install it:

yum install dump

--- Caution ---

- dump can be used only when the file system type is ext (ext2/ext3/ext4).
- This procedure assumes ext, in line with the IBM Cloud VSI type used here.
- Check the FS type first; if it is XFS, install xfsdump instead. The options also differ slightly.

-------------
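To check which type you are dealing with, something like the following works. The xfsdump line is only a rough sketch of the XFS equivalent (a level-0 dump of / to a file, with session/media labels) and is not part of the original procedure.

```bash
# Check the file system type of / and /boot
df -T / /boot

# If the type is xfs, use xfsdump instead of dump, e.g. (sketch):
# yum install xfsdump
# xfsdump -l 0 -L root -M bkup -f /bkup/root.xfsdump /
```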
# ① Make a backup with the dump command

```
dump -0 -f /bkup/root.dump /dev/xvda2
dump -0 -f /bkup/boot.dump /dev/xvda1
```

This completes the backup. Check under /bkup to make sure the .dump files are there.
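Before relying on these files, it may be worth confirming that they are readable; `restore -t` lists a dump's table of contents without extracting anything.

```bash
# Sanity-check the dump files by listing their contents
restore -tf /bkup/root.dump | head
restore -tf /bkup/boot.dump | head
```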
# ② Boot from media and start rescue mode

On physical hardware, insert the install media. In a virtual environment, mount the ISO from the management screen. In the cloud, rescue-boot from the customer portal (IBM Cloud).

Note that if the media version is too old you will get stuck in the later steps, so use media as close as possible to the system being restored; the same major version is safe. Also, the architecture (64-bit / 32-bit) of the rescue media and of the system to restore must match.
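A quick way to record the release and architecture of the system ahead of time (so you know which rescue media to prepare) is, for example:

```bash
# Record the OS release and architecture of the system to be restored
cat /etc/redhat-release   # e.g. CentOS Linux release 7.x
uname -m                  # e.g. x86_64
```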

# ③ Create partitions

Boot from the ISO and enter rescue mode from the KVM console. For CentOS 7, select "Rescue a CentOS system" from "Troubleshooting".

sh-4.2#

If you get a prompt like the one above, you are in. You should be able to see the 2nd disk device; create a directory to mount the restore files on (/bkup in this procedure), for example:
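A small sketch (device names depend on your environment):

```bash
# Identify the disks and file systems visible from the rescue environment
lsblk -f

# Mount point for the backup files, matching this procedure
mkdir -p /bkup
```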

Mount /bkup, where the restore files are located:

mount /dev/vg_bkup/lv_bkup /bkup

If the LV status is "NOT available" and the mount fails, activate it first:

lvchange -a y /dev/vg_bkup/lv_bkup

Check if you can see the backed up files.

ls -l /bkup

Restore the partition layout from the sfdis.dat obtained in advance:

sfdisk --force /dev/xvda < /bkup/sfdis.dat
If vgcfgbackup was taken in advance, also run the following:
lvm vgscan
lvm vgcfgrestore -f /bkup/vgcfg.dat ${VGNAME}
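Before creating file systems it may be worth confirming the layout came back as expected, for example:

```bash
# Confirm the recreated partition table and, if applicable, the LVM volumes
sfdisk -l /dev/xvda
lvm lvscan
```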

# ④ Recreate the file systems

Recreate the file systems for / and /boot. Match the FS type to the original machine.
mkfs.ext3 -F /dev/xvda1
mkfs.ext3 -F /dev/xvda2
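Note that the recreated file systems get new UUIDs, which is exactly why steps ⑧ and ⑨ are needed later. You can already check the new values here, for example:

```bash
# The recreated file systems have new UUIDs (referenced again in steps ⑧ and ⑨)
blkid /dev/xvda1 /dev/xvda2
```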

# ⑤ Mount and restore

The file systems being restored need to be mounted, so create a temporary mount point. The name and path can be anything.
mkdir /restore

Mount and restore root

Pay attention to the device name:
mount /dev/xvda2 /restore
cd /restore
restore -rf /bkup/root.dump

Unmount the root area for now

cd /
umount /restore

Mount and restore /boot

mount /dev/xvda1 /restore
cd /restore
restore -rf /bkup/boot.dump

Unmount

cd /
umount /restore

Mount root again

mount /dev/xvda2 /restore

# ⑥ Bind-mount /dev, /proc and /sys

Without this, the later grub2 steps will not work properly, so bind-mount each of them into the restore target.
mount --bind /proc /restore/proc
mount --bind /sys /restore/sys
mount --bind /dev /restore/dev

# ⑦ chroot into the restored /

Change the root with `chroot`:
chroot /restore

Mount /boot

mount /dev/xvda1 /boot

# ⑧ Rewrite /boot/grub2/grub.cfg

The UUIDs changed when the file systems were recreated, so the UUID of /boot written in /boot/grub2/grub.cfg must be rewritten to the recreated one. Editing grub.cfg directly is not recommended, so regenerate it for the current environment with the following command:
/sbin/grub2-mkconfig -o /boot/grub2/grub.cfg 
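As a rough check, the UUIDs now referenced in grub.cfg can be compared with what blkid reports for the recreated file systems:

```bash
# UUIDs referenced by the regenerated grub.cfg ...
grep -o 'UUID=[0-9a-f-]*' /boot/grub2/grub.cfg | sort -u

# ... should correspond to the recreated file systems
blkid /dev/xvda1 /dev/xvda2
```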

# ⑨ Rewrite /etc/fstab

/etc/fstab also needs to be rewritten. It can be edited directly, but hand-editing is error-prone, so run the following commands instead:
export NEWUUID=`blkid | grep /dev/xvda1 | awk '{print $2}' | cut -d '"' -f 2`
export OLDUUID=`cat /bkup/fstab.dat | grep /boot | awk '{print $1}' | cut -d '=' -f 2`
sed -i.org -e "s/UUID=${OLDUUID}/UUID=${NEWUUID}/g" /etc/fstab
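Since sed was run with `-i.org`, the original file is kept as /etc/fstab.org, so the change can be reviewed with:

```bash
# Show exactly what was rewritten in fstab
diff /etc/fstab.org /etc/fstab
```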

# ⑩ Reinstall GRUB2 on the boot device (/dev/xvda)
/sbin/grub2-install /dev/xvda

"install finished. no error reported" Is displayed, OK

# ⑪ Remove the media and reboot

Then reboot and boot from the restored system.

This procedure temporarily destroys the file systems in the OS area, so take care when performing it. Also, since the commands are typed on the console, you generally cannot copy and paste them; it may be a good idea to keep the commands above as a script together with the backup, as sketched below.
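For the backup side, a minimal sketch of such a script might look like the following (the script name, paths, devices and the ext assumption all follow this article; adjust to your environment):

```bash
#!/bin/bash
# os_backup.sh -- minimal sketch of the backup side of this procedure.
# Assumes an ext OS area on /dev/xvda1 (/boot) and /dev/xvda2 (/),
# with the 2nd disk mounted on /bkup.
set -eu

BKUP=/bkup

# Refresh the layout information (see the collection sketch under
# "Advance preparation"), then take level-0 dumps of / and /boot.
dump -0 -f "${BKUP}/root.dump" /dev/xvda2
dump -0 -f "${BKUP}/boot.dump" /dev/xvda1

# Sanity check: make sure both dump files can at least be listed
restore -tf "${BKUP}/root.dump" > /dev/null
restore -tf "${BKUP}/boot.dump" > /dev/null
echo "backup completed under ${BKUP}"
```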
That's all.
