This post is about using LVM (Logical Volume Manager, an abstraction layer for disk devices) snapshots. A snapshot is an image of a logical volume (the block device a filesystem typically lives on) frozen at a point in time. "Frozen" needs a caveat: LVM2 snapshots are read/write by default. But a snapshot does let you preserve a filesystem's state at a given moment.
The background of this is Exadata (the compute nodes) and upgrading, but nothing here is unique to Exadata, so don't let that put you off. The idea of using LVM snapshots simply came up while dealing with Exadata compute node upgrades.
First of all: LVM is under active development, which means different Linux versions have different LVM options available. I am using the Exadata X2 Linux version: RHEL/OL 5u7 x86_64. I expect OL6 has newer and more advanced LVM features, but with X2, OL5u7 is what I have to use, so the steps in this blogpost were done with that version. Any comments are welcome!
Second: if you want to experiment with this, be aware that most installations allocate all space in the volume group to logical volumes upfront. A snapshot is a COW (Copy On Write) copy of a logical volume: it starts off taking zero extra bytes (source and snapshot are equal) and grows as the source gets modified. This means you need free/available space in the volume group to hold the snapshot.
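To see whether your volume group actually has unallocated extents left, you can query vgdisplay. A small sketch (vg00 is the volume group name used later in this post; substitute your own):

```shell
# Print the number of free physical extents (PEs) in volume group vg00.
# "Free  PE / Size" is the line vgdisplay uses for unallocated space;
# the fifth whitespace-separated field is the extent count.
vgdisplay vg00 | awk '/Free  PE/ {print $5}'
```

If this prints 0, there is no room for a snapshot and you would have to shrink a logical volume or add a physical volume first.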
Then there is another caveat: the '/boot' filesystem cannot be in LVM, so it is a normal partition on most systems (also on Exadata). This means snapshots do not help if you want a backup of that filesystem; you need another trick.
Okay, here we go: you have a large modification upcoming and want to be able to restore your system to this moment in time.
1. Backup /boot filesystem
[root@localhost ~]# df /boot
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1               248895     24037    212008  11% /boot
[root@localhost ~]# umount /boot
[root@localhost ~]# dd if=/dev/sda1 of=dev_sda1_backup
[root@localhost ~]# mount /boot
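An optional sanity check of my own (not part of the original procedure): while /boot is still unmounted, compare checksums of the partition and the backup image, so you know the dd copy is good before relying on it:

```shell
# The raw partition and the image file should checksum identically
# while the filesystem is unmounted (no writes in between).
md5sum /dev/sda1 dev_sda1_backup
# If the two checksums differ, redo the dd before mounting /boot again.
```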
2. Create snapshot of logical volumes
In this example it’s only the root filesystem (which is a bit special, because this filesystem is set in grub.conf for the bootloader, and in /etc/fstab).
[root@localhost ~]# lvdisplay -v /dev/vg00/lv_root
    Using logical volume(s) on command line
  /dev/hdc: open failed: No medium found
  --- Logical volume ---
  LV Name                /dev/vg00/lv_root
  VG Name                vg00
  LV UUID                wutwln-ffdB-QRlg-1LgL-XqKB-glvn-OpCowW
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                3.91 GB
  Current LE             125
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
lvdisplay shows the properties of the logical volume to be snapshotted; I use it to spot the number of LEs (logical extents). (The '/dev/hdc: open failed: No medium found' message is just LVM scanning an empty CD-ROM drive and can be ignored.)
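Instead of eyeballing the output, the LE count can also be picked out with awk, ready for reuse in the lvcreate command. A convenience sketch, not something the steps strictly require:

```shell
# Extract the "Current LE" value of the origin volume so it can be
# passed to lvcreate -l without reading it off the screen.
LE_COUNT=$(lvdisplay /dev/vg00/lv_root | awk '/Current LE/ {print $3}')
echo "$LE_COUNT"
```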
[root@localhost ~]# lvcreate -l 125 -s /dev/vg00/lv_root -n lv_root_snap
  /dev/hdc: open failed: No medium found
  Logical volume "lv_root_snap" created
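Because the snapshot is COW, it fills up as the origin gets modified, and a snapshot that runs completely full becomes invalid. During the upgrade it is worth keeping an eye on how full it is; on this LVM version lvdisplay reports this as "Allocated to snapshot":

```shell
# Print how full the snapshot is; the percentage is the fourth
# whitespace-separated field of the "Allocated to snapshot" line.
lvdisplay /dev/vg00/lv_root_snap | awk '/Allocated to snapshot/ {print $4}'
```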
The system has now been backed up in a way that lets us revert to this point in time.
Next you do the O/S upgrade, software upgrade or whatever. Suppose it goes horribly wrong, and you need to restore the system to the previous situation.
1. Rename current logical volume
In this case, I rename my lv_root logical volume to lv_root_old:
[root@localhost ~]# lvrename /dev/vg00/lv_root /dev/vg00/lv_root_old
  /dev/hdc: open failed: No medium found
  Renamed "lv_root" to "lv_root_old" in volume group "vg00"
The logical volume (which we are currently using) has been renamed.
2. Create new lv_root
This is the logical volume into which we are going to copy the snapshot contents.
[root@localhost ~]# lvcreate -l 125 -n lv_root /dev/vg00
  Logical volume "lv_root" created
3. Populate the new lv_root with the snapshot contents
[root@localhost ~]# dd if=/dev/vg00/lv_root_snap of=/dev/vg00/lv_root
8192000+0 records in
8192000+0 records out
4194304000 bytes (4.2 GB) copied, 93.0858 seconds, 45.1 MB/s
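An optional verification step of my own: since both volumes are the same size, checksumming the snapshot and the freshly written volume and comparing the results confirms the dd completed correctly:

```shell
# Compare source (snapshot) and destination (new lv_root) block for block.
src=$(dd if=/dev/vg00/lv_root_snap bs=1M 2>/dev/null | md5sum | awk '{print $1}')
dst=$(dd if=/dev/vg00/lv_root bs=1M 2>/dev/null | md5sum | awk '{print $1}')
[ "$src" = "$dst" ] && echo "copy verified" || echo "MISMATCH - redo the dd"
```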
4. Restore the /boot filesystem
[root@localhost ~]# umount /boot
[root@localhost ~]# dd if=dev_sda1_backup of=/dev/sda1
[root@localhost ~]# mount /boot
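Before rebooting, it is worth confirming that grub.conf and /etc/fstab still point at /dev/vg00/lv_root. Because the new logical volume got the original name back, no edits should be needed, but checking is cheap (paths as on RHEL/OL 5):

```shell
# Both the bootloader config and fstab should reference the root LV
# by its (restored) name; -H prefixes each match with its filename.
grep -H 'lv_root' /etc/fstab /boot/grub/grub.conf
```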
[root@localhost ~]# shutdown -r now
This sequence of events enabled me to restore my system to the pre-modification situation. Of course you should test this very thoroughly for your own situation, but it offers an elegant way to do it, with few external dependencies.
Post restore/cleaning up:
When the system has been reverted to its old situation, we are left with a logical volume and a snapshot which are probably of no use anymore. These can be cleaned up as follows (removing the origin prompts for its snapshot as well):
[root@localhost ~]# lvremove /dev/vg00/lv_root_old
Do you really want to remove active logical volume lv_root_snap? [y/n]: y
  Logical volume "lv_root_snap" successfully removed
Do you really want to remove active logical volume lv_root_old? [y/n]: y
  Logical volume "lv_root_old" successfully removed