Taking advantage of btrfs snapshots

One of the killer features of BTRFS is snapshots.

In a nutshell, imagine you just installed your system and want a “copy” of that current state, so that if the system ever crashes you can go back to that “stable” moment, avoiding all the fuss of reinstalling and potentially losing data:

btrfs subvolume snapshot <source> <destination>

This will create that “copy” of the <source> btrfs subvolume on the desired <destination>.
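
For example (paths purely illustrative, and assuming / is itself a btrfs subvolume with a /.snapshots directory already created), a read-only snapshot of a freshly installed root could look like this:

btrfs subvolume snapshot -r / /.snapshots/fresh-install

The -r flag makes the snapshot read-only, which is handy when you want to be sure nobody touches your “known good” copy.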

You can create a snapshot inside of your subvolume: BTRFS is clever enough to know this and not create recursive snapshots.

This is useful for going back to previous states or just to recover data you have previously deleted (un)intentionally.

This feature is pretty neat, but it loses much of its value if you can’t have it done automatically.

Sure, you could create a cron job to run the snapshot command every now and then, but the folks over at SUSE have already thought of this and created a very handy tool called snapper.

To start using snapper (after pulling it from the repos or the AUR), you have to create configurations. Let’s start by creating a configuration for our root subvolume:

snapper -c root create-config /
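
The same goes for any other subvolume you care about; for instance, if you have a separate /home subvolume, something along these lines would work (the config name “home” is just my choice here):

snapper -c home create-config /home
snapper list-configs

The second command simply lists the configurations snapper knows about, so you can check everything got registered.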

However, snapper’s timeline snapshots will not run if you don’t have a cron daemon such as cronie running, so install it (if needed) and enable it:

systemctl enable --now cronie

By default, this will create one snapshot every hour. You can list the current snapshots with the command:

snapper -c root list

and see which snapshots you have available.
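
If one snapshot per hour is too much (or too little) for your taste, the timeline behaviour lives in the config file snapper just created, /etc/snapper/configs/root on most distros. The values below are just an illustration, not a recommendation:

TIMELINE_CREATE="yes"
TIMELINE_LIMIT_HOURLY="5"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="3"

TIMELINE_CREATE toggles the hourly snapshots, and the LIMIT variables control how many of each kind snapper’s cleanup keeps around.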

You can delete snapshots using

snapper -c root delete <id>

where <id> can be either a single ID or a range of snapshots like “3-20”, in which case snapper will delete all the snapshots in that range. Don’t worry if, say, there is no snapshot with ID 10 in that range: snapper will skip nonexistent snapshots and won’t fail.
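
For example, to wipe out a whole range of old snapshots in one go:

snapper -c root delete 3-20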


Pretty nice, right?

Now, let’s take it up a notch: let’s say your system fails and you want to revert to a previous snapshot. Snapper has a built-in “snapper rollback” feature that I didn’t manage to make work; besides, I prefer to do this kind of stuff manually. It helps you understand what is really going on 🙂

Just boot a live system and mount the root btrfs filesystem

mount /dev/sda1 /mnt

Now, you will have all your subvolumes under /mnt

Let’s say you created the btrfs filesystem at /dev/sda1 and created two different subvolumes: /mnt/root and /mnt/home

Snapper will have created its snapshots under /mnt/root/.snapshots/#/snapshot/, with /mnt/root/.snapshots being another subvolume itself.
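
If you’re not sure what your layout looks like, listing the subvolumes of the mounted filesystem is a harmless way to check (the names below just follow this example):

btrfs subvolume list /mnt

You should see root, home and the snapper snapshots listed as subvolumes.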

You should first move this subvolume out of the root subvolume (you can keep it as a separate subvolume at the same level as /mnt/root and /mnt/home, but let’s leave that for later on):

mv /mnt/root/.snapshots /mnt

Then, rename the “broken” root subvolume

mv /mnt/root /mnt/root.broken

Then, find a snapshot that you know was still working under /mnt/.snapshots/#/ and move it back into place as the root subvolume:

mv /mnt/.snapshots/119/snapshot /mnt/root

…and you’re done: unmount and reboot and you will be back at your working system.
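
One caveat worth hedging on: depending on how snapper was set up, the snapshot you just moved may be a read-only subvolume, and booting from a read-only root will not end well. If that’s your case, you can clear the flag (using snapshot 119 from the example above):

btrfs property set -ts /mnt/root ro false

Alternatively, instead of moving it you could have taken a writable snapshot of it with btrfs subvolume snapshot, which achieves the same result.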


Migrate BTRFS setup to a new (bigger or equal) disk: The dirty way

Imagine your btrfs disk is filling up, is about to fail, or you just want to install a bigger hard drive, or even better: you are migrating to an SSD setup.

Migrating partitions with traditional filesystems such as ext4 or FAT32 is easy: you format a partition of the same type on the new disk and simply

cp -a /mnt/source/. /mnt/destination/

where /mnt/source is the mountpoint for the old disk’s partition and /mnt/destination is the new one’s.

But with btrfs, things get a little more complicated, since the contents of those partitions are not just plain files and folders: you have subvolumes, snapshots, etc., which behave in special ways and have specific commands to manage them.

There is a fast solution, though, in case you want to migrate without all the fuss of replicating the subvolumes one by one with btrfs send and receive, etc.
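
Just for reference, that more “proper” route looks roughly like this for each subvolume (paths made up for this sketch, assuming the old filesystem is mounted at /mnt/old and the new one at /mnt/new; send only works on read-only snapshots):

btrfs subvolume snapshot -r /mnt/old/root /mnt/old/root-ro
btrfs send /mnt/old/root-ro | btrfs receive /mnt/new

Multiply that by every subvolume and snapshot you want to keep and you’ll see why the shortcut below is tempting.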

You can simply add the new disk’s partition to the btrfs pool, rebalance the data so it is spread across both devices (converting to raid0), and finally remove the old disk from the array.

Throughout this section I’m going to assume /dev/sda2 is the btrfs partition on the old disk, while /dev/sdb2 is the btrfs partition on the new one.

First, we create the btrfs filesystem on the new partition:

mkfs.btrfs /dev/sdb2

Then, we add the partition to our array. Assuming you mounted /dev/sda2 on /mnt, we would add the new partition like so:

btrfs device add /dev/sdb2 /mnt

You will see a warning saying that all the data on /dev/sdb2 will be destroyed; that’s because the partition is wiped and becomes part of the existing filesystem, sharing its UUID with /dev/sda2.
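
If you want to double-check that both devices are now part of the same filesystem, a harmless sanity check is:

btrfs filesystem show /mnt

which should list /dev/sda2 and /dev/sdb2 under the same filesystem UUID.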

Afterwards, we have to convert the data structure to raid0:

btrfs balance start -f -dconvert=raid0 -mconvert=raid0 -sconvert=raid0 /mnt

Balance will complain if we don’t specify the “-f” (force) option: BTRFS is reluctant to convert the system and metadata chunks to a less redundant profile, but in this case it’ll be fine.

NOTE: These operations might take a long time to complete, depending on the amount of data on your btrfs partition(s).
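
You can keep an eye on the conversion while it runs with a harmless status query from another terminal:

btrfs balance status /mnt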

Finally, once the balance concludes, you can remove the “old” disk’s partition:

btrfs device delete /dev/sda2 /mnt

Btrfs will take a really long time on this final step, since it has to move every chunk of data remaining on /dev/sda2 over to /dev/sdb2, but once it’s done, you can safely reboot and remove the old disk.
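
If you want to be sure before pulling the old disk, a quick check that only /dev/sdb2 remains (and how full it is) can’t hurt:

btrfs device usage /mnt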

I must say again, this is a “dirty” and not-so-elegant method, but it does the job and is the best option if you are in a rush or the subvolume/snapshot structure is too complicated. Keep in mind that the new partition must be the same size as or bigger than the old one for this to work without problems.

Enjoy!