Taking advantage of btrfs snapshots

One of the killer features of BTRFS is snapshots.

In a nutshell, imagine you just installed your system and want a “copy” of that current state in case the system crashes in the future, so you can go back to that “stable” moment and avoid all the fuss of reinstalling and potentially losing data:

btrfs subvolume snapshot <source> <destination>

This will create that “copy” of the <source> btrfs subvolume at the desired <destination>. Thanks to copy-on-write, the snapshot is nearly instant and takes up almost no extra space at first.
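
For example, a quick sketch: assuming your root subvolume is mounted at / and you want a snapshot at /root-backup (both paths are just examples), you would run the following. The -r flag makes the snapshot read-only, which is usually what you want for a backup:

btrfs subvolume snapshot -r / /root-backup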

You can even create a snapshot inside the subvolume being snapshotted: BTRFS is clever enough to know this and won’t create recursive snapshots.

This is useful for going back to previous states or just to recover data you have previously deleted (un)intentionally.

This feature is pretty neat, but it is far less useful if you can’t have it run automatically.

Sure, you could create a cron job to run the snapshot command every now and then, but the folks over at SUSE have already thought of this and created a very handy tool called snapper.

To start using snapper (after pulling it from the repos or the AUR), you have to create configurations. Let’s start by creating one for our root subvolume:

snapper -c root create-config /
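
If /home lives on its own subvolume, you would create a second configuration for it the same way (the config name “home” is just a convention):

snapper -c home create-config /home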

However, the automatic snapshots won’t be taken without a cron daemon like cronie running, so install it (if needed) and enable it:

systemctl enable cronie
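
Alternatively, if you’d rather avoid cron entirely, recent snapper packages ship systemd timers that do the same job; assuming your distro includes them, you can enable those instead:

systemctl enable --now snapper-timeline.timer
systemctl enable --now snapper-cleanup.timer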

By default, this will create one snapshot every hour. You can list the current snapshots with:

snapper -c root list

and see which snapshots you have available.
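
Besides the hourly timeline, you can also take a one-off snapshot by hand before doing something risky; the description text here is just an example:

snapper -c root create --description "before kernel upgrade"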

You can delete snapshots using:

snapper -c root delete <id>

As the ID, you can enter either a single ID or a range like “3-20”, and snapper will delete all the snapshots in that range. Don’t worry if, say, there’s no snapshot with ID 10 in that range: snapper will simply skip nonexistent snapshots and won’t fail.
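
For example, this hypothetical call would remove everything from snapshot 3 up to snapshot 20:

snapper -c root delete 3-20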


Pretty nice, right?

Now, let’s take it up a notch. Let’s say your system fails and you want to revert to a previous snapshot. Snapper has a built-in “snapper rollback” feature that I didn’t manage to make work; besides, I prefer to do this kind of stuff manually: it helps you understand what is really going on 🙂

Just boot a live system and mount the root btrfs filesystem:

mount /dev/sda1 /mnt
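
If you had previously changed the default subvolume with btrfs subvolume set-default, mount the top level explicitly instead (subvolume ID 5 is always the top level):

mount -o subvolid=5 /dev/sda1 /mnt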

Now, you will have all your subvolumes under /mnt

Let’s say you created the btrfs filesystem on /dev/sda1 with two subvolumes, root and home, which now appear as /mnt/root and /mnt/home.
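
For reference, a layout like that would have been created at install time with something along these lines (the device name is just an example):

mkfs.btrfs /dev/sda1
mount /dev/sda1 /mnt
btrfs subvolume create /mnt/root
btrfs subvolume create /mnt/home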

Snapper will have created its snapshots under /mnt/root/.snapshots/#/snapshot/, with /mnt/root/.snapshots itself being another subvolume.

You should first move this subvolume out of the root subvolume (you could keep it in a separate subvolume at the same level as /mnt/root and /mnt/home, but let’s leave that for later):

mv /mnt/root/.snapshots /mnt

Then, rename the “broken” root subvolume:

mv /mnt/root /mnt/root.broken

Then, find a snapshot that you know was still working under /mnt/.snapshots/#/ and move it back to the top level:

mv /mnt/.snapshots/119/snapshot /mnt/root
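
One caveat: depending on version and configuration, snapper may have created that snapshot read-only, and a read-only root won’t boot properly. If that’s the case, flip it back to read-write first:

btrfs property set -ts /mnt/root ro false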

…and you’re done: unmount, reboot, and you will be back on your working system. (If your fstab or bootloader references the root subvolume by subvolid rather than by name, remember to update that ID, as the replacement subvolume has a different one.)
