Importing a non-exported ZFS pool on a new system

So, recently, I replaced my system’s HDD. I could have just copied the old system over to the new disk, but that Arch installation had been with me for years without major issues and I wanted to start fresh, especially since I was leaving BIOS behind and diving into UEFI.

To give a bit of background, I have four hard drives: one system disk with a Windows partition and an Arch Linux partition, and three 2 TB HDDs forming a raidz (RAID5) setup with about 4 TB of usable space, where I can afford to lose one disk and still keep my data while I run to the nearest components shop to get a replacement for the failed one.

ZFS keeps its information in a cache file under /etc/zfs so it can re-assemble the pools at boot time, but I simply forgot to export the pool, the step that “releases” it, so to speak, from the old system so it can be imported on the new one.
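For reference, a clean hand-over would have looked something like this (using tank as a placeholder name for my raidz pool):

zpool export tank
zpool import tank

The export on the old system is what releases the pool; the import on the new one then works without any forcing.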

The process for importing a non-exported pool is fairly easy:

zpool import -d /dev/disk/by-id -aN -f

This scans the devices under /dev/disk/by-id (-d), imports every pool it can find (-a) without mounting its datasets (-N), and forces the import (-f) even though the pool was never exported. Verify the results with:

zpool status
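On Arch, the pool also needs to come back by itself at boot. A minimal sketch of that step, again assuming the placeholder pool name tank, is to point the cache file at the standard location and enable the ZFS units:

zpool set cachefile=/etc/zfs/zpool.cache tank
systemctl enable zfs-import-cache zfs-mount zfs.target

The cachefile property is what writes the pool configuration into /etc/zfs/zpool.cache, which zfs-import-cache reads at boot.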

Pretty simple, isn’t it? Well, not quite. When you install Arch with:

pacstrap /mnt base

“base” includes mdadm, a clever piece of software that scans your partitions automatically at boot time and assembles any array it can detect, regardless of the configuration in /etc/mdadm.conf, pulling its information from the metadata on each disk: if it finds partitions marked as array type, mdadm decides there is an array, not caring whether the disk also carries, say, a ZFS label.
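You can check what mdadm thinks of your disks with a couple of read-only commands (the device name here is just an example):

cat /proc/mdstat
mdadm --examine /dev/sdb

/proc/mdstat shows any arrays that are currently assembled, while --examine dumps whatever md superblock is still sitting on a given disk.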

This wouldn’t have been a problem if my disks had never been part of a software array, but they had, and when I made the change to ZFS I did not clear the old metadata properly.

Before you put a disk to use with mdadm, or after you stop using it in an array, you should run

mdadm --zero-superblock /dev/sda

on every member disk of your array. This clears the md superblock from the disk, so mdadm will stop detecting it as an array member and you avoid problems later. Be careful, though: you don’t want to run it after you have put new data on the disk, or you may lose that data.
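In my case that would mean all three members of the raidz, something along these lines (the device names are placeholders for my actual disks):

for disk in /dev/sdb /dev/sdc /dev/sdd; do
    mdadm --zero-superblock "$disk"
done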

There’s another, more generic way to do this (also destructive, sorry):

wipefs -a /dev/sdX

This clears every (-a) signature it recognises on the disk: filesystem, RAID and partition-table metadata alike, leaving the disk without any partition information at all.
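If you only want to see what is on a disk before destroying anything, running wipefs without -a just lists the signatures it finds:

wipefs /dev/sdX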

Going back to my issues with ZFS: after re-importing the pool and letting the system generate the cache files needed for the ZFS import at boot time, the pool was not surviving a reboot; instead, I found myself with a “/dev/mdXXX” device.

My first reaction after finding out was to remove the mdadm package completely.

In the future, I plan to wipe every disk of the array one by one to avoid this issue the next time I reinstall the system: pull a device out of the array, run wipefs on it, then re-add it and let the pool resilver, as sketched below. I’m just not sure whether the stress of three resilvering operations is worth the effort.
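A rough sketch of that per-disk cycle, with pool and device names as placeholders, would be:

zpool offline tank ata-disk1
wipefs -a /dev/disk/by-id/ata-disk1
zpool replace tank ata-disk1
zpool status tank

Each pass has to wait for zpool status to report the resilver as finished before moving on to the next disk; doing them one at a time means the pool is never missing more than one member, which is all a raidz1 can afford.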