Had a bit of a scare this morning on one of my zpools: 24TB of data was missing. Although I couldn’t find my data, zpool status showed that everything was online.
  scan: scrub in progress since Mon Sep 11 03:24:01 2017
        23.8T scanned out of 25.2T at 249M/s, 1h42m to go
        0 repaired, 94.21% done
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors
A month or so ago, I had implemented some hacky mount points in /etc/fstab to bind a user folder on my main filesystem to a directory on the zpool. I had forgotten all about it…
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/sdi1 / ext4 errors=remount-ro 0 1
/dev/sdi5 none swap sw 0 0
/storage/redacted /home/red/act/ none bind 0 0
/storage/redacted2 /home/red/act2/ none bind 0 0
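In hindsight, if I'd wanted to keep the binds, the safer route would have been to tell the init system to wait for the ZFS mounts before binding. A sketch, assuming a systemd-based distro; the nofail and x-systemd.requires-mounts-for options come from systemd's fstab handling, not from my original setup:

```
# <file system>    <mount point>   <type> <options>                                           <dump> <pass>
/storage/redacted  /home/red/act/  none   bind,nofail,x-systemd.requires-mounts-for=/storage  0      0
/storage/redacted2 /home/red/act2/ none   bind,nofail,x-systemd.requires-mounts-for=/storage  0      0
```

With that, systemd orders the bind mounts after /storage is mounted, and nofail keeps a missing pool from blocking the boot.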
When the server rebooted, the /etc/fstab mount points took priority over zfs-mount. So fstab was binding to a zpool that didn’t yet exist, and when the zpool tried to come online, the mount point was already in use.
I unmounted the binds, deleted the hacky lines from /etc/fstab, and restarted the zfs-mount service. Easy fix!
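The recovery boiled down to a few commands. Roughly (the paths match my setup above, and zfs-mount.service is the ZFS-on-Linux systemd unit; your distro may name things differently):

```shell
# Unmount the stale bind mounts that were shadowing the pool's directories
umount /home/red/act
umount /home/red/act2

# ...then delete the two bind lines from /etc/fstab in an editor...

# Re-run the ZFS mounts now that the mount points are free
systemctl restart zfs-mount.service

# Verify the datasets are mounted where expected
zfs mount
```

Running zfs mount with no arguments just lists the currently mounted datasets, which made it easy to confirm the 24TB was back where it belonged.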