How I moved my Ubuntu 20.04 ext4 install to encrypted ZFS

Anders Aagaard
6 min read · May 19, 2020


Hi

I wanted to share how I did this, because the frustration level was fairly high. I’ve been using ZFS for a while now, and I really like it, but replacing my root filesystem was a bit of a project.

Note that I don’t recommend doing this unless you are fairly confident in your devops capabilities and in your backup solution.

The default Ubuntu ZFS installer is currently a bit limited: it only supports wiping and partitioning the entire disk, and it does not support encryption. I wanted root encryption and custom partitioning, so I had to roll my own solution. (Note that with encryption enabled on the root pool, every dataset is encrypted and you cannot opt out per dataset; that was what I wanted.)

My plan was:

  1. Install the ZFS packages + systemd units on my current Ubuntu install
  2. Boot from a USB pen and rsync everything to a backup
  3. Set up the ZFS pools
  4. Rsync everything back
  5. Chroot and fix some minor things
  6. Reboot and have fun

There were a … few hiccups between steps 5 and 6.

Step 1 — Install zfs packages

This is the easiest part.

apt install zfs-initramfs zsys
apt purge dracut zfs-dracut # I used ZFS with dracut before, so I needed to replace it with zfs-initramfs
systemctl enable zfs-import-cache.service
systemctl enable zfs-mount.service
systemctl enable zfs-share.service
systemctl enable zfs-zed.service
systemctl enable zfs-import.target
systemctl enable zfs.target

(Screenshot in the original post: the zfs entries from systemctl list-unit-files and their state.)
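If you want to check this yourself, the same information comes from:

systemctl list-unit-files | grep -i zfs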

Step 2 — Backup

I booted from a USB pen and rsync'ed everything to a backup drive (including the boot partition).
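Roughly like this; a minimal sketch assuming the old root is mounted at /mnt/oldroot, the boot partition at /mnt/oldboot, and the backup drive at /mnt/backup (all hypothetical paths):

# -a preserves permissions/times, -H hard links, -A ACLs, -X xattrs
rsync -aHAX --numeric-ids --info=progress2 /mnt/oldroot/ /mnt/backup/rootfs/
rsync -aHAX --numeric-ids --info=progress2 /mnt/oldboot/ /mnt/backup/boot/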

Step 3 — Set up zfs pools

Create zfs partitions:

First you need to set up your partitions.

  1. Use partition type a5.
  2. Ensure your bpool (boot pool) partition has at least 2 GB of space. You'll need it for kernels/initrd images + snapshots.
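As a sketch of what that can look like with sgdisk on GPT (the device name is hypothetical, and the type code is one common choice; match it to whatever your partitioning tool calls type a5):

# Double-check the device name before running this!
sgdisk -n1:0:+2G -t1:a504 -c1:bpool /dev/sdX # boot pool partition, at least 2 GB
sgdisk -n2:0:0 -t2:a504 -c2:rpool /dev/sdX # root pool partition, rest of the disk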

Configure zfs install script

Set up the environment variables for the zfs install script. I've used this on two machines so far. It is heavily based on Ubuntu's default install script (from the Ubuntu beta).

export target="/target"
export UUID_ORIG=$(head -100 /dev/urandom | tr -dc 'a-z0-9' | head -c6)
ls -lh /dev/disk/by-id/
# Find your device IDs
export partrpool="/dev/disk/by-id/your_root_uuid" # Root pool, as much space as you want to use
export partbpool="/dev/disk/by-id/your_boot_uuid" # Boot drive - AT LEAST 2 GB
mkdir /target
export user_uuid="5gjgn2" # random, just an identifier used by the install script
export default_user="<your_username>"

Create root pool

What you need to decide here is the -O encryption line. You can set encryption on an individual dataset, or directly on the root pool. If you enable it on the root pool, everything is encrypted, and you cannot turn encryption off at the dataset level. If you enable it per dataset (for example on /home), only that dataset is encrypted.
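If you only want per-dataset encryption, that variant looks roughly like this (a sketch; the dataset name is just an example, and in this layout home directories actually live under rpool/USERDATA). Otherwise, use the pool-wide version below:

# Alternative: encrypt a single dataset instead of the whole pool
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase \
-o keylocation=prompt rpool/home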

zpool create -f \
-O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase \
-o ashift=12 \
-O compression=lz4 \
-O acltype=posixacl \
-O xattr=sa \
-O relatime=on \
-O normalization=formD \
-O canmount=off \
-O dnodesize=auto \
-O sync=disabled \
-O mountpoint=/ -R "/target" rpool "${partrpool}"
# sync=disabled matches the Ubuntu installer (it speeds up the copy);
# consider "zfs set sync=standard rpool" once the migration is done.

Create the rest of the datasets + properties

This should be pure copy & paste, nothing extra needed!

# Boot pool: -d disables all feature flags, then only the GRUB-compatible ones are enabled
zpool create -f \
-o ashift=12 \
-d \
-o feature@async_destroy=enabled \
-o feature@bookmarks=enabled \
-o feature@embedded_data=enabled \
-o feature@empty_bpobj=enabled \
-o feature@enabled_txg=enabled \
-o feature@extensible_dataset=enabled \
-o feature@filesystem_limits=enabled \
-o feature@hole_birth=enabled \
-o feature@large_blocks=enabled \
-o feature@lz4_compress=enabled \
-o feature@spacemap_histogram=enabled \
-O compression=lz4 \
-O acltype=posixacl \
-O xattr=sa \
-O relatime=on \
-O normalization=formD \
-O canmount=off \
-O devices=off \
-O mountpoint=/boot -R "/target" bpool "${partbpool}"
# Root and boot dataset
zfs create rpool/ROOT -o canmount=off -o mountpoint=none
zfs create "rpool/ROOT/ubuntu_${UUID_ORIG}" -o mountpoint=/
zfs create bpool/BOOT -o canmount=off -o mountpoint=none
zfs create "bpool/BOOT/ubuntu_${UUID_ORIG}" -o mountpoint=/boot
# System datasets
zfs create "rpool/ROOT/ubuntu_${UUID_ORIG}/var" -o canmount=off
zfs create "rpool/ROOT/ubuntu_${UUID_ORIG}/var/lib"
zfs create "rpool/ROOT/ubuntu_${UUID_ORIG}/var/lib/AccountsService"
zfs create "rpool/ROOT/ubuntu_${UUID_ORIG}/var/lib/apt"
zfs create "rpool/ROOT/ubuntu_${UUID_ORIG}/var/lib/dpkg"
zfs create "rpool/ROOT/ubuntu_${UUID_ORIG}/var/lib/NetworkManager"
# Desktop specific system datasets
zfs create "rpool/ROOT/ubuntu_${UUID_ORIG}/srv"
zfs create "rpool/ROOT/ubuntu_${UUID_ORIG}/usr" -o canmount=off
zfs create "rpool/ROOT/ubuntu_${UUID_ORIG}/usr/local"
zfs create "rpool/ROOT/ubuntu_${UUID_ORIG}/var/games"
zfs create "rpool/ROOT/ubuntu_${UUID_ORIG}/var/log"
zfs create "rpool/ROOT/ubuntu_${UUID_ORIG}/var/mail"
zfs create "rpool/ROOT/ubuntu_${UUID_ORIG}/var/snap"
zfs create "rpool/ROOT/ubuntu_${UUID_ORIG}/var/spool"
zfs create "rpool/ROOT/ubuntu_${UUID_ORIG}/var/www"
# USERDATA datasets
# Datasets associated with each user are created by the installer.
zfs create rpool/USERDATA -o canmount=off -o mountpoint=/
# Set zsys properties
zfs set com.ubuntu.zsys:bootfs='yes' "rpool/ROOT/ubuntu_${UUID_ORIG}"
zfs set com.ubuntu.zsys:last-used=$(date +%s) "rpool/ROOT/ubuntu_${UUID_ORIG}"
zfs set com.ubuntu.zsys:bootfs='no' "rpool/ROOT/ubuntu_${UUID_ORIG}/srv"
zfs set com.ubuntu.zsys:bootfs='no' "rpool/ROOT/ubuntu_${UUID_ORIG}/usr"
zfs set com.ubuntu.zsys:bootfs='no' "rpool/ROOT/ubuntu_${UUID_ORIG}/var"
mkdir -p /target/root "/target/home/${default_user}"
# Home dataset for root
export user="root"
export userhome="/root"
zfs create "rpool/USERDATA/${user}_${user_uuid}" -o canmount=on -o mountpoint=${userhome}
chown root:root /target/root
bootfsdataset=$(grep "\s${target}\s" /proc/mounts | awk '{ print $1 }')
zfs set com.ubuntu.zsys:bootfs-datasets="${bootfsdataset}" rpool/USERDATA/${user}_${user_uuid}
# Home dataset for your user
export user="${default_user}"
export userhome="/home/${default_user}"
zfs create "rpool/USERDATA/${user}_${user_uuid}" -o canmount=on -o mountpoint=${userhome}
chown 1000:1000 /target/home/${default_user}
bootfsdataset=$(grep "\s${target}\s" /proc/mounts | awk '{ print $1 }')
zfs set com.ubuntu.zsys:bootfs-datasets="${bootfsdataset}" rpool/USERDATA/${user}_${user_uuid}
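Before moving on, it's worth a quick sanity check that the layout looks right:

# All datasets should show up with the expected mountpoints under /target
zfs list -o name,mountpoint,canmount -r rpool bpool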

Step 4 — Rsync everything back:

This should be self-explanatory. I hope your backup works.
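Mirroring step 2, something like this (paths hypothetical; the new pools are still mounted under /target):

rsync -aHAX --numeric-ids --info=progress2 /mnt/backup/rootfs/ /target/
rsync -aHAX --numeric-ids --info=progress2 /mnt/backup/boot/ /target/boot/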

Step 5 — Chroot and fix a few things.

OK, so this is where the real fun began.

Since you just created the pools, they are active and mounted under /target, so you only need this step if you have rebooted since.

mkdir /target
# Mount the pool relative to /target
zpool import -f -R /target rpool
# Optional: load your encryption key, if you use encryption
zfs load-key rpool
# Mounting manually is only needed if your rpool is encrypted; otherwise the import mounts it by default
zfs mount -a
# Include bpool
zpool import -f -R /target bpool

Fix your fstab:

nano /target/etc/fstab
# You can comment out most things. ZFS handles the mounting for you.
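On an EFI system, the only line you probably still need is the one for the EFI partition; a sketch with a made-up UUID:

# Everything ZFS is mounted by zfs/systemd, not fstab
UUID=ABCD-1234 /boot/efi vfat umask=0077 0 1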

Chroot into your new root:

# Mount the EFI partition if you have an EFI system (adjust the device to yours)
mount /dev/nvme0n1p1 /target/boot/efi/
mount --bind /dev /target/dev
mount --bind /sys /target/sys
mount -t proc none /target/proc
chroot /target

Now for the actual commands I needed to run in chroot to get this working:

# If you had an encrypted (LUKS) root before, remove the old crypttab.
rm /etc/crypttab
mkdir /etc/zfs/zfs-list.cache
touch /etc/zfs/zfs-list.cache/rpool
touch /etc/zfs/zfs-list.cache/bpool
# Regenerate the initramfs without crypttab and with zfs
update-initramfs -k all -c
update-grub
grub-install # You do need this, even if you had grub before.
# And set a root password if you don't have one. We'll need it to recover soon...
passwd

Now for the fun part, rebooting — and no, it will not work yet.

exit
umount /target/boot/efi
umount /target/dev /target/sys /target/proc
zfs umount -a
reboot

On the GRUB menu you'll need to add zfs_force=1 to the kernel command line arguments, and boot in recovery mode to drop to a root shell.

At this point Ubuntu will only have mounted root, nothing else. Run zfs mount -a, and manually clean up the directories Ubuntu has already written files into!

zfs mount -a
# If it fails to mount a dataset because files are in the way - for example /root:
ls -a /root
rm /root/.bash_history
zfs mount -a
# Keep going through the other directories that fail to mount!
# Finally:
zpool import bpool -f

After this you can execute:

/lib/systemd/system-generators/zfs-mount-generator
update-initramfs -k all -c # not sure this is needed... but it won't hurt.

This generator creates systemd units that mount all the datasets under your root, such as /root and /var/spool from above.
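To sanity-check what the generator works from: the cache files (populated by zed) should list each dataset with its mountpoint, and empty files mean no mount units get generated:

cat /etc/zfs/zfs-list.cache/rpool
cat /etc/zfs/zfs-list.cache/bpool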

Step 6 — Reboot and have fun:

Hopefully things boot now.

If they don't, here are some of the fun pitfalls I ran into:

  1. zfs-dracut used instead of zfs-initramfs. I had this for historical reasons, and it does not work the same way as the new Ubuntu boot scripts at all.
  2. crypttab removed from /etc, but still present in the initramfs. This will fail on boot.
  3. zfs systemd units not enabled.
  4. Missing /etc/zfs/zfs-list.cache files, which means your datasets won't be mounted at boot.
  5. I initially had "/boot/efi/grub /boot/grub none defaults,bind 0 0" in fstab, as that is how Ubuntu sets things up now. However, I had run grub-install without that bind mount in place, which means my grub.cfg was actually sitting underneath the bind mount, and any change I made to grub.cfg had no effect…
  6. And for the most fun one: one of my computers had a tiny boot partition, set up years ago at 0.5 GB. It turns out 0.5 GB is enough to install two kernels (using rsync from backup) and reboot. But… it will fail on any kernel update, because zsys takes a snapshot before the install (see the sketch below for keeping an eye on this).
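A quick way to watch out for that last one (the snapshot name is a made-up example):

# How full is the boot pool?
zpool list bpool
# Which snapshots are taking the space?
zfs list -t snapshot -r bpool
# Destroy one you no longer need
zfs destroy bpool/BOOT/ubuntu_abc123@autozsys_example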
