
OpenZFS on DVDs as an INSANE backup method

Michael Renz
Oct 7, 2019

I’m sure this has got to be something that somebody’s already done before, but here goes…

For a good amount of time now, I’ve been using loopback images as an incredibly easy method of constructing OpenZFS pools.

Yes, you read that right — I’ve literally dd'd myself an image, copied it X times, and attached the copies as loopback devices to build a single OpenZFS pool.

“What would ever inspire me to do something that insane??!?!?!?”, you ask?

This article only gets more insane from here, so strap yourself to your chairs, friends.

What would I possibly get by building OpenZFS pools onto images????

Portability

Image files are super easy to copy to cloud providers or move around to any type of other filesystem, disk, or OS (or DVD in this use-case).

Reliability

Ever build a 5-disk RAID-Z3? YOU ONLY NEED TWO DISKS TO RESTORE THE WHOLE THING!

Forgot to keep your AWS account active and lost all your S3 backups?

Any two of those five images you stashed anywhere else have you covered.
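The arithmetic behind that claim is worth spelling out. Here’s a quick sketch — the five-disk/triple-parity numbers come from the RAID-Z3 example above, and the ~4 GB per-image size is a hypothetical stand-in:

```shell
# RAID-Z3 stores three parity blocks per stripe, so a 5-image pool
# survives the loss of any three images.
disks=5
parity=3                                    # raidz3 = triple parity
img_gb=4                                    # hypothetical ~4 GB per image

echo "images you can lose:    $parity"
echo "images needed to recover: $((disks - parity))"          # 2
echo "usable capacity: $(( (disks - parity) * img_gb )) GB"   # 8 GB
```

The trade-off is that three of your five images are pure parity — you pay 60% of the raw space for the ability to lose any three.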

Portability

Ok, I know this is redundant, but really — look around: how many other filesystems do you know that are completely supported (thanks Sun and OpenZFS) on almost every OS in the universe?

Seriously, I’m doing this all on MacOS as my primary OS.

I can import them into my home OpenIndiana system, any of the *BSDs, any recent (last 10 years, I think) distribution of Linux, or another MacOS system.

Since I’m personally using defaults supported from the last version of ZFS on OpenSolaris (RIP), it’s likely that even the last version of Oracle Solaris that exists can import the pool images.

Insane features

The last version of OpenSolaris included support for deduplication natively!

Got that bulk folder of pictures you never want to lose but can’t be bothered to manually go through and clean up?

Just throw them all into a ZFS dataset after you’ve run zfs set dedup=on %DATASETNAME% on it and let OpenZFS handle the rest!
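To see why that works on a folder full of duplicate photos, here’s a rough, filesystem-agnostic sketch of the idea (the files are made up): with dedup=on, ZFS checksums every block on write, and a block whose checksum it has already seen is referenced rather than stored again.

```shell
# Hypothetical demo: two files with identical contents.
demo=$(mktemp -d)
printf 'same picture bytes' > "$demo/vacation_1.jpg"
cp "$demo/vacation_1.jpg" "$demo/vacation_2.jpg"

# ZFS dedup keys on block checksums; counting the unique checksums
# here shows the two files would cost only one stored copy.
cksum "$demo"/*.jpg | awk '{print $1}' | sort -u | wc -l

rm -rf "$demo"
```

(Real dedup works per block, not per file, and its checksum table eats RAM — it’s a good fit for a mostly read-only archive pool like this one.)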

I digress….

Ok, so how does this work, exactly?

SIDE NOTE: This covers how to do it on MacOS. On other OS’s you will likely need to lookup the ways to mount images loopback. (Ok, so I included some ways below in the “Attaching the images as loopback” section…)

Making the images

This step is simple:

dd if=/dev/zero of=~/zfs_1.iso bs=4k count=1000000
# For an ~3.8 GiB image (4k * 1,000,000 = 4,096,000,000 bytes). Change `count` to suit the size you need.
cp ~/zfs_1.iso ~/zfs_2.iso
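If you’re targeting a particular medium, the count is just the target size divided by the block size. A quick sketch — the 4.7 GB figure is the nominal single-layer DVD capacity, so check your actual discs:

```shell
bs=4096                       # matches bs=4k in the dd above
dvd_bytes=4700000000          # nominal single-layer DVD (decimal GB)
count=$((dvd_bytes / bs))     # integer division rounds down, so it fits

echo "count=$count"                         # count=1147460
echo "image size: $((count * bs)) bytes"
```

In practice you’ll want to shave the count down a bit further so the image comfortably fits after any burning overhead.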

Attaching the images as loopback

Regardless of the OS you’re using, you’re going to want to be attaching via loopback, not mounting.

On Linux this step is different, but on MacOS it’s the following.

~ hdiutil attach -imagekey diskimage-class=CRawDiskImage -nomount ~/zfs_1.iso
/dev/disk3
~ hdiutil attach -imagekey diskimage-class=CRawDiskImage -nomount ~/zfs_2.iso
/dev/disk4

That just attached ~/zfs_1.iso to /dev/disk3 and ~/zfs_2.iso to /dev/disk4.

Linux

~ sudo losetup /dev/loop0 ~/zfs_1.iso
~ sudo losetup /dev/loop1 ~/zfs_2.iso

Windows

I think this is what you want, although I know almost nothing about Windows nowadays. Good luck.

Now to make them a single ZFS array

The command below will make them into a single striped (non-redundant) pool/dataset, but if you’d like to do something more complex (like mirror, raidz1, raidz2, or raidz3), you can always check the zpool documentation.

sudo zpool create ISOPOOL /dev/disk3 /dev/disk4
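For reference, the layout keyword just slots in between the pool name and the devices. Here’s a small sketch that only prints the commands — nothing below touches zpool, and the device paths are the ones from the attach step above:

```shell
# Build (but don't run) `zpool create` invocations for common layouts.
make_zpool_cmd() {
  layout=$1; shift
  if [ "$layout" = "stripe" ]; then
    # A plain device list stripes across the devices, no redundancy.
    echo "sudo zpool create ISOPOOL $*"
  else
    # mirror / raidz1 / raidz2 / raidz3 go right before the devices.
    echo "sudo zpool create ISOPOOL $layout $*"
  fi
}

make_zpool_cmd stripe /dev/disk3 /dev/disk4
# → sudo zpool create ISOPOOL /dev/disk3 /dev/disk4
make_zpool_cmd mirror /dev/disk3 /dev/disk4
# → sudo zpool create ISOPOOL mirror /dev/disk3 /dev/disk4
```

Note that raidz3 needs at least four devices (five images for the two-of-five trick mentioned earlier), while a mirror works fine with the two we made here.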

That’s it!

If you do a df, you'll see your pool:

~ df
Filesystem 512-blocks Used Available Capacity iused ifree %iused Mounted on
...
/dev/disk5s1 15235506 4815892 10419614 32% 121 10419614 0% /Volumes/ISOPOOL

(Ok, yours will be empty, of course…this df is from after I put stuff on it)

Now, put stuff on it

Remember to chown the volume to your user first: sudo chown michaelrenz:staff /Volumes/ISOPOOL

I put things I considered “good to test burn” on the new pool. Basically a couple semi-large files I had sitting in my Downloads folder. You’ll see them below.

Time to burn it to DVD!

You can do this one of two ways.

If you’re like me, you’ll want to properly export the pool before you burn the images.

I did that with the following: sudo zpool export ISOPOOL

You could also just burn the ISOs without exporting, although I don’t really recommend that.

Also, you don’t have to hdiutil detach /dev/disk{3,4}, although I did, just to be careful.

Now, put a DVD in your drive and hit the Burn Disk Image "zfs_1.iso" to Disc option when you right/option-click the iso file in Finder.

Do that for both images.

Now the hard step

Got two DVD drives?

You’re going to need two DVD drives.

Because you’re going to put each disc in its own DVD drive and execute the following:

~ sudo zpool import  # You'll hear both drives spin up like mad here
   pool: ISOPOOL
     id: 12503363458523863009
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

	ISOPOOL                  ONLINE
	  PCI0@0-XHC1@14-@1:0    ONLINE
	  PCI0@0-XHC1@14-@2:0    ONLINE

~ sudo zpool import -o readonly=on ISOPOOL

Yeah, that actually works:

~ df -h
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
...
/dev/disk4s1 7.3Gi 2.3Gi 5.0Gi 32% 121 10419614 0% /Volumes/ISOPOOL

~ ls -lah /Volumes/ISOPOOL/
total 4814134
...
-rw-r--r--@ 1 michaelrenz staff 1.7G Apr 5 2019 ableton_live_standard_10.0.6_64.dmg
-rw-r--r--@ 1 michaelrenz staff 491M Jul 8 12:57 camtasia.dmg
-rw-r--r--@ 1 michaelrenz staff 134M May 4 09:14 gephi-0.9.2-macos.dmg

Conclusion

Ok, I’m going to leave it to you as to how useful this is, or what insane things can be done with it.

Seeing as how 100GB BD-Rs are ~$5.50 per disc on Amazon right now, and M-disc versions of that size run ~$14 each, this could end up being an interesting way to back stuff up long-term, provided the discs are stored right.

But even more so, I think it’s important that we start a conversation about the side uses of ZFS in a world that doesn’t really know it exists anymore.

Check out Open ZFS on OSX and the OpenZFS project itself.

Happy tinkering!
