Setting up TrueNAS with Crashplan Pro Backup

shellster
Feb 28

I recently wanted to update my NAS to TrueNAS, as it has come a long way since the old FreeNAS days. However, my current NAS was a custom Debian server with OpenZFS on Linux, CrashPlan Pro, Plex, Syncthing, and various monitoring scripts to scrub the ZFS tank, monitor my UPS, and send me phone alerts. Most of this was easy to transition to TrueNAS. Unfortunately, CrashPlan has not been supported on it for a long time, and I could not find a good guide to setting this up, so I made my own…

CrashPlan is not supported for a variety of reasons. First, CrashPlan long ago dropped any semblance of support for or compatibility with FreeBSD; this is likely at least partially to make it harder to do exactly what I want to do, and also a consequence of the CrashPlan Pro client moving away from Java to Electron. Another issue is that CrashPlan Pro effectively made a GUI a requirement (again, probably to prevent people from doing what I am trying to do). What follows is how I got it working in my environment.

Installing Debian in a Bhyve VM

First, go to the “Virtual Machines” page in TrueNAS and create your VM. On the first screen, configure settings similar to the following:

Step 1 of VM Setup

Make sure to select a “Guest Operating System” of “Linux”, check “Start on Boot”, “Enable VNC”, “Delay VM Boot Until VNC Connects”, and set a “Boot Method” of “UEFI”.

I created a VM with the following settings, which are slightly more than CrashPlan’s basic requirements (https://support.code42.com/Small_Business/Get_Started/CrashPlan_for_Small_Business_requirements), and these seemed to be fine for my 24 TB NAS: 3 GB of RAM, 10 GB of hard drive space, 2 CPUs, 2 cores, 4 threads:

Step 2 of VM Setup

The rest of the setup should be pretty straightforward. Make sure to leave the network settings so that the VM gets its own IP address, and select and upload the install CD for your distro (I assume Debian from now on).

Before you start the VM, click on it, go to “Devices”, and then click “Edit” on the VNC device. Set the resolution to “800x600” or “1280x1024”. This is critical, or you will see a scrambled mess when you boot the VM.

Now start the VM and connect using the built-in VNC display. When prompted, quickly select the “Limited Graphical” installer, then do your install as normal. I recommend using a light GUI like XFCE or LXDE. It is critical that you also check “OpenSSH Server” in the installer. I would deselect the “Print Server” as well.

Once the install completes, it will tell you to remove the install disk and restart. As far as I can tell, the only way to do this is to navigate to the “Devices” tab under your VM in TrueNAS and delete the CDROM device.

Now click “Edit” on the VNC device and uncheck “Delay VM Boot Until VNC Connects”. Finally, edit the resolution to something you would like to use normally; I set it to “1600x1200”. Whatever you pick, make sure it is a standard resolution, and remember it, as we will need it later.

Fixing Booting

After the restart, the VM will likely drop you into a UEFI shell instead of booting Debian, because Bhyve does not persist the EFI boot entry that the Debian installer creates. The fix is a startup.nsh script that chain-loads GRUB. Perform the following sequence from the UEFI shell:

Shell> fs0:
FS0:\> edit startup.nsh

Inside the editor, type the following line:

\EFI\debian\grubx64.efi

Press Ctrl-S and then Enter to save, then Ctrl-Q to quit the editor. Finally, run:

FS0:\> reset

At this point, your VM should boot normally after a delay. However, instead of being greeted with a graphical display, you’ll either see only console output or a scrambled mess. Don’t worry, we’ll fix that in a moment.

Fixing X Windows (Graphical Display)

The first thing we need to do is fix our environment and get vim installed, before updating GRUB to set our resolution correctly:

# apt update
# apt install vim
# export PATH=$PATH:/sbin:/usr/sbin
# vim /etc/default/grub

Now add the following two lines to /etc/default/grub, replacing any existing similar lines, and changing the resolution to match the one you previously set for Bhyve’s VNC device:

GRUB_GFXMODE=1600x1200
GRUB_GFXPAYLOAD_LINUX=keep #Not sure this is needed

Save the file, then run the following:

# update-grub

The last step is to tell X Windows to use the fbdev driver for the graphical interface. To do this, create “/usr/share/X11/xorg.conf.d/10-fbdev.conf” with the following content:

Section "Device"
Identifier "Card0"
Driver "fbdev"
EndSection

This last step is not documented anywhere that I could find, but I was able to derive it from instructions for setting up a BSD-based VM.

At this point, you should be able to reboot the VM, reconnect via VNC, and eventually be greeted with a graphical login screen.

For reasons unknown to me, at least on Debian, X Windows will sometimes fail to launch if you are not connected via VNC when the boot process reaches the X launch point. This does not matter as far as backups go, since the CrashPlan service has already launched by that point. If you connect via VNC and see console output stating that X Windows is about to launch, either log in via SSH and restart X, or restart the VM and immediately attach via VNC until it gets to the login screen.
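If you need to restart X over SSH, something like the following should work (assuming the lightdm display manager that Debian installs alongside XFCE or LXDE; “display-manager” is a generic systemd alias for whichever one is active):

# systemctl restart display-manager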

Configuring CrashPlan

We need to expose our NAS shares to the VM so that they can be backed up by CrashPlan. As far as I can tell, there is no way, at least from the TrueNAS interface, to expose them directly like you can with a TrueNAS jail. The solution I arrived at was to share my NAS files via NFS and mount that share in the VM.

To do this, click on “Sharing” in TrueNAS, then on “Unix Shares (NFS)”, then click “Add”. In the “Paths” section, select the path to the files you want backed up; in my case, I just put the root mount point of the ZFS tank. Make sure the “Enabled” box is checked. Now click on “Advanced Options”. Make sure that “Mapall User” is set to “root” and “Mapall Group” is set to “wheel”, and that “Security” is set to “SYS”. Under “Hosts”, add the IP of the CrashPlan VM. These settings ensure that anyone who can mount the NFS share has full access, and that only the CrashPlan VM’s IP can mount it:

Make your settings look similar.
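In plain-text form, the share settings look roughly like this (the VM IP below is an example; use your own):

Path:          /mnt/tank
Enabled:       checked
Mapall User:   root
Mapall Group:  wheel
Security:      SYS
Hosts:         192.168.20.151   (the CrashPlan VM's IP)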

Now click “Save”.

Go back to your CrashPlan VM. From a root console, run the following:

# apt update
# apt install nfs-common
# mkdir /mnt/<your share mount point>
# vim /etc/fstab

At the end of /etc/fstab, add the following:

<ip address of your NAS>:<NFS export path> /mnt/<your share mount point>  nfs  defaults  0  0

If that was confusing, here’s what mine looks like:

192.168.20.150:/mnt/tank /mnt/tank nfs  defaults  0  0

Now reboot your VM. When you log back in, go to /mnt/tank (or wherever your NFS mount point is) and you should see all your NAS files. You should be able to browse around and add/delete files.
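A quick way to confirm the NFS mount is active (using my /mnt/tank example):

# df -h /mnt/tank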

At this point, you should be able to configure CrashPlan to back up your files.

Monitoring and Restarting CrashPlan

CrashPlan in this setup has two recurring problems: the code42 service sometimes dies, and occasionally the entire VM locks up hard. To solve the first issue, of the CrashPlan service dying:

The first step is to create a restart script. I created it on my NAS share so that it also gets backed up. In my case, the script is located at “/mnt/tank/monitor/Crash_Plan_Monitor”. Here’s what the contents of this script look like:

#!/bin/bash
# Restart the code42 service if no Code42Service process is running.
# The [C] in the pattern keeps grep from matching its own process.
ps auxw | grep -q "[C]ode42Service" || systemctl restart code42

Basically, the above script restarts the “code42” service if it is not running. Make sure to “chmod +x” the script once you have created it. Now, inside the CrashPlan VM, use “crontab -e” as root to edit the system crontab and add the following line at the bottom (updating the path appropriately for your use case):

* * * * * /mnt/tank/monitor/Crash_Plan_Monitor

Now the cron job will run every minute and restart CrashPlan if it dies for any reason.

To solve the second issue of CrashPlan completely locking up the VM:

This issue is a bit more complicated. I found that simply creating a cron job to reboot the VM every 24 hours was insufficient, as the VM would lock up so completely that even this cron job could not run. To address this, I created a heartbeat script, and then a monitoring script to reboot the VM from the outside. Let’s look at the steps.

The first step is to create the heartbeat script. This is as simple as adding the following line to your Crashplan VM’s root crontab, similarly to how we added the previous cron job above:

* * * * * touch /mnt/tank/monitor/crashplan_heartbeat

This will run every minute (when the VM is not completely locked up) and update the modified date on the “crashplan_heartbeat” file on the NAS share. It is a good idea to configure CrashPlan not to back up this file; otherwise it may endlessly be backing it up.

To handle the monitoring of this file and the restarting of the VM, we are going to take advantage of TrueNAS’s API and cron job capabilities. First, we need an API key. Log in with a fully privileged account (I used the “root” account), then click the gear icon in the top right, then “API Keys”:

API Keys Menu

Now click “Add” and create the new API key, making sure to copy down the API token it displays.

Now create the following script (obviously, change it to match your path): /mnt/tank/monitor/Crash_Plan_VM_Restart.py

Your script should look something like this:

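This is a minimal sketch using only the Python standard library, so it can run from a TrueNAS cron job without extra packages. The /vm, /vm/id/{id}/stop, and /vm/id/{id}/start endpoints come from the TrueNAS v2.0 REST API; the base URL, the example VM name, and the 30-second pause between stop and start are assumptions to adjust for your environment:

#!/usr/bin/env python3
# Restart the CrashPlan VM via the TrueNAS v2.0 REST API when the
# heartbeat file goes stale. Standard library only.
import json
import os
import ssl
import sys
import time
import urllib.request

bearer_token = "PASTE-YOUR-API-TOKEN-HERE"  # the API token created above
crashplan_vm_name = "CrashPlanVM"           # exact name from the TrueNAS VM page (example)
heartbeat_path = "/mnt/tank/monitor/crashplan_heartbeat"
base_url = "https://127.0.0.1/api/v2.0"     # assumes the script runs on the TrueNAS host

# The web UI certificate is usually self-signed, so skip verification.
ctx = ssl._create_unverified_context()

def api(method, path, body=None):
    """Make a single authenticated call against the TrueNAS REST API."""
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(
        base_url + path, data=data, method=method,
        headers={"Authorization": "Bearer " + bearer_token,
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req, context=ctx) as resp:
        raw = resp.read()
        return json.loads(raw) if raw else None

# If the heartbeat file was touched within the last three minutes, all is well.
if time.time() - os.path.getmtime(heartbeat_path) < 180:
    sys.exit(0)

# Look up the VM by name, then stop and start it with separate calls, since
# "restart" is not always reliable against a hard-locked VM.
vm = next((v for v in api("GET", "/vm") if v["name"] == crashplan_vm_name), None)
if vm is None:
    sys.exit("VM %r not found" % crashplan_vm_name)
api("POST", "/vm/id/%d/stop" % vm["id"])
time.sleep(30)  # give Bhyve time to fully tear the VM down
api("POST", "/vm/id/%d/start" % vm["id"])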

Make sure to put your API token in the “bearer_token” variable. The “crashplan_vm_name” variable should hold your CrashPlan VM’s exact name as shown on the TrueNAS VM page. The “heartbeat_path” variable should be the path to the heartbeat file we previously set up.

The above script checks that the last modified date on the heartbeat file is no older than three minutes. If it is older, the script makes API calls to TrueNAS to stop and then start the CrashPlan VM. There is an API call to restart the VM, but I chose to make separate calls to stop, then start, because, at least in the TrueNAS web application, clicking “Restart” on a locked VM does not always appear to be reliable. You can find more information on TrueNAS API calls here: https://www.truenas.com/docs/hub/additional-topics/api/rest_api/

The last thing we need to do is create the cron job on TrueNAS to kick off our VM restart script. To do this, navigate in TrueNAS to Tasks -> Cron Jobs. Click “Add” and configure it as follows (updating paths as necessary for your case):

In my example, I configured my job to run every five minutes, which should be often enough.
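As a rough sketch of the fields (the python3 path and description are placeholders; adjust for your system):

Description:  Restart CrashPlan VM if heartbeat goes stale
Command:      python3 /mnt/tank/monitor/Crash_Plan_VM_Restart.py
Run As User:  root
Schedule:     */5 * * * *   (every five minutes)
Enabled:      checked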

That’s it! You should now have backups that work reliably, in spite of CrashPlan’s propensity to crash and lock up.

The Continued Adventures of Preventing CrashPlan Lockup…

When I set up my VM, I gave it three gigabytes of RAM. What I want to do is limit the CrashPlan service to two gigabytes, so that there is always at least one gigabyte of RAM available for other things. I figured out how to do this in a SystemD world using Control Groups.

First, from a root prompt within the CrashPlan VM, you will need to install “cgroup-bin”:

# export PATH=$PATH:/sbin:/usr/sbin
# apt update
# apt install cgroup-bin

Now we need to update the CrashPlan init service to create and properly apply our Control Group on startup. To do this, edit “/etc/init.d/code42” and replace the “SCRIPTNAME” line with the following:

cgcreate -g memory:CrashPlan
echo 2G > /sys/fs/cgroup/memory/CrashPlan/memory.limit_in_bytes
echo 3G > /sys/fs/cgroup/memory/CrashPlan/memory.memsw.limit_in_bytes
SCRIPTNAME="cgexec -g memory:CrashPlan /usr/local/crashplan/bin/service.sh"

The first command creates the Control Group and calls it “CrashPlan”. The second line limits the physical memory the group can use to two gigabytes. The third line limits virtual memory, including disk swap, to three gigabytes; this gives CrashPlan a bit of breathing room above the hard two-gigabyte memory limit by allowing an additional gigabyte of swap. The updated SCRIPTNAME line makes sure that the CrashPlan service always launches inside the control group.

Now save the file. Then run the following to make SystemD reload its service definitions:

# systemctl daemon-reload

Lastly, we need to update our Grub config to make sure Control Groups are configured to work. To do this, edit “/etc/default/grub”. Locate the “GRUB_CMDLINE_LINUX” variable and append the following to the existing line (adding a space if there is anything there already):

cgroup_enable=memory swapaccount=1
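On a default Debian install, GRUB_CMDLINE_LINUX is empty, so the finished line should look something like this:

GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"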

Now run the following to update Grub and then reboot:

# update-grub
# reboot
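After the reboot, once the code42 service has started, you can sanity-check that the limit took effect; the following should print 2147483648 (two gigabytes in bytes):

# cat /sys/fs/cgroup/memory/CrashPlan/memory.limit_in_bytes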

Now CrashPlan should be restricted so that it cannot entirely tank the VM. In addition, the two watch jobs above will restart the CrashPlan service if it crashes, or reboot the VM if it locks up entirely. This should result in a fairly stable environment.
