Inexpensive DIY NAS & VS Code Server on ARM64 SBCs

Arko Basu
16 min read · May 24, 2024


Welcome, tech aficionados and budget-conscious fellow enthusiasts, to a journey where frugality meets functionality! In a world riddled with inflation where every byte counts and every dollar matters, the idea of crafting your own personal server might seem like a relic of the past, but fear not! We’re about to embark on an adventure that proves that building your own low-cost DIY machine is not only feasible but fabulously rewarding.

Inspiration

Truth be told, I’m tired of spending on pricey cloud services and navigating various providers, each with its own costly offerings. I often find myself juggling snapshots, file systems, object storage, and ephemeral compute runtimes, with monthly bills easily exceeding my $150–200 budget for small personal projects. This includes paid plans for hosting, compute, and storage across providers like Huggingface, Google One, Google Cloud, AWS, Azure, and Colab. Besides the cost, each provider requires learning their tech stack, making integration time-consuming and expensive, especially for multi-cloud setups.

And although each cloud provider offers a moderately generous free plan, the usage of these services is very limited. Take, for example, this analysis done at BBVA. You will notice that even though serverless services seem very cost effective at first glance, it doesn't take that much to reach a break-even point in terms of costs, and once you do, the costs for serverless solutions rise very rapidly. Witnessing the rise in cloud service expenses firsthand inspired me to migrate all non-essential data to a low-cost home-lab data lake.

Recently, I discovered inexpensive single-board computers (SBCs), and it has been an absolute delight playing around with them. Seeing the value they provide, I decided to consolidate my data from the cloud onto a self-hosted data lake that can also host a VS Code server with Copilot features.

I’m embarking on this journey to gain control and reduce cloud costs for personal projects, aiming to build an affordable home storage and coding hub. This article shares my experience starting with an inexpensive NAS and Code Server.

In this article we are not going to touch on High Availability (HA), fault-tolerant architectures, or data resiliency. We will revisit HA around the storage layer in a later article, where we discuss building a distributed storage layer to extend our NAS capabilities.

We are also not going to cover local Copilot features in this article; instead, we will use GitHub's Copilot plugin in your remote VS Code Server. A separate article will revisit local Copilot features so you can truly have no subscriptions at all.

Goal

Build a low-cost NAS and Code Server for storage and for hosting a personal development environment, all for about USD 200. By the end of this article you should be able to use a simple self-hosted NAS server to migrate all your OneDrive, Google Drive, and Dropbox files. You should also have a fully functional remote Visual Studio Code Server deployment that preserves your coding experience no matter the device and saves your local device's battery when you're on the go, since all compute-intensive tasks run on the server.

What you are going to need:

  1. Orange Pi 3B 8G + 256GB eMMC — USD 69.99
    The reason I am going with the Orange Pi 3B 8G in this article is that I didn't need a NAS server bigger than 4TB to hold most or all of my non-essential data. And since the OPi 3B has 8GB of LPDDR4 RAM, it should handle up to 8TB of data just fine with whatever RAID or CRUSH map settings a data lake needs.
    As for why Orange Pis? Take a look at this article, which compares the same series of Pi models across various vendors. I personally prefer an Orange Pi, even though the software support is minimal, because of the power it packs at a price point that is the lowest on the market compared to all its competitors.
  2. 5v 3A charger — USD 9.87
  3. 1TB NVMe m.2 2230 SSD — USD 78.99
    This will be dedicated to our VS Code Server, to give us the best possible read/write speeds on the SBC.
  4. (Optional — but recommended) Orange Pi 3b Case — USD 14.99
    Dramatically helps with cooling, by a difference of around 10 degrees, which makes the SBC perform much better.
  5. 4TB External SSD with USB-C to USB 3.1 (On-Sale for 1st time users) — USD 13.00
    You can also grab any old External SSD and use a USB-C to USB 3 adapter to connect it as well.
  6. (Optional — but recommended) A good Gigabit Ethernet Cable — USD 5.95
    You can totally use the on-board Wi-Fi, but if you are going to use the SBC as a NAS server and a remote VS Code Server, you are much more likely to get better download and upload speeds with an Ethernet cable. Also, Wi-Fi setup requires a monitor and keyboard connected to the SBC in order to get it on your network.
  7. Some degree of enthusiasm to take control of your data and cut costs.
  8. A Micro SD Card — USD 11.23
  9. A desktop/laptop with any OS (but preferably Ubuntu Jammy Server/Desktop) and Docker installed. Feel free to use a free-tier VM on any cloud if you'd like. Use a machine with at least 4–6 cores.

Total Costs: USD 204.02 + Taxes (as of writing this article).

Note: This is not the exact storage configuration I am using for this article. The only difference is the SSDs: I am using a smaller 256 GB NVMe M.2 2230 card (which cuts costs by another 60 dollars) and an old external SSD I already had (which cuts another 15 dollars), bringing my total down to about 130 US dollars.

The Code Server will run off the NVMe M.2 SSD, and for testing purposes 256 GB should be enough.

Let’s get started.

Step 1: Choosing the right Operating System

Choice: Self-compiled Armbian with the Debian Bookworm flavor, Minimal/Server/CLI version (with custom software packages and services), plus lightweight CasaOS on top for a GUI experience.

Justification: Selecting the right operating system (OS) for your single-board computer (SBC) is crucial for performance and reliability.

Why Debian Bookworm?

  1. Stability and Reliability: Debian is known for its stability, making it a great choice for continuous operation required by NAS servers.
  2. Extensive Software Repository: With one of the largest software repositories, Debian makes it easy to find and install NAS-related packages without third-party sources.
  3. Strong Community Support: An active community offers extensive documentation and support, helpful for troubleshooting and optimizing your setup.
  4. Security: Debian provides regular updates and patches, ensuring your NAS server remains secure against vulnerabilities.
  5. Customizability and Flexibility: Debian allows high customization, letting you tailor the OS to your specific NAS needs.
  6. Efficiency: Debian runs efficiently on SBCs, ensuring optimal performance with minimal resource use.

While other Linux distributions like Ubuntu or Fedora offer a user-friendly experience and more frequent updates, Debian Bookworm's superior stability, security, and efficiency make it a more suitable choice for setting up a reliable and powerful NAS server on your SBC. Debian's minimal resource usage and extensive customization options ensure optimal performance and tailored configurations for your specific needs.

Additionally, for the Orange Pi 3B all officially supported Ubuntu flavors have a kernel issue with the display drivers that generates a high load average on the CPUs. The issue exists in all Ubuntu distros and versions, no matter the vendor, even 24.04 LTS.

Note: arm64 and aarch64 are the same thing.

Why Armbian and not the official build framework to build custom ones?

Because the Armbian build framework is highly customizable and lets me do custom patching, as you will see later in the article. That is still doable with other options like Joshua Riek's Rockchip Ubuntu or Orange Pi's official build framework, but it would require things like forking and maintaining the Linux kernel build sources, which defeats the purpose of this being a simple solution. Debian Bookworm also has none of the load-average problems and has been the most stable of all the distros I have tried, so it is a personal preference as well.
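
As a hedged illustration of that flexibility, the build framework picks up patches dropped into its userpatches tree. The sketch below assumes the rockchip64 kernel family used by RK3566 boards like the Orange Pi 3B, and the patch file name is purely hypothetical:

# Sketch only: where a custom kernel patch would go so the Armbian build
# framework applies it during compilation. Directory and file names here are
# assumptions following the userpatches convention.
mkdir -p userpatches/kernel/rockchip64-current
cp ~/patches/my-display-fix.patch userpatches/kernel/rockchip64-current/
# Anything in this directory gets applied on the next ./compile.sh run.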

Why CasaOS?

Alright, so we plan to have a Debian server running. What's next? Everyone wants a UI, right? Sure, you could love CLIs and work mostly out of terminals if you had to, but if you are going to handle large-scale file transfers on a NAS setup or test out some web applications on your server, you need a UI. You could go with a Debian-flavored desktop image from Armbian, or enable a desktop environment (like MATE, GNOME, Xfce, Plasma, etc.) after installing the generated server image. But that comes with a lot of setup for things like enabling remote access, installing the right packages, and updating firewalls. It also comes at a performance cost, and you don't really need a desktop UI running all the time for what we are building this server for.

Enter CasaOS! It is an open-source home cloud operating system designed for ease of use and robust functionality. Tailored for both tech enthusiasts and beginners, it transforms commodity hardware (like Orange Pi or Raspberry Pi class devices) into a versatile, personal cloud server. CasaOS provides an intuitive web interface, allowing users to effortlessly manage storage, applications, and connected devices. Its modular design supports a variety of plugins and integrations, enhancing its capability to serve as a media center, file server, or smart home controller. By prioritizing user-friendliness and customization, CasaOS makes managing a home server accessible to a wide audience.

Okay, now that we have decided on the foundations for the system, let's get started on some hands-on stuff.

Step 2: Building the ISO Image

For this you will need either an Ubuntu Jammy based amd64 system (desktop/server) or any other OS (on any architecture) that has Docker installed. Please check the requirements for more details.

I am doing this on a personal desktop:

CPU information where ISO build is performed
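
Before kicking off the build, a quick sanity check of the build host doesn't hurt. These are standard commands, and nothing here is specific to Armbian:

# Confirm the build host looks reasonable before starting.
lsb_release -a     # ideally Ubuntu 22.04 (Jammy), or any OS if you use the Docker route
nproc              # 4-6 cores or more keeps compile times sane
free -h            # a few GB of free RAM helps
docker --version   # only needed for the containerized build approach
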
  1. Let’s update the Current Build System. (Physical machine/Virtual Machine)..
apt-get -y -qq install git  
mkdir armbian && cd armbian
git clone --depth=1 --branch=main https://github.com/armbian/build
cd build

2. Let’s create some Image customizations for the ISO:

mkdir userpatches
touch userpatches/customize-image.sh

# Open the file and copy over the code in the following block and make any adjustments as necessary
vim userpatches/customize-image.sh
#!/bin/bash

# Exit upon error - so the compilation fails
set -e

# Update sudoers to run snap package binaries - this is natively taken care of in Ubuntu but not in Debian.
# We would ideally like to use snap to simplify some application deployments, and without this change
# users are not able to call snap package binaries even if they have sudo permissions.
echo Defaults secure_path=\"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin\" > /etc/sudoers.d/10-include-snap-bin

# Upgrade the OS with all the latest libraries and install required packages
apt-get update
apt-get -y upgrade
# Installing lm-sensors so the OS correctly resolves sensor temperatures, and neovim for
# text editing. Why neovim? Because it supports GitHub Copilot plugins.
apt-get --assume-yes install lm-sensors neovim

# Feel free to add packages you need here

# Run sensors-detect on first boot via a one-shot systemd service
target_file="/usr/local/bin/run-sensors-detect.sh"
service_file="/etc/systemd/system/sensors-detect.service"

# Write the content to the target file
cat << 'EOF' > "$target_file"
#!/bin/bash

# Run sensors-detect
sensors-detect --auto

# Check if sensors-detect was successful
if [ $? -eq 0 ]; then
    # Remove the script and the systemd service
    rm -f /usr/local/bin/run-sensors-detect.sh
    rm -f /etc/systemd/system/sensors-detect.service
else
    echo "sensors-detect failed"
fi
EOF

# Make the script executable
chmod +x "$target_file"

# Create a systemd service to run sensors detect on first boot using
# the script we created previously
cat << 'EOF' > "$service_file"
[Unit]
Description=Run sensors-detect on first boot
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/run-sensors-detect.sh
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload

# Enable the service
systemctl enable sensors-detect.service

All image customizations run as root inside a chroot, so there is no need to add sudo in this script.
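
Before compiling, it's worth a quick syntax check of the customization script so a typo doesn't surface halfway through a long build. This uses nothing beyond plain bash:

# Catch shell syntax errors in the customization script before the image build runs it.
bash -n userpatches/customize-image.sh && echo "customize-image.sh parses cleanly"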

3. Compile the image:

# Compile the ISO Image
./compile.sh BOARD=orangepi3b BRANCH=current RELEASE=bookworm BUILD_MINIMAL=yes CONSOLE_AUTOLOGIN=no COMPRESS_OUTPUTIMAGE="sha,xz"

# Feel free to provide a Vendor name in order to further customize the build
./compile.sh BOARD=orangepi3b BRANCH=current RELEASE=bookworm BUILD_MINIMAL=yes CONSOLE_AUTOLOGIN=no COMPRESS_OUTPUTIMAGE="sha,xz" VENDOR="DreamfuseLabs-official"

# Show images
ls output/images

Give it about 15–20 min and you should have your ISO image ready to be flashed to your OrangePi SBC.

Image compilation completed with generated ISO images

Note: If this compile step fails complaining about loop devices, simply run lsblk and check whether there are more than 10 loop devices held by snap packages. This usually happens when you are using an Ubuntu server that has other snap packages installed. All you need to do is remove those snap packages and then remove snapd from your system, and you should be good to go. If you can't remove them because they might break other applications running on your build system, simply use VirtualBox, a cloud VM, or the Docker approach from the Armbian documentation.
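
A hedged sketch of that cleanup, assuming the snaps on your build machine are safe to remove (the package name below is a placeholder):

# Inspect loop devices and the snaps holding them, then remove them.
lsblk | grep loop
snap list
sudo snap remove <package-name>    # repeat for each listed snap
sudo apt-get purge -y snapd        # only once no snaps remain and nothing else needs snapd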

Step 3: Flash the Orange Pi 3B eMMC with the generated ISO Image

Flash the SD card using balenaEtcher. You can choose the image you generated in the step above or any other Linux distro for this step.

Booting with SD Card using Ubuntu Server Jammy flavor in order to flash eMMC drive with custom ISO
  • scp the generated image to the Orange Pi after it has booted from the SD card.
# When using scp make sure target directory exists
scp -r output/images/<your-desired-image> orangepi@<your-ip-address>:/home/orangepi/<sub-folder>
Copying over generated ISO images to SBC booted from the SD Card.
  • From your Orange Pi, run the following commands to flash the on-board eMMC drive with the generated image.

Why eMMC?
eMMC (embedded MultiMediaCard) offers several benefits, particularly in compact and cost-sensitive applications like smartphones, tablets, and IoT devices. Its integration of flash memory and a controller simplifies the design and reduces the space required on circuit boards, leading to sleeker, lighter devices. eMMC also provides a balance of performance and reliability, with faster read/write speeds compared to traditional SD cards, and enhanced data management through built-in error correction and wear leveling. Additionally, its standardized interface and widespread adoption facilitate compatibility and ease of development, making it an efficient and practical choice for our OS.

# Check which eMMC device you are going to flash the image to.
# This can be different for you - use the device reported by this
# command in place of /dev/mmcblk0 in the commands below.
ls /dev/mmcblk*boot0 | cut -c1-12

# Zero out the start of the eMMC (replace the device with the one reported above)
sudo dd bs=1M if=/dev/zero of=/dev/mmcblk0 count=1000 status=progress
sudo sync
# Replace the image file with the one you generated and scp-d to this machine
xzcat <your-generated-image> | sudo dd bs=1M of=/dev/mmcblk0 status=progress
sudo sync

# Power off the board
sudo poweroff
Flashing eMMC with generated ISO image
  • After it powers down, simply eject the SD card and power cycle the board. At this point your Orange Pi 3B should boot from the eMMC with the custom image you generated.
    After it boots, you can either log in with a connected keyboard and monitor or over SSH. To use SSH, simply find the IP address of the new machine (named orangepi3b) that joined your network from your router's management app or web portal, or run a quick network scan (see the sketch further below). I use Eero at home, so when a new device joins my network it automatically notifies me.
  • Use default root credentials:
    Username: root
    Password: 1234
    The first login will force you to reset the root password and also create a new user for yourself with sudo rights.
ssh root@<device-ip-address>
First login after booting with flashed ISO image that was generated
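
If your router doesn't surface new devices, a quick network scan also works. A minimal sketch, assuming nmap is installed and your home subnet is 192.168.1.0/24 (adjust to your own):

# Ping-scan the local subnet and look for the board's hostname or a newly appeared IP.
sudo nmap -sn 192.168.1.0/24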

If the boot fails, it's likely that you have a board whose SPI flash was previously used or tested. All you need to do is clear the on-board SPI flash using the official documentation; just search for "Using RKDevTool to clear SPI Flash". You will need a Windows system to do this. Once completed, you should be able to boot from the eMMC.

  • Log back in with your newly created credentials and run some of the custom package binaries you installed as part of your custom image. In my case, let's check whether sensors-detect was successful and whether we can run neovim (both checks are recapped in the sketch after this list).
All custom packages working
  • Run an apt update and upgrade. You are likely not going to see any packages installed, since we already did this as part of the image customization to ensure the image ships with the latest stable packages.
All packages are on their latest versions
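
For reference, a minimal recap of those checks; the package and service names match what the customization script set up earlier:

sensors            # lm-sensors should report the board's temperature sensors
nvim --version     # neovim installed via the custom image
systemctl status sensors-detect.service   # expected to be gone if the first-boot run succeeded
sudo apt-get update && sudo apt-get -y upgrade   # should find little or nothing to do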

Step 4: GUI Experience with CasaOS

Since we have already discussed our choice, let's dive right into installing CasaOS on our Debian server. It is as simple as running the following command:

# Run the following command to deploy CasaOS
curl -fsSL https://get.casaos.io | sudo bash

Give it about a minute. Once it completes setup, simply navigate to the URL provided in the logs from any device on your home network, and you are ready to start using the GUI server with NAS capabilities.

CasaOS Installation completion
CasaOS Web GUI
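
If you prefer to double-check from the terminal, here is a hedged sketch; the exact systemd unit names can vary between CasaOS versions, so treat them as assumptions:

# List whatever CasaOS units the installer registered (names vary by version).
systemctl list-units --type=service | grep -i casa
# Confirm something is listening on the web UI port the installer reported.
sudo ss -tlnp | grep -i casa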

Step 5: Example usage of NAS Server

With CasaOS storage management is simple. You can use this server from any device and upload files directly from the UI.

Storage capabilities

Storage management is super simple. Any connected drive should show up in the UI's storage manager widget.

Allows for merging drives into logical volumes.

You can easily connect Google Drive, OneDrive, and Dropbox directly from your web browser and start moving files right away. If you are using Ethernet, you will see very decent speeds as well.

Images showing CasaOS file app to get started with connecting google drive as a mountable drive.
Copying over large files from Google Drive to my DIY NAS Server. A 3.5 Gig transfer took about 35 secs. Not too bad right?

The CasaOS ecosystem is extremely versatile. It allows for easy deployment of any Docker-based containerized app. It has an app store with pretty much everything you need for a home lab: a Plex server, home-automation apps, Cloudflared (to expose custom Docker-based apps), chatbots (with integrations for OpenAI's ChatGPT and Azure OpenAI), and much more.

Some sample apps in the CasaOS app store.
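
Under the hood those app-store entries are just Docker containers. As an illustration only (the image, ports, and /DATA path below are assumptions, and in practice you would install through the CasaOS UI), a manually started equivalent looks roughly like this:

# Illustration: roughly what a CasaOS app install does for you behind the UI.
docker run -d --name filebrowser \
  -p 8081:80 \
  -v /DATA/files:/srv \
  filebrowser/filebrowser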

Step 6: Deploying VS Code Server

Time to get a Visual Studio Code server running on the SBC. There are many ways to install code-server, but for our use case we are going to focus on the two shortest ways of going about it:

Install using a one-shot script as a service on the primary drive.

Just run the following commands from the terminal:

# Download and install
curl -fsSL https://code-server.dev/install.sh | sh

# Run as a background service
sudo systemctl enable --now code-server@$USER
Installation of code-server using installation script

Note: This is not the recommended way to deploy this; there are security implications. Use it for testing purposes only.

Once installed just run the following command to check for your Code-Server deployment details:

systemctl status code-server@$USER.service
Showing service status and information for logging in.

As you can see, this service is only exposed internally on localhost, so we need to do a few things to gain access. Simply follow the 3 steps defined here to expose your code-server deployment to your local machine over SSH. Once completed, you can visit 127.0.0.1:8080 on your local machine to access the deployment.
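
For reference, the core of that approach is a plain SSH local port forward; a minimal sketch with a placeholder user and address:

# Forward the Orange Pi's localhost-only code-server port to this machine.
# Keep the terminal open while you work; Ctrl+C closes the tunnel.
ssh -N -L 8080:127.0.0.1:8080 <your-user>@<orange-pi-ip>
# Then browse to http://127.0.0.1:8080 locally. The login password lives in
# ~/.config/code-server/config.yaml on the Orange Pi.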

Accessing your newly deployed Code Server over on Web UI.

And voila!

Deploying Code Server as a Docker image using the NVMe drive for persistent storage, securing remote access to it with Cloudflare products, and enabling Copilot for quick code generation.

This will be split into a separate article, since this one has already become far too long.

Conclusion

Orange Pi SBCs are inexpensive devices that can be used for a wide array of things: IoT applications, robotics, AI on the edge, low-power storage servers, and much more. This article is geared to expose you to some real-world usage for these devices, and to the potential to cut storage costs for all your non-essential data. This setup is not meant for mission-critical data or applications; it was just to give you a taste of the power these bad boy SBCs pack and the things you can do with them.

In some upcoming articles we will build on top of this. We will look at topics like:

  • Massively expanding storage for a DIY data lake using a higher-end Orange Pi SBC (the Orange Pi 5 Pro). This model has 16 GB of RAM at a much higher frequency and can easily handle up to 16 TB with decent performance. We are going to look into using M.2-to-SATA adapters and multiple SATA drives with an external PSU to increase our NAS server's capacity.
  • Building a high-availability data lake architecture. We will build on an idea discussed in a previous article and utilize a whole array of these cheap SBCs to create a fault-tolerant, resilient data lake at home, for dirt cheap, one I wouldn't mind moving some of my mission-critical data to.
  • Building a ML development workspace using Kubernetes and Canonical products on ARM64/Aarch64 SBCs.

I hope this has not been too long a read for you all, and that it has been a fun one. Please don't hesitate to reach out with corrections and suggestions; I am new to this. Cheers!
