Professionalize your home lab with a Raspberry Pi and a NAS — Part 4

Maximilian Kilian
Mar 9 · 10 min read

Table of Contents

Part 1: Modeling our network setup
Part 2: Setting up our NAS
Part 3: Setting up our Ubuntu server
Part 4: Putting it all together (that’s where you are now)

TL;DR parts 1–3

In the previous three parts we covered the basic idea of setting up an infrastructure divided into a public and a private part. In part 2 we set up Open Media Vault as our NAS system, and in part 3 we configured our Ubuntu Raspberry Pi server. If you have trouble following what we are doing now, I’d suggest rereading parts 1–3 to get a better understanding.

Configuring the public part of our network

As outlined and proposed in part 1, we want to host services like Nextcloud and Apache webservers which can be accessed from outside our home network. Because this series has grown quite long, I chose to skip the setup of Nextcloud with OCR and to demonstrate everything with a running Apache instead.

If you still want to know how to utilize this setup to host your own Nextcloud instance with OCR, let me know and I’ll cover it in a separate story (building on this setup).

Making our internal network accessible by setting up port forwarding

Remember what we modelled in part 1? We wanted to divide our home network into a public and a private section and route between those two split networks using Traefik. Therefore, one of the first questions to ask ourselves is the following: how do we accept traffic from outside our network and forward it into our public area without introducing a security hole?

The first part of this question is easy to answer: we open ports 443 and 80 on our router and redirect them to our Raspberry Pi’s ports 8127 (for 443) and 8126 (for 80). Don’t worry if this does not make any sense at the moment; I’m going to explain it in detail later.

Finished! Now our router should be accessible from the outside. But, hmm, ehm, what is the IP of our router?

Well, unless you have bought a static IP from your ISP, you have an IP which changes every 24 hours. Luckily there’s a technology called DynDNS which helps us map a domain name to our ever-changing IP. Let’s configure it right now, and for free.

Making our internal network even more accessible by setting up DynDNS

For everyone not familiar with the term DynDNS, let me quickly summarize what it is all about. As mentioned, your IP changes every 24 hours. If we want to reach our router under the same domain name every day, we need a service that pushes our router’s IP (whenever it has changed) to a DNS provider. That way we get a mapping from a static domain to our daily changing IP.

One of these services is DDNSS, which allows you to register your IP under a domain of the following format: your-domain-name.ddnss.de. After having created your free account, the last thing you need is a tool which handles the synchronization of your router’s IP with DDNSS. Personally I’m using my FRITZ!Box 7490 and the following setup guide: Click me!

Setting up name resolution with PiHole as our home network DNS server

Technically, we are now able to reach our home network from the outside, but another challenge awaits. All of our services run on a single IP, that of our Ubuntu (Raspberry Pi) server serving as docker host. This leads to the important question: how do we access our individual services?

One way with docker would be to expose a different port for every service, but let’s be honest: who wants to access a service by IP and port, e.g. 192.168.178.155:9817?

Another way is PiHole configured as our DNS server. With PiHole managing our home network’s name resolution, we are able to define our own DNS entries and point them at our Ubuntu server. Admittedly this still does not solve the ports problem, but luckily Traefik comes to our aid. One thing after another, though. First things first: let’s set up PiHole.

Below you’ll see an example docker-compose file for PiHole.
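The embedded file from the original article is not reproduced here, so this is a minimal sketch of what it could look like (image tag, admin port, timezone and password are assumptions; the volume paths match the discussion below):

version: "3.5"

services:
  pihole:
    image: pihole/pihole:latest        # pinning a specific tag is recommended
    container_name: pihole
    restart: unless-stopped
    ports:
      - "53:53/tcp"                    # DNS
      - "53:53/udp"
      - "8053:80/tcp"                  # PiHole admin UI, e.g. http://[Pi IP]:8053/admin
    environment:
      - TZ=Europe/Berlin
      - WEBPASSWORD=[your_admin_password]
    volumes:
      # persist PiHole's data locally on the Raspberry Pi (see the volumes section below)
      - /home/ubuntu/local-volumes/pihole/etc-pihole:/etc/pihole
      - /home/ubuntu/local-volumes/pihole/etc-dnsmasq.d:/etc/dnsmasq.d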

Run “docker-compose up -d” and check that the container has started and is not constantly exiting. Now we have a DNS server up and running (yaay!), but it will not be used by our devices yet. First, we need to tell our router not to handle DNS by itself but to provide all connected network devices with the IP of our Raspberry Pi (which is serving as our docker host).

As I’m using the FRITZ!Box 7490, I followed this guide by AVM: https://en.avm.de/service/fritzbox/fritzbox-7490/knowledge-base/publication/show/165_Configuring-different-DNS-servers-in-the-FRITZ-Box/.

Finally, whenever a new device joins our Wi-Fi, it will be handed our PiHole as its DNS server.
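You can verify the whole chain by checking the container and querying the Pi directly; a quick sanity check could look like this (the container name and IP are examples):

docker ps --filter name=pihole

dig @192.168.178.155 medium.com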

With PiHole up and running it makes sense to think ahead. In part 1 we concluded that we want to host our own GitLab instance (not accessible from the outside), so we need to add a DNS entry:
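The original screenshot is not reproduced here. One way to add such an entry (depending on your PiHole version) is a dnsmasq drop-in file, which, thanks to the volume mounts above, lives on the host under local-volumes (the IP and file name are examples):

address=/gitlab.local/192.168.178.155

Place it e.g. in /home/ubuntu/local-volumes/pihole/etc-dnsmasq.d/02-local.conf and restart the container. Newer PiHole versions also offer this as “Local DNS Records” in the web interface.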

The domain gitlab.local will now be resolved to the IP of my Raspberry Pi, which is (and no, I’m not going to get tired of saying it) our docker host.

Save our services’ data to “local” and shared docker volumes

If you have a closer look at the volumes part of the docker-compose file, you’ll see paths like /home/ubuntu/local-volumes specified. The name already reveals how I am storing the data of the PiHole container: locally (BAM!). These volume entries tell our docker container to mount the host directory “local-volumes” inside its virtual file system. Therefore, all files created by PiHole will be accessible under this path: /home/ubuntu/local-volumes/pihole.

This is a crucial part of our setup and I want to take the time to explain it in detail. For our complete setup we’ll need two “types” of volumes: the first are volumes which are available locally on our Raspberry Pi, and the second type are shared volumes hosted on our NAS via an NFS share.

This enables us to differentiate between two kinds of data: data we don’t need a backup of and want to access without the speed penalty of NFS, and data we urgently want backed up. Regarding our PiHole setup, I am not doing a lot of fancy stuff. Currently I’m using PiHole only as an ad blocker for my internal network (nice to have if you’re connecting through VPN and have all your visited sites stripped of advertisements) and for configuring name resolution for my internal domains. This is the reason for saving its data only locally and without any sort of backup.
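To make the distinction concrete, the volume entries of the two types differ only in the host path they point at. The gitlab line below is a preview of the setup later in this story:

# local volume: fast, no backup, lives on the Pi itself
- /home/ubuntu/local-volumes/pihole/etc-pihole:/etc/pihole

# shared volume: an NFS mount backed by the RAID-1 NAS
- /home/ubuntu/nfs-volumes/gitlab/data:/home/git/data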

Configure Traefik to manage our public network section

Once again, this is really crucial for our setup. At the beginning of this story we opened ports 443 and 80 on our router and forwarded 443 to the Pi’s port 8127 and 80 to 8126.

But why the heck are we doing this?
In order to answer this question properly, we need to have a closer look at the inner workings of Traefik. Keep an eye on the volumes section: here we mount the docker.sock inside the docker container. This is the UNIX socket that the Docker daemon is listening on, and it enables our Traefik container to take notice of all the containers running on our Pi. Another important part is the definition of our network: in the networks section we define a network with the name “traefik-external-network” and attach Traefik to it.
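The compose file embedded in the original article is not reproduced here, so the following is a minimal sketch of what it could look like. The Traefik version and flag syntax (v2) are assumptions; the docker.sock mount, the port mappings and the network name are taken from the article:

version: "3.5"

services:
  traefik:
    image: traefik:v2.1                             # version is an assumption
    restart: unless-stopped
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--api.insecure=true"                       # serves the dashboard on container port 8080
    ports:
      - "8127:443"                                  # our router forwards external 443 here
      - "8126:80"                                   # our router forwards external 80 here
      - "8082:8080"                                 # Traefik dashboard, see below
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # lets Traefik discover our containers
    labels:
      - "traefik.enable=false"                      # an example label: don't route to Traefik itself
    networks:
      - traefik-external-network

networks:
  traefik-external-network:
    name: traefik-external-network                  # fixed name so other compose files can join it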

That’s really all we need to create a public and a private network. By defining a separate network for our external and our internal services, we are able to isolate our docker containers from being accessed out of each other’s network.

The image above shows the use case in detail: a request from the outside reaches our router at port 443 and is redirected to port 8127 on our Raspberry Pi. Aaaaaaaand our Traefik container (running on our Raspberry Pi) exposes exactly that port (8127) and redirects it to its internal port 443. We will therefore end up with two docker networks later on: an external and an internal one, each with its own Traefik instance listening on it. Basically, the internal Traefik instance will only see containers defined in the internal network and vice versa (this will become clearer after we add the first service to our external network).

The docker-compose file above also shows the usage of so-called labels, which are how we’ll configure Traefik for every service we add. Note also the mapping of host port 8082 to Traefik’s internal dashboard port: by typing [IP of Raspberry Pi]:8082 we are now able to access Traefik’s dashboard.

Cloudflare

Unfortunately this is once again not enough. We also need to set up CNAME records in order for Traefik to distinguish which service should be served (you could potentially do this with ports, but who would be so crazy?). In short, a CNAME record is nothing else than a redirect from your domain to another domain.

I’m a huge fan of the free service Cloudflare offers, as you can define as many first-level subdomains as you want and also have many different DNS entry types at your disposal. If you want to give Cloudflare a try, register your account at https://dash.cloudflare.com/. Furthermore, I am assuming you rent at least one domain (e.g. via a service like Strato). I rent my domains at Strato as they offer the possibility to set a different nameserver (which then manages your DNS entries).

During the creation of your account at Cloudflare, they’ll show you the IPs of their nameservers. Note them down, change the nameserver entries of your rented Strato domain and save the changes. Now grab a cup of coffee and wait for Cloudflare to pick up the changes. After Cloudflare has successfully initialized your domain, you can add the following entry for your top-level domain:

  • Type = CNAME
  • NAME = top level domain
  • Content = DynDNS address (e.g. *.ddnss.de).

What happens under the hood as soon as somebody requests your top-level-domain.de: the request is forwarded to your DynDNS address BUT still carries the domain name of your top-level domain in its Host header, which is exactly what Traefik will later use to pick the right service.

Set up a website hosted inside our external docker network

We’ll now host a small website serving HTML and PHP to showcase our configuration. The following placeholders have to be filled in manually (a sketch of the compose file follows the list):

  • [your_container_name] e.g. my-great-site
  • [your_site_name] e.g. my-great-site.de
  • [your_service_name] e.g. my-great-site
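A sketch of such a compose file could look like this (the PHP/Apache image and the Traefik v2 label syntax are assumptions; the placeholders match the list above):

version: "3.5"

services:
  [your_service_name]:
    image: php:7.4-apache                           # any HTTP-serving image works
    container_name: [your_container_name]
    restart: unless-stopped
    networks:
      - traefik-external-network                    # join the public network
    labels:
      - "traefik.enable=true"                       # let Traefik manage this service
      - "traefik.http.routers.[your_service_name].rule=Host(`[your_site_name]`)"
      - "traefik.http.services.[your_service_name].loadbalancer.server.port=80"
    volumes:
      - /home/ubuntu/local-volumes/[your_service_name]/html:/var/www/html

networks:
  traefik-external-network:
    external: true                                  # created by the Traefik compose file above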

The networks entry tells docker that this container is part of the previously defined traefik-external-network; this is necessary to limit access. The traefik.enable label says that we want Traefik to manage our service, the loadbalancer label tells Traefik that our service is reachable on port 80, and the router rule defines the domain names Traefik will be listening for. Also note that I’m still using the local-volumes file path, as the website is just plain HTML and PHP (which can be backed up through code repositories).
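If your DNS records haven’t propagated yet, you can test the Traefik routing directly by faking the Host header (IP and domain are examples):

curl -H "Host: my-great-site.de" http://192.168.178.155:8126/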

That’s it! If you have configured everything correctly, you should now be able to access your own locally hosted website. Now let’s do the same for the private section of our network!

Configuring the private part of the network and using shared docker volumes on NFS

Until now we have set up an isolated docker network using docker-compose and made our service accessible from outside our network. You’ll probably remember that I also promised you a way to set up shared docker volumes (on our RAID-1 NAS) and thus have data redundancy in case of any failure, so here we go!

Code repositories are super important tools for us developers, so let’s set up a GitLab instance that is only accessible from inside our network. As we should be used to by now, we are going to use docker volumes for this. So far we have used locally stored volumes (i.e. on our Raspberry Pi) which will not be backed up.

This time, as our code is unbelievably important, we will use our up-and-running NAS. I don’t know if you remember (this series is reaaally long), but in part 2 we created an NFS share. Now’s finally the time to use it! Navigate to your preferred location on your Raspberry Pi, create a directory with a meaningful name, maybe something like nfs-volumes, and issue the first command below for an NFS share without a user, or the second one if you’ve secured it with a user:

sudo mount -t nfs [NAS IP]:/[SHARE_NAME] nfs-volumes/

sudo mount -t nfs -o user=[User],pass=[Pass] \
[NAS IP]:/[SHARE_NAME]/[PATH] nfs-volumes/
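Note that a manual mount does not survive a reboot. To remount the share automatically, you can add a line like the following to /etc/fstab (paths are examples):

[NAS IP]:/[SHARE_NAME]  /home/ubuntu/nfs-volumes  nfs  defaults  0  0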

Now let’s start our internal Traefik instance, which will manage our internal services:
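Again, the original embed is not reproduced here; a minimal sketch mirroring the external instance could look like this (version and flags are assumptions; it binds host port 80 so internal domains like http://gitlab.local work without a port):

version: "3.5"

services:
  traefik-internal:
    image: traefik:v2.1
    restart: unless-stopped
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"                                     # plain HTTP for internal services
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - traefik-internal-network

networks:
  traefik-internal-network:
    name: traefik-internal-network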

In the networks section you can see another network being defined: this time it’s the traefik-internal-network. With the preliminary work done, let’s spin up our GitLab instance with the following docker-compose file:
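This is not the author’s original file; the trimmed sketch below assumes the sameersbn/gitlab image family (GitLab with a separate PostgreSQL and Redis container; on a Raspberry Pi you would need ARM-compatible builds of these images):

version: "3.5"

services:
  postgresql:
    image: sameersbn/postgresql:12-20200524        # image and tag are assumptions
    restart: unless-stopped
    environment:
      - DB_USER=gitlab
      - DB_PASS=[your_db_password]
      - DB_NAME=gitlabhq_production
      - DB_EXTENSION=pg_trgm
    volumes:
      # the database initializes its volume on the NFS share, i.e. on the NAS
      - /home/ubuntu/nfs-volumes/gitlab/postgresql:/var/lib/postgresql
    networks:
      - traefik-internal-network

  redis:
    image: redis:5.0
    restart: unless-stopped
    networks:
      - traefik-internal-network

  gitlab:
    image: sameersbn/gitlab:13.0.3                  # image and tag are assumptions
    restart: unless-stopped
    depends_on:
      - postgresql
      - redis
    environment:
      - DB_HOST=postgresql
      - DB_USER=gitlab
      - DB_PASS=[your_db_password]
      - DB_NAME=gitlabhq_production
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - GITLAB_HOST=gitlab.local
      - GITLAB_SECRETS_DB_KEY_BASE=[random_string]
      - GITLAB_SECRETS_SECRET_KEY_BASE=[random_string]
      - GITLAB_SECRETS_OTP_KEY_BASE=[random_string]
    volumes:
      - /home/ubuntu/nfs-volumes/gitlab/data:/home/git/data
    networks:
      - traefik-internal-network
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.gitlab.rule=Host(`gitlab.local`)"
      - "traefik.http.services.gitlab.loadbalancer.server.port=80"

networks:
  traefik-internal-network:
    external: true                                  # created by the internal Traefik compose file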

The volume definition of the postgresql service tells the database to initialize its docker volume on our network share. If you get any permission errors, just configure the NFS share in Open Media Vault to include “no_root_squash” (check part 2). If you’ve set up the domain gitlab.local on our PiHole, you should now be able to access your GitLab through http://gitlab.local while your data is being stored on your NAS.

Conclusion

In this four-part series we have seen how (more or less) easy it is to set up a really nice infrastructure inside your home network with a cheap NAS, a Raspberry Pi 4B and docker. I do understand that some steps are not as detailed as you might wish them to be, so don’t be shy to ask in case of any questions or errors. I’d be glad to help you!

I also want to highlight that all of the docker-compose setups should be seen as an MVP which can be enhanced a lot, e.g. by using an environment file defining static parameters such as the top-level domain of our internal domain names (.local)! If you’d like me to set up a home-server project with everything preconfigured, just leave me a note and I’ll see if I’m able to provide a GitHub project.

Last but not least, I’d like to ask you to show me whether you had fun with this series by clapping. I’ve really put some thought into this setup, and more hours of problem solving than I’d like to admit, so it would make my life easier to know that it was not a waste of time and that people are actually interested in this topic.
