How I was able to host my services on my home network

Saarang
13 min read · Jul 20, 2024


As you keep improving as a full stack developer, you start to realize that basic cloud hosting services like Railway won't cut it for your now more advanced applications. I've written several articles on Medium about how to host things like a database and an email server, but for a lot of people the next step is to get a virtual machine of some sort so they can try all of these new things out. The problem is that as you need more and more resources, virtual machines get very expensive, and you might not be able to afford one at all. There is one saving grace, though, that a handful of people can pull off themselves: hosting their own cloud servers at home. Not many people have the internet access or bandwidth to accomplish such a thing, but for the handful who do, it's an amazing way to host all of your applications, and today I'll guide you through how you can do the same securely.

Prerequisites

Before we actually get started, I should mention some of the things you'll need. You'll obviously need a fast wired internet connection; there's no exact number, but I'd recommend 200 Mbps or more.

You'll also need a spare USB drive for the Linux installation, a good router to handle the incoming and outgoing bandwidth, a good PC (ideally headless), and a UPS in case your power goes out. If you want to see my personal part list, you can view it here:

  1. TP-Link Archer BE24000 Quad-Band WiFi 7 Router
  2. UPS Battery Backup
  3. DbillionDa Cat8 Ethernet Cable
  4. GMKtec Mini PC N100

For the rest of the tutorial, we’ll be using these pieces of hardware in order to physically host our servers.

Steps

For this tutorial, I will be going over the following things:

  1. Setting up your PC / hardware
  2. Setting up a static IP
  3. Set up port forwarding on your router
  4. Hosting multiple different applications (including the MongoDB database & mail server)
  5. Using Cloudflare tunnel to securely tunnel your application
  6. Backups
  7. NMAP scan to check for open ports

If you need to skip to any part of the tutorial at any point, feel free to ctrl+f and search using the headers.

Setting up your PC / hardware

First off, you're going to want to take your USB drive and flash a bootable Linux installer onto it. There are many tutorials out there for this, but I'll link the official one from Ubuntu here. I'd personally recommend version 22.04, as that's the latest one supported by MongoDB. Once that's done, plug the USB into your headless PC and boot it up. You should see something like this on your screen:

Bootloader

You're going to want to select Ubuntu, and from there go through the installation process. Feel free to choose how much software you want installed on your PC. If you're not interested in having a GUI at all, Ubuntu Server is another great option; however, the GUI is helpful, especially in the beginning stages when you're still in development mode. At the very end, you're going to want to erase the disk and just install Ubuntu, as shown on screen:

Once done, you can reboot your PC without the USB and get started on your fresh new installation of Linux!

After that's been set up, you want to (at the very least) check that SSH is enabled on your system. If you don't have SSH, you want to install OpenSSH, which you can find the tutorial for here. Installing just the server and client should work fine, but if you want SSH keys, feel free to configure them yourself. Afterwards, you're going to want to find a good place around your house to put your router, UPS, and PC. Ideally, you want them in a somewhat open space so that air can flow through. If you have my UPS, you should plug both the PC and the router into the black outlets, as those are the ones backed by the battery:

The white outlets only pass power through the UPS and don't use the battery. The Ethernet cable should ideally be plugged into the 10 Gbps port and then connected to the headless PC:

These steps can obviously be done at a later time, and depend entirely on when you feel comfortable leaving the GUI behind after some development testing. Once you have all of this set up, you can officially connect to your computer via SSH remotely for the rest of the tutorial!
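If SSH isn't already present, a minimal sketch of installing and verifying OpenSSH on Ubuntu looks like this:

# Install and enable the OpenSSH server
sudo apt update
sudo apt install openssh-server
sudo systemctl enable --now ssh

# Check that the SSH daemon is running and listening
systemctl status ssh

From another machine, ssh [username]@[internal IP] should now drop you into a shell.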

Setting up a static IP

These next two steps don't require your PC at all; they just involve your router and ISP. To set up a static IP, you first want to contact your ISP about getting a static IP address for your router. This may take a few days, so be patient. Once done, they'll most likely send you an email with the actual IP, DNS servers, gateway, and netmask. You then want to go into your router settings and configure the IPv4 address similar to the following:
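The exact field names vary by router, but the information from your ISP maps onto something like this (these values are purely illustrative, taken from the documentation-reserved range):

IP address:  203.0.113.42
Netmask:     255.255.255.252
Gateway:     203.0.113.41
DNS servers: 203.0.113.1, 203.0.113.2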

After you've configured this, double-check that your internet is working fine. If it is, you now have a static IP address!

Set up port forwarding on your router

You might be asking: why not set this up after setting up my web applications? The truth is, you don't need to expose your web app ports, and the ports you do need to expose (i.e. database, mail, SSH) are all ports you already know about beforehand. So I think it's best to knock this out before we proceed.

Port forwarding lets your router forward a specific port on your public IP address to a port on an internal computer, making that port accessible from the outside world. Most routers have an easy way to do this, but I'll be going over the process for TP-Link specifically. Open your browser to http://192.168.0.1, which should bring up your router's local admin page:

It'll show you a prompt like this, and the local password should be the one you set for the router when you first configured it. Once you're in, go to the sidebar, click Advanced, and then NAT Forwarding. From here you can forward any port from your local device to the public IP address, just like I've already done here:

port forwarding

This picture in particular only shows ports 22 and 25, but if you want the setup I have, you'll want to forward ports 22, 25, 465, 587, 143, 993, and 27017. This group of ports covers your SSH server, SMTP, IMAP, and the MongoDB port. The device IP address should be the actual internal address, which TP-Link fills in automatically when you choose which device you want to forward to. Once done, you can now move on to actually hosting your applications!
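One quick sanity check first: you can confirm which services are actually listening on those ports on the PC itself with ss, which ships with Ubuntu. A quick sketch:

# List all listening TCP sockets and the processes that own them
sudo ss -tlnp

# Or check one port specifically (e.g. MongoDB's default)
sudo ss -tlnp '( sport = :27017 )'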

Hosting multiple different applications

I'll first be going over hosting your web applications. Obviously there are many ways to do this, but because of how easy it makes Cloudflare tunneling later on, I'll be using Docker.

If you don't already have Docker Engine installed, you can install it via the following:

  1. SSH into your computer like so:
ssh [username]@[IP address]

If it asks whether you trust the host's fingerprint, say yes, and then input your user's password.

2. Set up Docker's apt repo like so:

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

3. Install all the packages:

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

4. Verify that they were installed using the command:

sudo docker run hello-world

Once it prints out a confirmation message, it'll exit immediately afterwards, and that's how you know Docker was installed properly.

To use Docker, you're going to need a Dockerfile set up in your project. There are multiple ways to do this, but I personally prefer the simplest route, which is just the following:

FROM node:22
WORKDIR /app
# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm install
# Copy the rest of the source code
COPY . .
EXPOSE [PORT]
CMD ["npm", "run", "start"]

This Dockerfile should work for most Node.js apps, and you can edit the CMD at the end if you use a different package manager like pnpm or yarn. If you don't have a start script set up, you're going to want to go into your package.json and add one, sort of like this Next.js app I have configured:

package.json script
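For reference, a minimal sketch of what that scripts block might contain for a Next.js app (adjust the commands to your framework):

{
  "scripts": {
    "build": "next build",
    "start": "next start"
  }
}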

In our case, we're going to use GitHub to push the code to our repository. Once done, you can go back to your PC and pull your project into some directory using git. If you don't have git installed, you can install it with the following command:

sudo apt install git-all

Your command and output should look something like the following:
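As a sketch, cloning a public repository looks like this (using my repo as the example):

git clone https://github.com/gdhpsk/gdlrrlist.git
cd gdlrrlist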

If you're pulling a private repository, you first want to create a personal access token, copy your username and the token, and put them into the link exactly like the following:

https://[name]:[API key]@github.com/gdhpsk/gdlrrlist

Obviously this is my repository, but you can adjust it to your own link. Afterwards, once you build your application (i.e. a Next.js app), you want to go into that directory and run a docker build command for your project like so:

sudo docker build -t [project name] .

The project name can be anything you think fits your project. Once done, you can run your application using the following command:

sudo docker run -d --restart always -p [port]:[port] [project name]

The -d flag automatically detaches your terminal from the container's terminal, and the --restart always flag makes sure the container restarts after any failure. If you want to make the restart policy a bit more complex (i.e. specify a number of retries), you can read the docs here. The port number should be the exact same port you specified earlier in the Dockerfile, and the project name is the same one you specified in the build command.

You should also note that this Docker container won't have access to any outside files, so if you need that, you're going to want to use the -v flag, which maps a path from outside the container to a path inside it. For example, if you wanted to connect to, say, a backup USB drive, something like this:

-v /media/backups:/backups

will make /media/backups available as /backups inside the container, and you can use that directory path from within your app.
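Putting the flags together, a full run command might look something like this (the project name and port here are hypothetical):

sudo docker run -d --restart always -p 3000:3000 \
  -v /media/backups:/backups my-next-app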

To check if your project is working successfully, you can run:

sudo docker ps

to list all the currently running Docker containers. To restart every single container, you can run:

sudo docker restart $(sudo docker ps -a -q)

Mail & MongoDB databases

You'll set these up pretty much exactly as described in my other two articles; however, for MongoDB, you want to set the second bind IP to your machine's internal IP, not your hostname, similar to the following:
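As a sketch, the net section of /etc/mongod.conf would end up looking something like this (the internal IP here is hypothetical):

net:
  port: 27017
  bindIp: 127.0.0.1,192.168.0.50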

You should also enable MongoDB to start automatically on boot with the following:

sudo systemctl enable mongod

For mail, a lot of ISPs block port 25 by default for security reasons, so you'll want to ask them to enable both outbound and inbound traffic. You'll also need to contact them to get your reverse DNS set up.

Now that you have everything configured, it’s time to actually expose your web apps to the real world!

Using Cloudflare tunnel to securely tunnel your application

Even though setting up a reverse proxy like Nginx would also do the trick in terms of exposing your apps, I personally think Cloudflare tunneling is the most secure way to do it (provided your domains are managed by Cloudflare). If your domains aren't on Cloudflare, you can still use Nginx; however, Cloudflare tunneling will help a ton when it comes to backing up the actual server.

To set up Cloudflare tunneling, you first want to go to the website, which is linked here.

cloudflare tunnel website

Once done, you can click View in dashboard and select the account you want to use for tunneling. Keep in mind that you'll only be able to tunnel domains owned by this account. Then go to Network > Tunnels > Create new tunnel, use cloudflared, name it whatever you want, and choose Docker as the installation method.

tunnel installation

Even though the Docker installation method is nice, the command Cloudflare provides is slightly incomplete. If you want to use it like I do, make sure the command looks like the following:

sudo docker run -d --net=host cloudflare/cloudflared:latest tunnel --no-autoupdate run --token [token]

The --net=host flag allows the Docker container to reach services running on your localhost (127.0.0.1) network. If you did it correctly, it should show up as a connection on the Cloudflare website, and you can continue. You can add any website / subdomain you want and point it to your localhost port, similar to the following:
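The dashboard form boils down to a mapping like this (the hostname and port here are hypothetical):

Subdomain: app
Domain:    example.com
Service:   http://localhost:3000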

Once done, whatever domain / subdomain you pointed your tunnel at should now serve content from your localhost port (including both TCP and UDP connections)! Since we're using Docker, you can add as many accounts as you'd like, and as many websites as you'd like as well; they just need to be Cloudflare domains.

Backups

Now let's say you have an outage at your house so big that not even your UPS can outlast it. What would you do then? Personally, I keep a DigitalOcean virtual machine snapshot on standby, which is a very cheap backup alternative. On the actual virtual machine, I use PM2 as a lightweight process manager to host all my applications. To install Node.js, you can use the NodeSource PPA linked here. Setting up the mail server and database can be done using the articles I've already linked. To run a Cloudflare tunnel to all the apps on that backup, you can use a JS script, like this one. Once all these steps are done, you can create a snapshot by going to the snapshot tab like so:

Snapshot tab

and create a snapshot from there. Once done, you can turn off the droplet and destroy it. Your snapshot will always be on standby in the Backups & Snapshots tab on the left-hand side:

To recreate it, just click More > Create Droplet, and your backup is ready to go. Using pm2 and cloudflared, you can also create an automatic startup script that boots everything up as soon as you create the droplet from the snapshot. Just make sure to use the no-autoupdate option when running cloudflared there as well. If you want to see an example backup script, you can look at my personal one here.
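As a sketch of what that looks like under PM2, an ecosystem file can run both your app and the tunnel (the names, path, and token here are all hypothetical):

// ecosystem.config.js
module.exports = {
  apps: [
    {
      // The web app itself
      name: "my-next-app",
      cwd: "/root/my-next-app",
      script: "npm",
      args: "run start",
    },
    {
      // The Cloudflare tunnel pointing back at the app
      name: "cloudflared",
      script: "cloudflared",
      args: "tunnel --no-autoupdate run --token YOUR_TUNNEL_TOKEN",
    },
  ],
};

You'd then start everything with pm2 start ecosystem.config.js and persist the process list with pm2 save.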

NMAP scan to check for open ports

Now that you have basically everything set up, the last thing to do is an Nmap scan to make sure no unnecessary ports are exposed. I didn't realize how easy it was to find the exposed ports of an IP address until I found out about this tool. To install it, just go to https://nmap.org/download and download it for the appropriate system. Once done, you can open it up and input your static IP address in the top left:

You want to make it scan all TCP ports so that it catches every single port number. If you see any port you didn't configure yourself, it was most likely opened by the router.
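If you prefer the command line over the Zenmap GUI, the equivalent scan looks something like this (the IP is a placeholder):

# Scan all 65,535 TCP ports, skipping the ping check
nmap -p- -Pn 203.0.113.42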

If you don't see anything unusual, then voilà, you have officially created a home server fully controlled by you!

Conclusion

I'm personally very fortunate to even be able to set up something like this; several of my friends don't have anywhere near as good internet as I do, which is why I try to take the most advantage of it. I think the biggest advantages of a system like this are that:

  1. You fully control which ports get mapped to the IP, which effectively acts as a firewall
  2. You get to customize your own hardware as much as you’d like, and add in resources when needed
  3. It’s virtually free after you buy all the parts that you need!

That last point is probably the biggest reason I even considered this in the first place, and the fact that I actually have this setup is really cool. I hope you guys enjoyed this article, and hey, maybe you can set up your own servers too!
