Tutorial: How To Deploy Prometheus and Node Exporter as Containers on a Remote Server (for $5)

Graham Atlee · Published in Nerd For Tech · 6 min read · Aug 1, 2021

Intro

The purpose of this tutorial is to show you how to set up Prometheus monitoring on a remote server. Not localhost on your laptop, but an actual remote server. You’ll still need your laptop, but we’ll be scraping hardware/OS metrics by placing Node exporter on a different computer. Then we’ll visualize the data we collect using a Grafana dashboard on your local machine.

Diagram representing what we’ll be doing

Prerequisites

  • You have Docker installed on your computer and you know how to use the CLI
  • A high-level understanding of what Prometheus is and the underlying architecture
  • You know what Grafana is and know it is used to visualize analytics from a data source
  • You’ve written a docker-compose file before
  • You know how to use the Linux command line

This is not intended to be a beginner-friendly tutorial. I’m assuming you’re reading this article because you have experience with all of the above.

Buy a Linode server for $5

We’ll be using the hosting company Linode to set up our remote server. Go to Linode.com and sign up for an account (you won’t be billed right away). Click Linodes > Create a Linode and configure your settings like this:

Scroll down to enter a name and a root password for your server (keep it simple). Then select a region that’s closest to you (for me it’s Atlanta, GA). Make sure you select the cheapest option, as we won’t be needing anything beefy for this tutorial.

SSH into our server and install Docker

You can find your server's IP address by clicking on your node.

Now fire up your terminal and ssh into the server. Make sure to enter the same password that you entered when creating your server.

Note: Normally you would create a new user with sudo privileges, but to keep this article short we’ll just log in as the root user for now.
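
For example, assuming you’re logging in as the root user and using the IP address shown in your Linode dashboard:

ssh root@45.79.207.223   # substitute your own server's IP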

Now install Docker on the server.

apt-get update && apt install docker.io
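
As a quick sanity check, you can confirm that Docker installed and the daemon is running:

docker --version
systemctl is-active docker   # should print "active"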

Install Prometheus and Node exporter using Docker compose

Now for this next part, we could individually create our containers using the following docker run commands:

# start node-exporter container
docker run -d \
--net="host" \
--pid="host" \
-v "/:/host:ro,rslave" \
prom/node-exporter:latest \
--path.rootfs=/host
# start prometheus container
docker run \
-p 9090:9090 \
-v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \
prom/prometheus

However, since this is a multi-container application with a lot of runtime parameters, it would be smarter to use a docker-compose configuration.

So let’s just install docker-compose and create a YAML file to handle our container runtime.

apt install docker-compose

vi docker-compose.yml

Let's add the following to our configuration file to set up a multi-container application:
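
Here’s a minimal sketch of what that compose file can look like, assuming the container names prometheus and node-exporter (the Prometheus config in the next step relies on the node-exporter name):

version: '3'

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      # bind-mount our scrape config into the container
      - ./prometheus.yml:/etc/prometheus/prometheus.yml

  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    pid: host
    volumes:
      # mount the host filesystem read-only so node-exporter can report host metrics
      - /:/host:ro,rslave
    command:
      - '--path.rootfs=/host'
    # no published port needed: Prometheus reaches it at node-exporter:9100 on the Compose network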

If you want to know more about what each of the fields actually does I recommend reading the Prometheus and Node exporter documentation.

Create a Prometheus config file and then run the containers

Now we have to configure Prometheus to scrape our node-exporter for metrics.

Create a prometheus.yml file:

vi prometheus.yml

Then add the following to it:
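
A minimal sketch of the config, assuming the node-exporter container name from the compose file above and a 15-second scrape interval:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'node-exporter'
    static_configs:
      # scrape the node-exporter container by its container name and port
      - targets: ['node-exporter:9100']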

It is very important that each target is the name of your container followed by the port it listens on: <container_name>:<port>

docker-compose up -d

Upon startup, Compose will handle mounting the single prometheus.yml file into our Prometheus container. Next, run docker ps to make sure both containers are running.

Compose will also handle setting up a default network for our app. Each container joins that network and is reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
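
You can see that network on the server with docker network ls; its exact name is derived from the directory holding your compose file, so the name below is just an assumption based on working out of /root:

docker network ls
docker network inspect root_default   # assumes the compose file lives in /root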

Pull metrics into Grafana by using Prometheus as a data source

Now if your main goal was to set up Prometheus and Node Exporter as containers then you can stop here. You can go to your browser and type your server’s IP address followed by port 9090 (http://45.79.207.223:9090/). This will take you to Prometheus’s default React dashboard where you can query for metrics using PromQL.
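
For example, a couple of simple PromQL queries against the standard node exporter metrics:

# total seconds each CPU has spent in each mode
node_cpu_seconds_total

# fraction of time each CPU spent busy over the last 5 minutes
1 - rate(node_cpu_seconds_total{mode="idle"}[5m])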

However, these services are usually used in conjunction with tools like Grafana to better articulate the metrics we are receiving.

Let’s run Grafana as a Docker container on your local machine and designate a volume for persistent storage.

# create a persistent volume for your data in /var/lib/grafana 
docker volume create grafana-storage
# start grafana
docker run -d -p 3000:3000 --name=grafana -v grafana-storage:/var/lib/grafana grafana/grafana

Go to localhost:3000 in your browser and log in using ‘admin’ for both the username and the password.

Then navigate to data sources and click add a new data source.

Select Prometheus and you will see a page to configure your data source. You will need to get your server’s public IP address. Log back into your server and enter the following command:

hostname -I
# 45.79.207.223 172.17.0.1 172.20.0.1 2600:3c02::f03c:92ff:fe54:25ee

Copy the first IP address, 45.79.207.223, as that is the one Grafana will use to target Prometheus.

Enter the full URL as http://45.79.207.223:9090 (using your own server’s IP) and make sure the Access mode is set to Server. Scroll down and set the HTTP method to POST. Then click Save & Test and you should see a banner that says the data source is working.
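
If the test fails instead, a quick way to rule out connectivity problems is to hit Prometheus’s health endpoint directly from your laptop:

curl http://45.79.207.223:9090/-/healthy   # should report that Prometheus is healthy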

The final step is to configure our actual dashboard. For this, we will be using a pre-built dashboard designed specifically to display Node exporter metrics.

Navigate to Dashboards > Manage and select Import. Enter 13978, which is the ID for the Node Exporter Quickstart dashboard.

Fill out whatever information you would like here. I’m going to stick with the generated defaults.

And there we have it! We’re viewing hardware metrics from a computer in Atlanta, GA.

I hope you found this short guide helpful. During my own search, I found there was a lack of insightful documentation when it came to deploying these two services as containers instead of as daemons.

Perhaps in a future tutorial, I will show you how to use an Ansible playbook to deploy these containers across more than one server.
