How to monitor MongoDB Atlas™ with Prometheus and Grafana

And gain unprecedented insights!

Edward Diaz
Globant
8 min read · Jun 29, 2024


Monitoring MongoDB performance

Many companies manage, customize, or outsource their database monitoring processes. For on-premises deployments of MongoDB, tools such as Prometheus and Grafana have traditionally covered this task; for the new cloud version, this works differently and must be done through dedicated integrations. This article explains the integration between MongoDB Atlas and Prometheus so that the Grafana console can be used for dashboards and alerts.

About MongoDB

MongoDB Atlas is a multi-cloud, non-relational database service built for resilience, scalability, data privacy, and security. Unlike traditional relational databases, MongoDB stores data in JSON-like documents, allowing for a more dynamic and agile data structure. It is ideal for handling large volumes of data and is particularly useful in applications that require rapid development and frequent schema changes. With features like replication and sharding, MongoDB ensures high availability and performance, making it a popular choice for web and mobile applications as well as real-time data analytics.

MongoDB Monitoring

For this type of platform, we have often implemented local monitoring solutions for our databases. In the case of MongoDB, there are already mature solutions such as the Prometheus exporter modules, but with the cloud version of MongoDB we are surprised to find them incompatible: for security and performance reasons, MongoDB Atlas does not expose direct monitoring connections.

As an alternative, Atlas offers a set of native integrations that allow monitoring and alerting to be managed autonomously or in a customized way (external to Atlas). The following graphic shows an example of a monitoring dashboard for a DB instance in MongoDB Atlas:

Sample of various types of graphs

Grafana has a wide variety of types of graphs that facilitate the user experience when reading or interpreting data.

Proposal

The following image shows the overall architecture needed to create a robust monitoring ecosystem consisting of the following components:

Services architecture for monitoring Atlas databases.
  • MongoDB Atlas (Cloud) database instance, either a Replica Set or a Sharded Cluster. In general, Replica Sets are used for small or test environments and Sharded Clusters for large, production environments.
  • Prometheus integration into the MongoDB portal. This setting is simple to do using the MongoDB Atlas portal interface.
  • Prometheus server deployed and running. Hosting the management and monitoring infrastructure on Linux distributions is highly recommended.
  • Grafana server installed and working, also preferably on a Linux distribution.

Prerequisites

Before starting, you need the following list of components:

  • MongoDB Atlas (Cloud) DB instance already configured, either Replica Set or a Sharded Cluster.
  • The IP address of the Prometheus server must be added to the list of allowed IPs of the project in MongoDB.
  • A Prometheus server installed and configured as a service on Linux (recommended).
  • A Grafana server installed and configured as a service on Linux (recommended).

Necessary Settings

Below are the steps required to configure and interconnect the different components to begin data extraction and subsequent graphing. At this point, the data becomes information for decision-making.

MongoDB Native Integration for Prometheus

The MongoDB integration allows you to configure Atlas to send metrics data about your deployment to your Prometheus instance. Follow these steps to enable the integration with your Prometheus instance:

Enter the organization and project that will be monitored and select the [Integrations] option:

Atlas management portal — Integrations menu

From the list of available integrations, select the Configure option for the external provider [Prometheus]:

Atlas management portal — Prometheus integration page

In the configuration menu, generate a password for the user. The password can be personalized or auto-generated. It must be saved securely to be used in subsequent configurations. Finally, select the File Service Discovery option and save:

Atlas management portal — Prometheus integration configuration page

Configuring a new Job for data capture in Prometheus

Prometheus is specialized software that functions as a monitoring and alerting system; all metrics are stored in its internal database as time series. Once Prometheus is up and running as a service, a new job must be created with the connection data and data capture frequency. Follow these steps to create a new job:

Generate the list of servers that comprise the MongoDB project instances to be monitored by executing the following discovery query, updating the user, password, and project ID:

curl --header 'Accept: application/json' \
--user prom_user_abcabcabcabcabc:promUserPassword \
--request GET "https://cloud.mongodb.com/prometheus/v1.0/groups/6351820881a81b0250XXXXXX/discovery"

The output of the query from the previous step must be saved in a JSON file and later invoked from the Prometheus YAML configuration file. For this example, the resulting file was saved in the same path as the Prometheus libraries in /etc/prometheus/:

sudo vi /etc/prometheus/m002.json
[
  {
    "labels": {
      "cl_name": "M002",
      "group_id": "6351820881a81b0250XXXXXX",
      "org_id": "635180be5768c8623aXXXXXX"
    },
    "targets": [
      "m002-shard-00-00.xxxxx.mongodb.net:27018",
      "m002-shard-00-01.xxxxx.mongodb.net:27018",
      "m002-shard-00-02.xxxxx.mongodb.net:27018"
    ]
  }
]
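As an alternative to running curl by hand, the discovery call can be scripted and its output validated before handing it to Prometheus. The sketch below uses only the Python standard library; the group ID, credentials, and output path are the placeholders from the example above, not real values:

```python
import base64
import json
import urllib.request

DISCOVERY_BASE = "https://cloud.mongodb.com/prometheus/v1.0/groups"

def discovery_url(group_id: str) -> str:
    """Build the Atlas Prometheus file-service-discovery URL for a project."""
    return f"{DISCOVERY_BASE}/{group_id}/discovery"

def valid_file_sd(doc) -> bool:
    """Check that a document matches the Prometheus file_sd shape:
    a list of objects, each with a 'targets' list of strings."""
    return (
        isinstance(doc, list)
        and all(
            isinstance(entry, dict)
            and isinstance(entry.get("targets"), list)
            and all(isinstance(t, str) for t in entry["targets"])
            for entry in doc
        )
    )

def fetch_targets(group_id: str, user: str, password: str) -> list:
    """Fetch the discovery document with HTTP basic auth (network call)."""
    req = urllib.request.Request(discovery_url(group_id))
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Accept", "application/json")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example run (placeholders from the article):
# doc = fetch_targets("6351820881a81b0250XXXXXX",
#                     "prom_user_abcabcabcabcabc", "promUserPassword")
# assert valid_file_sd(doc), "unexpected discovery payload"
# with open("/etc/prometheus/m002.json", "w") as fh:
#     json.dump(doc, fh, indent=2)
```

Because cluster topology can change (for example after scaling), it is worth re-running this periodically, e.g. from cron; Prometheus re-reads file_sd files automatically, without a restart.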

In the [scrape_configs] section of the prometheus.yml file, add the configuration of the new monitoring job:

sudo vi /etc/prometheus/prometheus.yml
...
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: "M001-mongo-metrics"
    scrape_interval: 10s
    metrics_path: /metrics
    scheme: https
    basic_auth:
      username: prom_user_6351820881a81b025XXXXXXX
      password: promUserPassword
    file_sd_configs:
      - files:
          - /etc/prometheus/m002.json

Restart the Prometheus service to enable the new job:

sudo systemctl daemon-reload
sudo systemctl restart prometheus

Enter the Prometheus administration interface. In this example, the address uses the server's local IP address (i.e., http://192.168.XX.XX:9090).

In the Status->Targets menu, if there are no errors in the connection and data capture, the State value of the servers must be UP.

This completes the configuration of the data capture job in Prometheus.
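The same health check can be automated against the standard Prometheus HTTP API (`/api/v1/targets`), which is handy for scripting an alert outside of the UI. A minimal sketch, assuming Prometheus answers on the address used in this example:

```python
import json
import urllib.request

def down_targets(payload: dict) -> list:
    """Given a /api/v1/targets response body, return the scrape URLs
    of active targets whose health is not 'up'."""
    return [
        t["scrapeUrl"]
        for t in payload.get("data", {}).get("activeTargets", [])
        if t.get("health") != "up"
    ]

# Example against a live server (adjust the host/IP to your environment):
# with urllib.request.urlopen("http://192.168.XX.XX:9090/api/v1/targets") as r:
#     bad = down_targets(json.load(r))
#     print("all targets UP" if not bad else f"targets DOWN: {bad}")
```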

Creating a new data source in Grafana

Grafana is a very popular, fully open-source visualization tool that queries data sources and collectors, then stores and displays the results. It is supported by a large community that constantly contributes new templates, plugins, and integrations. With that said, it is time to move to Grafana and generate a new dashboard:

The first step is to establish a new connection to Prometheus to retrieve the input data and subsequently display the data on the screen. Go to the Grafana management portal, enter the option Connections->Add New Connection, search for the Prometheus data source, and select it:

Grafana management portal — New Prometheus connection page

For the new data source, enter a name and the URL of the Prometheus service; additionally, enter the authentication data if necessary:

Grafana management portal — Prometheus configuration page

Go to the Connections->Data Sources menu and validate that the data source created in the previous step is already available:

Grafana management portal — Data sources page
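If you prefer to manage Grafana as code, the same data source can be provisioned from a file instead of the UI: Grafana loads YAML files from its provisioning directory at startup. A minimal sketch (the data source name and URL are the ones assumed in this example):

sudo vi /etc/grafana/provisioning/datasources/atlas.yml
apiVersion: 1
datasources:
  - name: Prometheus-Atlas     # any display name
    type: prometheus
    access: proxy              # the Grafana backend proxies the queries
    url: http://localhost:9090 # your Prometheus server address
    isDefault: true

After adding the file, restart the Grafana service so the data source appears in the list.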

Creating a new dashboard in Grafana

Grafana’s dashboards allow real-time data visualization through interactive graphs, tables, and alerts. They are highly customizable, enabling users to combine multiple data sources into a single view, facilitating in-depth analysis and informed decision-making. Its intuitive and flexible interface makes creating and adjusting panels simple, adapting to various needs in different fields. This section will explain how to create a connection from a dashboard to a Prometheus data source:

Generate a new dashboard using the data source generated previously:

Grafana portal — Dashboard configuration page

At this point, the only limit is your creativity and which parameters you need to monitor or control.

For example, you could create panels of different types, such as graphs, tables, counters, gauges, etc. You can also track parameters such as:

  • Operation counters (insert, update, delete, commands, getMore).
  • Connection counters (available, active).
  • Memory and cache status (total, available, in use).
  • Oplog status.
  • Disk space (total, available, in use).
  • CPU and machine resource usage.
  • Object counts (databases, indexes, collections).
  • Index usage.
  • Cursor status.
  • Accumulated IOPS.
  • Object sizes (DBs, indexes, and collections) plus document/record counts.
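As a starting point, panels like those above map to PromQL expressions over the metrics the Atlas integration exposes. The MongoDB metric names below are illustrative assumptions, not confirmed names; check the exact spellings in the Prometheus expression browser's autocomplete before building panels. Only `up` and the `job`/`cl_name` labels are guaranteed by the configuration shown earlier:

# Insert rate per target (metric name assumed; verify in the expression browser)
rate(mongodb_opcounters_insert[5m])

# Current open connections, summed per cluster
# (the cl_name label comes from the discovery file)
sum by (cl_name) (mongodb_connections_current)

# Scrape health of the Atlas targets for this job
up{job="M001-mongo-metrics"}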

Conclusions

By monitoring MongoDB Atlas through its native integration, with Prometheus as a collector and Grafana as an observability tool, a robust monitoring solution has been created. It provides a comprehensive view of the status and performance of different database instances, whether Replica Sets or Sharded Clusters.

Prometheus gives us the simplicity of a data collector, and Grafana gives us an immense range of graphical possibilities that are useful, eye-catching, and customizable. This combination makes it possible to create technical, functional, or managerial dashboards, which allows us to transform data into valuable information for the company and for decision-making.

The availability of dashboards created by the Grafana community for MongoDB is small, and for MongoDB Atlas it is practically nonexistent; there are still no examples of dashboards for this new version. This article aims to start filling that gap in technical knowledge: many companies prefer to manage the history and alerting of their technological infrastructure locally and in a personalized way.

Non-transactional databases have entered the technology market strongly in the last decade. It is important for companies to strengthen internal management processes that make them independent of the management offerings of these platforms' vendors, which, due to their global nature, are not designed to meet the specific needs of each company or business sector.


Edward Diaz
Globant

DBA and Project Manager with more than 20 years of experience working in large production environments across multiple sectors.