How to apply the ELK Stack to startups

Let’s monitor Spring Boot & nginx server logs with the ELK Stack

CPLABS
CPLABS-TECH
Feb 2, 2021


What made me write this?

I’m a member of Coinplug’s server development team. We used to check logs by entering the server through an ssh terminal, staring at a view full of white letters on a black background.

You had to enter commands one by one to retrieve the logs, which made the developers and operators who checked the log files very frustrated. It looked like this:

Log examples checked by terminal

Since checking log files this way is inconvenient for developers, important logs are often missed or misinterpreted. A log monitoring system is needed to prevent this from happening.

After finding out that startups usually use the ELK Stack to monitor logs, I decided to write this post and show you how to apply the ELK Stack to a log monitoring system.

What is the ELK Stack?

The ELK Stack is a technology stack of three open source products: Elasticsearch, Logstash, and Kibana.

  • Elasticsearch (E): a search and analysis engine
  • Logstash (L): a pipeline that collects, converts, and sends multiple log files to Elasticsearch
  • Kibana (K): a tool that visualizes the data stored in Elasticsearch
Structure map of ELK stack

Simply put, Logstash collects the logs generated by the server and sends them to Elasticsearch, which indexes and stores them. Finally, Kibana reads the organized logs from Elasticsearch and visualizes them in a GUI for users.

How to install ELK Stack

Docker-compose

To simplify the process, we install Elasticsearch, Logstash, and Kibana all at once using docker-compose. The required docker-compose file is as follows:

docker-compose.yaml
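
A minimal sketch of such a file, assuming Elastic Stack 7.10.1 images (to match the Kibana version used later in this article) and the built-in elastic user with a placeholder password of changeme:

    version: "3.2"
    services:
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
        environment:
          - discovery.type=single-node        # one node is enough for this demo
          - xpack.security.enabled=true       # enables the ID/PW login used below
          - ELASTIC_PASSWORD=changeme         # placeholder password (assumption)
        ports:
          - "9200:9200"
      logstash:
        image: docker.elastic.co/logstash/logstash:7.10.1
        volumes:
          - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
        ports:
          - "5000:5000"                       # TCP input port defined in logstash.conf
        depends_on:
          - elasticsearch
      kibana:
        image: docker.elastic.co/kibana/kibana:7.10.1
        volumes:
          - ./kibana.yml:/usr/share/kibana/config/kibana.yml
        ports:
          - "5601:5601"
        depends_on:
          - elasticsearch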

You also need two setup files as follows:

kibana.yml

kibana.yml is the configuration file required to connect Kibana to Elasticsearch. It contains the Elasticsearch address to connect to, as well as the ID and PW used to authenticate.

kibana.yml file connecting Kibana and Elasticsearch
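
A sketch of the file, assuming the compose service name elasticsearch and the elastic/changeme credentials from docker-compose.yaml:

    server.host: "0.0.0.0"
    elasticsearch.hosts: ["http://elasticsearch:9200"]  # compose service name
    elasticsearch.username: "elastic"                   # ID used to connect (assumption)
    elasticsearch.password: "changeme"                  # PW used to connect (assumption)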

logstash.conf

logstash.conf is another configuration file that defines the input and output of Logstash.

Logstash can receive input from various sources: it can read files, accept regular keyboard input (stdin), listen on TCP ports, and more. In this case, let’s presume we simply receive TCP input, so we define the TCP port and a JSON codec in the input section. Since Logstash must receive the logs and forward them to Elasticsearch, the output section contains the Elasticsearch address, the connection credentials, and the name of the Elasticsearch index where the logs will be stored.

logstash.conf file defined with Logstash input and output
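
A sketch of the pipeline, assuming TCP port 5000, a JSON-lines codec, and an index name of springboot-*:

    input {
      tcp {
        port  => 5000        # port published in docker-compose.yaml
        codec => json_lines  # one JSON log event per line
      }
    }
    output {
      elasticsearch {
        hosts    => ["http://elasticsearch:9200"]
        user     => "elastic"
        password => "changeme"
        index    => "springboot-%{+YYYY.MM.dd}"  # daily index (name is an assumption)
      }
    }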

Once you complete the setup, place the docker-compose.yaml, kibana.yml, and logstash.conf files together in a single directory, like this:
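
For example, assuming a directory named elk (any name works):

    elk/
    ├── docker-compose.yaml
    ├── kibana.yml
    └── logstash.conf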

After that, enter the following command in that directory to install and run the ELK Stack.
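
    docker-compose up -d    # pulls the images and starts all three containers in the background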

If you connect to http://localhost:5601, enter the Elasticsearch ID and password, and see the following screen, you are on the right track.

Main page of Kibana

Monitoring examples

Now that the ELK Stack installation is complete, let’s generate some server logs and send them to the ELK Stack for monitoring.

Because our team normally uses nginx as a reverse proxy and Spring Boot as a WAS (web application server), I am going to monitor an nginx server and a Spring Boot server.

Spring Boot server log monitoring example

First, I created a simple server because we need a sample server to generate logs.

Sample server code
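
A minimal sketch of such a controller, assuming Spring Web and Lombok’s @Slf4j logger (class and message names are assumptions):

    import lombok.extern.slf4j.Slf4j;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;

    @Slf4j
    @RestController
    public class HelloController {

        @GetMapping("/api/v1/hello/{value}")
        public String hello(@PathVariable int value) {
            // intermediate log left for every request
            log.info("hello request received. value={}", value);
            if (value > 100) {
                // intentionally raise an error to leave an ERROR log
                throw new RuntimeException("value must be 100 or less: " + value);
            }
            return "Hello world " + value;
        }
    }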

The server above is a simple HTTP server that returns a string combining “Hello world” and value in response to a GET /api/v1/hello/{value} request. It leaves an intermediate log for every request and, to intentionally leave an error log as well, throws a RuntimeException whenever the incoming value is greater than 100.

When a request comes in, a log like this is left behind:

sample log
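
With Spring Boot’s default console format, the intermediate log looks roughly like this (the timestamp, PID, and package name are illustrative):

    2021-02-02 12:00:00.123  INFO 1234 --- [nio-8080-exec-1] c.e.sample.HelloController : hello request received. value=10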

Now that the logs are being generated, let’s send them to the ELK Stack. Logback makes the process of shipping Spring Boot logs simple, but first you need to add the Logstash Logback encoder dependency.

If you are using Maven as a build tool, add it like this:
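
    <!-- version 6.6 is an assumption from this article's timeframe; use the latest -->
    <dependency>
        <groupId>net.logstash.logback</groupId>
        <artifactId>logstash-logback-encoder</artifactId>
        <version>6.6</version>
    </dependency>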

or like this if you are using Gradle:
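
    // build.gradle (same coordinates; the version is an assumption)
    implementation 'net.logstash.logback:logstash-logback-encoder:6.6'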

Now, let’s add the following logback-spring.xml file to src/main/resources.

Logback setup file with Logstash logback encoder
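
A sketch of the configuration, assuming Logstash is reachable at localhost:5000 (the TCP port defined in logstash.conf):

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
        <!-- keep Spring Boot's default console logging -->
        <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
        <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>

        <!-- ship logs to Logstash as JSON over TCP -->
        <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
            <destination>localhost:5000</destination>
            <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
        </appender>

        <root level="INFO">
            <appender-ref ref="CONSOLE"/>
            <appender-ref ref="LOGSTASH"/>
        </root>
    </configuration>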

You can monitor the logs by restarting the Spring Boot server and opening Kibana’s Discover page.

You can also select the fields you want to monitor under Available fields on the left. Here, only the log level, log message, and logger name are needed.

You can monitor the field that you chose

You can also look for the specific logs you want in the search field. Let’s search for the error log:

The result of searching with level: ERROR and additionally selecting stack trace

Spring Boot server log monitoring example: custom field monitoring

In the previous example, only the fields defined by the Logstash Logback encoder, such as log level, logger name, and log message, could be monitored. So what should you do to monitor user-specified values? For instance, how can you keep an eye on the value that comes into the Spring Boot server as an HTTP request?

The custom field feature provided by the Logstash Logback encoder allows you to specify the values to be monitored directly. For example, if you want to monitor the value that comes in as a request to the sample server, you only need to add one line:

Sample server code with the value logging line added
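
A sketch of the change, using the encoder’s StructuredArguments helper (the message text and field name are assumptions):

    import static net.logstash.logback.argument.StructuredArguments.value;

    @GetMapping("/api/v1/hello/{value}")
    public String hello(@PathVariable int value) {
        log.info("hello request received. value={}", value);
        // HERE!
        log.info("logging custom field", value("value", value));  // adds a "value" field to the JSON log
        if (value > 100) {
            throw new RuntimeException("value must be 100 or less: " + value);
        }
        return "Hello world " + value;
    }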

The line below the “HERE!” mark is the code that logs the custom field. Although only an integer is logged here, you can log various types such as Map, String, and Object. If you want to know other ways to log custom fields, please refer to this link.

Now, let’s restart the sample server and generate some logs by sending HTTP requests. The value field will appear under Available fields on the left side of Kibana’s Discover page, and you can monitor it once you select it.

Image of monitoring the requested value

The advantage of monitoring the value field is that it is easy to search. For example, if you filter your search with the condition value > 50, you will find only the logs that meet the criterion.

Result of searching value higher than 50

Example of nginx server log monitoring

To send nginx server logs to the ELK Stack, use Filebeat and its nginx module.

You can find the details on the Kibana home page under Add data > Logs > Nginx logs. (The location may vary slightly depending on the Kibana version; this article uses 7.10.1.)

Location of nginx log monitoring guide
nginx log monitoring guide provided by Kibana

Basically, all you have to do is follow Kibana’s guide. In this process, filebeat.yml is written like this:
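
A sketch of the relevant part, reusing the assumed elastic/changeme credentials and assuming Filebeat runs on the same host as the ELK Stack:

    # filebeat.yml
    output.elasticsearch:
      hosts: ["localhost:9200"]    # Elasticsearch address (assumption)
      username: "elastic"
      password: "changeme"

    setup.kibana:
      host: "localhost:5601"       # Kibana address, used for dashboard setup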

Now fill in modules.d/nginx.yml:
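
A sketch of the module file; the paths shown are common defaults, not necessarily yours (see the note below):

    - module: nginx
      access:
        enabled: true
        var.paths: ["/var/log/nginx/access.log"]   # actual access log path (assumption)
      error:
        enabled: true
        var.paths: ["/var/log/nginx/error.log"]    # actual error log path (assumption)

Then, following Kibana’s guide (the exact command form depends on how Filebeat was installed):

    filebeat modules enable nginx   # turn the nginx module on
    filebeat setup                  # load the index template and Kibana dashboards
    filebeat -e                     # start shipping logs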

But please keep in mind that you need to put the actual path of your nginx access log in access.var.paths and the actual path of your nginx error log in error.var.paths. Once you’ve done that, set the index pattern at the top left of Kibana’s Discover page to filebeat*.

Now you can monitor the nginx logs.

nginx access log monitoring

Conclusion

So far, I’ve demonstrated the process of applying a log monitoring system. In this example I used only a single Elasticsearch node, but it is desirable to cluster multiple Elasticsearch nodes to prevent overload as the volume of logs to monitor grows.

Thank you for reading my article. If you have any opinions or issues related to this post, please contact me at the email address below.

Email: contact@coinplug.com

