ELK stack in GoLang — 2

Fırat Atmaca
5 min read · May 23, 2022


Kibana

Storing data is of little use if we cannot run operations such as querying, monitoring, and statistics on it. Kibana, a completely open-source, browser-based application, provides an interface for searching, analyzing, and visualizing the data in Elasticsearch. Kibana runs on NodeJS, and the installation packages come with the necessary files built in. By default, users query the data in their indexes with KQL (Kibana Query Language).

Common search types in Kibana (illustrated right after this list):

  • Free text searches: Used to quickly search for a specific string.
  • Field-level searches: Used to search for a string within a specific field.
  • Logical expressions: Used to combine searches into a logical expression.
  • Proximity searches: Used to search for terms within a certain word distance of each other.
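
For instance, assuming a hypothetical web-log index with fields such as http.response.status_code and url.path (both names are illustrative), the four types might look like this; note that the proximity form uses Lucene query syntax rather than KQL:

Free text:           timeout
Field-level:         http.response.status_code : 500
Logical expression:  http.response.status_code : 500 and url.path : "/api/orders"
Proximity (Lucene):  "connection timeout"~5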

There is an auto-complete feature to help with searching in Kibana. As we type a query, Kibana suggests search syntax and displays the relevant fields, so we can complete our queries with just a few clicks. This makes querying in Kibana much simpler.

Kibana is best known for its visualization capabilities. We can visualize our data with a wide variety of charts and individual panels, and we can create custom graphics thanks to Vega and Vega-Lite.

Visualizations are categorized into five different types in Kibana:

· Basic Charts (Area, Heat Map, Horizontal Bar, Line, Pie, Vertical Bar)

· Data (Data Table, Gauge, Goal, Metric)

· Maps (Coordinate Map, Region Map)

· Time Series (Timelion, Visual Builder)

· Other (Controls, Markdown, Tag Cloud)

By using the chart types Kibana supports, we can prepare user-friendly dashboards and improve the traceability of our application. In addition, Elastic offers many other dashboard tools under its commercial licenses.

docker-compose.yml

version: '2.2'

services:

  # remainder omitted...

  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.2
    depends_on:
      elasticsearch:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:5601/api/status"]
      interval: 3s
      timeout: 3s
      retries: 50
    ports:
      - 5601:5601
    networks:
      - mynet

Kibana is installed from the baseline Docker image, depends on Elasticsearch being healthy, has a health check of its own, and maps its internal port 5601 to the host machine's port 5601. When we go to http://localhost:5601, we will see Kibana's home page load.

Beats

Beats are tools that act as agents installed on application servers to collect logs and other data. They are developed in GoLang, are installed with simple setups, and run without any dependencies.

· FileBeat: Its main purpose is to collect and ship log files, but it can be used for many other aims. FileBeat runs on almost every OS, including in Docker, and it also has internal modules for platforms such as Apache, MySQL, and Docker.

· PacketBeat: PacketBeat observes the network traffic between servers and is used to trace applications and performance. It can be installed on the monitored server or on its own dedicated server.

· MetricBeat: Like FileBeat, MetricBeat was developed to collect statistical data from specific platforms. Its collection frequency and which metrics to collect can be configured in the configuration file.

· WinlogBeat: WinlogBeat was developed specifically to collect logs and metrics on the Windows OS. It can be used for analyzing security events and updates.

· AuditBeat: Developed to monitor activities on servers running the Linux operating system. It can be used to monitor security breaches and configuration changes.

· FunctionBeat: A Beat that works with serverless architectures to collect data and send it to the ELK stack. Designed for monitoring cloud environments, FunctionBeat is tailored for Amazon deployments and can be run as an AWS Lambda function to collect data from Amazon CloudWatch, Kinesis, and SQS.

We will use FileBeat in the tutorial that we are developing.

docker-compose.yml

version: '2.2'

services:

  # remainder omitted...

  filebeat:
    image: docker.elastic.co/beats/filebeat:6.5.1
    depends_on:
      elasticsearch:
        condition: service_healthy
    volumes:
      - ./config/filebeat.yml:/usr/share/filebeat/filebeat.yml
      - ./logs/:/logs/
    networks:
      - mynet

FileBeat depends on Elasticsearch being healthy, and both are built from their respective baseline Docker images. I've mounted the FileBeat configuration from the local config directory into the container, along with the logs directory it will watch.

filebeat.yml

filebeat.inputs:
  - type: log
    paths:
      - /logs/*.log

output.logstash:
  hosts: ["logstash:5044"]

This configuration tells FileBeat to watch all log files under /logs for changes and to ship any new log entries to Logstash over port 5044.

RabbitMQ

RabbitMQ was developed in the Erlang programming language. It runs asynchronously and is a queue application that works on the fire-and-forget principle. Although it is generally used in messaging systems, its usage area has expanded as response times have grown in large-scale, demanding applications. It works on the principle that producers leave data in queues and consumers read the data from the queues. Its weaknesses include the lack of multi-data-center support and the deletion of items from the queue once they are read. In our scenario, the application we developed with GoLang will publish its logs to RabbitMQ, and Logstash will act as a consumer, drain the queue, filter the log information it receives, and write it to Elasticsearch.
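
As a sketch of the producer side, here is a minimal Go program that publishes one log entry. It assumes the github.com/rabbitmq/amqp091-go client, the connection URL we will put in the .env file later, and the ApplicationLog queue name; the article's real logger code lives in the repository:

package main

import (
	"context"
	"log"
	"time"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	// Connect with the credentials defined in docker-compose.yml / .env.
	conn, err := amqp.Dial("amqp://admin:admin@rabbitmq:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	defer ch.Close()

	// Declare the durable queue that Logstash will consume from.
	q, err := ch.QueueDeclare("ApplicationLog", true, false, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Fire and forget: publish a JSON log entry to the default exchange.
	err = ch.PublishWithContext(ctx, "", q.Name, false, false, amqp.Publishing{
		ContentType: "application/json",
		Body:        []byte(`{"level":"info","message":"order created"}`),
	})
	if err != nil {
		log.Fatal(err)
	}
}

Because the publish is fire and forget, the caller returns immediately; durability comes from declaring the queue as durable and from the volumes we define below.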

docker-compose.yml

version: '2.2'

services:

  # remainder omitted...

  rabbitmq:
    image: rabbitmq:3-management-alpine
    hostname: "rabbitmq"
    environment:
      RABBITMQ_ERLANG_COOKIE: "SWQOKODSQALRPCLNMEQG"
      RABBITMQ_DEFAULT_USER: "admin"
      RABBITMQ_DEFAULT_PASS: "admin"
      RABBITMQ_DEFAULT_VHOST: "/"
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - esdata:/var/lib/rabbitmq/
      - esdata:/var/log/rabbitmq
    labels:
      NAME: "rabbitmq"
    networks:
      - mynet

I have installed the classic RabbitMQ baseline image on Docker, added the username and password to the configuration, and defined volumes. If we don't define volumes, the data in the queues is stored only in memory, and we can lose it if something goes wrong.

Application

We have two endpoints in the sample application that we developed. One of these endpoints writes logs to text files with the .log extension, and the other writes to a queue in RabbitMQ. You can access the source code of the application on GitHub.
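
As a rough sketch of those two endpoints using the standard net/http package (the paths, the file name, and the publishLog helper are illustrative stand-ins for the code in the repository):

package main

import (
	"log"
	"net/http"
	"os"
)

// publishLog is a placeholder for the RabbitMQ publisher sketched in the
// RabbitMQ section; in the real project it sits behind the logger interface.
func publishLog(entry string) error {
	// ... publish entry to the ApplicationLog queue ...
	return nil
}

func main() {
	// Endpoint 1: append a JSON line to a .log file that FileBeat watches.
	http.HandleFunc("/log-to-file", func(w http.ResponseWriter, r *http.Request) {
		f, err := os.OpenFile("logs/app.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		defer f.Close()
		if _, err := f.WriteString(`{"level":"info","message":"file logger"}` + "\n"); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusOK)
	})

	// Endpoint 2: publish the same kind of entry to RabbitMQ.
	http.HandleFunc("/log-to-queue", func(w http.ResponseWriter, r *http.Request) {
		if err := publishLog(`{"level":"info","message":"queue logger"}`); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusOK)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}

The repository itself is laid out like this: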

.env
Dockerfile
config
|-- elasticsearch.yml
|-- filebeat.yml
|-- logstash.conf
core
|-- config.go
|-- logger
| |-- ilogger.go
| |-- log-client.go
| |-- logger-rabbitmq.go
| |-- logger.go
| |-- model
| | |-- log.go
docker-compose.yml
go.mod
go.sum
server.go

Project structure:

· config — You can find the Elasticsearch, FileBeat, and Logstash configuration files here.

· core — You can find the .env reader and the logger package here.

· logger — You can find the logger functions here.

· model — You can find the logger model here.

We stored the username and password in the .env file in the project root directory and set up the environment variables like this:

RABBITMQ_URL=amqp://admin:admin@rabbitmq:5672/

RABBITMQ_QUEUE_NAME=ApplicationLog
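
A minimal sketch of how core/config.go might read these values, assuming the github.com/joho/godotenv package (the real implementation may differ):

package core

import (
	"os"

	"github.com/joho/godotenv"
)

// Config holds the RabbitMQ settings read from the .env file.
type Config struct {
	RabbitMQURL       string
	RabbitMQQueueName string
}

// LoadConfig populates the process environment from .env in the project
// root and returns the settings the logger needs.
func LoadConfig() (*Config, error) {
	if err := godotenv.Load(); err != nil {
		return nil, err
	}
	return &Config{
		RabbitMQURL:       os.Getenv("RABBITMQ_URL"),
		RabbitMQQueueName: os.Getenv("RABBITMQ_QUEUE_NAME"),
	}, nil
}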

We applied the dependency inversion principle in this tutorial, creating an interface to isolate the logger functions from the rest of the code; a sketch follows.
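
The names below are modeled on ilogger.go, logger-rabbitmq.go, and model/log.go from the tree above, but the exact signatures are an assumption; the authoritative versions are in the repository:

package logger

// Log mirrors the model in model/log.go: the entry we ship to the stack.
type Log struct {
	Level   string `json:"level"`
	Message string `json:"message"`
}

// ILogger is the abstraction the rest of the application depends on, so
// handlers never know whether logs go to a file or to RabbitMQ.
type ILogger interface {
	Info(message string) error
	Error(message string) error
}

// RabbitMQLogger is one concrete implementation behind the interface.
type RabbitMQLogger struct {
	queueName string
	// an AMQP channel would live here in the real implementation
}

func (l *RabbitMQLogger) Info(message string) error {
	return l.publish(Log{Level: "info", Message: message})
}

func (l *RabbitMQLogger) Error(message string) error {
	return l.publish(Log{Level: "error", Message: message})
}

func (l *RabbitMQLogger) publish(entry Log) error {
	// Marshal entry to JSON and publish it to the queue, as in the
	// producer sketch in the RabbitMQ section.
	return nil
}

Because handlers depend only on ILogger, swapping the RabbitMQ logger for the file logger, or for a mock in tests, requires no changes to the handler code.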

Conclusion

Logs need to be monitored regularly, and, occasionally, we will find ourselves needing to dig deep into them to investigate a particular problem. When this happens, the ELK stack meets our needs. By shipping logs to an ELK stack, we can leverage Kibana to search across large quantities of data, narrow down the information we need, and minimize the time it takes to track a problem down.
