Visualising AWS ALB logs with ELK

Arun Gowda
4 min read · Jun 6, 2020


The ELK stack stands for Elasticsearch, Logstash, and Kibana. ELK is a popular stack for searching, analyzing, and visualizing data.

Elasticsearch is a distributed, open source search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured.

Logstash is a data processing pipeline for managing event logs.

Kibana is a web application for visualizing data in Elasticsearch.

AWS ALB access logs, meanwhile, provide detailed information about the requests sent to your load balancer (such as the client's IP address, request paths, and server responses), so it's important to monitor them. We'll do this using ELK here.

Requirements

  • Access logs enabled for the Application Load Balancer.
  • An AWS Elasticsearch domain.
  • A Logstash server on an EC2 instance.

1. Enable Access Logs for the Application Load Balancer.

To enable access logs for your load balancer, you must specify the name of the Amazon S3 bucket where the load balancer will store the logs. You must also attach a bucket policy to this bucket that grants Elastic Load Balancing permission to write to the bucket.

Create an S3 Bucket

  1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/
  2. Choose Create bucket.
  3. On the Create bucket page, do the following:

a. For Bucket Name, enter a name for your bucket. This name must be unique across all existing bucket names in Amazon S3. In some Regions, there might be additional restrictions on bucket names.

b. For Region, select the Region where you created your load balancer.

c. Choose Create.

4. Attach a policy to your S3 bucket.

5. Choose Permissions, and then choose Bucket Policy. Add a policy that grants Elastic Load Balancing permission to write to the bucket (an example policy is shown after these steps).

6. Choose Save.
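For reference, a minimal bucket policy looks something like the sketch below. It assumes the bucket is named docsapp (the name used later in this post) and that the load balancer lives in us-west-2, whose Elastic Load Balancing account ID is 797873946194; look up the account ID for your own Region in the AWS documentation.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::797873946194:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::docsapp/AWSLogs/*"
    }
  ]
}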

Enable Access logs

  1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
  2. On the navigation pane, under LOAD BALANCING, choose Load Balancers.
  3. Select your load balancer.
  4. On the Description tab, choose Configure Access Logs.
  5. On the Configure Access Logs page, do the following:
a. Choose Enable access logs.

b. For S3 location, type the name of your S3 bucket.

c. Choose Save.
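If you prefer the command line, the same thing can be done with the AWS CLI. This is a sketch that again assumes the bucket is named docsapp; substitute your own load balancer ARN.

# Enable access logging and point it at the S3 bucket
aws elbv2 modify-load-balancer-attributes \
    --load-balancer-arn <your-load-balancer-arn> \
    --attributes Key=access_logs.s3.enabled,Value=true \
                 Key=access_logs.s3.bucket,Value=docsapp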

2. Creating an AWS Elasticsearch Domain

  1. Go to https://aws.amazon.com, and then choose Sign In to the Console.
  2. Under Analytics, choose Elasticsearch Service.
  3. Choose Create a new domain.
  4. For Choose deployment type, choose the option that best matches the purpose of your domain: Production, Development and testing, Custom, or UltraWarm preview. We used Custom in this blog.
  5. Select the Elasticsearch version and click Next.

6. For Elasticsearch domain name, enter a domain name.

7. For Availability Zones, choose 1-AZ, 2-AZ, or 3-AZ. We selected 1-AZ.

8. For Instance type, choose an instance type for the data nodes.

9. For Number of nodes, choose the number of data nodes.

10. For Data nodes storage type, choose either Instance (default) or EBS.

11. Choose Next.

12. In the Network configuration section, choose either VPC access or Public access. We selected Public access in this blog.

13. Add an access policy for the cluster. You can select an access policy from the dropdown; here we selected the Allow open access to the domain template.

14. Confirm

This will create the AWS Elasticsearch cluster.
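Provisioning can take several minutes. As a sketch (assuming you named the domain docsapp-es), you can poll the domain from the AWS CLI until Processing flips to false:

# docsapp-es is an assumed domain name; use your own.
# Returns true while the domain is still being created.
aws es describe-elasticsearch-domain \
    --domain-name docsapp-es \
    --query "DomainStatus.Processing"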

3. Installing the Logstash Server on an EC2 Instance.

In this blog, we are using an Amazon Linux 2 AMI (2.0.20181008, x86_64, HVM, gp2) instance to set up Logstash.

Prerequisites:

Logstash requires Java 8 or Java 11. Use the official Oracle distribution or an open-source distribution such as OpenJDK.
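On Amazon Linux 2, one way to satisfy this prerequisite (a sketch using the distribution's OpenJDK 8 package) is:

# Install OpenJDK 8 and confirm the version Logstash will pick up
sudo yum install -y java-1.8.0-openjdk
java -version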

Steps to Install Logstash.

We are installing Logstash from the Elastic package repositories.

  1. Download and install the public signing key:
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

2. Add the following to a file with a .repo suffix in your /etc/yum.repos.d/ directory, for example logstash.repo:

[logstash-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

3. Your repository is now ready for use. You can install Logstash with:

sudo yum install logstash
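Optionally, since Amazon Linux 2 uses systemd, you can also have Logstash start automatically on boot:

# Enable the logstash service (installed by the RPM) at boot
sudo systemctl enable logstash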

Configuring the Logstash server.

To configure Logstash, you create a config file that specifies which plugins you want to use and settings for each plugin.

  1. Create a config file in /etc/logstash/conf.d (e.g., docsapp.conf).

Input configuration in /etc/logstash/conf.d/docsapp.conf:

input {
  s3 {
    bucket            => "docsapp"
    prefix            => "AWSLogs"
    region            => "us-west-2"
    type              => "elblogs"
    codec             => "plain"
    # Delete objects from the bucket once they have been processed
    delete            => true
    # Replace these placeholders with your own credentials
    secret_access_key => "AWS_SECRET_KEY"
    access_key_id     => "AWS_ACCESS_KEY"
  }
}

Add a grok pattern to filter the ALB logs:

filter {
  # Parse the raw ALB access log line into named fields
  grok {
    match => { "message" => '%{NOTSPACE:request_type} %{TIMESTAMP_ISO8601:log_timestamp} %{NOTSPACE:alb-name} %{NOTSPACE:client} %{NOTSPACE:target} %{NOTSPACE:request_processing_time:float} %{NOTSPACE:target_processing_time:float} %{NOTSPACE:response_processing_time:float} %{NOTSPACE:elb_status_code} %{NOTSPACE:target_status_code} %{NOTSPACE:received_bytes:float} %{NOTSPACE:sent_bytes:float} %{QUOTEDSTRING:request} %{QUOTEDSTRING:user_agent} %{NOTSPACE:ssl_cipher} %{NOTSPACE:ssl_protocol} %{NOTSPACE:target_group_arn} %{QUOTEDSTRING:trace_id}' }
  }
  # Split the quoted request string into method and URI
  grok {
    match => { "request" => "(%{NOTSPACE:http_method})? (%{NOTSPACE:http_uri})?" }
  }
  # Extract the target group name from its ARN, e.g.
  # arn:...:targetgroup/my-targets/73e2d6bc24d8a067 -> my-targets
  grok {
    match => { "target_group_arn" => "targetgroup/(?<target_group>[^/]+)" }
  }
  mutate {
    convert => {
      "elb_status_code"    => "integer"
      "target_status_code" => "integer"
    }
  }
}

Output configuration:

output {
  # Keep a local copy of the parsed events for debugging
  file {
    path => "/var/log/logstash/alb-accesslogs-%{+YYYY.MM.dd}.log"
  }

  # Ship the events to the AWS Elasticsearch domain, one index per day
  elasticsearch {
    hosts => "http://search-docsapp-es-.us-west-2.es.amazonaws.com:80"
    index => "alb-accesslog-%{+YYYY.MM.dd}"
  }
}

Save the file.
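Before wiring the file into a pipeline, it is worth validating it; a quick sketch, assuming the default RPM install paths:

# Parse-check the config and exit (-t is --config.test_and_exit)
sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/docsapp.conf -t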

Now add the config file, i.e. docsapp.conf, to pipelines.yml.

2. You can find pipelines.yml in /etc/logstash.

3. Edit pipelines.yml and add the pipeline.id and path.config entries:

# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
# https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html
- pipeline.id: main
path.config: "/etc/logstash/conf.d/docsapp.conf"

4. Restart Logstash.

sudo service logstash restart

5. Check that your index has been created.
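One way to verify (a sketch; <your-es-endpoint> stands in for your actual Elasticsearch domain endpoint) is to list the indices through the _cat API:

# Lists the daily indices created by the Logstash output, e.g. alb-accesslog-2020.06.06
curl "http://<your-es-endpoint>/_cat/indices/alb-accesslog-*?v"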
