Monitor ASP.NET Core in ELK through Docker and Azure Event Hubs
Anyone who has managed a real-world application knows that a proper monitoring infrastructure is not optional. You definitely want to make sure everything is working properly, without waiting for people to complain on Twitter before finding out that something is going wrong.
Being able to
- track what is going on in your system,
- code complex alert rules,
- and see at a glance how your application is performing
are definitely capabilities that you don't want to build in house. There are several systems out there, like Application Insights or Log Analytics on Azure, or the Grafana stack. However, I'm personally in love with Kibana or, better said, with the ELK (ElasticSearch-LogStash-Kibana) stack: it has practically become an industry standard, it's free, it has a powerful query language and, man, I'm such a fan of those beautiful dashboards!
The goal of this article is to integrate it with an ASP.NET Core 2.0 application, and we are going to use some very cool technology to do it:
- Serilog in ASP.NET Core as our client-side logger
- Azure Event Hubs as the delivery infrastructure for log entries
- A full ELK stack (ElasticSearch, LogStash and Kibana) running in Docker containers
So… let’s get started!
Designing our logging infrastructure
Strictly speaking, Kibana only needs a working ElasticSearch cluster (even just a single server) to function properly. However, getting data into the search engine is anything but trivial.
At first glance you might think you could just push data into ElasticSearch from your logger using its RESTful API. However, this doesn't scale well as your rate of requests per second increases. A solution that logs thousands of entries per second needs a proper ingest pipeline: something that gathers entries from various sources, transforms them and pushes them into ElasticSearch. Hence LogStash.
Note: for several scenarios, LogStash might be a bit of an overkill, since it's quite heavyweight. A common alternative is Beats, also from Elastic: a suite of data shippers that are easier to install and less hungry for resources. I'll probably blog about them some time in the future :)
How do we deliver log data from our applications to LogStash? Well, we have several options here; I definitely want to show how to use local files, for example, in a future post. For the sake of this article, though, we are going to push our log data to Azure Event Hubs.
For those of you who are not acquainted with Azure Event Hubs, feel free to jump over and have a look at the official documentation, whose tl;dr version is pretty much this:
“they behave almost like a queue, however they are designed to scale up to millions of messages per second”.
There are a few reasons why you might want to use Azure Event Hubs to deliver your logs:
- They are supported by LogStash, through a specific input plugin built by the Azure team;
- If you use Serilog in ASP.NET Core, there's a sink that integrates with Event Hubs;
- Since they behave like a queue, there's no need for direct connectivity between the servers running ASP.NET Core and the ones running ELK.
So, let’s start setting up our ASP.NET Core application.
Logging from ASP.NET Core to Event Hubs
The ASP.NET Core logging API doesn't support Azure Event Hubs out of the box. However, there are several logging providers out there that plug into the ILoggerFactory and fit seamlessly into the general runtime architecture. One of them is Serilog. So, the first step is adding the Serilog NuGet package, together with its sink for Event Hubs:
Install-Package Serilog.Extensions.Logging
Install-Package Serilog.Sinks.AzureEventHub
Once this is done, all we have to do is configure it in the Startup class, as in the snippet below:
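(The snippet is a minimal sketch rather than a verbatim listing: the EventHubs:ConnectionString and EventHubs:EntityName configuration keys are hypothetical names, and the exact AzureEventHub overload may vary slightly between versions of the sink.)

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Serilog;
using Serilog.Formatting.Json;

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Register the global filter that logs basic info for every request
        services.AddMvc(options => options.Filters.Add(typeof(LogResultFilter)));
    }

    public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
    {
        // Build a Serilog logger that ships entries to Azure Event Hubs as JSON
        var logger = new LoggerConfiguration()
            .WriteTo.AzureEventHub(
                new JsonFormatter(),
                Configuration["EventHubs:ConnectionString"], // hypothetical config keys
                Configuration["EventHubs:EntityName"])
            .CreateLogger();

        // Plug Serilog into the ASP.NET Core logging infrastructure
        loggerFactory.AddSerilog(logger);

        app.UseMvc();
    }
}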
As you can see, the code creates a new logger that writes to Azure Event Hubs. It uses a JsonFormatter, which ensures that the data can easily be consumed by ElasticSearch, and it references the hub through a connectionString and an entityName. These can be retrieved from the Azure Portal once we've created an Event Hubs instance; the official documentation has some detailed tutorials on how to do it.
The other bit in that code snippet is the registration of a global filter called LogResultFilter. This is the component that will be triggered for every request, responsible for logging some basic info about the invoked action:
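(Again, this is a sketch of what such a filter could look like; the specific fields being logged here, HTTP method, path and status code, are an assumption.)

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Extensions.Logging;

public class LogResultFilter : IAsyncResourceFilter
{
    private readonly ILogger<LogResultFilter> _logger;

    public LogResultFilter(ILogger<LogResultFilter> logger)
    {
        _logger = logger;
    }

    public async Task OnResourceExecutionAsync(ResourceExecutingContext context, ResourceExecutionDelegate next)
    {
        // Let the rest of the pipeline (model binding, action, result) run first
        await next();

        // By now the action has executed, so the response status code is available
        _logger.LogInformation(
            "Handled {Method} {Path} with status code {StatusCode}",
            context.HttpContext.Request.Method,
            context.HttpContext.Request.Path,
            context.HttpContext.Response.StatusCode);
    }
}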
It implements the IAsyncResourceFilter interface, which is the outermost filter in the ASP.NET Core MVC pipeline. This ensures that, by the time its logging code runs, the action execution has completed and we have, for example, its response status code. Other than that, it doesn't do much, apart from retrieving some tracing info about the request.
That's really all we have to do on the ASP.NET Core side; time to deploy an ELK stack on our dev machine. But before that, there's a little customisation to do.
Use LogStash with Docker and Azure Event Hubs
If you have a look at the documentation on the Elastic website, there are several ways to install and run the ELK stack locally. Personally, I'm a total fan of the Docker option, the main reason being that ELK is quite heavyweight, and I love being able to spin it up and tear it down with just a couple of commands on the Docker CLI.
Elastic provides a repository with all the images you need and, as we are going to see shortly, getting a fully fledged cluster running is just a matter of a couple of files.
The only complexity resides in LogStash, because it doesn't support Azure Event Hubs out of the box. However, it wouldn't be so popular if it weren't pluggable, and the good news is that there's an input plugin created by the Azure team, available on GitHub.
So, the first step is to clone the repository locally, and then create our custom image with the following Dockerfile:
FROM docker.elastic.co/logstash/logstash:6.2.3
WORKDIR /plugin-install
COPY ./plugins .
RUN logstash-plugin install logstash-input-azureeventhub
The content is quite self-explanatory:
- we use the official LogStash image as a base
- we create a /plugin-install folder into which we copy the source code of the plugin downloaded from GitHub
- we run the logstash-plugin command to install it in LogStash
Then we can create our Event Hubs-ready LogStash image by just running the following Docker command:
docker build -t logstash-eh .
Now we have all the ingredients we need to spin up our ELK cluster in Docker.
Configure the ELK cluster in Docker
As you’ve probably figured out by now, an ELK cluster is basically made of three different containers running together and interacting with each other:
- LogStash will be our log collector: it will download data from Azure Event Hubs and push it to ElasticSearch
- ElasticSearch indexes the log data and ensures we have the maximum flexibility and speed when creating our dashboards or querying it
- Kibana is a gorgeous front end for all of the above.
That makes three containers in total and, if you are familiar with Docker, I'm sure you'll agree that the easiest way to run them all is via Docker Compose, describing them as three services. Let's go through them one by one.
ElasticSearch is quite straightforward: the only aspect worth noting is that it uses a volume to make sure that data is preserved across restarts. It creates a service that responds at http://elastic:9200.
elastic:
  image: 'docker.elastic.co/elasticsearch/elasticsearch:6.2.3'
  volumes:
    - esdata:/usr/share/elasticsearch/data
  ports:
    - 9200:9200
    - 9300:9300
Kibana is also simple: we expose port 5601 so that we can access it from localhost, and we use an environment variable to point it at the ElasticSearch URL we defined above.
kibana:
  image: 'docker.elastic.co/kibana/kibana:6.2.3'
  environment:
    ELASTICSEARCH_URL: http://elastic:9200
  ports:
    - 5601:5601
  depends_on:
    - elastic
LogStash is interesting: apart from the fact that we are using our custom image, and the usual variable for the ElasticSearch endpoint, it also needs a configuration file that specifies how it is going to receive, transform and ship logs.
logstash:
  image: logstash-eh
  environment:
    XPACK_MONITORING_ELASTICSEARCH_URL: http://elastic:9200
  volumes:
    - c:/Users/marco/Desktop/elk-test/logstash-cfg:/usr/share/logstash/pipeline/
  depends_on:
    - elastic
The easiest way of doing this is to keep the configuration file locally and pass it to the container through a shared volume mapped to the /usr/share/logstash/pipeline folder. In the example above, the file is stored in my local ../logstash-cfg folder and contains a pipeline definition that looks like the one below:
input {
  azureeventhub {
    key => "giS1etkQ..."
    username => "defaultPolicy"
    namespace => "destesthub"
    eventhub => "samplehub"
    partitions => 2
  }
}
output {
  stdout { }
  elasticsearch {
    hosts => ["http://elastic:9200"]
  }
}
As we would expect, the input is of type azureeventhub and requires a handful of fairly obvious parameters, similar to the ones we set in ASP.NET Core before.
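For reference, here is a rough mapping between the parts of a typical Event Hubs connection string and the plugin parameters; the values below are simply the placeholders already used in the config above:

# Endpoint=sb://destesthub.servicebus.windows.net/;SharedAccessKeyName=defaultPolicy;SharedAccessKey=giS1etkQ...
#
# namespace => "destesthub"     (the Event Hubs namespace)
# username  => "defaultPolicy"  (the shared access policy name)
# key       => "giS1etkQ..."    (the shared access key)
# eventhub  => "samplehub"      (the entity name we passed to the Serilog sink)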
There are two outputs:
- stdout, which logs to the console; useful if we want to make sure that LogStash is actually receiving messages;
- elasticsearch, which ships the data to the indexer.
Brilliant, time to see it running!
The full docker-compose.yml file is similar to the one below:
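(This is a sketch assembled from the three snippets above; the Compose file format version and the top-level volumes section that declares esdata are assumptions on my part.)

version: '3'

services:
  elastic:
    image: 'docker.elastic.co/elasticsearch/elasticsearch:6.2.3'
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300

  kibana:
    image: 'docker.elastic.co/kibana/kibana:6.2.3'
    environment:
      ELASTICSEARCH_URL: http://elastic:9200
    ports:
      - 5601:5601
    depends_on:
      - elastic

  logstash:
    image: logstash-eh
    environment:
      XPACK_MONITORING_ELASTICSEARCH_URL: http://elastic:9200
    volumes:
      - c:/Users/marco/Desktop/elk-test/logstash-cfg:/usr/share/logstash/pipeline/
    depends_on:
      - elastic

volumes:
  esdata: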
At this point, starting up the cluster is just a matter of a single command:
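(assuming the file is saved as docker-compose.yml in the current folder)

docker-compose up

Add the -d flag if you prefer to run the containers in the background.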
NOTE: as mentioned a few times already, the ELK stack is quite heavyweight and hungry for memory. If you run it with the default 2GiB of RAM that Docker assigns to its virtual machine, it will probably crash. My suggestion is to give it at least 5GiB of RAM.
It takes a while for the whole system to warm up but, once it's there, hit a few URLs of your application and you'll slowly see data starting to appear in Kibana. Play with it a bit and I'm sure you'll be able to create a dashboard way better than the one below:
Conclusions and going to production
In this article we've presented a solution to monitor an ASP.NET Core application using the ElasticSearch-LogStash-Kibana stack. The end result is undoubtedly interesting, although there are still a few limitations to overcome and steps to take before it is production ready:
- the data reaches ELK through Azure Event Hubs. This is definitely a scalable solution, although the input plugin we've used has some major limitations. The biggest one is that it doesn't use the EventProcessorHost, which means there's no support for lease management and offset storage. This is a major issue: it prevents you from running multiple concurrent LogStash instances and might cause messages to be lost should the container restart. I'm probably going to spend some time and write a new plugin soon. :)
- ElasticSearch is memory hungry. It has some tough system requirements, and it's probably advisable to deploy ELK as a stand-alone cluster, perhaps shared by multiple applications. I also recommend carefully reading the documentation before going to production.
- Data will keep building up over time, so we need a system to archive old data on a regular basis. Curator is probably the most widely used tool to achieve this, although it's out of scope for this article.
I'm going to address these issues in one of my next articles. In the meantime, you should have all the tools and info you need to start experimenting with ASP.NET Core and ELK.