Build Centralized Logging using Grafana observability stack for .NET Web API

Chaitanya (Chey) Penmetsa
Published in CodeNx
May 10, 2024 · 6 min read

In this blog, let's see how to set up centralized logging for .NET Web APIs using the Grafana observability stack, which includes Grafana, Loki, and Promtail. We will dig into what each of these components does and then build the setup shown in the diagram below.

Architecture diagram (author-created image)

What is Grafana observability stack?

Grafana, Loki, and Promtail are three components of the Grafana observability stack, each serving a distinct purpose in the monitoring and logging pipeline:

  • Grafana — Grafana is an open-source analytics and visualization platform designed for monitoring and observability. It provides a powerful interface for creating, exploring, and sharing dashboards, graphs, and alerts from various data sources, including Prometheus, Graphite, Elasticsearch, and Loki, making it a versatile tool for monitoring and troubleshooting metrics, logs, and other data collected across your systems.
  • Loki — Loki is a horizontally scalable, highly available log aggregation system inspired by Prometheus. It is designed specifically for collecting, querying, and analyzing logs in a cloud-native environment. Loki takes a unique approach to log aggregation: rather than indexing the full log content, it indexes only the labels attached to each log stream, similar to Prometheus metric labels. This label-based indexing enables efficient queries and reduces storage costs. With Loki, users can easily search, filter, and visualize logs using Grafana. It integrates seamlessly with Grafana, allowing users to create dashboards that combine metrics and logs for comprehensive observability.
  • Promtail — Promtail is the agent responsible for collecting logs and sending them to Loki for storage and indexing. It tails log files or receives logs over syslog and forwards them to Loki. Promtail is designed to be lightweight, efficient, and easy to deploy alongside applications in a Kubernetes/Docker environment. It supports various log formats and provides powerful configuration options for filtering and enriching log data. Promtail can automatically extract labels from log files or add custom labels based on log content or metadata, which enables flexible querying and analysis in Loki.

In summary, Grafana is a visualization platform for monitoring and observability, Loki is a log aggregation system optimized for cloud-native environments, and Promtail is the agent responsible for collecting logs and forwarding them to Loki. Together, they form a powerful observability stack that enables users to monitor, analyze, and troubleshoot applications and infrastructure effectively.

Set up the Grafana observability stack in Docker

In this section we will set up all three components as Docker containers using the Docker Compose file below.

version: '3'

services:
  loki:
    container_name: loki
    image: grafana/loki:latest
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/local-config.yaml
    # volumes:
    #   - ./loki-config/:/etc/loki/
    healthcheck:
      test: wget --no-verbose --tries=1 --spider http://localhost:3100/ready || exit 1
      interval: 3s
      timeout: 3s
      retries: 10
      start_period: 10s

  promtail:
    container_name: promtail
    image: grafana/promtail:latest
    ports:
      - "9080:9080"
    volumes:
      - ./promtail-config/config.yaml:/etc/promtail/config.yaml

  grafana:
    container_name: grafana
    image: grafana/grafana:latest
    # volumes:
    #   - ./grafana-datasources.yaml:/etc/grafana/provisioning/datasources/datasources.yaml
    ports:
      - "3000:3000"
    environment:
      - GF_PATHS_PROVISIONING=/etc/grafana/provisioning
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
    depends_on:
      - loki
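The Compose file mounts ./promtail-config/config.yaml into the Promtail container. The original post does not show this file, so the following is only a minimal sketch based on Promtail's standard defaults (tail files under /var/log and push to Loki); adjust the paths and labels to your environment:

server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /var/log/*log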

Also, we will keep the configuration at its defaults for all three components; we will dig into each component's configuration separately in future blogs. Once everything is in place, run the Docker command below and make sure all the containers are in a running state.

docker-compose up
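Once the containers are up, you can sanity-check the stack from the host. The commands below are optional and assume the ports published in the Compose file above:

docker ps                           # loki, promtail and grafana should all be listed as running
curl http://localhost:3100/ready    # Loki returns "ready" once it has finished starting up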

Set up the API and configure a Serilog sink to write to the observability stack

We’ll utilize Serilog sinks to direct log output to Loki. It’s worth noting that with Serilog, we have the flexibility to route logs either through Promtail or directly to Loki by specifying the respective URL. When Promtail is operational and the application itself runs inside Docker containers, Promtail can be configured to ingest the container logs and forward them to Loki. However, in our local setup we configure the application to send logs directly to Loki; if preferred, you can route them through Promtail instead.

Set up a sample Web API and install the Serilog packages using the commands below:

dotnet add package Serilog.AspNetCore
dotnet add package Serilog.Sinks.Grafana.Loki
dotnet add package Serilog.Settings.Configuration

Now add the settings below to your appsettings.json file. Note that the API itself runs on the host, outside the Docker network, so it reaches Loki through the published port at http://localhost:3100 rather than through the loki container hostname.

{
  "AllowedHosts": "*",
  "Serilog": {
    "Using": [
      "Serilog.Sinks.Grafana.Loki",
      "Serilog.Sinks.Console"
    ],
    "MinimumLevel": {
      "Default": "Debug",
      "Override": {
        "Microsoft": "Warning",
        "System": "Warning"
      }
    },
    "WriteTo": [
      {
        "Name": "Console"
      },
      {
        "Name": "GrafanaLoki",
        "Args": {
          "uri": "http://localhost:3100"
        }
      }
    ]
  }
}

Configure your Program.cs to use Serilog as shown below:

using Serilog;
using Serilog.Formatting.Compact;
using Serilog.Sinks.Grafana.Loki;

//var configuration = new ConfigurationBuilder()
//    .SetBasePath(Directory.GetCurrentDirectory())
//    .AddJsonFile("appsettings.json")
//    .AddJsonFile("appsettings.Development.json")
//    .Build();

// Create a bootstrap logger with minimal settings so startup messages are captured
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Override("Microsoft", Serilog.Events.LogEventLevel.Warning)
    .Enrich.FromLogContext()
    .WriteTo.Console()
    .CreateLogger();

var builder = WebApplication.CreateBuilder(args);

// Register Serilog as the logging provider and override a few settings in code
builder.Services.AddSerilog(options =>
{
    options.Enrich.WithProperty("Application", "ProductAPI")
        .Enrich.WithProperty("Environment", "Dev")
        .WriteTo.Console(new RenderedCompactJsonFormatter())
        // The API runs on the host, so it reaches Loki through the published port
        .WriteTo.GrafanaLoki("http://localhost:3100");
});

// Add services to the container.

builder.Services.AddControllers();
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

// Log a summary event for every HTTP request (placed early in the pipeline)
app.UseSerilogRequestLogging();

app.UseAuthorization();

app.MapControllers();

app.Run();
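As an alternative to overriding settings in code, the Serilog section of appsettings.json can drive the entire logger via the Serilog.Settings.Configuration package we installed earlier. A minimal sketch of that variant, replacing the AddSerilog call above, could look like this:

using Serilog;

var builder = WebApplication.CreateBuilder(args);

// Build the logger entirely from the "Serilog" section of appsettings.json
builder.Services.AddSerilog(options =>
    options.ReadFrom.Configuration(builder.Configuration));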

Now run the application locally and hit an endpoint so it generates log entries; you can then verify the logs in Grafana after configuring Loki as a data source.
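If your sample API does not yet have an endpoint, a small, hypothetical controller like the one below (the name and route are purely illustrative) is enough to produce log entries you can look for in Grafana:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

[ApiController]
[Route("api/[controller]")]
public class ProductsController : ControllerBase
{
    private readonly ILogger<ProductsController> _logger;

    public ProductsController(ILogger<ProductsController> logger) => _logger = logger;

    // GET api/products - writes an information-level event on every call
    [HttpGet]
    public IActionResult Get()
    {
        _logger.LogInformation("Returning product list at {RequestedAt}", DateTime.UtcNow);
        return Ok(new[] { "Product A", "Product B" });
    }
}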

Configure Grafana and Loki

To check the logs, first browse to the Grafana UI at http://localhost:3000. With anonymous access enabled in the Compose file you are taken straight to the home page; otherwise Grafana prompts for a login, where you use “admin” as both the username and password and are then asked to set a new password. Once in, follow the steps below to add the data source and view the logs:

First add Loki as a data source: navigate to Data sources (under Connections or Configuration, depending on your Grafana version), add Loki, and set the URL to http://loki:3100. Since Grafana runs in the same Docker network as Loki, the loki hostname resolves there.
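Alternatively, the Compose file contains a commented-out volume mount for grafana-datasources.yaml, which lets Grafana provision the data source automatically instead of clicking through the UI. The original post does not show that file; a sketch in Grafana's standard provisioning format would look roughly like this:

apiVersion: 1

datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    isDefault: true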

Then open Explore, select the Loki data source, and start querying; log entries from the API should appear within a few seconds of being written.
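The exact query depends on which labels your streams carry. As an illustrative example only, if the sink is configured to promote the enriched Application property to a Loki label, a LogQL query like the following would narrow the results to this API and filter for errors:

{Application="ProductAPI"} |= "error"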

Even though we have written directly to Loki, you can change the setup so that Promtail ships the logs instead. We did not delve into building dashboards in Grafana, which we will cover in a future blog.

Source code for this blog can be found below:

🙏Thanks for taking the time to read the article. If you found it helpful and would like to show support, please consider:

  1. 👏👏👏👏👏👏Clap for the story and bookmark for future reference
  2. Follow me, Chaitanya (Chey) Penmetsa, for more content
  3. Stay connected on LinkedIn.

Wishing you a happy learning journey 📈, and I look forward to sharing new articles with you soon.
