A Comprehensive Guide to Configuring Logging with Serilog and Azure App Insights in .NET

Shazni Shiraz
Ascentic Technology
10 min read · Apr 4, 2024


Imagine it’s Friday, and your team is gearing up to deploy a new feature. After working tirelessly to perfect the code and complete QA testing, you’re ready to roll out the update. But just as you hit the deploy button, disaster strikes: an unexpected issue crops up. Now, your colleague is sifting through logs, trying to unravel the problem.

An unexpected error with no observability in place

In moments like these, having reliable logging becomes crucial. Enter Serilog, a powerful logging library, and Azure Application Insights, a robust monitoring service.

In this guide, we’ll simplify the deployment process and show you how to set up effective logging with Serilog and Azure Application Insights.

Why Serilog?

Before we delve deeper into the configuration process, it’s important to address a fundamental question: why choose Serilog for logging in your application? Let’s explore some of the key features that make Serilog a popular choice among developers:

Fully Structured Logging:

  • Serilog enables you to log data in a structured format, allowing for easy querying, filtering, and analysis of log events.

For example,

_logger.LogInformation("Adding product: {@product}", product);

leads Azure App Insights to show the log entry as

How a log looks when logged with structured object in Azure App Insights traces

(The benefits of structured logging will be discussed in detail later on 😉)

Note: The above example demonstrates logging an object; it’s generally advisable to log information that helps us troubleshoot issues, rather than using logs as a data store.

Sinks:

  • Serilog supports a wide range of sinks (Provided Sinks · serilog/serilog Wiki · GitHub), which are destinations where log events can be written. Whether you need to log to a file, console, database, or external service, Serilog offers a variety of sink options.

Serilog can push to many services via Sinks

An error in a Slack channel when the API fails, so that developers can take immediate action.

Enrichers:

  • Enrichers in Serilog allow you to enrich log events with additional contextual information. This can include details such as timestamps, machine information, user properties, or custom metadata specific to your application.
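As an illustration, the ThreadIdEnricher referenced in this article’s configuration can be implemented as a custom ILogEventEnricher. This is a minimal sketch following Serilog’s enricher interface; it attaches the current managed thread id to every log event so it can be used in output templates as {ThreadId}:

```csharp
using System.Threading;
using Serilog.Core;
using Serilog.Events;

// Enriches each log event with the id of the thread that produced it.
class ThreadIdEnricher : ILogEventEnricher
{
    public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory)
    {
        logEvent.AddPropertyIfAbsent(
            propertyFactory.CreateProperty("ThreadId", Thread.CurrentThread.ManagedThreadId));
    }
}
```

It is then registered on the logger configuration via .Enrich.With(new ThreadIdEnricher()).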

Dynamic Levels:

  • Let’s say you’re running a weather app, and suddenly, users start reporting errors when fetching forecasts. With dynamic logging in Serilog, you can quickly ramp up the logging level to capture detailed information about API requests and responses. This helps you pinpoint the issue, fix it, and then dial back the logging level once everything is back to normal. Dynamic logging saves the day by giving you just the right amount of detail precisely when you need it.
catch (Exception ex)
{
    // Log the exception with an error level and a message indicating the operation that failed.
    _logger.LogError(ex, "An error occurred while performing the operation.");

    // Set the log level to Debug using the provided service.
    _loggingLevelSwitchService.MinimumLevel = LogEventLevel.Debug;
}
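The _loggingLevelSwitchService used above is not defined in the article; its exact shape is an assumption, but the idea is to wrap Serilog’s LoggingLevelSwitch in a singleton so any component can change the minimum level at runtime. A minimal sketch:

```csharp
using Serilog.Core;
using Serilog.Events;

// Hypothetical service interface; the sample project's actual definition may differ.
public interface ILoggingLevelSwitchService
{
    LoggingLevelSwitch LevelSwitch { get; }
    LogEventLevel MinimumLevel { get; set; }
}

public class LoggingLevelSwitchService : ILoggingLevelSwitchService
{
    // Serilog's LoggingLevelSwitch allows the minimum level to be changed at runtime.
    public LoggingLevelSwitch LevelSwitch { get; } = new(LogEventLevel.Information);

    public LogEventLevel MinimumLevel
    {
        get => LevelSwitch.MinimumLevel;
        set => LevelSwitch.MinimumLevel = value;
    }
}
```

Registered as a singleton, e.g. builder.Services.AddSingleton&lt;ILoggingLevelSwitchService, LoggingLevelSwitchService&gt;(), so both the Serilog configuration and request handlers resolve the same switch.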

Filters:

  • For example, as a developer, I would appreciate receiving a notification from Slack in the event of a failure in our API. However, I prefer not to receive messages concerning authorization failures, as these typically stem from issues with the user’s token rather than problems with the API itself. In such scenarios, filters prove invaluable as they allow me to exclude unauthorized exceptions and any other errors that are irrelevant.
.WriteTo.Logger(lc => lc
    .Filter.ByExcluding(logEvent =>
        logEvent.Exception?.GetType() == typeof(UnauthorizedAccessException))
    .WriteTo.Slack(new SlackSinkOptions
    {
        WebHookUrl = "https://hooks.slack.com/services/xxxxxxxxxx/xxxxxxxx/xxxxxxxx",
        MinimumLogEventLevel = LogEventLevel.Warning
    }))

Now that we are aware of the benefits Serilog offers, it’s time to get to work and set up logging.

Install Packages

  1. NuGet Gallery | Serilog.AspNetCore 8.0.1
  2. NuGet Gallery | Serilog.Sinks.ApplicationInsights 4.0.1-dev-00043
  3. NuGet Gallery | Serilog.Sinks.Slack 2.2.2
  4. NuGet Gallery | Microsoft.ApplicationInsights.AspNetCore 2.22.0 (Microsoft’s long-term plan is to move away from this API in favor of OpenTelemetry. A few features are still unavailable there, but if those trade-offs are acceptable, it is highly recommended to follow Enable Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications — Azure Monitor | Microsoft Learn.)

Configuring Serilog & App Insights

When it comes to setting up Serilog in your project, you have two main approaches: configuring it via appsettings.json or through code. Each approach has its own set of advantages and considerations, and the choice between them depends on your project’s specific requirements and preferences.

In this article we will focus on configuring via code:

public static class ConfigureSerilogLogging
{
    public static IHostBuilder AddSerilogLogging(this IHostBuilder builder)
    {
        builder.UseSerilog((hostingContext, services, loggerConfiguration) =>
        {
            var levelSwitch = services.GetRequiredService<ILoggingLevelSwitchService>().LevelSwitch;

            loggerConfiguration
                .MinimumLevel.ControlledBy(levelSwitch)
                .Enrich.With(new ThreadIdEnricher())
                .Enrich.WithProperty("Version", "1.0.0")
                .WriteTo.Console(
                    outputTemplate: "{Timestamp:HH:mm} [{Level}] ({ThreadId}) {Message}{NewLine}{Exception}")
                .WriteTo.Logger(lc => lc
                    .Filter.ByExcluding(logEvent =>
                        logEvent.Exception?.GetType() == typeof(UnauthorizedAccessException))
                    .WriteTo.Slack(new SlackSinkOptions
                    {
                        WebHookUrl = "https://hooks.slack.com/services/XXXXXX/XXXXXX/XXXXXXX",
                        MinimumLogEventLevel = LogEventLevel.Warning
                    }))
                .WriteTo.Debug();

            loggerConfiguration.WriteTo.ApplicationInsights(
                services.GetRequiredService<TelemetryConfiguration>(), TelemetryConverter.Traces);
        });

        return builder;
    }
}

Program.cs

builder.Services.AddApplicationInsightsTelemetry();
builder.Host.AddSerilogLogging();

Now it’s time to identify the critical points in your application where logging is necessary. This may include, for example, global exception handlers, application startup and shutdown, and calls to external services.

Once you have successfully completed the above steps, it’s time to view your logs.

Playground Project — Ascentic/dotnet-serilog-with-azure-app-insight · GitHub

A sample project has been published under Ascentic/dotnet-serilog-with-azure-app-insight · GitHub where you’ll get to play around with the project and get a hands-on experience.

To get started:

  1. Clone the project.
  2. Set up an Application Insights resource on Azure.
  3. Navigate to appsettings.Development.json and replace APPINSIGHTS_INSTRUMENTATIONKEY and APPLICATIONINSIGHTS_CONNECTION_STRING with the corresponding values obtained from...
Instrumentation Key & Connection String to be copied
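For reference, the relevant keys in appsettings.Development.json might look like the following. The key names are taken from the article; the exact file shape in the sample project may differ, and the placeholder values here are illustrative:

```json
{
  "APPINSIGHTS_INSTRUMENTATIONKEY": "<your-instrumentation-key>",
  "APPLICATIONINSIGHTS_CONNECTION_STRING": "InstrumentationKey=<your-key>;IngestionEndpoint=<your-ingestion-endpoint>"
}
```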

Exploring Azure App Insights

Azure Application Insights offers several features to leverage the data we ingest. Below are the features we rely on most in our day-to-day operations:

Application Map

Shows a breakdown of all your services and how they communicate, giving a high-level topology view and highlighting performance and error rates across those communications.

Live Metrics

Live monitoring of how your API is performing: see incoming requests, logs, and more in real time.

Performance

Visualize a performance breakdown of your APIs.

Logs ⚡

Finally, the most crucial feature is the Logs section, where we can visualize all the logs pushed by our application. Within this section, there are several tables, based on the Application Insights telemetry data model — Azure Monitor | Microsoft Learn. The App Insights SDK pushes data to these tables, and Serilog channels all logs into the traces table based on our configuration.

Before proceeding further, let’s discuss KQL (Kusto Query Language).

A read-only request language used to write queries for Azure Data Explorer (ADX), Azure Monitor Log Analytics, Azure Sentinel, and more.

KQL is the language used for querying logs in Azure Application Insights. Microsoft has made it easy to learn KQL via Write your first query with Kusto Query Language — Training | Microsoft Learn, or if you are coming from a SQL background you can refer to SQL to Kusto query translation — Azure Data Explorer & Real-Time Analytics | Microsoft Learn.

Time to write our first KQL query

The main tables we are going to be working with are:

  1. requests — contains all the requests our APIs have processed, with many columns such as origin, duration, resourceId, and more. Ideal if you want to generate output based on the API requests to your application for troubleshooting.
  2. traces — contains all the logs from our API, with severity, logged objects, etc.

Let’s focus on the requests table first with a simple select:

requests
Logs highlighting the requests that the application has been processing

Let’s try constructing a query to identify the API that takes the longest to complete requests. Let’s outline the steps before we write the query:

  1. Select only the columns needed for the requirement: we need the columns name, success, and duration.
  2. Filter only successful requests: we’ll include only the records where success is true.
  3. Narrow down to APIs in the Leaderboard controller: we’ll focus on requests made to APIs within the Leaderboard controller.
  4. Aggregate and sort: we’ll compute the average duration per API and order the results from slowest to fastest.

With these steps in mind, the query construction plan would be as follows:

requests 
| project name, success, duration
| where success == 'True' and name has "Leaderboard/"
| summarize AvgDuration = avg(duration) by name
| order by AvgDuration desc

Let’s try to understand this query

  1. requests: This specifies the table from which we are retrieving data. In this case, it represents request telemetry data.
  2. | project name, success, duration: The project operator is used to select specific columns from the dataset. Here, we are selecting the name, success, and duration columns.
  3. | where success == ‘True’ and name has “Leaderboard/”: The where clause filters the data based on certain conditions. Here, it filters for requests where the success column is 'True' (indicating successful requests) and the name column contains the substring "Leaderboard/".
  4. | summarize AvgDuration = avg(duration) by name: The summarize operator is used to compute aggregations over the data. Here, we are calculating the average duration (AvgDuration) grouped by name.
  5. | order by AvgDuration desc: Finally, the order by clause sorts the results based on the AvgDuration column in descending order, showing the names with the longest average duration first.

Let’s run the query and get the output

As we can see, our Submit score API takes more time compared to the other APIs.

Would there be a more appealing way to view this data? That’s where render comes into play: all we have to do is extend the query and select the type of visualization we would like for the output.
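Extending the earlier query with the render operator (barchart is one of several supported visualization types):

```kusto
requests
| project name, success, duration
| where success == 'True' and name has "Leaderboard/"
| summarize AvgDuration = avg(duration) by name
| order by AvgDuration desc
| render barchart
```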

Visualizing query output in a bar chart format

Querying logs with structured objects

One of the reasons stated above for picking Serilog was its structured logging capability, so let’s see how that comes into play.

As previously shown, we’re logging an object, and this object will be stored under CustomDimensions in the log entry. This setup allows us to filter logs based on the object’s parameters, similar to SQL. Let’s examine an example for clarity.

NOTE: We should refrain from logging business-related entities and only focus on logging information that aids us in troubleshooting purposes. This example is provided solely for illustrative purposes.

public IActionResult SubmitScore([FromBody] ScoreSubmission submission)
{
    this.logger.LogInformation("Submitting score submission {@submission}", submission);

As seen above, we are directly passing the entity as a parameter in our log statement. Let’s see how that is reflected and how we can query and filter based on the entity’s properties:

traces
| where message has "Submitting score submission"
| project ScoreSubmission = parse_json(tostring(customDimensions.submission))
| where ScoreSubmission.PlayerName != '' and ScoreSubmission.Score > 1
| project ScoreSubmission.PlayerName, ScoreSubmission.Score

Breaking down the query,

  1. traces — This indicates that we’re querying the traces table, which contains the log entries generated by the application.
  2. | where message has "Submitting score submission" — This filters the log entries to only those that contain the message “Submitting score submission”. It ensures that only log entries related to score submissions are considered.
  3. | project ScoreSubmission = parse_json(tostring(customDimensions.submission)) — This line parses the JSON content stored in the submission field under customDimensions. It converts the JSON string into a structured object, which is then projected as ScoreSubmission.
  4. | where ScoreSubmission.PlayerName != '' and ScoreSubmission.Score > 1 — This filters the log entries based on criteria related to the score submission object. It selects entries where the PlayerName is not empty and the Score is greater than 1.
  5. | project ScoreSubmission.PlayerName, ScoreSubmission.Score — Finally, this projects the PlayerName and Score fields from the ScoreSubmission object, selecting these fields for display in the query result.

Extracting properties from unstructured logs

If you’re dealing with unstructured logs, the task of extracting data into variables for further processing can be challenging. However, Kusto Query Language (KQL) provides a solution by enabling parsing based on patterns or utilizing Regular Expressions (Regex). Let’s dive into an example:

Unstructured logs

Let’s explore how to extract the names into a separate column.

traces
| where message has "Added new score"
| project message, timestamp
| parse kind=regex flags=Ui message with * "Added new score for player" PlayerNameExtracted ' succeeded.'
Player name extracted into a separate column from unstructured logs.

As demonstrated above, we have successfully extracted the name into a separate column, a maneuver that opens the door to a multitude of possibilities. This newfound structured data can be utilized for various insights.

Overall, Kusto Query Language (KQL) provides a rich set of capabilities for querying and analyzing data, making it a powerful tool for visualization and troubleshooting. Below are a few more scenarios that proper observability helps us address:

  1. Identify Peak Traffic Times
  2. Detect Anomalies in Response Times
  3. Track API Version Performance
  4. Track API Performance Across Environments
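As an illustration of the first scenario, a hypothetical query for identifying peak traffic times could bucket requests into hourly bins and chart the counts over time:

```kusto
requests
| summarize RequestCount = count() by bin(timestamp, 1h)
| render timechart
```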

By embracing a proper logging setup, organizations can gain deeper insights into their systems, optimize performance, and ensure smooth operations.

Hope this article was helpful in giving an overview of setting up Serilog with Azure App Insights. Make sure to clone Ascentic/dotnet-serilog-with-azure-app-insight · GitHub and play around to get hands-on experience, and don’t hesitate to reach out if you have any questions or need further clarification. Your feedback and discussions are always welcome!

Also, a YouTube video is available in case you prefer watching to reading: .NET Diaries: Logging Brilliance with Serilog Azure APP Insights (youtube.com)

References

1. Serilog… why? : dotnet (reddit.com)
2. Analyze monitoring data with Kusto Query Language — Training | Microsoft Learn
3. Tutorial: Learn common Kusto Query Language operators — Azure Data Explorer & Real-Time Analytics | Microsoft Learn
4. Enable Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications — Azure Monitor | Microsoft Learn
5. Writing Log Events · serilog/serilog Wiki (github.com)
6. Structured Logging In ASP.NET Core With Serilog (milanjovanovic.tech)
7. Logging in C# — .NET | Microsoft Learn
8. c# — Benefits of Structured Logging vs basic logging — Software Engineering Stack Exchange
9. Logging in Azure with Application Insights and Serilog | HackerNoon
10. Logging structured data without it appearing in the text message · Issue #35995 · dotnet/runtime (github.com)
