5 Rules for .NET Services Optimization

Ori Hers · CodeX · Nov 19, 2022

It doesn’t matter which framework you work with or what your service is for: as engineers, we are always looking for opportunities to optimize. It can be refactoring a method to make it shorter, changing the architecture to save memory, adopting new features that improve CPU utilization, or something else entirely.

In this article, I will present five rules that will help us take our application or service to the next level. The examples are .NET-oriented, but the rules are relevant to other frameworks as well.

Always Monitor.

Just as we monitor our services in production, we should also monitor performance during the development cycle. Nowadays IDEs ship with efficient built-in tools that show memory and CPU consumption while the application runs. We should always keep an eye on these, looking for anomalies and spikes. It will help us find areas that can be optimized and quantify any regression or improvement we make.

Visual Studio 2022 diagnostic tool

A tool that I find very useful for this task is JetBrains DotMemory, which runs EXE files and monitors their memory usage during their lifecycle. DotMemory can also take snapshots at specific times and give recommendations for better memory utilization at the selected point (the string duplications feature is GREAT).

JetBrains DotMemory tool
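
Besides the IDE tools and profilers, a quick and rough way to quantify a suspicious code path during development is to measure it in code. Here is a minimal sketch (DoWork is just a hypothetical placeholder for the code under inspection), not a replacement for a real profiler:

// Rough in-code measurement of time and allocations for a single code path
using System;
using System.Diagnostics;

long allocatedBefore = GC.GetAllocatedBytesForCurrentThread();
var stopwatch = Stopwatch.StartNew();

DoWork();

stopwatch.Stop();
long allocatedAfter = GC.GetAllocatedBytesForCurrentThread();

Console.WriteLine($"Elapsed:   {stopwatch.ElapsedMilliseconds} ms");
Console.WriteLine($"Allocated: {allocatedAfter - allocatedBefore} bytes on this thread");

// Hypothetical placeholder for the code path being measured
static void DoWork()
{
    var items = new System.Collections.Generic.List<string>();
    for (int i = 0; i < 1_000; i++)
    {
        items.Add("item " + i);
    }
}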

Use Latest.

We live in an era in which developer tools are one of the hottest fields in the industry, with thousands of talented engineers working hard to make our lives easier and our apps faster. Why not benefit from it?

Update the framework to the latest version and enjoy performance improvements almost for free (for some versions there might be small breaking changes). Using the latest framework version is also recommended security-wise, as usually it includes vulnerability mitigations.

For example, in the blog post linked below you can see some .NET 6 benchmarks that show a significant improvement over the previous version: Performance improvements in ASP.NET Core 6. Something that really caught my eye in the article is the memory usage of secure WebSocket connections (WSS) across framework versions.

That’s almost 4x memory reduction from net5.0 to net6.0!

Understand GC.

A garbage collector (GC) can be our best friend or our worst enemy, depending on our perspective.

In the common language runtime (CLR), the GC serves as an automatic memory manager: it handles the allocation and release of memory for an application. For developers working with managed code, this means there is no need to write code that performs memory-management tasks. Automatic memory management eliminates common problems such as memory leaks and access to memory that has already been freed.

In .NET, the application’s managed heap is divided into separate regions (see the sketch after this list):

  • A small object heap with three generations (Gen0, Gen1, Gen2), which holds small objects.
  • A large object heap (LOH), which holds objects bigger than 85,000 bytes.
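
A minimal sketch of how to observe this from code with GC.GetGeneration (the 100,000-byte array size is just an arbitrary value above the LOH threshold):

// Observing generations and the LOH from code
using System;

var small = new byte[1_000];    // small object, starts in Gen0
var large = new byte[100_000];  // above the 85,000-byte threshold, so it goes to the LOH

Console.WriteLine(GC.GetGeneration(small)); // 0
Console.WriteLine(GC.GetGeneration(large)); // 2 (LOH objects are reported as the oldest generation)

GC.Collect();                               // force a collection, for illustration only
Console.WriteLine(GC.GetGeneration(small)); // typically 1 now: the object survived a collection and was promoted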

The .NET GC algorithm is based on several considerations:

  • It’s faster to compact the memory for a portion of the managed heap than for the entire managed heap.
  • Newer objects have shorter lifetimes and older objects have longer lifetimes.
  • Newer objects tend to be related to each other and accessed by the application around the same time.

So far, GC sounds great! But with great power comes great responsibility: GC work consumes CPU, so when the GC runs it can block or slow down your service.

To improve the GC behavior of your service, it is recommended to read and learn about its potential improvements and features (like the server GC mode). I recommend this great documentation by Microsoft: https://docs.microsoft.com/en-us/dotnet/standard/garbage-collection/. Make sure you understand how it works; that will help you the most.
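
As a small example, server GC is typically enabled outside the code (for instance via the ServerGarbageCollection MSBuild property or runtimeconfig.json), and the effective settings can be inspected at runtime through GCSettings. Whether it actually helps depends on the workload and the machine:

// Inspecting the effective GC configuration at runtime.
// Server GC itself is enabled outside the code, e.g. with
// <ServerGarbageCollection>true</ServerGarbageCollection> in the project file.
using System;
using System.Runtime;

Console.WriteLine($"Server GC:    {GCSettings.IsServerGC}");
Console.WriteLine($"Latency mode: {GCSettings.LatencyMode}");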

Avoid Reflection.

Reflection provides the ability to obtain information about types and to access types and object members at runtime: for example, calling a constructor, setting a property value, or adding an event handler.

// Reflection example
using System;
using System.Reflection;

int i = 42;
Type type = i.GetType();
FieldInfo[] fields = type.GetFields();

// For Int32 this prints its public fields: MaxValue and MinValue
foreach (var field in fields)
{
    Console.WriteLine(field.Name);
}

The use of reflection to get information about types and members is not restricted; we can always use reflection to perform the following tasks:

  • Enumerate types and members and examine their metadata.
  • Enumerate and inspect assemblies and modules.

Although reflection is a powerful tool that can help in writing complex code, it has significant downsides (a small comparison sketch follows the list):

  • It is common to say that reflection is roughly 1,000 times slower than getting the same data through an accessor; the real factor depends on the scenario and on whether the reflection objects are cached.
  • Reflection may introduce security risks. Some were addressed in newer .NET versions. Remember the “Use Latest” section? 😊
  • Reflection can be a “code smell” for a bad design. C# is an object-oriented (OO) language for a reason; when developers use reflection, they bypass the OO model, which can indicate that something was done wrong.
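
A minimal sketch of the difference the first bullet describes: reading a property directly versus through reflection. The Person class and the iteration count are made up for illustration, and the exact numbers will vary by machine and runtime:

// Direct property access vs. reflection-based access (illustrative only)
using System;
using System.Diagnostics;
using System.Reflection;

var person = new Person { Name = "Ori" };
const int iterations = 1_000_000;
long total = 0;

var sw = Stopwatch.StartNew();
for (int i = 0; i < iterations; i++)
{
    total += person.Name.Length; // direct access through the property accessor
}
sw.Stop();
Console.WriteLine($"Direct:     {sw.ElapsedMilliseconds} ms");

PropertyInfo nameProperty = typeof(Person).GetProperty(nameof(Person.Name))!;
sw.Restart();
for (int i = 0; i < iterations; i++)
{
    total += ((string)nameProperty.GetValue(person)!).Length; // reflection
}
sw.Stop();
Console.WriteLine($"Reflection: {sw.ElapsedMilliseconds} ms");

class Person
{
    public string Name { get; set; } = string.Empty;
}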

Holy Strings.

Strings are great, useful, and beautiful. They are extremely common in any code base, but they can be tricky as well. In most programming languages, including C#, a string is an object: a sequence of char values. This is why the string’s data is stored on the heap while its reference lives on the stack (for local variables).

It is worth mentioning that strings are immutable. This implies that every string concatenation allocates a new string, which means longer running time, more memory allocations, and extra GC work.

If possible, try to avoid this syntax, especially when concatenating repeatedly:

var str1 = "Hello";
var str2 = "World";
var newStr = str1 + ' ' + str2;

And go with the following:

var str1 = "Hello";
var str2 = "World";
var strBuilder = new StringBuilder(str1);
strBuilder.Append(' ');
strBuilder.Append(str2);
var newStr = strBuilder.ToString();
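
The difference matters most when a string is built up in a loop. A small sketch (the 1,000-iteration count is arbitrary):

// Building a string in a loop: concatenation vs. StringBuilder
using System;
using System.Text;

// Concatenation allocates a new, longer string on every iteration
var concatenated = string.Empty;
for (int i = 0; i < 1_000; i++)
{
    concatenated += i + ",";
}

// StringBuilder appends into an internal buffer and materializes the full string once
var builder = new StringBuilder();
for (int i = 0; i < 1_000; i++)
{
    builder.Append(i);
    builder.Append(',');
}
var built = builder.ToString();

Console.WriteLine(concatenated == built); // True: same result, far fewer allocations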

As for string interpolation, prefer the $ special character, which identifies a string literal as an interpolated string, over concatenation. String interpolation is optimized in the newer .NET versions, and, in my opinion, it is also more readable.
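
For example, the “Hello World” concatenation above becomes:

// Interpolated string: the $ prefix marks the literal as interpolated
var str1 = "Hello";
var str2 = "World";
var newStr = $"{str1} {str2}";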

De-duplicating our strings can also save memory. Using const on commonly used strings makes the compiler keep only one copy of the value and reference it in every use. In memory-heavy systems, it can also be useful to atomize (intern) strings, caching them while going over big data structures with repetitive data. As mentioned above, DotMemory has a string-duplicates feature that can help deal with strings in the service.
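
Here is a minimal sketch of manual interning with string.Intern. Keep in mind that interned strings stay alive for the lifetime of the process, so this is a trade-off that only pays off for genuinely repetitive data:

// De-duplicating repetitive strings with the intern pool
using System;

// Imagine these values were parsed from a large file, so each is a separate string object
string a = new string("pending".ToCharArray());
string b = new string("pending".ToCharArray());

Console.WriteLine(ReferenceEquals(a, b));                               // False: two separate heap objects
Console.WriteLine(ReferenceEquals(string.Intern(a), string.Intern(b))); // True: both map to a single pooled instance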

There is always something to improve in our service performance. Keep it in mind and look for the next enhancement that will move the needle. Like us, our services should always be better than they were yesterday.

Thank you for reading.
