A Battle-Tested Set of Tools for .NET Architecture and Engineering

Diyaz Yakubov
12 min read · Jul 26, 2022


Usually, in small companies, people wear several hats. Quite often, among the technical staff, there is a person or a group of people responsible for designing and evaluating the system architecture. Commonly, they combine architecture and development activities (to some extent), so they have to zoom out and in on a system again and again to design the right abstractions at the right level. It is easy to lose context when switching modes, especially between high-level design (services, modules, contracts, etc.) and low-level design (components, interfaces, classes, etc.). However, it is possible to be efficient on both sides if you reduce the "mental payload" of those switches. The modes are different, and they require different ways of thinking: fast/slow, deep/cherry-picking, etc. In this article, I will show my battle-tested set of tools that helps me cope with designing both high-level architecture and low-level engineering details in .NET projects.

Particularly, I want to cover the following questions:

  • How to observe the system’s landscape that is made up of N projects?
  • How to build complex checks that may evaluate some specific cases?
  • How to scale sanity checks and run them on a regular basis?
  • How to share a system design with different audiences?
  • How to examine critical areas in the code that are important from an architectural point of view?

By addressing these questions, I want to share my set of tools, though I will not dig into great detail because that would be time-consuming (and possibly boring, though in fact it isn't 😀). Instead, I am going to give some hints and ideas for further independent investigation and learning.

How to observe the system’s landscape that is made up of N projects?

Practically all developers are familiar with a dependency graph, where a person can observe the dependencies of a particular project. Well-designed solutions follow architectural patterns, principles and rules, which define how to separate layers and modules, establish communications, and so on. Analyzing dependency graphs may reveal possible architectural problems at early stages. Although there are many tools for visualizing graphs, not all of them are handy. For example, the diagrams generated by the tooling built into Visual Studio or Rider are superficial and do not give much information. Nevertheless, there is a tool that covers my needs, and its name is NDepend. Its prime advantage over other tools is that the produced graphs are dynamic: a dev can manipulate the graph by writing custom queries and filters, clustering components to simplify the diagram, visualizing dependency paths and cycles, searching and highlighting, and many other things. I reckon this part deserves a separate article, and there is an official page that shows the basics, which I highly recommend reading if you want to start using it. Moreover, a user can drill down through assemblies, namespaces, classes, and even methods and fields, which gives an incredible level of granularity; by double-clicking a diagram element, the user jumps to the corresponding lines of code. Therefore, any of us can observe the structure of a big solution in great detail and be super-efficient. The picture below illustrates how it looks in Visual Studio (note: NDepend has a standalone application, and there are extensions for Visual Studio and Azure DevOps).

I am sorry for the readability of the diagram; it was taken from an actual project, and I tried to make it "anonymous". However, I wanted to show how NDepend can cluster projects, and there are various relationship types, which are shown on the right-side panel. To get a better grasp, I recommend watching this 6-minute video.

NDepend — Dependency Graph

How to build complex checks that may evaluate some specific cases?

Another very powerful feature of NDepend is its query language, CQLinq, which allows querying the code with C# and LINQ. It doesn't have a steep learning curve; every C#/.NET developer should be able to start leveraging it easily. With it, people can encode their own specific requirements for the code by creating queries, rules, and quality gates. For example, imagine that our team practises DDD concepts, and we want to be sure that the aggregates in an application stay light, e.g. that they have a depth of fewer than 3 objects or a bounded complexity. If such a rule were violated, NDepend would show a warning message to indicate the issue, and the team would consider refactoring the aggregate to prevent it from bloating and smelling. The example below shows how to implement a simple WARN rule in CQLinq that fires if a type whose name ends with the "Agg" suffix has a cyclomatic complexity of more than 15 (note: we assume the team agreed to mark all aggregates within the application with the "Agg" suffix).

NDepend — Custom rule in CQLinq
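Since the rule above is shown as a screenshot, here is a rough sketch of the same idea in CQLinq. The metric and property names are written from memory, so treat them as assumptions and compare with NDepend's predefined rules:

```csharp
// <Name>Aggregates should stay light</Name>
warnif count > 0
from t in Application.Types
// team convention (assumed): aggregates are suffixed with "Agg"
where t.Name.EndsWith("Agg") &&
      // type-level complexity approximated as the sum over its methods
      t.Methods.Sum(m => m.CyclomaticComplexity ?? 0) > 15
select new { t, Complexity = t.Methods.Sum(m => m.CyclomaticComplexity ?? 0) }
```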

As we can see from the code example above, the syntax is simple and doesn't require much effort from devs. Consequently, devs have a solid tool that opens the possibility of building comprehensive checkers of any complexity.

Moreover, NDepend's logic is largely based on Rules and Quality Gates. The difference between them is that the former help to keep the code clean and protect it from degradation and potential problems, while the latter output one of three statuses (Pass, Warn, Fail) and can be treated as gates for shipping software to production. NDepend comes with a very good pack of predefined rules and quality gates, some of which can be turned off/on. And of course, developers can extend the list by adding their own custom rules and quality gates.
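To make the distinction concrete, here is a sketch of a quality gate, modelled on NDepend's predefined "Blocker Issues" gate; again, treat the exact syntax as an assumption and compare with the built-in gates:

```csharp
// <QualityGate Name="No blocker issues" Unit="issues" />
// Fail the gate (and ideally the build) if any blocker-severity issue exists
failif count > 0
from issue in Issues
where issue.Severity == Severity.Blocker
select new { issue, issue.Severity }
```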

How to scale sanity checks and run them on a regular basis?

Controlling code quality is an important part of any development process; it requires a significant level of seniority from its participants and, most valuable of all, time. Nevertheless, sometimes, due to constraints and time pressure, we omit this part or do it superficially. Some basic quality checks can be performed by static code analyzers and even extended with custom directives, but unfortunately, that is still not enough. What if we could combine the power of CQLinq, run its rules on a regular basis via CI/CD, and produce a report that presents the quality of the code base in a convenient graphical view? Indeed, we can: there is an assembly, "/net5.0/NDepend.Console.MultiOS.dll", that can be executed on Linux machines via the terminal. Accordingly, if we can execute it in the terminal, we can do so in CI/CD pipelines, and here is the link to a more detailed description of how to do that. The result is a report comprising HTML and JavaScript files.
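The console invocation itself is a one-liner along these lines; the paths here are illustrative, and, if I remember correctly, NDepend expects an absolute path to the project file:

```bash
# analyze the solution as described in the .ndproj NDepend project file;
# by default the HTML/JS report lands in an NDependOut folder next to it
dotnet ./net5.0/NDepend.Console.MultiOS.dll /build/MyProduct.ndproj
```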

NDepend — generated report

First of all, the report is interactive: a specialist can observe many different diagrams, charts, metrics, etc. By navigating, clicking links, and hovering over certain elements, the specialist gets concise information about the state of the system. The picture below illustrates the Abstractness vs. Instability diagram, which shows which assemblies could be potentially difficult to maintain. Dots represent assemblies; by hovering over them, it is possible to see more details.

NDepend — Abstractness vs. Instability diagram

In addition, the report provides a technical debt estimation in human-hours, with a level of severity. This might be useful for risk analysis, or simply for planning debt-compensation work. However, it is just an estimation made by an algorithm, so it will not necessarily reflect reality. Instead, it is best treated as a complementary indicator of the complexity of a particular issue.

Unfortunately, the world is not a perfect place, and NDepend, like any other software product, has its pros and cons. I covered a lot of pros in the first part of the article, so let's continue with the cons. The first drawback is that the experience is much richer on a Windows machine. For example, there is an extension for Visual Studio that provides a simple GUI, and it is also possible to run NDepend as a standalone application. The same is not true for macOS or Linux, where there is only the NDepend.Console.MultiOS.dll file to be executed in a console. I was able to mitigate this by installing Parallels Desktop on macOS, where I can use Windows apps alongside native applications, but it is an extra expenditure. The second con is that it costs money, and it is not cheap; I suspect they could achieve more popularity by lowering the price. However, NDepend offers a 14-day trial for hands-on experience, though that might not be enough to get a grasp of the product (they could extend the trial to 30 days).

How to share a system design with different audiences?

As usual, various groups pursue their own interests, and all of them have different backgrounds and knowledge. Based on that, we can target information at a specific group of people and convey our ideas and design more concisely. There are different methods for doing this, but I found one that is very simple: the C4 model for visualizing software architecture by Simon Brown. C4 defines four levels of abstraction, namely Context, Container, Component and Code, where each type carries its own semantic payload. The picture below depicts the hierarchy of these diagrams.

C4 — the 4 levels of the C4 model
  • The first diagram is the Context Diagram, which shows the big picture: how the system fits into the existing world, what it does, and its relationships with external systems and users. This type of diagram can be shown to a broad audience, including non-technical people, and it can be shared with external participants.
  • In the same way, Container Diagrams depict the system's internals, specifically how it is organized into containers, where a container is a self-deployable unit (don't confuse it with a Docker container). For example, if the system consists of a database, a backend, and client applications, the diagram will illustrate three containers and their relationships within the system border; moreover, each container will have a brief description and its tech stack. This type of diagram can be shown to an audience with a common understanding of how information systems work, though it also contains some technical details, and it can be shared with external stakeholders as well.
  • In contrast to the aforementioned diagrams, the Component Diagram shows the components and their interactions within a particular container, where each component describes its purpose and technical details. The audience is technical people, and it reveals a lot of internals, hence it is not recommended to share it with external people. This diagram is optional.
  • And the last diagram, the Code Diagram, shows how a particular part of the system is built in terms of interfaces, classes and derivations. This diagram is optional too; furthermore, C4 suggests not drawing it, because abstractions at the code level change at a very high pace, and it is difficult to keep the diagram up to date. Besides, code-level diagrams can be generated with the help of modern IDEs.

The notation of the C4 model is intentionally not described here because it is pretty simple, and the rules can be obtained from the documentation. Last but not least, I want to propose 3 tools that I use for drawing C4 diagrams:

  • draw.io — a visual diagramming tool. Not the most efficient one, but useful when a diagram gets complex and you need to adjust some parts by hand or, perhaps, add extra notation.
  • plantUML — a text-based diagramming tool. This tool hits the sweet spot because it is simple enough and powerful at the same time. With the help of an extension from GitHub, the diagrams look very good (see the sketch after this list). The only shortcoming is that when a diagram gets bigger, it might be difficult to tell plantUML how to lay out particular areas to keep it readable. Still, I found that in 90% of cases it works well for Context and Container diagrams.
  • Structurizr — a text-based modelling tool, the most powerful and expensive 💰. It grants one free workspace, which can be enough for your own 'pet' project or just for trying it out. Nevertheless, one workspace is not suitable for a real project.
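For illustration, here is a minimal Context-diagram sketch in plantUML, assuming the C4-PlantUML extension from GitHub; the system names are, of course, made up:

```
@startuml
!include https://raw.githubusercontent.com/plantuml-stdlib/C4-PlantUML/master/C4_Context.puml

Person(customer, "Customer", "Browses the catalogue and places orders")
System(shop, "Web Shop", "Lets customers order products")
System_Ext(payments, "Payment Provider", "Processes card payments")

Rel(customer, shop, "Uses", "HTTPS")
Rel(shop, payments, "Requests payments from", "HTTPS/JSON")
@enduml
```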

How to examine critical areas in the code that are important from an architectural point of view? (very quick glance 🐎)

From time to time, I have to write high-performance code for specific areas. To know what has been affected by my changes, I need to test and measure them. There is a vast range of diagnostic tools out there, but I want to show what I have been using for years. The very first is the amazing BenchmarkDotNet library, which does all the heavy lifting when you do micro-benchmarking (testing classes and methods). The library may help to optimize some subtle things. Besides that, for very quick investigations, there is another tool that can evaluate .NET code and LINQ queries by presenting the intermediate results of code compilation; its name is LINQPad. It has a GUI, but it is a Windows-only tool, which is a downside. And quite recently, I discovered for myself a very useful .NET playground website, SharpLab by Andrey Shchekin. Although it doesn't measure the performance of code, it is a great resource for learning the internals of .NET and new language features. In the example below, we inspect the memory graphs of a newly created array of strings and an array of chars. On the right side, we can see how they are allocated in memory, what sits on the stack or the heap, and their references. As you can see, it is a very explanatory visualization of the complex topic of memory management.

SharpLab — Inspection of a memory graph
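Going back to BenchmarkDotNet, here is a minimal sketch of what a micro-benchmark looks like; the scenario (string concatenation vs. StringBuilder) is just an illustrative assumption:

```csharp
using System.Text;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser] // report allocations alongside timings
public class ConcatBenchmarks
{
    [Params(10, 100)] // each benchmark runs for every value of N
    public int N;

    [Benchmark(Baseline = true)]
    public string PlusOperator()
    {
        var s = string.Empty;
        for (var i = 0; i < N; i++) s += i;
        return s;
    }

    [Benchmark]
    public string Builder()
    {
        var sb = new StringBuilder();
        for (var i = 0; i < N; i++) sb.Append(i);
        return sb.ToString();
    }
}

public static class Program
{
    // run with: dotnet run -c Release
    public static void Main() => BenchmarkRunner.Run<ConcatBenchmarks>();
}
```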

Apart from these tools, there is also a good and growing set of dotnet tools that help to diagnose an application and make precise investigations. There are a great number of them, but it is worth mentioning the most well-known dotnet tools (a short usage sketch follows the list):

  • dotnet-counters — a global tool that monitors counters (values) that should be measured; it emits numeric values that can be used in performance analysis.
  • dotnet-trace — a global tool that collects traces, including events and CPU sampling, Garbage Collector collections, and database commands.
  • dotnet-dump — a global tool that collects Windows/Linux dumps without involving any extra native debugger overhead.
  • dotnet-gcdump — a global tool that collects Garbage Collector dumps. It reveals objects' roots and some general heap statistics.
  • dotnet-monitor — a global tool that provides easy access to diagnostic information in dotnet processes. It unites tools such as dotnet-trace, dotnet-gcdump and dotnet-counters, and it also provides HTTP APIs for accessing that data. This approach simplifies diagnosing an application wherever it runs (Docker, K8s, a local machine).
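As a flavour of how these global tools are used, here is a typical dotnet-counters session; the process id is, of course, a placeholder:

```bash
# install once as a global tool
dotnet tool install --global dotnet-counters

# list the .NET processes available for monitoring
dotnet-counters ps

# live-monitor runtime counters (GC, exceptions, thread pool...) of process 1234
dotnet-counters monitor --process-id 1234 --counters System.Runtime
```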

Recently, I started learning a new tool for benchmarking multi-tiered applications (.NET and Docker): Microsoft.Crank. I didn't include it in the list above because I have not used it seriously on any project yet. Here is the link to a very good demo.

These dotnet tools may produce report files that can later be analyzed with the dotnet tools themselves, in Visual Studio to some extent (not all analysis features are available in VS), and, finally, with the most comprehensive tool for analyzing dumps, traces and samplings: PerfView (Windows only 😢).

All of these tools require profound .NET knowledge and some practice, which is especially relevant to low-level optimizations and performance improvements. Despite that, it can be beneficial to start using them in order to learn and understand .NET internals.

To sum up, I reckon that tools like NDepend significantly improve the process of designing complex .NET applications. Specifically, NDepend reduces the time spent observing an application's architecture, introduces new ways of checking (querying) the code, and gathers and shows key indicators of code quality. Moreover, all checks can run on a regular basis by integrating NDepend into CI/CD pipelines. In the same way, C4 modelling offloads our heads from keeping many details: it splits the system architecture into several self-descriptive, concise diagrams that effectively convey information to people with different backgrounds. And finally, the most sophisticated parts of a system require deliberate and precise engineering, and the listed tools support it very well. All these instruments do excellent jobs, but it would be even better if they worked together more coherently. In the near future, I am hoping to see more cross-platform tools that extend each other's capabilities through integrations. Maybe we will see new standards for visualising, profiling and diagnosing that will let different tools work together and give an all-round experience.

--

Diyaz Yakubov

I'm a software engineer and a big fan of web development ;). I have passions for programming, architecture, reading, design and art.