
Note: this is part 2 of a series of articles related to security in Blazor WebAssembly applications:
Part 1: Securing Blazor WebAssembly with Identity Server 4
Part 2: Role-based security with Blazor and Identity Server 4 (this article)

In a previous article we introduced how authentication works in Blazor WebAssembly and walked through a simple example of creating a Blazor client, implementing the login flow via Identity Server 4 and retrieving an access token to call a protected Web API.
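To give an idea of what that flow looks like in code, here is a minimal Razor component sketch that requests an access token and attaches it to an API call; the `api/orders` endpoint is purely a placeholder, and this is an illustration of the general pattern rather than the exact code from that article.

```razor
@using Microsoft.AspNetCore.Components.WebAssembly.Authentication
@inject HttpClient Http
@inject IAccessTokenProvider TokenProvider

<button @onclick="CallProtectedApiAsync">Load orders</button>

@code {
    private string result;

    private async Task CallProtectedApiAsync()
    {
        // Ask the Blazor authentication stack for an access token
        var tokenResult = await TokenProvider.RequestAccessToken();

        if (tokenResult.TryGetToken(out var token))
        {
            // Attach the bearer token and call the protected Web API
            Http.DefaultRequestHeaders.Authorization =
                new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", token.Value);
            result = await Http.GetStringAsync("api/orders");
        }
    }
}
```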



Note: this is part 1 of a series of articles related to security in Blazor WebAssembly applications:
Part 1: Securing Blazor WebAssembly with Identity Server 4 (this article)
Part 2: Role-based security with Blazor and Identity Server 4

The new Blazor WebAssembly 3.2.0 includes support for client-side authentication, which makes it relatively simple to implement OpenID Connect and OAuth2 in your single page application. …
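As a taste of how little code that takes, here is a minimal sketch of the client-side setup in Program.cs; the "oidc" configuration section name is an assumption on my part, and the article covers the full Identity Server configuration.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Components.WebAssembly.Hosting;
using Microsoft.Extensions.DependencyInjection;

public class Program
{
    public static async Task Main(string[] args)
    {
        var builder = WebAssemblyHostBuilder.CreateDefault(args);
        builder.RootComponents.Add<App>("app");

        // Wire up the OpenID Connect authentication services; the "oidc" section
        // in appsettings.json would contain the Authority, ClientId and scopes.
        builder.Services.AddOidcAuthentication(options =>
        {
            builder.Configuration.Bind("oidc", options.ProviderOptions);
        });

        builder.Services.AddScoped(sp => new HttpClient
        {
            BaseAddress = new Uri(builder.HostEnvironment.BaseAddress)
        });

        await builder.Build().RunAsync();
    }
}
```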



Last Friday I spent some time investigating an annoying bug we had in production: we have a relatively standard Single Page Web Application, in which an ASP.NET Core 2.2 Web API serves a React.js front-end. The Web API runs on Azure App Service, using the InProcess hosting mode.

The issue: from time to time, the API would hang and start returning an HTTP 500.30 status code on every endpoint.



EDIT: This article was originally published for .NET Core 3.1 Preview 2. .NET Core 3.1 Preview 3 introduces a breaking change that requires slightly clunkier code in order to work.

When we create a single page web application with Blazor WebAssembly, it comes with a pre-configured HttpClient service in the IoC container. This means that we can simply inject an HttpClient dependency in our components and it will just work.
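A minimal sketch of what that looks like in a component (the api/data endpoint is just a placeholder):

```razor
@inject HttpClient Http

@code {
    private string data;

    // The HttpClient comes straight from the pre-configured IoC container,
    // no registration needed on our side.
    protected override async Task OnInitializedAsync()
    {
        data = await Http.GetStringAsync("api/data");
    }
}
```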



If you have been working with Azure on a real-world project, sooner or later you have faced the 800 deployments problem: every resource group can only store a limited number of deployment objects, which get created every time you provision a resource through the portal or ARM templates.

Think about a fully scripted CI/CD scenario, in which every commit on git kicks off a release that runs an ARM template, and you get the size of the problem: at 10 releases a day, you will hit the limit in about 3 months.

The only way to work around this is deleting the deployment objects (no worries, your resources will stay untouched), but doing it in the Portal is painfully slow, to the point of frustration. …
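Scripting the cleanup is much faster. As an illustration, something along these lines with the Azure CLI would do it; the resource group name is a placeholder, and this is not necessarily the exact script from the article:

```bash
RG=my-resource-group   # placeholder resource group name

# Delete every deployment object in the resource group.
# The underlying resources are not touched by this operation.
for name in $(az deployment group list --resource-group "$RG" --query "[].name" -o tsv); do
  az deployment group delete --resource-group "$RG" --name "$name"
done
```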



In a previous article, I briefly introduced a possible approach for a build and release pipeline in Azure DevOps that ultimately deploys a system on Azure Kubernetes Service.

However, solutions are seldom confined to the cluster itself: you might need databases, caches, storage, and although Kubernetes can potentially run this type of software as well, it’s best to handle stateful services differently.

If you are in Azure, for example, you might want to consume one of the various PaaS options, such as Azure SQL Database, Cosmos DB, Azure Database for MySQL, etc.

This article shows a simple approach to integrating Cosmos DB and Azure Kubernetes Service in your Azure DevOps release pipeline. However, as you will see, the code is almost entirely reusable for any other kind of database or external system we want to connect to. …
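To give an idea of the glue involved, a release step could fetch the Cosmos DB key with the Azure CLI and push it into the cluster as a Kubernetes secret. The names below are placeholders and this is only a sketch of one possible approach, not necessarily the one used in the article:

```bash
# Placeholders: Cosmos DB account, resource group and secret names
COSMOS_KEY=$(az cosmosdb keys list \
  --name my-cosmos-account \
  --resource-group my-resource-group \
  --query primaryMasterKey -o tsv)

# Make the key available to the pods as a Kubernetes secret
kubectl create secret generic cosmos-secret \
  --from-literal=cosmos-key="$COSMOS_KEY"
```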



NOTE: I’ll be speaking at the next IT/Dev Connections conference in Dallas. If you want to join me and have a chat about ASP.NET, Docker, Kubernetes and Azure, use the DE SANCTIS code to get a discount on the conference fee.

Part 2: Integrate Cosmos DB (and other PaaS Services) to AKS in Azure DevOps

During the last few months, the offering in Azure for container-based applications has improved dramatically: today we can privately host our images in Azure Container Registry and run them in either a serverless or a PaaS fashion, or we can set up a managed Kubernetes cluster in the cloud in literally minutes. …


Anyone who has been managing a real-world application knows that a proper monitoring infrastructure is not optional. You definitely want to make sure everything is working properly, without waiting for people to complain on Twitter before you find out that something is going wrong.

Being able to

  • track what is going on in your system,
  • code complex alert rules,
  • and see at a glance how your application is performing

are definitely capabilities that you can't realistically build in house. There are several systems out there, like Application Insights or Log Analytics on Azure, or the Grafana stack. However, I'm personally totally in love with Kibana or, better said, the ELK (ElasticSearch-LogStash-Kibana) stack: it has practically become an industry standard, it's free, it has a powerful query language and, man, I'm such a fan of those beautiful dashboards! …
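As an example of how little it takes to start shipping logs from ASP.NET Core to ElasticSearch, here is a minimal Serilog-based sketch; Serilog, the local URL and the index format are my own assumptions for the illustration, not necessarily what the article uses:

```csharp
using System;
using Serilog;
using Serilog.Sinks.Elasticsearch;

public static class LoggingSetup
{
    public static void ConfigureSerilog()
    {
        // Send structured logs to a local ElasticSearch instance,
        // ready to be queried and visualised in Kibana.
        Log.Logger = new LoggerConfiguration()
            .Enrich.FromLogContext()
            .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
            {
                AutoRegisterTemplate = true,
                IndexFormat = "myapp-logs-{0:yyyy.MM.dd}"
            })
            .CreateLogger();
    }
}
```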



A typical pattern in a highly scalable distributed API is having your cache servers as close as possible to your API boxes, in order to minimise read latency. If you are on Azure and you are using Azure Redis Cache, I'm sure you've adopted a design similar to the following one, in which every API consumes a local Redis instance that is kept up to date by a Cache Pre-Loader.
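In code, each API box then simply reads from its co-located Redis instance. Here is a minimal sketch with StackExchange.Redis; the connection string and cache key are placeholders:

```csharp
using System.Threading.Tasks;
using StackExchange.Redis;

public static class CacheReader
{
    // In a real application the ConnectionMultiplexer would be a singleton,
    // registered once in the IoC container rather than created per call.
    public static async Task<string> GetFeaturedProductsAsync()
    {
        var redis = await ConnectionMultiplexer.ConnectAsync("localhost:6379");
        IDatabase cache = redis.GetDatabase();

        // Low-latency read from the local instance, populated by the Cache Pre-Loader
        return await cache.StringGetAsync("products:featured");
    }
}
```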
