.NET Core: Our Road To a Very Own “Runscope”
In this blog post, I’d like to give an overview of the path I took to build our very own Runscope-like API monitoring solution, laying out the bits and pieces in three steps:
- First some hints and ideas on how to structure a .NET Core application
- A good deal is dedicated to a dive into the code: how to make use of React.NET, NodeServices as well as ASP.NET in general, useful services — e.g. caching headers, health-checks and response compression — and building the application with Cake.
- Finally we’ll hook up our app to our data sources and Prometheus.
At the core is our very own “Runscope” — we call it Web Exporter.
You can find the source of the solution here:
https://github.com/smartive/web-exporter
And a preview of the application here:
https://demo.web-exporter.smartive.cloud/
Basic Setup
The start of every project seems pretty much the same: questions like “How should I structure my code?” or “What are the common best practices for problem X?” need to be answered. But let’s put these aside for a moment and first look at the ingredients that make up the app:
- .NET Core 2.2
- Web application with Razor Pages
- React.NET included in the web application
- Small SQLite database to store our data
- GitLab CI to build and deploy our application (in docker)
- semantic release for release management
- Prometheus and Grafana to export and show the metrics
- NodeServices to run the response tests
There are many ways to structure a project, and since there are multiple “views” of a project structure — like Solution View or File View — you can structure your project however you see fit. As our Web Exporter was a true .NET Core project, I created a solution file and added the projects to it.
If you’re going to build a big application, you may want to structure your project folder a bit. Since there’s a high chance that you have something like a DataAccess
and a Models
project among others, it could be wise to use a src
folder. And obviously a test
folder. My suggested structure would look something like this:
project-root/
├── src/
│   ├── WebApp/
│   │   └── WebApp.csproj
│   ├── DAL/
│   │   └── DAL.csproj
│   └── DataAccess/
│       └── DataAccess.csproj
└── test/
    ├── WebApp.Test/
    │   └── WebApp.Test.csproj
    ├── DAL.Test/
    │   └── DAL.Test.csproj
    └── DataAccess.Test/
        └── DataAccess.Test.csproj
The src folder protects your project root from being polluted with csproj folders. We’re going to add some more files later, so the root folder is going to host enough files as it is.
Continuous Integration
If you’re just hacking something or making a proof-of-concept or generally don’t care about code quality — fine. In all other cases, I’d suggest you use a CI system to continuously test and build your code. In our case and for the sake of this how-to, we’re using GitLab with GitLab CI.
Since we started using semantic release at smartive, we’ve saved a s***load of time, and that’s a very big plus for automation. With that technique, we don’t have to care about git tags, versions and changelogs — they are generated for us by the nice robots in the cloud.
semantic-release is an npm package that analyses your commits and automatically applies the version(-bump) for your repository. The only thing you’ve got to do is convince your developers to adhere to a certain commit message structure. At smartive, we committed to the default of semantic release, which is the Angular commit guidelines.
At DockerCon 2018 in Barcelona, we learned a lot about golden images — a term for prebuilt images that cover your exact use case instead of trying to mash everything into scratch images over and over again.
We recently built such a golden image for semantic release: a small node:alpine image that contains git and the required packages for semantic-release. So all you have to do is tell your CI to use that image and run the default command npx semantic-release. The image can be found in our GitHub repository “semantic-release-image”.
A neat little thing called npm hooks triggers a new build whenever one of the used packages gets a new release, so you can use the newest versions of the packages as soon as they’re out.
For GitLab CI we’ve got a straightforward template that handles building and deploying:
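A hypothetical sketch of such a template (stage names, image names and the exact kuby invocation are assumptions, not the verbatim file):

# .gitlab-ci.yml (sketch)
stages:
  - release
  - build
  - deploy

release:
  stage: release
  image: smartive/semantic-release-image
  only:
    - master
  script:
    - npx semantic-release

build:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  only:
    - tags
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG

deploy:
  stage: deploy
  image: smartive/kuby   # assumption: an image containing the kuby CLI
  only:
    - tags
  script:
    - kuby deploy        # assumption: the exact kuby subcommand may differ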
This means: When something is pushed into the master branch, semantic release triggers a new release. Since semantic release creates a new tag, the other two steps are triggered, creating a docker image and a Kubernetes deployment. We deploy our applications with “kuby”, a small CLI tool that helps with certain tasks during a deployment of Kubernetes applications.
Do some code
ASP.NET
First off, create a new web application project in your solution. The backend will be powered by ASP.NET from the .NET Core framework. We use a “normal” paged application — that means no fancy single page application, just plain page loads. Razor Pages will do just fine for our use case.
With the rise of the new Razor Pages, developer convenience has gone up the ladder. WebControllers and ApiControllers may still be used in specific cases or when no “Page” is visible (like login / logout methods or other endpoints that don’t render an HTML page). All other calls should go into the “code behind” file. This feels like a step back from MVC / MVVM, but in this case it actually makes sense: the code comes out cleaner and each file covers one use case or page.
Also, dynamic bindings of complex objects are possible with Razor Pages. If you want to learn Razor Pages (which I’d suggest you do) have a look at https://www.learnrazorpages.com/razor-pages/.
Application bits
The first file you’ll encounter is Startup.cs. It is responsible for the whole service registration and configuration of your application. There are many ways to configure your application, so I’ve got you covered: some parts are not in the Microsoft documentation or anywhere else, and I had to learn them the hard way. I’m not going to post the whole startup file, but I’ll go through certain relevant elements that could be useful further down your own road.
Basic MVC settings
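Reconstructed as a minimal sketch (assuming ASP.NET Core 2.2; the exact settings in the repo may differ):

services.AddRouting(options => options.LowercaseUrls = true);
services.AddResponseCompression();
services
    .AddMvc()
    .SetCompatibilityVersion(CompatibilityVersion.Version_2_2)
    .AddJsonOptions(
        options =>
        {
            // Default JSON serialiser settings: camelCase properties, no null noise.
            options.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
            options.SerializerSettings.NullValueHandling = NullValueHandling.Ignore;
        });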
As for the MVC settings, the example above is what I’d suggest for a .NET project. First of all, we want lowercase URLs. It just feels less like “screaming” at the user than myapp.com/User/1/Posts — I know this is a personal opinion, but I grew up with lowercase URLs, so why the hell not ;-)
ASP.NET runs its own web server (Kestrel). This is the place to deal with compression, so be sure to add .AddResponseCompression() to the services (however, if you deliver the responses through an nginx or Envoy proxy, deactivate it there).
Of course, we need to add MVC capabilities via .AddMvc(), and later on we set some default JSON serialiser settings.
React and Server Side Rendering
Aside from the MVC settings, we need to provide some more basic settings for our application. Basically, the frontend is “static”, but there are some dynamic elements — like an input list — that need to be implemented. (Obviously, we could do without them, but it’s arguably more convenient if changes are reflected immediately, without refreshing the page.) For these dynamic frontend elements, we use React.
Luckily for us, the team that created React also thought about server side rendering in .NET, so they created a package that actually enables your .NET Kestrel server to render any exported React component server-side with initial data. This technique can be used to empower a static site (like the one I created) with certain dynamic elements or even back your single page application with server side rendering to improve performance and SEO.
Logging, React.NET, DataAccess, Caching, HealthChecks
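A condensed reconstruction of that file (type names like DataContext and the connection string name are assumptions; the React and environment-specific parts are only hinted at here and explained below):

public class Startup
{
    private readonly IConfiguration configuration;

    public Startup(IConfiguration configuration) => this.configuration = configuration;

    public void ConfigureServices(IServiceCollection services)
    {
        // Logging, configured via appsettings.json (see below).
        services.AddLogging(builder => builder
            .AddConfiguration(configuration.GetSection("Logging"))
            .AddConsole());

        // React.NET with the ChakraCore JS engine for server side rendering.
        services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
        services.AddReact();
        services
            .AddJsEngineSwitcher(options => options.DefaultEngineName = ChakraCoreJsEngine.EngineName)
            .AddChakraCore();

        // Data access: a local SQLite database (DataContext is an assumed name).
        services.AddDbContext<DataContext>(
            options => options.UseSqlite(configuration.GetConnectionString("Default")));

        // The MVC settings from the previous section.
        services.AddRouting(options => options.LowercaseUrls = true);
        services.AddResponseCompression();
        services.AddMvc(); // plus the JSON serialiser settings shown above

        // Health checks for Kubernetes readiness / liveness probes.
        services.AddHealthChecks();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.UseReact(config => { /* render engine configuration, see below */ });

        if (env.IsDevelopment())
        {
            // Developer exception pages and uncached static files, see below.
        }
        else
        {
            // Forwarded headers and long-lived static file caching, see below.
        }

        app.UseHealthChecks("/health");
        app.UseMvc();

        // Migrate the SQLite database on startup.
        using (var scope = app.ApplicationServices.CreateScope())
        {
            scope.ServiceProvider.GetRequiredService<DataContext>().Database.Migrate();
        }
    }
}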
The code file above is the more or less complete version of the Startup.cs file. The metrics and Prometheus parts are still missing, but we’ll get to those later. I’ll walk you through ConfigureServices() first, then Configure().
ConfigureServices()
First of all, we add logging. As a prerequisite, you should have an appsettings.json as well as an appsettings.Development.json file in your project. Those files are normally generated by default when you create a new project. In this settings file, we configure the verbosity:
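A typical shape of that Logging section (the exact levels here are an example, not the repo’s values):

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning"
    }
  }
}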
We also add the magic™ of React.NET to the services. You need to register the IHttpContextAccessor as a singleton so that the Chakra engine can access it during rendering. Then you add the ChakraCore JS engine. This step enables your application to render React components. The configure parts afterwards tell the system where to search for components.
Next in the ConfigureServices method is data access. I used our own library Smartive.Core, which contains certain easy-to-use repositories among other helpful parts for .NET development. With that database library, we only need to declare models that derive from a certain base class; after that, we can use the ICrudRepository<TModel> interface for simple CRUD operations.
As a last bit in the service configuration stage, we add health checks that return a specific response on specific URLs to provide orchestration systems like Kubernetes with information about the readiness or liveness of the application.
Configure()
The next step in the application run order is Configure(). This method effectively enables all the functionality we added above.
The first element in this method should be the configuration of React. We tell the application to actually use React and configure the render engine:
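A sketch of that configuration (the jsonSerializerSettings variable is assumed; the boolean values follow the explanations below):

app.UseReact(config =>
{
    config
        .SetReuseJavaScriptEngines(true)
        .SetLoadBabel(false)
        .SetLoadReact(false)
        .SetJsonSerializerSettings(jsonSerializerSettings) // same settings as in AddMvc
        .AddScriptWithoutTransform("~/js/app.js");
});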
- .SetReuseJavaScriptEngines(true) does exactly what it says: it tells the app to recycle any instantiated JavaScript engines.
- .SetLoadBabel enables or disables Babel to add .jsx files that are not transformed beforehand. You can add scripts and whole components through this configuration, which is sufficient in most simple cases.
- .SetLoadReact tells the engine whether to include React. If you already bundled your components with webpack or another bundler, you most likely added React to the bundle, so you don’t need to load it again.
- .SetJsonSerializerSettings sets (more or less) the same serialiser settings as we did in the MVC settings. They are used when rendering React components and adding props to them.
- .AddScriptWithoutTransform finally adds our compiled and bundled app.js file to the engine. As mentioned, you can also add scripts that need to be transpiled first; the engine will transpile them and cache the result in memory.
After React, an if decides whether to add development elements or production elements (a condensed sketch of this branch follows the two lists below).
During development we:
- add the DeveloperExceptionPage, which provides a nice way to see exceptions and the code that threw them.
- add the DatabaseErrorPage for exact errors when a data error happens.
- configure static files to set "no-cache" on all files so we get the new files on each request.
For production we:
- set forwarded headers (X-Forwarded) so that a proxy that terminates the TLS connection for us can correctly set the protocol and other elements.
- configure static files to serve all files with the public pragma and a cache control header of "public, max-age=31536000, immutable", making them cacheable forever. Since we’ve bundled our JavaScript, we add a hash to the request which is unique for the specific file, so we can cache each version forever.
Second to last, we configure the URL for the health checks and tell the application to actually “UseMvc”.
And last but not least, we migrate our database. I know there is a big discussion and controversy about how and when you should or shouldn’t migrate your database. In this particular use case it’s totally fine, since we use a local SQLite database and no other dependent applications rely on our stored data. Always analyse your use case and decide whether to migrate your database automatically or by other means.
Start some “frontend” coding
Now that we’ve talked about the backend parts of the software, we need to talk about the frontend. As mentioned, we’ll use React to provide some dynamically rendered elements in input fields.
In the image above, you see the React component in action. When the user clicks “add”, a new name / value combination is added; when the user clicks on a delete link, the corresponding element is deleted without a page refresh.
webpack
First of all, create a React project. For a project as small as this, we don’t need any boilerplate. Actually, you could just use the TypeScript compiler and output everything into one file. I used webpack because I wanted to bundle styles as well.
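A condensed sketch of such a config (loader and plugin choices are assumptions based on the list below; the postcss part is omitted):

// webpack.config.js (sketch)
const path = require('path');
const { CheckerPlugin } = require('awesome-typescript-loader');
const MiniCssExtractPlugin = require('mini-css-extract-plugin');
const CleanWebpackPlugin = require('clean-webpack-plugin');

module.exports = {
  entry: './src/index.tsx',
  output: {
    path: path.resolve(__dirname, '../WebApp/wwwroot'),
    filename: 'js/app.js',
  },
  resolve: { extensions: ['.ts', '.tsx', '.js'] },
  module: {
    rules: [
      { test: /\.tsx?$/, loader: 'awesome-typescript-loader' },
      {
        test: /\.scss$/,
        use: [MiniCssExtractPlugin.loader, 'css-loader', 'resolve-url-loader', 'sass-loader'],
      },
      // The favicon keeps its plain name (no hash)...
      { test: /favicon\.ico$/, loader: 'file-loader', options: { name: 'media/[name].[ext]' } },
      // ...while other assets get a content hash.
      {
        test: /\.(woff2?|svg|png)$/,
        loader: 'file-loader',
        options: { name: 'media/[name].[hash].[ext]' },
      },
    ],
  },
  plugins: [
    new CheckerPlugin(),
    new MiniCssExtractPlugin({ filename: 'css/app.css' }),
    // Wipe the web app's wwwroot on every compile.
    new CleanWebpackPlugin(['../WebApp/wwwroot'], { allowExternal: true }),
  ],
};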
The configuration above shows pretty much the whole config. To be fair, one part I snipped out was the postcss-loader, which adds 30 more lines of code.
The relevant parts are:
- The TypeScript loader with the corresponding CheckerPlugin in the plugins section
- SASS compiling and the resolve-url-loader for relatively placed asset files like fonts and icons
- The favicon, which should be placed in the media folder without any hashes
- Other files, which should be placed into the media folder with a corresponding content hash
- The CleanWebpackPlugin that deletes the “../WebApp/wwwroot” folder each time we compile the solution
We do compile and bundle the files without hashing them. This part is done later by ASP.NET.
ASP.NET
Assuming that you wrote your React components and compiled your frontend application into a js/app.js and a css/app.css file, you can now use those files in your — hopefully — shared Layout.cshtml.
But let’s start from the beginning. Your Razor Pages behave similarly to the MVC Views
, they reside under a Pages
folder in your application source and can contain a _ViewStart.cshtml
as well as a _ViewImports.cshtml
file. In ViewStart we only define that the layout _Layout.cshtml
be used and in ViewImports we add some namespaces and import React.NET:
@namespace WebApp.Pages
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers
@using React.AspNet
In our Pages/Shared/_Layout.cshtml we inject the app.js, app.css and React initialisation scripts in the following way:
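A sketch of the relevant parts of that layout (paths follow the webpack output above; @Html.ReactInitJavaScript() is React.NET’s client-side initialisation helper):

<!-- Pages/Shared/_Layout.cshtml (sketch) -->
<link rel="shortcut icon" href="~/media/favicon.ico" asp-append-version="true" />
<link rel="stylesheet" href="~/css/app.css" asp-append-version="true" />

@RenderBody()

<script src="~/js/app.js" asp-append-version="true"></script>
@Html.ReactInitJavaScript()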
As you can see, we add content hashes for app.js, app.css and the favicon via the ASP.NET tag helper asp-append-version="true", so those files benefit from the cache headers we configured.
React
Now for the fun part in React: all components that you want to use in your scripts need to be exported globally so that the Chakra engine can access and render them server-side. As we’ve decided to bundle React ourselves instead of including it via the React.NET engine, we need to export React itself as well.
The main parts of this implementation (sketched after this list) are:
- Export React, ReactDOM and ReactDOMServer in the global object
- Export all your components that you want to use in the global object (The property path that you name will be used for rendering)
- Code your React components like “normal”
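A minimal TypeScript sketch of those exports (the file layout is an assumption; the component name matches the cshtml example below):

import * as React from 'react';
import * as ReactDOM from 'react-dom';
import * as ReactDOMServer from 'react-dom/server';
import { WebcheckNameValueForm } from './components/webcheck-name-value-form';

declare const global: any;

// React.NET's render engine looks these up on the global object.
global.React = React;
global.ReactDOM = ReactDOM;
global.ReactDOMServer = ReactDOMServer;

// The property path ("Components.WebcheckNameValueForm") is what's used for rendering.
global.Components = { WebcheckNameValueForm };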
To actually use the component in a cshtml file, just use the React HTML helper of the library:
<div>
@Html.React(
"Components.WebcheckNameValueForm",
new { data = Model.WebCheck.Labels, enumerable = "Labels",
webCheckId = Model.WebCheck.Id })
</div>
This will render our component with the provided anonymous object as prop data.
Building: a piece of cake — literally
We’re done coding and happy with the result. The question now is: How do we build the application? The answer is “Cake”. Cake — at least in this case — is a composite word for “C# Make” and it’s an implementation on top of the Roslyn compiler to let you write your build scripts in C#.
As soon as you set up your project according to the “how to install” guide, you’re able to use Cake to build your application. One thing though: to actually build your project with .NET Core, you need a special build.sh file. The one provided by the default Cake installation uses Mono. Since you don’t want to install Mono in your Dockerfile for building, I’d suggest you use the one over here: https://github.com/nlowe/cake-bootstrap-dotnet.
Now we’re able to build our project in Mono and .NET Core. Next, we need to configure our build.cake
file, which is straightforward:
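A sketch of such a build.cake (the solution and project paths are assumptions):

var target = Argument("target", "Default");

Task("Clean")
    .Does(() =>
    {
        // Delete the artifacts directory and run dotnet clean.
        CleanDirectory("./artifacts");
        DotNetCoreClean("./WebExporter.sln");
    });

Task("Build")
    .Does(() =>
    {
        // dotnet publish --runtime linux-x64, output into ./artifacts.
        DotNetCorePublish("./src/WebApp/WebApp.csproj", new DotNetCorePublishSettings
        {
            Runtime = "linux-x64",
            OutputDirectory = "./artifacts",
        });
    });

Task("Default")
    .IsDependentOn("Clean")
    .IsDependentOn("Build");

RunTarget(target);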
We only need two tasks for now: “Clean”, which deletes the artifacts directory and performs a dotnet clean command, and “Build”, which executes a dotnet publish --runtime linux-x64 command. By defining the default to be “Clean” and then “Build”, we can build our application simply by running ./build.sh and get the result in our artifacts directory.
This step can now be used in a Dockerfile or something similar.
Prometheus
We’ve covered the applications basics: A propper Startup.cs
is in place, webpack is doing just fine and we can build the application.
Now we need some metric calculation and delivery. I’m not going to explain Prometheus, since that would be a whole book on its own. I assume basic knowledge of Prometheus and that you have an instance running.
There are two basic ways to get the metrics to Prometheus: the push gateway and polling by Prometheus. In this particular case, I used polling because the data is gathered asynchronously and then held in-memory.
Gathering and delivering metrics
To get a metrics endpoint and provide Prometheus with data, let’s install the prometheus-net.AspNetCore dependency. With this package in place, we modify the Startup.cs file and add the collector described below to the collector registry. To achieve this, add the following code to the Configure() method:
DefaultCollectorRegistry.Instance.Clear();
DefaultCollectorRegistry.Instance.GetOrAdd(collector);
app.UseMetricServer();
We tell the application that we want to use the metrics server (which exposes the /metrics endpoint) and add our custom metric collector to the registry.
As for the collector itself, we need to implement the ICollector interface. This interface defines certain properties that are required by the Prometheus .NET engine. The Collect method is responsible for defining the metrics and adding the values with the corresponding labels.
When the metric is defined, we add all results to the specific metric (using a foreach loop) and in the end, a list of all metric-families is returned.
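A condensed, hypothetical sketch of such a collector, assuming the prometheus-net 2.x data contracts (Prometheus.Advanced); names like WebCheckResultStore and the metric name are made up:

public class WebCheckCollector : ICollector
{
    private readonly WebCheckResultStore store; // assumption: holds the in-memory check results

    public WebCheckCollector(WebCheckResultStore store) => this.store = store;

    public string Name => "web_check";
    public string[] LabelNames => new[] { "name", "url" };

    public IEnumerable<MetricFamily> Collect()
    {
        // Define the metric...
        var family = new MetricFamily
        {
            name = "web_check_duration_seconds",
            help = "Duration of the web check requests.",
            type = MetricType.GAUGE,
        };

        // ...then add one sample per check result, with the corresponding labels.
        foreach (var result in store.Results)
        {
            var metric = new Metric { gauge = new Gauge { value = result.Duration.TotalSeconds } };
            metric.label.Add(new LabelPair { name = "name", value = result.Name });
            metric.label.Add(new LabelPair { name = "url", value = result.Url });
            family.metric.Add(metric);
        }

        yield return family;
    }
}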
This is it for Prometheus. There’s no black magic needed to pass the metrics to Prometheus. It’s much more complicated to gather them in the desired form. But more on that later.
Go Fetch Prometheus
This part is very simple. We just tell the Prometheus configuration to scrape our metrics endpoint. The following lines define the static scrape target in Prometheus:
scrape_configs:
- job_name: 'web-exporter'
static_configs:
- targets:
- web-exporter-path
“Runscope” a.k.a. Web Exporter
What we have so far is a C# ASP.NET application with some good extensions to the startup methods and an endpoint for Prometheus to gather data from. Now we need to generate that data.
Get the data
To periodically generate the required data, we set up a HostedService in the application. This is done via the IHostedService interface and the registration of the hosted service in the ConfigureServices method with the following code: services.AddHostedService<CLASS>();. The Microsoft documentation on background tasks is pretty straightforward and can be found here: Hosted-Services.
As soon as our service is running, we register a timer that triggers the Runscope-like API checks every 60 seconds.
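A minimal sketch of such a hosted service (the class name and the ExecuteChecks method are assumptions):

public class WebCheckService : IHostedService, IDisposable
{
    private readonly WebCheckExecutor executor;
    private Timer timer;

    public WebCheckService(WebCheckExecutor executor) => this.executor = executor;

    public Task StartAsync(CancellationToken cancellationToken)
    {
        // Trigger the checks immediately, then every 60 seconds.
        timer = new Timer(_ => executor.ExecuteChecks(), null, TimeSpan.Zero, TimeSpan.FromSeconds(60));
        return Task.CompletedTask;
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        timer?.Change(Timeout.Infinite, 0);
        return Task.CompletedTask;
    }

    public void Dispose() => timer?.Dispose();
}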
The main magic™ lives in the WebCheckExecutor.cs class (source). For each check to be executed, the logic sends an HTTP request to the given target with the configured HTTP method. If you have configured any request headers — authentication, for example — they are added to the request and sent along. When the response arrives, the elapsed time is measured and the response content is parsed into a string.
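The core of that logic, as a condensed sketch (the property names on the check object are assumptions):

var request = new HttpRequestMessage(new HttpMethod(check.Method), check.Url);
foreach (var header in check.Headers)
{
    // Configured request headers (e.g. authentication) are sent along.
    request.Headers.TryAddWithoutValidation(header.Name, header.Value);
}

var watch = Stopwatch.StartNew();
var response = await client.SendAsync(request);
watch.Stop();

// The elapsed time and the response content are what we measure and test against.
var elapsed = watch.Elapsed;
var content = await response.Content.ReadAsStringAsync();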
Response tests
So far, nothing fancy here — we send some requests and gather the content from the responses. BUT: if you defined response tests in the UI with the very convenient Monaco editor (the engine that powers VS Code), they are up next. The first version of this web-exporter used the Chakra engine — the JS engine that powers the Edge browser — to run the tests in a browser-like environment. Sadly, we ran into memory problems when parsing bigger JSON chunks, so we had to move on to a better solution.
NodeServices to the rescue! After the response has been streamed into a string, each response test is executed via NodeServices, a .NET library that allows any .NET application to invoke scripts in a Node environment. Admittedly, you need to have Node installed, but what you get is quite awesome: the vast Node ecosystem paired with Node’s lean memory footprint when processing such content.
The script that is executed in node is pretty simple:
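A hypothetical sketch of that script (the exact shape lives in the repo; the responseTest sandbox function matches the test example below):

// Invoked by NodeServices: the first argument is the .NET callback.
module.exports = function (callback, test, request, response) {
  const { VM } = require('vm2');
  let result;

  const vm = new VM({
    timeout: 1000, // even while(true){} cannot block for longer than a second
    sandbox: {
      jsonpath: require('jsonpath'),
      _: require('lodash'),
      // The test script calls responseTest() with its test function.
      responseTest: (fn) => { result = fn(request, response); },
    },
  });

  try {
    vm.run(test);
    callback(null, result);
  } catch (error) {
    callback(error);
  }
};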
It’s basically logging and the exporter function. This function is called by .NET which parses the callback when the function returns. Any errors that are thrown are caught and returned in the callback as well.
To facilitate testing, we add jsonpath and lodash to the sandbox, but we don’t allow any other elements to be required or evaluated, so no funky scripts can be injected. This is provided by the vm2 Node package. Even scripts like while(true){} won’t do any harm, since the script has a configured timeout of 1 second. As an example of such a response test, consider the following script:
responseTest(
(request, response) => {
const now = new Date();
if (now.getHours() >= 7 && now.getHours() < 9) {
return true;
}
const body = JSON.parse(response.content || '');
return body.up_to_date;
},
);
Now our executor will use those scripts and perform the invocation from .NET:
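A sketch of that invocation (the script path and the data objects are assumptions; INodeServices comes from Microsoft.AspNetCore.NodeServices via DI):

var passed = await nodeServices.InvokeAsync<bool>(
    "Node/response-test.js", // the script sketched above
    check.ResponseTest,      // the user-defined test source from the Monaco editor
    requestData,
    responseData);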
When all those tests return true, the check is considered “OK”.
Result
Done! We now have a .NET application that triggers the configured web checks every minute and delivers the data to Prometheus when asked for.
To spare your eyes a sensory overload from emojis, I’d like to show you the application and the little demo that we host on our Kubernetes cluster.
References
- .NET Core: https://dotnet.microsoft.com/download
- GitLab: https://gitlab.com
- semantic-release: https://github.com/semantic-release/semantic-release
- semantic-release-image: https://github.com/smartive/semantic-release-image
- npm hooks: https://blog.npmjs.org/post/145260155635/introducing-hooks-get-notifications-of-npm
- kuby: https://github.com/smartive/kuby
- smartive-core: https://github.com/smartive/smartive-core
- Cake: https://cakebuild.net/
- prometheus-net: https://github.com/prometheus-net/prometheus-net
- Monaco Editor: https://microsoft.github.io/monaco-editor/index.html