Integrating Azure App Insights with NancyFX

Andy McKenna
DealerOn Dev
Jun 21, 2019
Let’s go right!

Telemetry fever is sweeping the DealerOn Dev department and the reporting team is riddled with it. We’re starting to integrate Azure App Insights (referred to simply as “AI” from here on for brevity) into a number of our products so that the team can identify areas that need improvement and let management know which pages are most popular with our users. Our application uses NancyFX for our API framework, and when I started researching how to integrate Azure App Insights, I began to look like Homer in the image above. Every article I read assumed you were using ASP.NET Core and talked about how plug-n-play it all was. There was usually some throwaway line about “if you’re using another framework, you’ll have to do a lot of this manually.” This article will show you how to do a lot of that manually.

Getting Started

The first thing you’ll need is an AI account. This is a well-covered area so I won’t dwell on it, but once you’re done you should have an Instrumentation Key that you can save with your other user secrets. Azure Key Vault is a great solution for that, but anything that isn’t hard-coded into your source control will do.

Secondly, you’ll want to add the Microsoft Application Insights NuGet package to your solution. This will also create an ApplicationInsights.config file in that project. You can either store your Instrumentation Key there or load it in code and set it at runtime, depending on your preference. I chose to wrap the TelemetryClient in another class so that we could mock it out when not in production and handle some of the boilerplate properties we’ll pass each time.
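The wrapper itself isn’t reproduced in this article, but a minimal sketch of its surface might look like the following. The interface and member names here are my own, not from the original code:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical wrapper interface around TelemetryClient. The production
// implementation holds a real TelemetryClient configured with the
// Instrumentation Key; test and non-production environments register a
// mock or no-op implementation of this interface instead.
public interface IAppInsightsClient
{
    // Starts an AI request operation and returns its id for later correlation
    string StartRequest(string name);

    // Stops the operation started by StartRequest, which sends the telemetry
    void StopRequest(string requestId);

    // Sends a custom event, parented to the current request where possible
    void RecordEvent(string name, IDictionary<string, string> properties = null);

    // Sends exception telemetry (TelemetryClient.TrackException under the hood)
    void RecordException(Exception ex);
}
```

The sections below show how each of these members maps onto the underlying TelemetryClient calls.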

Logging Requests

When used with ASP.NET Core, the AI library can automatically track incoming requests because it already knows where to find them. We don’t have that luxury here, so we need to wire that up ourselves. NancyFX has a few pipelines you can add hooks to that run at set points in the request life cycle. We’ll use a base class to add these so we don’t have to remember to include them in every class:

A base class for Nancy API endpoints that adds the same hooks to every request
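The original gist isn’t shown here, but based on the description it likely resembles this sketch. The class name, `IAppInsightsClient`, and `RequestIdStore` are illustrative names, not the article’s actual identifiers:

```csharp
using System;
using Nancy;

// Every endpoint module inherits from this instead of NancyModule directly,
// so the telemetry hooks are wired up automatically.
public abstract class TelemetryModule : NancyModule
{
    private readonly IAppInsightsClient _ai; // hypothetical wrapper around TelemetryClient

    protected TelemetryModule(IAppInsightsClient ai)
    {
        _ai = ai;
        Before += BeforeHook;   // runs before every route handler in the module
        After += AfterHook;     // runs after every route handler
        OnError += ErrorHook;   // runs on unhandled exceptions
    }

    private Response BeforeHook(NancyContext ctx)
    {
        var name = $"{ctx.Request.Method} {ctx.Request.Path}";
        // Start the AI operation and stash its id for the life of the request.
        // RequestIdStore is an AsyncLocal<string>-backed static class; see the
        // Correlation section for why AsyncLocal makes this safe per-request.
        RequestIdStore.Current = _ai.StartRequest(name);
        return null; // null means "continue on to the route handler"
    }

    private void AfterHook(NancyContext ctx)
    {
        // Stops the operation, which calculates duration and sends the telemetry
        _ai.StopRequest(RequestIdStore.Current);
    }

    private dynamic ErrorHook(NancyContext ctx, Exception ex)
    {
        _ai.RecordException(ex);
        return null; // covered in more detail in the Errors section
    }
}
```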

The hook methods themselves are not very interesting: they just grab the URL and route information from the NancyContext ctx variable and pass them on to our AI client. The start and stop request methods actually use the StartOperation and StopOperation methods of TelemetryClient, because that lets you tie the request to any sub-telemetry and present them together within the AI portal. This is explained further in the Correlation section.

These are pretty straightforward except for the RequestOperations object. That’s a dictionary of IOperationHolder<RequestTelemetry> objects that we hold on to so we can hand the right one to StopOperation later. Stopping the operation will automatically calculate the duration and send the telemetry to the AI portal.
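Inside the wrapper, the pair of methods might look like this sketch. The member names are assumptions; the StartOperation/StopOperation calls are the real TelemetryClient API:

```csharp
using System.Collections.Concurrent;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

public partial class AppInsightsClient
{
    private readonly TelemetryClient _client = new TelemetryClient();

    // In-flight operations, keyed by request id, so the After hook can
    // stop exactly the operation the Before hook started.
    private readonly ConcurrentDictionary<string, IOperationHolder<RequestTelemetry>> RequestOperations
        = new ConcurrentDictionary<string, IOperationHolder<RequestTelemetry>>();

    public string StartRequest(string name)
    {
        var operation = _client.StartOperation<RequestTelemetry>(name);
        RequestOperations[operation.Telemetry.Id] = operation;
        return operation.Telemetry.Id;
    }

    public void StopRequest(string requestId)
    {
        if (requestId != null && RequestOperations.TryRemove(requestId, out var operation))
        {
            // Calculates the duration and sends the request telemetry
            _client.StopOperation(operation);
        }
    }
}
```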

Database Calls

If you just dropped the AI client into your app and let it run, you’d notice a number of Events called SQL showing up in the AI portal with nothing else except a processing time. This isn’t terribly helpful because you have no idea what query it was or what parameters it used. Good luck optimizing that! Even the ASP.NET Core version of this is a little rough because it requires a secondary program installed on your server to monitor the IIS process and extract the command text and parameters. The first thing we want to do is throw these right out; they just add clutter and waste bandwidth. A custom ITelemetryProcessor can be used to modify or remove telemetry before it’s sent to the AI portal. Here is a simple one I’m using to remove these SQL events:

Processors that just return without calling Next.Process(item) will discard the telemetry item
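The processor from the original gist isn’t reproduced here, but per the caption it likely resembles this sketch. The class name and the exact filter condition are assumptions; auto-collected SQL calls typically surface as dependency items with a type of “SQL”:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Drops the auto-collected "SQL" items before they ever leave the app.
public class SqlEventFilter : ITelemetryProcessor
{
    private ITelemetryProcessor Next { get; }

    public SqlEventFilter(ITelemetryProcessor next)
    {
        Next = next;
    }

    public void Process(ITelemetry item)
    {
        // Returning without calling Next.Process(item) discards the item.
        if (item is DependencyTelemetry dependency && dependency.Type == "SQL")
            return;

        Next.Process(item);
    }
}
```

Processors can be registered either in ApplicationInsights.config or in code via the configuration’s TelemetryProcessorChainBuilder.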

Now comes the fun part. All of our database queries ultimately end up at a single class that handles the SqlConnections and Dapper. We’ll wrap the database call with an object that we create in a using block so that we know the query is completed when the object is disposed. The method to actually format the parameters object isn’t included but you can implement that however you want.
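The wrapper object might be sketched like this; the class name, the `IAppInsightsClient` interface it calls into, and `FormatParameters` are all illustrative, not the article’s actual code:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// Created in a using block around the Dapper call; disposal marks the query
// complete and sends the event with everything needed to re-run it in SSMS.
public sealed class QueryTelemetryScope : IDisposable
{
    private readonly IAppInsightsClient _ai; // hypothetical TelemetryClient wrapper
    private readonly string _commandText;
    private readonly string _parameters;
    private readonly Stopwatch _timer = Stopwatch.StartNew();

    public QueryTelemetryScope(IAppInsightsClient ai, string commandText, object parameters)
    {
        _ai = ai;
        _commandText = commandText;
        _parameters = FormatParameters(parameters);
    }

    public void Dispose()
    {
        _timer.Stop();
        _ai.RecordEvent("SQL Query", new Dictionary<string, string>
        {
            ["CommandText"] = _commandText,
            ["Parameters"] = _parameters,
            ["DurationMs"] = _timer.ElapsedMilliseconds.ToString()
        });
    }

    // Implement however you want; this stub just ToStrings the object.
    private static string FormatParameters(object parameters) =>
        parameters?.ToString() ?? string.Empty;
}

// Usage at the data-access call site might look like:
// using (new QueryTelemetryScope(_ai, sql, args))
// {
//     return connection.Query<T>(sql, args);
// }
```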

With this class and your repository call wrapped in it, you’re now logging every query along with all the information you need to drop right into SSMS and test it or view execution plans.

Note: It’s important to point out here that the parameters are viewable by anyone with access to your AI portal. If there’s anything sensitive in the query text or parameters you’ll need to think of a way to exclude or otherwise censor it.

Correlation

Our AI portal has a list of Requests and Events now, but we don’t know which Events were spawned by which Requests. We need a way to tell the Events to tie themselves to a specific Request. You might have noticed that we returned telemetry.Id from our AI client’s StartRequest method. The BeforeHook method that gets that Id puts it into a special class that stores it for the life of the request. Our AI client’s RecordEvent gets that RequestId and uses it as its ParentId to establish the relationship within the AI portal.

This is possible because the id is stored in an AsyncLocal<T>. All of our endpoints are called with async handlers, which means each time BeforeHook sets the request id, it does so in the context of only that request. When our AI client asks for the request id, it’s still within that same original async context for the request and therefore gets the correct one. When you view a request in the AI portal, you should be able to click “View All Telemetry” and see all of the children associated with it, like in the screenshot below.
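A sketch of the store and of how RecordEvent parents the event to the in-flight request; names are mine, but EventTelemetry’s Context.Operation.ParentId is the real AI correlation mechanism:

```csharp
using System.Collections.Generic;
using System.Threading;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

// Hypothetical store: AsyncLocal flows with the async execution context,
// so every concurrent request sees only the id its own BeforeHook set.
public static class RequestIdStore
{
    private static readonly AsyncLocal<string> RequestId = new AsyncLocal<string>();

    public static string Current
    {
        get => RequestId.Value;
        set => RequestId.Value = value;
    }
}

public partial class AppInsightsClient
{
    public void RecordEvent(string name, IDictionary<string, string> properties = null)
    {
        var evt = new EventTelemetry(name);

        // Parent the event to the current request so the portal groups them
        evt.Context.Operation.ParentId = RequestIdStore.Current;

        if (properties != null)
        {
            foreach (var kvp in properties)
                evt.Properties[kvp.Key] = kvp.Value;
        }

        _client.TrackEvent(evt);
    }
}
```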

A sample Request and the SQL query it used

Errors

We’re using our ErrorHook to catch anything that bubbles up and would cause a 500 response. We send the exception to our standard logging library, tell our AI client to send exception telemetry to the AI portal, and sanitize the response that is returned to the user. NancyFX ships with a slightly strange default error response, and sometimes the full stack trace, so you’ll want to substitute it with almost anything else.
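The hook body might be sketched like this; `_logger` stands in for whatever logging library you use, and the response body shown is just one sanitized placeholder:

```csharp
using System;
using System.Text;
using Nancy;

public partial class SampleModule
{
    // Wired up via OnError += ErrorHook in the base class
    private dynamic ErrorHook(NancyContext ctx, Exception ex)
    {
        _logger.Error(ex);          // our standard logging library (illustrative)
        _ai.RecordException(ex);    // TelemetryClient.TrackException under the hood

        // Replace Nancy's default error body with a generic, sanitized response
        // so no stack trace ever reaches the user.
        return new Response
        {
            StatusCode = HttpStatusCode.InternalServerError,
            ContentType = "application/json",
            Contents = stream =>
            {
                var body = Encoding.UTF8.GetBytes("{\"error\":\"An unexpected error occurred.\"}");
                stream.Write(body, 0, body.Length);
            }
        };
    }
}
```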

Sending the exception to both our normal logger and the AI portal might seem redundant but allows us to see our traffic and errors in one place which will come in handy when we pipe all this data to a dashboard like Grafana.

Final Results

Once you get everything up and running, you’ll discover you have a ton of raw data in the AI portal that is great for analyzing individual requests but makes it hard to see the 30,000-foot view. The other half of this coin is creating a dashboard you can use to visualize the overall trends of your app. My colleague Alex Johnston has some excellent Grafana starting points in his article Exception Handling and Telemetry with PostSharp, Application Insights, and Grafana.

