Consume Cosmos DB (or other PaaS services) from your ASP.NET Core application in Azure Kubernetes Service

Marco De Sanctis
Dec 3, 2018 · 6 min read

In a previous article, I briefly introduced a possible approach to a build and release pipeline in Azure DevOps that ultimately deploys a system to Azure Kubernetes Service.

However, solutions are seldom confined to the cluster itself: you might need databases, caches, storage, and although Kubernetes can potentially run this type of software as well, it’s best to handle stateful services differently.

If you are in Azure, for example, you might want to consume one of the various PaaS options, such as Azure SQL Database, Cosmos DB, Azure Database for MySQL, etc.

This article shows a simple approach to integrating Cosmos DB and Azure Kubernetes Service in your release pipeline in Azure DevOps. However, as you will see, the code is almost entirely reusable for any other kind of database or external system we want to connect to.

The pipeline at a glance

The ultimate goal is a fully automated pipeline that provisions Cosmos DB and configures its connection string in Kubernetes without my ever having to see it. We can use the pipeline I’ve described here as a starting point, augmenting it with a few additional steps.

Release pipeline in Azure DevOps

The big picture we’ve implemented with these tasks unfolds pretty much as follows:

  1. In the first couple of steps, I extract the PowerShell scripts and ARM template I’m going to use from the artifacts, and then execute them
  2. As usual, I replace some tokens in the Kubernetes YAML files — more on this later
  3. The fourth step creates a Secret in Kubernetes, where I’m going to store the connection string to the database
  4. The last step creates the deployments and services of our system, making sure we reference the secret in our pod configurations.

Let’s go through each of these steps in more detail.

Provisioning Cosmos DB

The creation of Cosmos DB in Azure happens through an ARM template. The content of the JSON file is pretty straightforward; here’s an excerpt:

ARM Template for Cosmos DB

In this case I’ve opted for the MongoDB API. The key point to highlight is that I’m calculating and returning the connection string in the outputs section.
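
The full template is available in the solution linked at the end of the article; a minimal sketch of the relevant parts, with illustrative parameter names, could look something like this:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "accountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.DocumentDB/databaseAccounts",
      "apiVersion": "2015-04-08",
      "name": "[parameters('accountName')]",
      "location": "[resourceGroup().location]",
      "kind": "MongoDB",
      "properties": {
        "databaseAccountOfferType": "Standard",
        "locations": [
          { "locationName": "[resourceGroup().location]", "failoverPriority": 0 }
        ]
      }
    }
  ],
  "outputs": {
    "cosmosDbConnectionString": {
      "type": "string",
      "value": "[listConnectionStrings(resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('accountName')), '2015-04-08').connectionStrings[0].connectionString]"
    }
  }
}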

This connection string is returned in plain text; however, in order to store it as a secret in Kubernetes, we have to encode it in Base64 first. PowerShell makes it super-simple:

Powershell script

The very last line, with Write-Host, is how we expose an internal variable of the script as an environment variable on the Azure DevOps build and release agent. This will allow us to reference it in the next step.
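
The exact script is in the repo; a minimal sketch of the idea, assuming the plain-text connection string is passed in as a parameter (for example from the output of the ARM deployment), could be:

# Base64-encode the connection string so it can be stored in a Kubernetes Secret
param([string] $connectionString)

$bytes = [System.Text.Encoding]::UTF8.GetBytes($connectionString)
$encoded = [System.Convert]::ToBase64String($bytes)

# Expose the encoded value as the CosmosDbConnectionString pipeline variable,
# so that the following tasks (e.g. the token replacement) can reference it
Write-Host "##vso[task.setvariable variable=CosmosDbConnectionString]$encoded"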

Creating a secret in Kubernetes

A Secret in Kubernetes is the preferred object for storing sensitive data, such as connection strings, OAuth tokens, etc.

Although it doesn’t provide all the security features that come with Azure Key Vault (such as hardware encryption, or integration with Azure Active Directory and Managed Service Identity), it is probably the easiest approach we can take to store our connection string.

Note: One of the biggest advantages of running Kubernetes in Azure is the integration with the whole Azure ecosystem. Azure AD Pod Identity is a project on GitHub that allows you to inject a Managed Service Identity into your Kubernetes pods, which is definitely a more secure option. More on this in an upcoming post :)

As usual, the way we create a secret is through a YAML file:

Secret definition in YAML

The snippet above creates a secret called aksdemocosmos which contains a value called connectionstring. Since we run this after the token-replacement task (step 3 of the pipeline), the #{CosmosDbConnectionString}# token is going to be replaced with the actual connection string to Cosmos DB.
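
Reconstructed in full, the secret definition looks roughly like this (the aksdemo namespace name is just an illustration):

apiVersion: v1
kind: Secret
metadata:
  name: aksdemocosmos
  namespace: aksdemo
type: Opaque
data:
  connectionstring: #{CosmosDbConnectionString}#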

It’s also important to notice that we create this secret within the same namespace where we are going to deploy the pods (see here). This is crucial, as Kubernetes doesn’t allow us to reference secrets belonging to a different namespace than the pod’s.

Once ready, we can apply this YAML file through a Kubernetes task in Azure DevOps:

Kubernetes task to create the Secret
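
Under the hood, the task is essentially the equivalent of running a command like the following (the file name and namespace are illustrative):

kubectl apply -f cosmosdb-secret.yaml --namespace aksdemo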

Referencing the secret from our pods definition

The last step towards a working solution is passing the secret to our pod and, ultimately, reading its value from our application.

One of the easiest ways to achieve this is exposing it as an environment variable, which plays incredibly nicely with ASP.NET Core. As you might know, the ASP.NET Core configuration manager automatically overrides the content of appSettings.json with environment variables, based on a naming convention. All you have to do is create a variable whose name is the JSON path of the property you want to override, using the double underscore “__” as a separator.

For example, say that our appSettings.json looked like the following file:

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "ConnectionStrings": {
    "mongo": "mongodb://localhost"
  }
}

In order to override the connection string, all we have to do is create an environment variable named ConnectionStrings__mongo. This can be mapped directly to the aksdemocosmos secret we’ve previously created in the YAML file:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: desdemoregistry.azurecr.io/backend:...
        ports:
        - containerPort: 80
        env:
        - name: ConnectionStrings__mongo
          valueFrom:
            secretKeyRef:
              key: connectionstring
              name: aksdemocosmos

If you want to verify that Kubernetes has correctly applied the secret value to the pod, you can inspect the pod metadata by running

kubectl describe pod {yourpodname} -n {namespace}

which will contain a line like the following:

The secret is correctly referenced

Alternatively, you can even jump on the Kubernetes dashboard and see the value of the secret the pod is consuming:

We can see the actual value from the dashboard
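
On the application side there is nothing else to do: thanks to the environment variable override, the code keeps reading the value through the standard configuration API. A minimal sketch, assuming the usual Startup class with an injected IConfiguration property (the MongoClient registration is purely illustrative):

// Startup.cs — ConnectionStrings__mongo, injected by Kubernetes from the secret,
// transparently overrides the value defined in appSettings.json
public void ConfigureServices(IServiceCollection services)
{
    var connectionString = Configuration.GetConnectionString("mongo");

    // e.g. register a MongoDB client (MongoDB.Driver) using that connection string
    services.AddSingleton<IMongoClient>(new MongoClient(connectionString));
}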

Conclusions and next steps

In this article we’ve explored one possible approach to seamlessly integrate Azure Kubernetes Service with Cosmos DB, and automate all of this in Azure DevOps.

In our example we’ve augmented the release pipeline we designed in this previous article by automatically provisioning an instance of Cosmos DB, capturing its connection string and storing it in a Kubernetes secret.

This allows us to consume it from within an ASP.NET Core application by simply binding it to an environment variable, without ever touching the actual value of the connection string itself.

Go and grab the full solution here :)

In a future article, we’ll see how to use a more advanced secret store with Azure Kubernetes Service, such as Azure Key Vault.

Marco De Sanctis is a solution architect and technology lover.

He’s a freelance IT consultant with 15 years of experience, and he has been awarded Microsoft Most Valuable Professional for the last 8 years. He’s also a book author, trainer, mentor and regular speaker at tech conferences.

His skills range from a very strong tech background in C#, ASP.NET, the whole Microsoft stack and Microsoft Azure cloud infrastructure, to solution architecture, project and business management. His fields of interest currently include Docker, Kubernetes, and highly scalable microservices architectures.

https://uk.linkedin.com/in/desanctismarco
