Running Azure Functions within Kubernetes Cluster

Paul Gerard
5 min read · Jan 24, 2020


A small amount of searching on the internet quickly lays out the benefits of serverless cloud technologies. We were keen to take advantage of them, but first needed to work out how they would fit within our existing architecture. The microservices in our Azure-based architecture all run within Kubernetes clusters. When we deploy Azure Functions, they can pick up messages from the Azure Service Bus that we use for inter-service communication, but they can’t call the RESTful APIs that reside within the cluster.

We needed to prove that we could pick up a message from a service bus and use the message contents to make a call to a Web API within the cluster.

In order to easily call APIs that are hosted within the cluster, the client needs to be in the cluster. Whilst I accept we could have opened up ports to provide access, that would require more security than we wanted to implement. The proposal was instead to deploy the Azure Function into the cluster within a Docker container.

Create the Azure Function Project

With Visual Studio 2017 being our principal IDE, we get a template project to get us started.

Create a new project using the provided Visual Studio Template

We need to use a Service Bus Queue Trigger, specifying a name for the connection string configuration item and the name of the queue that the messages will be read from.

Selecting the function binding and adding preliminary configuration

Setup the Configuration

The ‘connection string setting name’ that we entered in the last dialog box needs adding to the project’s local.settings.json file. Its value should be the connection string for the Service Bus Namespace in which the queue is located. It is possible to use the default-named configuration item AzureWebJobsServiceBus and omit the connection setting from the Service Bus Trigger, but we have specified the configuration item here to aid clarity.

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "ServiceBusConnectionString": "[connection string]"
  }
}

With the configuration in place, the function can be run locally in the debugger to pick up messages off the queue. An event is logged to report when a message has been processed; once the function is running in the cluster, this can be viewed on the Kubernetes Dashboard.

Submitting Messages to the Service Bus

The Service Bus Explorer Tool is useful for injecting messages to trigger the function. Having connected to the Service Bus Namespace using the connection string used to set ServiceBusConnectionString, simply right-click on the Queue that the function is bound to (specified in the dialog box during the project creation step) and select Send Messages. A dialog is presented in which the message contents can be defined. When complete, Send submits the message to the queue. For the purposes of our experiment we are including a Shared Access Signature (SAS) Token and entity identifiers in the submitted message.
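As an alternative to the Service Bus Explorer Tool, a test message can be injected from a small console app using the Microsoft.Azure.ServiceBus client library. The sketch below is illustrative: the queue name and JSON field names are placeholders, not the actual values from our experiment.

```csharp
// Sketch: send a test message to the queue the function is bound to.
// Assumes the Microsoft.Azure.ServiceBus NuGet package; the queue name
// ("myqueue") and the JSON fields are illustrative placeholders.
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

class SendTestMessage
{
    static async Task Main()
    {
        var connectionString = Environment.GetEnvironmentVariable("ServiceBusConnectionString");
        var client = new QueueClient(connectionString, "myqueue");

        // Matching the experiment, the payload carries a SAS token and entity identifiers.
        var json = "{ \"sasToken\": \"[SAS token]\", \"fileId\": \"[entity id]\" }";
        await client.SendAsync(new Message(Encoding.UTF8.GetBytes(json)));
        await client.CloseAsync();
    }
}
```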

Setup a message to be submitted to the Service Bus Queue

Deploying the Docker Container

The Azure Functions Core Tools provide a useful toolkit to support the development of Azure Functions. We use one of the commands now (from the project folder) to create the Dockerfile for our project. We pass --docker-only because we’ve already created the project.

func init --docker-only --csharp
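For a .NET Functions v2 project, the generated Dockerfile looked roughly like the following at the time of writing; the exact base image tags vary with the Core Tools version, so treat this as an approximation rather than the canonical output.

```dockerfile
# Approximate shape of the Dockerfile generated by `func init --docker-only`
# for a .NET Azure Functions v2 project; exact image tags vary by version.
FROM microsoft/dotnet:2.2-sdk AS installer-env
COPY . /src/dotnet-function-app
RUN cd /src/dotnet-function-app && \
    mkdir -p /home/site/wwwroot && \
    dotnet publish *.csproj --output /home/site/wwwroot

FROM mcr.microsoft.com/azure-functions/dotnet:2.0
ENV AzureWebJobsScriptRoot=/home/site/wwwroot
COPY --from=installer-env ["/home/site/wwwroot", "/home/site/wwwroot"]
```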

With the Docker file created, we are in a position to deploy the function to our Kubernetes cluster with another Azure Function Core Tool.

func kubernetes deploy --name pg-docker-azure-function --namespace default --registry [your container registry hostname]/docker-azure-function --csharp

This command reports:

Running 'docker build -t [your container registry hostname]/docker-azure-function/pg-docker-azure-function E:\PGRepos\DockerAzureFunction\ServiceBusFunction'..done
Running 'docker push [your container registry hostname]/docker-azure-function/pg-docker-azure-function'......done
secret/pg-docker-azure-function created
deployment.apps/pg-docker-azure-function created
scaledobject.keda.k8s.io/pg-docker-azure-function created

The Docker container is built, then pushed to the container registry. The connection string that we specified in local.settings.json has been tucked away in a Kubernetes secret, and a KEDA scaled object has been set up.

If we check on the Kubernetes Dashboard we can see the deployment, but no pods have been created. That changes when we submit a message to the queue: a new pod is created in which the function processes the message. With the default settings, if no other traffic is received, the pod is deleted 5 minutes later.

Pods are only created when messages are in the Service Bus queue. Once the queue has been empty for 5 minutes, the pods are deleted.
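This scale-to-zero behaviour comes from the KEDA scaled object that `func kubernetes deploy` created. A hand-written equivalent would look roughly like the sketch below; the queue name is illustrative, and `cooldownPeriod` defaults to 300 seconds, which is the 5-minute delay observed above.

```yaml
# Sketch of a KEDA (v1.x) scaled object equivalent to the one the deploy
# command generated; the queue name is an illustrative placeholder.
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: pg-docker-azure-function
spec:
  scaleTargetRef:
    deploymentName: pg-docker-azure-function
  minReplicaCount: 0    # no pods while the queue is empty
  cooldownPeriod: 300   # seconds to wait after the last message before scaling to zero
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: myqueue                      # illustrative queue name
        connection: ServiceBusConnectionString  # env var resolved from the Kubernetes secret
```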

Calling APIs within the Cluster

To have access to the message properties of the Service Bus Message being received by the bound function, we modify the default template function signature.

Refined function signature giving access to the full Service Bus Message

Leaving the ServiceBusTrigger definition unchanged, we substitute the default string argument for a Microsoft.Azure.ServiceBus.Message.

With this we can extract the contents from the submitted message.

Extracting message contents
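In case the screenshots are hard to read, here is a hedged reconstruction of what the modified function could look like; the function name, queue name, and log text are illustrative, not copied from our project.

```csharp
// Sketch of the refined function: the default string parameter is replaced
// with Microsoft.Azure.ServiceBus.Message so that the message body and
// properties are both available. Names here are illustrative placeholders.
using System.Text;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ServiceBusFunction
{
    [FunctionName("ServiceBusFunction")]
    public static void Run(
        [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnectionString")]
        Message message,
        ILogger log)
    {
        // Extract the contents of the submitted message.
        var body = Encoding.UTF8.GetString(message.Body);
        log.LogInformation($"Received message: {body}");
    }
}
```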

Then submit a request to the Glasswall Engine API.

Filetype query to the Glasswall Engine API

The SAS Token provided in the Service Bus Message being submitted gives access to a file held in Azure BLOB Storage. Once the request is submitted to the Glasswall Engine API, the file content is read from storage and processed. In this case to identify the file type.
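The call itself is an ordinary HTTP request; inside the cluster, the API can be reached via its Kubernetes service DNS name. The sketch below is hypothetical throughout: the host, route, and payload fields are placeholders and do not represent the actual Glasswall Engine API contract.

```csharp
// Sketch: call an in-cluster API using its Kubernetes service DNS name.
// The host, route, and JSON fields below are hypothetical placeholders,
// not the actual Glasswall Engine API contract.
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class EngineClient
{
    private static readonly HttpClient Http = new HttpClient();

    public static async Task<string> QueryFileTypeAsync(string sasToken, string fileId)
    {
        var payload = $"{{ \"sasToken\": \"{sasToken}\", \"fileId\": \"{fileId}\" }}";
        var response = await Http.PostAsync(
            "http://glasswall-engine.default.svc.cluster.local/api/filetype", // hypothetical URL
            new StringContent(payload, Encoding.UTF8, "application/json"));
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```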

From the Kubernetes logs we can see that the call has been successfully made to the Glasswall Engine API and the file type identified as a JPEG.

Reported events showing the message content and the response from the Glasswall Engine API

Conclusions

From this experiment we have proven that:

  • Azure Functions can be generated quickly by developers, with very little infrastructure required.
  • We can extract the message contents and make use of them.
  • Once deployed within the cluster in a container, the Azure Function has access to all the APIs within that cluster.
  • When there is no traffic to handle, the Azure Function’s pod is deleted.


Paul Gerard

Cloud Architect for Glasswall Solutions with 25 years of software development experience. Sharing the tech of this ever-changing domain.