Serverless with OpenFaas, Kubernetes, and Python

Suman Das
Mar 4 · 12 min read
Serverless with OpenFaas

In this tutorial, we will learn how to deploy a Flask microservice as a serverless function on OpenFaaS. In one of my previous tutorials, we saw how to create a Flask microservice. We will use that microservice as a reference and deploy it on OpenFaaS as a function.

What is Serverless?

Serverless computing lets us run application code without provisioning or managing servers ourselves: the platform takes care of scheduling, scaling, and per-execution billing. Functions as a Service (FaaS) is the most common serverless model, and OpenFaaS brings this model to any Kubernetes cluster by packaging functions as container images.

Docker and Kubernetes

Kubernetes is a portable, extensible, and open-source platform that facilitates the automation of the deployment, scaling, and operations of application containers (Docker containers, in this case) across clusters of hosts.

For further details on Kubernetes please check:


For further details on OpenFaas please check:

We will use the following steps to deploy Flask microservice as a Serverless Function in OpenFaas:

  1. Setting up Kubernetes and OpenFaas
  2. Dockerizing Flask Application
  3. Creating a Custom Function
  4. Deploying the Function
  5. Testing the Function


Prerequisites:

  • Docker Hub Account: An account at Docker Hub for storing Docker images that we will create during this tutorial. Refer to the Docker Hub page for details about creating a new account.
  • kubectl: Refer to Install and Set Up kubectl page for details about installing kubectl.

1. Setting up Kubernetes and OpenFaas

Helm is a Kubernetes package and operations manager. With Helm, users can publish their own set of configurations for their application via Chart, which can then be discovered, installed, upgraded, and managed with Helm or third-party automation tools.

arkade is an open-source CLI written in Go that helps to easily manage apps. arkade internally uses helm charts and kubectl to install applications to a Kubernetes cluster. arkade exposes strongly-typed flags for the various popular options of helm charts, which can be discovered through arkade install --help or arkade install APP --help.

Here, we will use arkade to install OpenFaaS, as it makes installing apps very easy and quick.

1.1 Install arkade

# For MacOS / Linux: 
curl -SLsf | sudo sh
# For Windows (using Git Bash)
curl -SLsf | sh

1.2 Install OpenFaas

arkade install openfaas --basic-auth-password password123 --set=faasIdler.dryRun=false

The arkade install command installs OpenFaaS using its official helm chart. Here, we have installed OpenFaaS with our own password and set faasIdler.dryRun=false, so that the faas-idler actually scales idle functions down instead of only logging what it would do. This setting is required for scale-to-zero autoscaling, which we will come to later on.

To check all the available options for installation of OpenFaas we can use the following command.

arkade install openfaas --help

To verify if the installation is successful we can execute the following command.

kubectl get deploy --namespace openfaas

Once the installation is complete, the command should list the OpenFaaS core deployments.


To check the status of all core containers in OpenFaas we can execute the below command.

kubectl rollout status -n openfaas deploy/gateway

The following example output shows that the gateway deployment has been successfully rolled out.

deployment "gateway" successfully rolled out

We will use the kubectl port-forward command to forward all requests made to http://localhost:8080 to the pod running the gateway service.

kubectl port-forward -n openfaas svc/gateway 8080:8080

The connection to the gateway service from our local machine will remain open for as long as the process is running. If it disconnects, we can run the command again. Alternatively, we can run it in the background by appending & to the command.

1.3 Install faas-cli

# MacOS and Linux users
# If you run the script as a normal non-root user then the script
# will download the faas-cli binary to the current folder
$ curl -sL | sudo sh
# Windows users with (Git Bash)
$ curl -sL | sh

OpenFaaS is now ready. We can log in to OpenFaaS either using faas-cli or the UI as follows:

faas-cli login --username admin --password password123

Here, the default username is admin and the password is the one that we specified earlier during OpenFaas installation in step 1.2 Install OpenFaas.

To open the UI, navigate to http://localhost:8080 in a web browser.

2. Dockerizing Flask Application

2.1 Prepare the Application

git clone

You can check the Sample-Flask-Application repository for details on how the Flask application works, its folder structure, and files. We will package this Flask application into a Docker image.

2.2 Create a Dockerfile

Open the project folder and add a Dockerfile with the following contents:

FROM python:3.7-slim-buster

WORKDIR /home/app

COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt

COPY . .

CMD python

Each line in the preceding file is a command that is executed in a linear top-down approach.

  • FROM specifies the base container image over which the new image for our application container will be built. Here, we have taken the base image as python:3.7-slim-buster, which is an official Python image in slim variant.
  • WORKDIR indicates the default directory where the application will be installed. I have set it to /home/app. Any commands that are run after this would be executed from inside this folder.
  • COPY simply copies the files specified on the local machine to the container filesystem. I have copied requirements.txt to /home/app.
  • This is followed by RUN, which executes the command provided. Here, we used pip command to install all the dependencies from requirements.txt.
  • Then, we simply copied all the files from our current local folder, which is essentially our application root folder, to /home/app.
  • Finally, we used CMD to specify the command that runs the application with the python interpreter when the container starts.

We need to add the host parameter in the application entry file, which was downloaded while cloning the project earlier. This allows the application to be accessed from outside the Docker container. The change required is highlighted as follows:

if __name__ == '__main__':
    ma.init_app(app)
    app.run(debug=True, host='0.0.0.0')

2.3 Create Docker Container Image

$ docker build -t sumand/python-sample-flask .

Here, we asked Docker to build an image using the Dockerfile at the same location. The -t argument sets the name/tag for the image that would be built. The final argument is a dot (.), which indicates that everything in the current folder needs to be packaged in the build.

We can check the created image using the following command:

$ docker images

2.4 Push the Image to Docker Hub

$ docker push sumand/python-sample-flask:latest

Now that the Docker image is created and pushed, we can use it to run the Flask application from anywhere we want.

3. Creating a Custom Function

Since we have a dockerized Flask application, we will use the dockerfile template of OpenFaaS to create our function as follows:

export OPENFAAS_PREFIX=sumand
faas-cli new --lang dockerfile sample-flask-service

OpenFaaS functions are stored as container images. Here, we will use Docker Hub to store our image. Therefore, we set the environment variable OPENFAAS_PREFIX to our Docker Hub username.

faas-cli new creates a new function from the dockerfile template in the current directory. Here, we have created a function called sample-flask-service using the OpenFaaS dockerfile template.

After executing the above commands we will find that the following files were created:

  • sample-flask-service.yml
  • sample-flask-service/Dockerfile

We have to update the above files according to our requirements.

3.1 Update sample-flask-service.yml


The code lines are explained as follows:

  • Lines 1 to 4 contain information about OpenFaaS and the gateway. We don’t have to change anything as we are using a local OpenFaaS installation; otherwise, we would need to update the gateway URL.
  • Line 6 contains the name of the function, which we created earlier.
  • Line 7 specifies the template, which we used while creating the function.
  • Line 8 specifies the folder (not the file) where our function code is to be found.
  • Line 9 specifies the Docker image name of the function, which will be built with its appropriate prefix.
  • Environment Variables: Lines 10 to 14 contain the environment values, which we have overridden according to our requirement.
    Setting the environment variable RAW_BODY to true is required to set the context.body to the original request body rather than the default behavior of parsing it as JSON.
    Also, we have updated the timeout values, as when these values are set too low they can cause the function to exit prematurely.
  • Autoscaling: Lines 15 to 18 are required to enable autoscaling. By default, OpenFaaS will maintain at least one replica of our function so that it is warm and ready to serve traffic at any time with minimal latency. Since we need our function to be serverless, we enable scale-to-zero via the com.openfaas.scale.zero label, along with faasIdler.dryRun=false, which we set earlier in step 1.2.
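The original stack file embed is not shown here. Based on the line-by-line notes above, sample-flask-service.yml would look roughly like the following sketch; the exact timeout values, image tag, and scale limits are assumptions, not the tutorial's original values:

```yaml
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080

functions:
  sample-flask-service:
    lang: dockerfile
    handler: ./sample-flask-service
    image: sumand/sample-flask-service:latest
    environment:
      RAW_BODY: true          # pass the original request body through unparsed
      read_timeout: "5m5s"    # timeout values are assumptions
      write_timeout: "5m5s"
      exec_timeout: "5m"
    labels:
      com.openfaas.scale.zero: true   # allow the idler to scale this function to 0
      com.openfaas.scale.min: "1"     # scale limits are assumptions
      com.openfaas.scale.max: "5"
```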

3.2 Update Dockerfile


The code lines are explained as follows:

  • Line 1 specifies of-watchdog:0.8.0 as the base image. The of-watchdog implements an HTTP server listening on port 8080 and acts as a reverse proxy for running functions and microservices. It can be used to forward requests to our application.
  • Line 2 specifies the image of our sample flask application, which we created in step 2.
  • From lines 6–7 we are installing watchdog from the base image.
  • From lines 9–19 we are setting up a non-root user along with its access rights.
  • From lines 21–27 we are setting the watchdog as the startup process and upstream_url as our Flask application URL.
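The Dockerfile embed is not shown here. Based on the notes above and the usual of-watchdog pattern, it would look roughly like this sketch; the application port 5000 and the entry file name are assumptions:

```dockerfile
FROM openfaas/of-watchdog:0.8.0 as watchdog
FROM sumand/python-sample-flask:latest

# Install the watchdog binary from the base image
COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog
RUN chmod +x /usr/bin/fwatchdog

# Set up a non-root user with access rights
RUN addgroup --system app && adduser --system --ingroup app app
RUN chown -R app /home/app
USER app

# Run the watchdog as the startup process, proxying to the Flask app
ENV fprocess="python app.py"            # entry file name is an assumption
ENV mode="http"
ENV upstream_url="http://127.0.0.1:5000"  # default Flask port, an assumption
HEALTHCHECK --interval=5s CMD [ -e /tmp/.lock ] || exit 1
CMD ["fwatchdog"]
```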

4. Deploying the Function

faas-cli up -f sample-flask-service.yml

The above command internally performs the following actions:

  • creates a local container image of the function
  • pushes the image to the remote registry, which in our case is the docker hub
  • using the OpenFaas REST API, creates a deployment inside Kubernetes Cluster and a new Pod to serve the traffic
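These steps can also be run one at a time with the individual faas-cli subcommands, for which faas-cli up is shorthand:

```sh
faas-cli build -f sample-flask-service.yml   # build the function image locally
faas-cli push -f sample-flask-service.yml    # push the image to Docker Hub
faas-cli deploy -f sample-flask-service.yml  # deploy via the OpenFaaS REST API
```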

Now, our Flask application has been deployed as a function to OpenFaaS.

5. Testing the Function

We can easily deploy Grafana as a pod in Kubernetes Cluster using the below command.

kubectl -n openfaas run --image=stefanprodan/faas-grafana:4.6.3 \
--port=3000 grafana

Next, use port-forward from our local computer to access Grafana without exposing it to the internet.

kubectl port-forward deployment/grafana 3000:3000 -n openfaas

Open the OpenFaaS Grafana Dashboard from http://localhost:3000. Enter the default credentials for Grafana, which are admin/admin. The OpenFaaS Grafana Dashboard would look as follows:

OpenFaas Grafana Dashboard

From the Dashboard, we can see that there is no running instance of our function. This is because we enabled autoscaling along with scale-to-zero in step 3. This means that if there are no requests to process for a certain amount of time, all replicas of the function are stopped. This helps us to save costs.

Now, let us send a request to our flask application and check the output.

Testing one of the APIs exposed by our Flask application

Once our sample request is executed successfully, check the Grafana Dashboard again.

Grafana Dashboard Showing Autoscaling

From the Dashboard, we can see that the number of replicas has increased from 0 to 1 to serve the request. This replica count will further increase with load, up to the com.openfaas.scale.max limit specified in step 3.1 Update sample-flask-service.yml.

The way autoscaling from zero works in the background is that incoming requests are held until the desired replica count is reached. We can verify that from the gateway logs as follows:

kubectl logs -n openfaas deploy/gateway -c gateway -f
Gateway Logs for replica count

Since we have enabled scale-to-zero for our function, it will scale down automatically when not in use. We can verify the same in the Dashboard.

Scaling down to 0

Now, all the APIs of our Sample Flask application are available in OpenFaas and can be executed in a serverless manner.

Flask API Swagger

The Swagger page shown above is from our sample Flask application, for reference.

To call any of the above APIs in OpenFaaS, prefix the endpoint with the function route on the gateway. For example, to call the /stores API use http://localhost:8080/function/sample-flask-service/stores.
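As a quick illustration of this routing scheme, the following hypothetical helper (not part of the tutorial code) builds an OpenFaaS function URL from a gateway address, function name, and endpoint:

```python
def function_url(gateway: str, function_name: str, endpoint: str) -> str:
    """Build the OpenFaaS gateway URL for a function endpoint.

    OpenFaaS exposes each function under /function/<name> on the gateway,
    and any path after the function name is passed through to the service.
    """
    return f"{gateway.rstrip('/')}/function/{function_name}{endpoint}"

# Example: the /stores API of our sample function
print(function_url("http://localhost:8080", "sample-flask-service", "/stores"))
# → http://localhost:8080/function/sample-flask-service/stores
```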

If you would like to refer to the full code, do check:


