In this tutorial, we will learn how to deploy a Flask microservice as a serverless function in OpenFaaS. In one of my previous tutorials, we have already seen how to create a Flask microservice. We will use that microservice as a reference and deploy it on OpenFaaS as a function.
What is Serverless?
Serverless is an architectural pattern where the business logic is written as functions that can be executed in a stateless manner. Serverless does not mean executing code without servers; it merely means we don't have to provision hardware and infrastructure while writing our code. The application still runs on servers, which are managed by third-party services.
Function-as-a-Service (FaaS) is a serverless model that provides the ability to develop, run, and manage application functionalities without the complex infrastructure mostly associated with building and deploying microservices applications. Building an application by following this model is one way of achieving a serverless architecture, and is typically used when building microservices applications.
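The core idea can be illustrated with a tiny sketch (illustrative only, not the OpenFaaS API): a stateless handler that maps a request payload to a response, with no server or infrastructure management in the user's code.

```python
# A minimal sketch of the FaaS model: the handler holds no state between
# calls, so the platform can start, stop, and scale instances freely.
def handle(event: dict) -> dict:
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

print(handle({"name": "OpenFaaS"}))
# {'statusCode': 200, 'body': 'Hello, OpenFaaS!'}
```

Because the handler depends only on its input, the platform can run zero, one, or many copies of it without coordination.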
Docker and Kubernetes
Docker is a platform that uses OS-level virtualization to deliver software in packages called containers. Each of these containers bundles its own software, libraries, and configuration files. Although they are isolated from one another, they can communicate with each other through well-defined channels. Containers are a good way of developing and deploying microservices.
Kubernetes is a portable, extensible, and open-source platform that facilitates the automation of the deployment, scaling, and operations of application containers (Docker containers, in this case) across clusters of hosts.
For further details on Kubernetes please check: https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/
OpenFaaS is an open-source framework that enables the implementation of the serverless architecture on Kubernetes, using Docker containers for storing and running functions. OpenFaaS makes it easy for developers to deploy event-driven functions and microservices to Kubernetes without repetitive, boilerplate coding. It allows you to package your code or an existing binary in a Docker image to get a highly scalable endpoint with auto-scaling and metrics.
For further details on OpenFaaS please check: https://docs.openfaas.com/
We will use the following steps to deploy the Flask microservice as a serverless function in OpenFaaS:
- Setting up Kubernetes and OpenFaaS
- Dockerizing Flask Application
- Creating a Custom Function
- Deploying the Function
- Testing the Function
Prerequisites
- Kubernetes cluster: We will need a running Kubernetes cluster. If a Kubernetes cluster is not available, follow the instructions to Set Up a Kubernetes Cluster.
- Docker Hub account: An account at Docker Hub for storing the Docker images that we will create during this tutorial. Refer to the Docker Hub page for details about creating a new account.
- kubectl: Refer to the Install and Set Up kubectl page for details about installing kubectl.
1. Setting up Kubernetes and OpenFaaS
OpenFaaS works on any local or remote Kubernetes cluster, and there are many options for deploying one. Here, we will use the Docker Desktop standalone Kubernetes cluster to deploy OpenFaaS. We can deploy OpenFaaS using either Helm or arkade.
Helm is a Kubernetes package and operations manager. With Helm, users can publish their own set of configurations for their application via a Chart, which can then be discovered, installed, upgraded, and managed with Helm or third-party automation tools.
arkade is an open-source CLI written in Go that helps to easily manage apps.
arkade internally uses helm charts and kubectl to install applications to a Kubernetes cluster. arkade exposes strongly-typed flags for the various popular options of helm charts, which can be discovered through arkade install --help or arkade install APP --help.
Here, we will use arkade to install OpenFaaS, as it is very easy and quick to install apps with it.
1.1 Install arkade
Use the following command to install arkade.
# For MacOS / Linux:
curl -SLsf https://dl.get-arkade.dev/ | sudo sh

# For Windows (using Git Bash):
curl -SLsf https://dl.get-arkade.dev/ | sh
1.2 Install OpenFaaS
Since we are using a local Kubernetes cluster, we will use the following arkade command to install OpenFaaS.
arkade install openfaas --basic-auth-password password123 --set=faasIdler.dryRun=false
The arkade install command installs OpenFaaS using its official helm chart. Here, we have installed OpenFaaS with our own password and set faasIdler.dryRun to false, which is required for autoscaling; we will come back to this later on.
To check all the available options for installing OpenFaaS, we can use the following command.
arkade install openfaas --help
To verify if the installation is successful we can execute the following command.
kubectl get deploy --namespace openfaas
Once the installation is completed, the output should list the OpenFaaS deployments in the openfaas namespace.
To check the status of all core containers in OpenFaaS, we can execute the below command.
kubectl rollout status -n openfaas deploy/gateway
The following example output shows that the gateway deployment has been successfully rolled out.
deployment "gateway" successfully rolled out
We will use the kubectl port-forward command to forward all requests made to http://localhost:8080 to the pod running the gateway service.
kubectl port-forward -n openfaas svc/gateway 8080:8080
The connection to the gateway service from our local machine will remain open for as long as the process is running. If it disconnects, we can run the command again. Alternatively, we can run it in the background by appending & to the command.
1.3 Install faas-cli
Once OpenFaaS is installed, we must install faas-cli, which is required to deploy and test functions with OpenFaaS. Execute the following command to install it.
# MacOS and Linux users
# If you run the script as a normal non-root user then the script
# will download the faas-cli binary to the current folder
$ curl -sL https://cli.openfaas.com | sudo sh

# Windows users (with Git Bash)
$ curl -sL https://cli.openfaas.com | sh
OpenFaaS is now ready. We can log in to OpenFaaS either using faas-cli or the UI as follows:
faas-cli login --username admin --password password123
Here, the default username is admin and the password is the one we specified earlier during OpenFaaS installation in step 1.2 Install OpenFaaS.
To open the UI, navigate to http://127.0.0.1:8080 in a web browser.
2. Dockerizing Flask Application
In simple terms, dockerizing means using Docker containers to package, deploy, and run applications. We will dockerize the sample Flask application which we created earlier in one of my tutorials.
2.1 Prepare the Application
Clone my sample Flask application project using the following command in any directory.
You can check Sample-Flask-Application for details on how the Flask application works, its folder structure, and its files. We will package this Flask application into a Docker image.
2.2 Create a Dockerfile
A Dockerfile is basically a text file that contains precise instructions on how to build a Docker image for our project.
Open the project folder and add a Dockerfile with the following contents:
FROM python:3.7-slim-buster
WORKDIR /home/app
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD python app.py
Each line in the preceding file is a command that is executed in a linear top-down approach.
- FROM specifies the base container image over which the new image for our application container will be built. Here, we have taken the base image as python:3.7-slim-buster, which is an official Python image in slim variant.
- WORKDIR indicates the default directory where the application will be installed. I have set it to /home/app. Any commands that are run after this would be executed from inside this folder.
- COPY simply copies the files specified on the local machine to the container filesystem. I have copied requirements.txt to /home/app.
- This is followed by RUN, which executes the command provided. Here, we used pip command to install all the dependencies from requirements.txt.
- Then, we simply copied all the files from our current local folder, which is essentially our application root folder, to /home/app.
- Finally, we used CMD to run the application by running python app.py.
We need to add the host parameter in the app.py file, which was downloaded while cloning the project earlier. This allows the application to be accessed from outside the Docker container. The change required in app.py is as follows:
if __name__ == '__main__':
    app.run(host='0.0.0.0')
2.3 Create Docker Container Image
After creating the Dockerfile, we build a Docker container image using the docker build command. This first reads the Dockerfile, where the instructions are written, and then automatically builds the image, which can then be run.
$ docker build -t sumand/python-sample-flask .
Here, we asked Docker to build an image using the Dockerfile at the same location. The -t argument sets the name/tag for the image that would be built. The final argument is a dot (.), which indicates that everything in the current folder needs to be packaged in the build.
We can check the created image using the following command:
$ docker images
2.4 Push the Image to Docker Hub
Next, we need to push the Docker image to Docker Hub, which will allow us to create, test, store, and distribute container images. Docker images are pushed to Docker Hub through the docker push command as follows:
$ docker push sumand/python-sample-flask:latest
Now that the Docker image is created, we can use it to run the Flask application from anywhere we want.
3. Creating a Custom Function
In the previous steps, we deployed OpenFaaS to a Kubernetes cluster using the arkade CLI. Now, let us create a function using an OpenFaaS template, which we can deploy to OpenFaaS.
Since we have a dockerized Flask application, we will use the dockerfile template of OpenFaaS to create our function as follows:
faas-cli new --lang dockerfile sample-flask-service
OpenFaaS functions are stored as container images. Here, we will use Docker Hub to store our image. Therefore, we set the environment variable OPENFAAS_PREFIX to the Docker Hub username.
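For example, the prefix can be exported before generating the function (assuming the Docker Hub username sumand used earlier in this tutorial):

```shell
# OPENFAAS_PREFIX is read by faas-cli when it generates the image name
# in the function's YAML file; "sumand" is the Docker Hub user from step 2
export OPENFAAS_PREFIX=sumand
```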
faas-cli new enables us to create a new function via the dockerfile template in the current directory.
Here, we have created a function called sample-flask-service using the dockerfile template.
After executing the above commands, we will find that the following files were created: sample-flask-service.yml and a sample-flask-service folder containing a Dockerfile.
We have to update the above files according to our requirements.
3.1 Update sample-flask-service.yml
sample-flask-service.yml contains information on how to build and deploy the function.
The code lines are explained as follows:
- Lines 1 to 4 contain information about OpenFaaS and the gateway. We don't have to change anything, as we are using a local OpenFaaS; otherwise, we would need to update the gateway URL.
- Line 6 contains the name of the function, which we created earlier.
- Line 7 specifies the template, which we used while creating the function.
- Line 8 specifies the folder (not the file) where our function code is to be found.
- Line 9 specifies the Docker image name of the function, which will be built with its appropriate prefix.
- Environment Variables: Lines 10 to 14 contain the environment values, which we have overridden according to our requirements.
Setting the corresponding environment variable to true is required so that context.body holds the original request body, rather than the default behavior of parsing it as JSON.
Also, we have updated the timeout values, as when these values are set too low they can cause the function to exit prematurely.
- Autoscaling: Lines 15 to 18 are required to enable autoscaling. By default, OpenFaaS will maintain at least one replica of our function so that it is warm and ready to serve traffic at any time with minimal latency. Since we need our function to be serverless, we need to set faasIdler.dryRun=false, which we did earlier in step 1.2.
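Based on the bullet points above, the file would look roughly as follows. This is a sketch, not the generated file: the raw-body environment variable name (RAW_BODY), the 60s timeouts, and the scale limits of 1–5 replicas are assumptions, so your values will differ.

```yaml
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080

functions:
  sample-flask-service:
    lang: dockerfile
    handler: ./sample-flask-service
    image: sumand/sample-flask-service:latest
    environment:
      RAW_BODY: "true"     # assumed name: pass the request body through unparsed
      read_timeout: 60s    # generous timeouts so the function
      write_timeout: 60s   # does not exit prematurely
      exec_timeout: 60s
    labels:
      com.openfaas.scale.zero: "true"  # allow scaling down to zero replicas
      com.openfaas.scale.min: "1"
      com.openfaas.scale.max: "5"
```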
3.2 Update Dockerfile
Now, update the Dockerfile for our function.
The code lines are explained as follows:
- Line 1 specifies of-watchdog:0.8.0 as the base image. The of-watchdog implements an HTTP server listening on port 8080 and acts as a reverse proxy for running functions and microservices. It can be used to forward requests to our application.
- Line 2 specifies the image of our sample Flask application, which we created in step 2.
- Lines 6–7 install the watchdog from the base image.
- Lines 9–19 set up a non-root user along with access rights.
- Lines 21–27 set the watchdog as the startup process and upstream_url as our Flask application URL.
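Putting those lines together, the function's Dockerfile follows the of-watchdog pattern roughly like this. It is a sketch under a few assumptions: the watchdog base image is openfaas/of-watchdog:0.8.0, the application image is the sumand/python-sample-flask image from step 2, and Flask listens on its default port 5000.

```dockerfile
# Line 1: pull the watchdog binary from its official image
FROM openfaas/of-watchdog:0.8.0 as watchdog
# Line 2: our dockerized Flask application from step 2
FROM sumand/python-sample-flask:latest

# install the watchdog from the base image
COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog
RUN chmod +x /usr/bin/fwatchdog

# set up a non-root user with access rights to the app folder
RUN addgroup --system app && adduser --system --ingroup app app \
    && chown -R app /home/app
USER app

# run the watchdog as the startup process, proxying requests
# to the Flask application on its default port
ENV fprocess="python app.py"
ENV mode="http"
ENV upstream_url="http://127.0.0.1:5000"

EXPOSE 8080
CMD ["fwatchdog"]
```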
4. Deploying the Function
Our custom function is now ready to be deployed. Execute the following command to deploy the function.
faas-cli up -f sample-flask-service.yml
The above command internally performs the following actions:
- creates a local container image of the function
- pushes the image to the remote registry, which in our case is Docker Hub
- using the OpenFaas REST API, creates a deployment inside Kubernetes Cluster and a new Pod to serve the traffic
Now, our Flask application has been deployed as a function to OpenFaaS.
5. Testing the Function
We have deployed our function as a serverless function with autoscaling enabled in OpenFaaS. Before proceeding to test our function, we should set up monitoring to understand how serverless actually works. For monitoring purposes, we can use Grafana.
We can easily deploy Grafana in the Kubernetes cluster using the below command.
kubectl -n openfaas run --image=stefanprodan/faas-grafana:4.6.3 --port=3000 grafana
Next, use port-forward from our local computer to access Grafana without exposing it to the internet.
kubectl port-forward deployment/grafana 3000:3000 -n openfaas
Open the OpenFaaS Grafana dashboard at http://127.0.0.1:3000/. Log in with the default Grafana credentials, admin / admin. The OpenFaaS Grafana dashboard would look as follows:
From the dashboard, we can see that there is no running instance of our function. This is because we have enabled autoscaling along with com.openfaas.scale.zero: true in step 3. This means that if there are no requests to process for a certain amount of time, all the function's replicas are stopped. This helps us save costs.
Now, let us send a request to our Flask application and check the output.
Once our sample request is executed successfully, check the Grafana Dashboard again.
From the dashboard, we can see that the number of replicas has increased from 0 to 1 to serve the request. This replica count will further increase with the load, up to the com.openfaas.scale.max value specified in step 3.1 Update sample-flask-service.yml.
The way autoscaling works in the background is that requests are blocked until the desired count of replicas is reached. We can verify that from the gateway logs as follows:
kubectl logs -n openfaas deploy/gateway -c gateway -f
Since we have enabled com.openfaas.scale.zero: true for our function, it will scale down automatically when not in use. We can verify this in the dashboard.
Now, all the APIs of our sample Flask application are available in OpenFaaS and can be executed in a serverless manner.
The Swagger image shown above is from our sample Flask application as a reference.
To call any of the above APIs in OpenFaaS, just add http://127.0.0.1:8080/function/sample-flask-service/api/ at the beginning, followed by the endpoint. For example, to call the /stores API use http://127.0.0.1:8080/function/sample-flask-service/api/stores.
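The URL mapping can be sketched as a small helper; the gateway address and function name are the ones used in this tutorial.

```python
def openfaas_url(endpoint: str,
                 gateway: str = "http://127.0.0.1:8080",
                 function: str = "sample-flask-service") -> str:
    """Build the OpenFaaS gateway URL for an endpoint of the Flask app."""
    # the gateway proxies /function/<name>/... to the function's container
    return f"{gateway}/function/{function}/api/{endpoint.lstrip('/')}"

print(openfaas_url("/stores"))
# http://127.0.0.1:8080/function/sample-flask-service/api/stores
```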
If you would like to refer to the full code, do check:
Using OpenFaaS templates, we can easily port our existing Flask application to OpenFaaS and execute it in a serverless manner.
References & Useful Readings
- Kubernetes - OpenFaaS: the official guide to deploying OpenFaaS on a Kubernetes cluster
- Helm: the package manager for finding, sharing, and using software built for Kubernetes
- arkade: a portable marketplace for downloading DevOps CLIs and installing helm charts
- OpenFaaS templates and the faas-cli
- Scale to Zero and Back Again with OpenFaaS
- The OpenFaaS watchdog