Qinling, let the journey begin!

Gaëtan Trellu
May 23 · 4 min read

A few days ago I started working on a Qinling implementation in our OpenStack platforms, and this Medium story (which is my first) is about it. But before going further, let’s have a quick overview of what Qinling is and what it does.

From @Neerja Narayanappa’s page

Qinling is an OpenStack project to provide “Function as a Service”. This project aims to provide a platform to support serverless functions (like AWS Lambda). Qinling supports different container orchestration platforms (Kubernetes, Swarm, etc.) and different function package storage backends (local/Swift/S3) natively through a plugin mechanism.


Basically, it allows you to trigger a function only when you need it, so you consume only the CPU and memory time you actually use without configuring any server. Which means, in the end, the bill should be lighter (everybody loves that).
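To make that concrete: a Qinling function is just plain Python. By convention the entry point is a function named main, whose keyword arguments carry the invocation input and whose return value becomes the execution result (the function below is a made-up example, not from this deployment):

```python
# hello.py -- a minimal function body that Qinling could deploy.
# The entry point is `main` by convention; invocation parameters
# arrive as keyword arguments, and the return value is what the
# execution reports back.
def main(name="World", **kwargs):
    return {"message": "Hello, %s!" % name}
```

Packaged as a single file or a zip and uploaded through the API, this code consumes resources only while an execution is running.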

I will not go deeper into what Qinling is; there are many posts about it around the Internet[1].

Our platforms are deployed and maintained with Kolla, an OpenStack project that deploys OpenStack services inside Docker containers, configured by Ansible. The first thing I checked was whether Qinling was integrated into Kolla… alas, no.

When you have to manage production you don’t want to deal with custom stuff that is impossible to maintain or upgrade (the little voice in your head knows what I mean), which is why I started the integration of Qinling into Kolla (the Docker[2] and Ansible[3] parts).

The qinling_api and qinling_engine containers are now up and running, configured to communicate with RabbitMQ, MySQL/Galera, memcached, Keystone and etcd. The final important step is to authenticate qinling-engine against the Kubernetes cluster, and I must admit this part was the most complex to set up; the documentation about it is a bit confusing.
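For context, that wiring boils down to a handful of sections in qinling.conf. The sketch below uses placeholder hosts and passwords, not our actual configuration; the [keystone_authtoken] options come from keystonemiddleware and the oslo libraries Qinling builds on, so check the rendered configuration for the authoritative option names:

```ini
[DEFAULT]
# Messaging (RabbitMQ) -- placeholder credentials
transport_url = rabbit://openstack:password@rabbitmq:5672/

[database]
# MySQL/Galera -- placeholder credentials
connection = mysql+pymysql://qinling:password@mysql/qinling

[keystone_authtoken]
www_authenticate_uri = http://keystone:5000
auth_url = http://keystone:5000
memcached_servers = memcached:11211

[etcd]
host = etcd
port = 2379
```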

Our Kubernetes cluster has been provisioned by OpenStack Magnum, another OpenStack project, which deploys container orchestration engines (COE) such as Docker Swarm, Mesos and Kubernetes.

Basically, the communication between Qinling and Kubernetes is secured by SSL certificates (the same ones you use with kubectl): qinling-engine needs to know the CA, the certificate, the key and the Kubernetes API endpoint.

Magnum provides a CLI that makes it easy to retrieve the certificates; just make sure you have python-magnumclient installed.

# Get Magnum cluster UUID
$ openstack coe cluster list -f value -c uuid -c name
687f7476-5604-4b44-8b09-b7a4f3fdbd64 goldyfruit-k8s-qinling
# Retrieve Kubernetes certificates
$ mkdir -p ~/k8s_configs/goldyfruit-k8s-qinling
$ cd ~/k8s_configs/goldyfruit-k8s-qinling
$ openstack coe cluster config --dir . 687f7476-5604-4b44-8b09-b7a4f3fdbd64 --output-certs
# Get the Kubernetes API address
$ grep server config | awk -F"server:" '{ print $2 }'

Four files should have been generated in the ~/k8s_configs/goldyfruit-k8s-qinling directory:

  • ca.pem — CA — ssl_ca_cert (Qinling option)
  • cert.pem — Certificate — cert_file (Qinling option)
  • key.pem — Key — key_file (Qinling option)
  • config — Kubernetes configuration

Only ca.pem, cert.pem and key.pem will be useful in our case (the config file is only used to get the Kubernetes API address). Per the Qinling documentation, they map to these options:

[kubernetes]
kube_host = https://192.168.1.168:6443
ssl_ca_cert = /etc/qinling/pki/kubernetes/ca.crt
cert_file = /etc/qinling/pki/kubernetes/qinling.crt
key_file = /etc/qinling/pki/kubernetes/qinling.key

At this point, if qinling-engine is restarted you should see a network policy created on the Kubernetes cluster under the qinling namespace (yes, you should see the namespace too).

The network policy mentioned above could block incoming traffic to the pods inside the qinling namespace, which results in a timeout from qinling-engine. A bug has been opened[4] about this issue and it should be solved soon, so right now the “best” thing to do is to remove the policy (keep in mind that every time qinling-engine is restarted, the policy will be re-created).

$ kubectl delete netpol allow-qinling-engine-only -n qinling

Just a quick word about the network policy created by Qinling: its objective is to restrict pod access to a trusted CIDR list (192.168.1.0/24, 10.0.0.53/32, etc.), which prevents connections from unknown sources.
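Conceptually, such a policy looks something like the manifest below. This is a hand-written sketch of the idea, not the exact manifest Qinling generates, and the CIDRs are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-qinling-engine-only
  namespace: qinling
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 192.168.1.0/24   # trusted CIDR (placeholder)
        - ipBlock:
            cidr: 10.0.0.53/32     # trusted host (placeholder)
```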

Before moving on to a different subject: one common issue is forgetting to open the Qinling API port (7070), which prevents the Kubernetes cluster from downloading the function code/package (it’s time to be nice with your dear network friend ^^).
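A quick sanity check is to hit the API port from inside the cluster (any pod with curl will do); qinling-api.example.org below stands in for your actual endpoint:

```shell
# From a pod inside the Kubernetes cluster; a connection refused
# or timeout here usually means port 7070 is not open.
$ curl -sv http://qinling-api.example.org:7070/ -o /dev/null
```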

One of Qinling’s pitfalls is the “lack” of runtimes, which prevents Qinling from being widely adopted (IMHO). The reason there are not that many is security (which I understand).


Actually, in a production environment (especially in a public cloud), it’s recommended that cloud providers provide their own runtime implementation for security reasons. Knowing how the runtime is implemented gives a malicious user the chance to attack the cloud environment.


So far, “only” Python 2.7, Python 3 and Node.js runtimes are available. It’s a good start, but it would be nice to have runtimes for Golang and PHP too (just saying, not asking).
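Registering one of those runtimes is a one-liner once you have an image. The command below is from memory: the exact flags may differ between python-qinlingclient versions, and openstackqinling/python-runtime is the community image name on Docker Hub at the time of writing:

```shell
# Register a Python runtime backed by the community image
# (image name and flags are assumptions; check
# `openstack runtime create --help` for your client version).
$ openstack runtime create --name python3 openstackqinling/python-runtime
```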

The journey has just begun, and I think Qinling has huge potential, which is why I was a bit surprised to see the project not as popular as it should be.

Getting it into Kolla, improving the documentation about the integration with Magnum, Microk8s, etc., and providing more runtimes could help the project get the popularity it deserves.

Thanks to Lingxian Kong and the community for making this project happen!

Written by Gaëtan Trellu

Stories of a Technical Operations Manager @Ormuco_inc