Dispatch + Services

Functions are stateless. Applications are not. Functions as a Service (FaaS) platforms depend on external services to provide state and other capabilities needed to build more interesting applications. This blog post highlights the new service management functionality within Dispatch. This feature is still experimental. Additionally, we will be demonstrating this feature at the Cloud Foundry Summit later this month.

In order to bring services to Dispatch, we are leveraging the Open Service Broker API (OSBAPI). This API defines a contract for listing, provisioning, and binding services such as databases. By working against this specification, we can easily integrate with any service that has an open service broker implementation. This includes just about any service designed to work with Cloud Foundry, which immediately gives us a great service catalog.

This post demonstrates the new services support in Dispatch. More specifically, we will provision and bind an Azure PostgreSQL database from Dispatch, then make that binding available to Dispatch functions. Now your serverless applications have state!

As stated earlier, this support is still experimental, so it's likely that the CLI options and installation steps will change.

Prerequisites

  1. A working Kubernetes environment with Helm initialized.
  2. The Dispatch CLI (v0.1.11 release).
  3. An Azure account.

For this demo, we are installing Dispatch locally via minikube.

Installing

Install Dispatch

Install Dispatch and create some base resources. See the quickstart for more information about configuring and installing Dispatch on Kubernetes.

$ dispatch install -f config.yaml
...
Config file written to: /Users/bjung/.dispatch/config.json
$ cd examples/
$ dispatch create -f seed.yaml
Created BaseImage: nodejs6-base
Created BaseImage: python3-base
Created BaseImage: powershell-base
Created Image: nodejs6
Created Image: python3
Created Image: powershell
Created Function: hello-py
Created Function: http-py
Created Function: hello-js
Created Function: hello-ps1
Created Secret: open-sesame

Install the service catalog

Dispatch leverages the Kubernetes service catalog as a proxy to open service brokers. Installing the service catalog and service brokers is currently a manual step handled outside of Dispatch. This is all likely to change as the service management feature evolves.

# Add the svc-cat helm repository
$ helm repo add svc-cat \
https://svc-catalog-charts.storage.googleapis.com
"svc-cat" has been added to your repositories
$ helm install svc-cat/catalog \
--name catalog --namespace dispatch --wait

Install the Azure Service Broker

You will need an Azure account as well as some setup: the command below expects a service principal, with the subscription, tenant, client ID, and client secret exported as environment variables.

# Add the azure helm repository
$ helm repo add azure \
https://kubernetescharts.blob.core.windows.net/azure
"azure" has been added to your repositories
$ helm install azure/open-service-broker-azure --name osba --namespace dispatch --wait \
--set azure.subscriptionId=$AZURE_SUBSCRIPTION_ID \
--set azure.tenantId=$AZURE_TENANT_ID \
--set azure.clientId=$AZURE_CLIENT_ID \
--set azure.clientSecret=$AZURE_CLIENT_SECRET

Be aware that it can take a while for the pods to become ready.

All pods up and running

Working with Services

At this point, your environment should be ready to go. It takes a little while for Dispatch to sync with the service catalog, so be patient. Also, don't be alarmed if you see more or fewer services and plans; the Azure service broker is under heavy development.

Listing available service classes

In this example we are going to provision and bind a Postgres database which we can use from a function. The real trick is understanding which parameters are required to properly create a service instance. In the case of the Azure services, the documentation is your friend. The OSBAPI includes optional schemas for provisioning and binding, but the Azure service broker does not include these fields.
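For brokers that do implement them, these optional schemas appear on each plan in the broker's catalog response and describe the accepted provisioning parameters as JSON Schema. A hedged sketch of what such a plan entry could look like (field names from the OSBAPI spec; the property names are illustrative, echoing the Azure parameters used below):

```json
{
  "schemas": {
    "service_instance": {
      "create": {
        "parameters": {
          "$schema": "http://json-schema.org/draft-04/schema#",
          "type": "object",
          "properties": {
            "location": { "type": "string" },
            "resourceGroup": { "type": "string" }
          },
          "required": ["location", "resourceGroup"]
        }
      }
    }
  }
}
```

With schemas like this in place, a platform could validate `--params` input before calling the broker; without them, you are back to reading the broker's documentation.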

Create a Postgres service instance

$ dispatch create serviceinstance azure-pg azure-postgresql \
    basic50 --params '
{
    "location": "westus",
    "resourceGroup": "demo",
    "firewallRules": [
        {
            "startIPAddress": "0.0.0.0",
            "endIPAddress": "255.255.255.255",
            "name": "AllowAll"
        }
    ]
}'
Created serviceinstance: azure-pg
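Since `--params` takes raw JSON, the payload can also be assembled in a script. A small sketch (same keys as the command above; the wide-open firewall rule mirrors the demo and should be narrowed in real deployments):

```python
import json

def pg_provision_params(location, resource_group):
    """Build the --params JSON for the azure-postgresql basic50 plan.

    The AllowAll firewall rule opens the server to all IPs, as in the
    demo above; restrict the range for anything beyond a demo.
    """
    return json.dumps({
        "location": location,
        "resourceGroup": resource_group,
        "firewallRules": [{
            "startIPAddress": "0.0.0.0",
            "endIPAddress": "255.255.255.255",
            "name": "AllowAll",
        }],
    })
```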

The above Dispatch command provisions and binds a Postgres database via the OSBAPI. What does this mean? Provisioning means creating a new "Azure Database for PostgreSQL server" resource as well as a new database on that server. There are also ways of reusing existing servers and even existing databases, but that's outside our scope right now. Binding means creating credentials (username/password) for the database and storing those credentials securely. These credentials can then be injected into a function, similar to how secrets are injected.
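Under the hood, these are two OSBAPI calls which the Kubernetes service catalog issues against the broker on Dispatch's behalf. A rough sketch of the request shapes (paths, headers, and body fields from the OSBAPI spec; the generated UUIDs are illustrative):

```python
import uuid

OSB_VERSION_HEADER = {"X-Broker-API-Version": "2.13"}

def provision_request(broker_url, service_id, plan_id, parameters):
    """PUT /v2/service_instances/{instance_id} asks the broker to create
    the service (here: the PostgreSQL server plus a database)."""
    instance_id = str(uuid.uuid4())
    return instance_id, {
        "method": "PUT",
        "url": "%s/v2/service_instances/%s" % (broker_url, instance_id),
        "headers": OSB_VERSION_HEADER,
        "json": {"service_id": service_id, "plan_id": plan_id,
                 "parameters": parameters},
    }

def bind_request(broker_url, instance_id, service_id, plan_id):
    """PUT .../service_bindings/{binding_id} asks the broker for
    credentials, which Dispatch then stores as a secret."""
    binding_id = str(uuid.uuid4())
    return binding_id, {
        "method": "PUT",
        "url": "%s/v2/service_instances/%s/service_bindings/%s"
               % (broker_url, instance_id, binding_id),
        "headers": OSB_VERSION_HEADER,
        "json": {"service_id": service_id, "plan_id": plan_id},
    }
```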

Provisioning takes some time (around 10 minutes). When it completes, you should see the following:

A provisioned and bound postgres database

We can easily verify exactly what happened. If you log into your Azure console and navigate to the “demo” resource group, you will see an “Azure Database for PostgreSQL server” resource with a UUID for a name. Within Dispatch, this UUID corresponds to both the azure-pg serviceinstance’s ID, and the name of the Dispatch secret which stores the binding information.

The secret which stores the binding data

Using the service from within a function

Of course, the whole point of all this is to be able to use services from within functions. We have now deployed a Postgres server and created a database and credentials. Those credentials (the binding) are stored as Dispatch secrets and can be made available to functions with a simple flag. First, let's create an image which contains the Postgres driver dependency:

$ cd examples/python3/postgres
$ dispatch create image python3-pg python3-base \
    --runtime-deps requirements.txt
Created image: python3-pg

Then create the function:

$ dispatch create function python3-pg pg-example postgres.py \
--service azure-pg --schema-in postgres.schema.json
Created function: pg-example

By adding the --service azure-pg flag we are telling Dispatch to inject the binding into the function at execution time. Credentials are not hard-coded anywhere.

The function we created is very simple: it creates a table (if it doesn't exist) and writes a row from the input. Input validation is done via the schema defined with the --schema-in postgres.schema.json flag.
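The real `postgres.py` ships with the Dispatch examples; as a rough sketch of the pattern, a handler along these lines could consume the injected binding. The exact secret key names are an assumption here and depend on the broker's binding credentials:

```python
def connection_params(secrets):
    """Map binding credentials from the injected Dispatch secret onto
    psycopg2 connection keyword arguments.

    Key names (host, port, username, password, database) are assumptions;
    inspect the binding secret for the actual field names.
    """
    return {
        "host": secrets["host"],
        "port": int(secrets.get("port", 5432)),
        "user": secrets["username"],
        "password": secrets["password"],
        "dbname": secrets["database"],
    }

def handle(ctx, payload):
    """Hypothetical Dispatch handler: insert the validated input row,
    then return every row in the table (matching the output below)."""
    import psycopg2  # provided by the python3-pg image via requirements.txt
    conn = psycopg2.connect(**connection_params(ctx["secrets"]))
    with conn:  # commits on success
        with conn.cursor() as cur:
            cur.execute("CREATE TABLE IF NOT EXISTS demo (num int, data text)")
            cur.execute("INSERT INTO demo VALUES (%s, %s)",
                        (payload["num"], payload["data"]))
            cur.execute("SELECT num, data FROM demo ORDER BY num")
            return [{"num": n, "data": d} for n, d in cur.fetchall()]
```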

Finally, executing the function:

$ dispatch exec pg-example --wait \
--input '{"num": 1, "data": "hello everyone"}'
{
    "blocking": true,
    "executedTime": 1523469967,
    "faasId": "e130c396-e6af-49f5-983a-e343299bd170",
    "finishedTime": 1523469970,
    "functionId": "281c05bb-2400-4573-b4d6-e39886b58b7a",
    "functionName": "pg-example",
    "input": {
        "data": "hello everyone",
        "num": 1
    },
    "logs": [
        "num: 1, data: hello everyone"
    ],
    "name": "9d8ee54d-a17b-4251-b661-d6ad1daac88b",
    "output": [
        {
            "data": "hello everyone",
            "num": 1
        }
    ],
    "reason": null,
    "secrets": [],
    "services": null,
    "status": "READY",
    "tags": []
}

We can see from the output field that the data has been written to the database. If we run the same function again (with different input), we can see that the function retrieves the previous data:

$ dispatch exec pg-example --wait \
--input '{"num": 2, "data": "hello everyone again"}'
{
    "blocking": true,
    "executedTime": 1523470034,
    "faasId": "e130c396-e6af-49f5-983a-e343299bd170",
    "finishedTime": 1523470034,
    "functionId": "281c05bb-2400-4573-b4d6-e39886b58b7a",
    "functionName": "pg-example",
    "input": {
        "data": "hello everyone again",
        "num": 2
    },
    "logs": [
        "num: 1, data: hello everyone",
        "num: 2, data: hello everyone again"
    ],
    "name": "0ef780e8-8451-4146-a306-a1cc0ed621eb",
    "output": [
        {
            "data": "hello everyone",
            "num": 1
        },
        {
            "data": "hello everyone again",
            "num": 2
        }
    ],
    "reason": null,
    "secrets": [],
    "services": null,
    "status": "READY",
    "tags": []
}

Exposing an API endpoint

Now we can take our function and expose an HTTP API endpoint:

$ dispatch create api post-pg-example pg-example \
--auth public --https-only --method POST --path /postgres/demo
Created api: post-pg-example

Use the Dispatch config to determine the host and HTTPS port:

$ cat ~/.dispatch/config.json
{
    "host": "192.168.64.55",
    "port": 32312,
    "scheme": "https",
    "organization": "dispatch",
    "cookie": "******",
    "insecure": true,
    "api-https-port": 32191,
    "api-http-port": 32023
}
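A quick way to assemble the endpoint URL from that config file (field names as in the config above):

```python
import json

def api_url(config_path, path):
    """Build the public API gateway URL from the Dispatch config file,
    combining the host with the api-https-port."""
    with open(config_path) as f:
        cfg = json.load(f)
    return "https://%s:%s%s" % (cfg["host"], cfg["api-https-port"], path)
```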

And it’s live!

$ curl -k https://192.168.64.55:32191/postgres/demo \
-H 'Content-type: application/json' \
-d '{"num": 3, "data": "hello from API"}'
[
    {"data":"hello everyone","num":1},
    {"data":"hello everyone again","num":2},
    {"data":"hello from API","num":3}
]

Beyond Azure

This post demonstrated bringing Azure services to Dispatch, but because the services integration is based on the OSBAPI, any compatible service can be used. Azure was chosen because it offers a compelling list of services and comes with Kubernetes service catalog support. In other words, it's an easy integration. As this feature evolves, we will make adding any OSBAPI-compatible service easy directly via Dispatch.

Conclusion

The above is a very basic example of how you can now leverage external services which advertise via the OSBAPI. With Dispatch, you now have the tools to link functions with stateful services like databases and expose them via APIs. This is the basis of most web services. The services support presented here is still experimental, but we will continue to develop and refine it.

If you are going to the Cloud Foundry Summit North America 2018, please come hear us talk about this feature in person.