An ongoing experiment about using the Elm platform to build a microservices/FaaS backend

A couple of weeks ago I was nerding around github.com when I came across this brand new FaaS (Function as a Service) system called OpenFaaS.

OpenFaaS offers a simple yet powerful toolbox that leverages Docker for containerization, Docker Swarm or Kubernetes for clustering, and a dedicated server written in Go named “the function watchdog” that can call your programs and pipe the results out to the HTTP response.

To my eyes, the FaaS approach presents a great opportunity to test the constraints of the Elm platform in the backend space without much effort and without any sacrifice of performance or mental sanity.

Since functions in this approach are only required to communicate via standard input and output, any nodejs program can be used as a function. And since you can already run headless Elm programs in nodejs, the only thing required to set up the experiment is to feed the Elm program through one port and spit the responses out through another (note that these are Elm ports, not HTTP ports).
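To make that stdin/stdout contract concrete, here is a minimal sketch (an illustration, not OpenFaaS source) of a nodejs program that would qualify as a function under the watchdog: read everything from standard input, write the reply to standard output. The `transform` function and its echo behaviour are placeholders.

```javascript
'use strict'
// Sketch of the watchdog contract: a function is just a program that
// reads all of stdin and writes its reply to stdout. No HTTP code at
// all — the watchdog handles that side. `transform` is a placeholder.
const transform = (input) => JSON.stringify({ echo: input.trim() })

// Wiring it to the actual streams would look like this:
const run = () => {
  let input = ''
  process.stdin.on('data', (chunk) => { input += chunk })
  process.stdin.on('end', () => process.stdout.write(transform(input)))
}

module.exports = { transform, run }
```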

The first thing we need is a YAML file with some data in the format that faas-cli understands. Here’s what our elm-faas.yml looks like:

provider:
  name: faas
  gateway: http://localhost:8080

functions:
  elm-faas:
    lang: node
    handler: ./handler
    image: function/elm:latest

If you are using Kubernetes like me, you will probably have to replace the address at provider.gateway with something like the output of

$ echo http://$(minikube ip):31112

The second thing we need is some boilerplate in the JavaScript world to achieve communication between our Elm function and the watchdog. This is an example of a possible handler.js that works for this experiment:

'use strict'

// Note: polyfill needed to work with Http in Elm@nodejs
global.XMLHttpRequest = require('xhr2')

const main = require('./main')

// Note: calling worker once per process
const ports = main.Main.worker().ports

module.exports = (context, callback) => {
  const respond = (err, res) => {
    ports.success.unsubscribe(onSuccess)
    ports.error.unsubscribe(onError)
    callback(err, res)
  }
  const onSuccess = (response) => {
    respond(undefined, response)
  }
  const onError = (error) => {
    respond(error)
  }
  ports.success.subscribe(onSuccess)
  ports.error.subscribe(onError)
  ports.handle.send(JSON.parse(context))
}
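To see the subscribe/respond/unsubscribe lifecycle in isolation, without compiling any Elm, here is a hypothetical stub of the `ports` object that mimics the subscribe/unsubscribe/send API Elm generates, wired to a hard-coded echo behaviour standing in for the real worker:

```javascript
'use strict'
// Hypothetical stand-in for the compiled Elm worker's ports. Each fake
// port keeps a list of subscribers, mirroring the subscribe/unsubscribe/
// send API that Elm generates for outgoing and incoming ports.
const fakePort = () => {
  const subs = []
  return {
    subscribe: (fn) => subs.push(fn),
    unsubscribe: (fn) => {
      const i = subs.indexOf(fn)
      if (i >= 0) subs.splice(i, 1)
    },
    send: (value) => subs.slice().forEach((fn) => fn(value))
  }
}

const ports = {
  handle: fakePort(),
  success: fakePort(),
  error: fakePort()
}

// Echo behaviour standing in for the Elm program: whatever arrives on
// `handle` comes straight back on `success`.
ports.handle.subscribe((request) =>
  ports.success.send({ data: request.query, counter: 0 })
)

// Same wiring as handler.js above, running against the fake ports.
const handler = (context, callback) => {
  const respond = (err, res) => {
    ports.success.unsubscribe(onSuccess)
    ports.error.unsubscribe(onError)
    callback(err, res)
  }
  const onSuccess = (response) => respond(undefined, response)
  const onError = (error) => respond(error)
  ports.success.subscribe(onSuccess)
  ports.error.subscribe(onError)
  ports.handle.send(JSON.parse(context))
}

handler('{"query": "hello"}', (err, res) => {
  console.log(res) // { data: 'hello', counter: 0 }
})
```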

This could easily be transformed into a connect-like middleware with the well-known signature (req, res, next), but that would be the subject of another experiment.
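As a rough illustration of that idea (not something this experiment uses), an adapter along these lines could wrap any watchdog-style (context, callback) handler; `toMiddleware` and the JSON round-trip through `req.body` are assumptions of this sketch, not part of the actual setup:

```javascript
'use strict'
// Hypothetical adapter: turns a watchdog-style (context, callback)
// handler into a connect-like (req, res, next) middleware. Assumes a
// body parser has already put the parsed JSON body on req.body.
const toMiddleware = (handler) => (req, res, next) => {
  handler(JSON.stringify(req.body), (err, payload) => {
    if (err) return next(err)
    res.end(JSON.stringify(payload))
  })
}

module.exports = toMiddleware
```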

Also, having many different middlewares operating on a single request would mean calling the Elm ports once per middleware per request, losing the ability to pass union types between middlewares and thus inflating the amount of boilerplate code needed.

Therefore, from the point of view of developer happiness, I’d rather model the middleware approach inside Elm in the future than plug an Elm program into a middleware-capable framework like connect/express/restify just to get a middleware abstraction.

We also need some more boilerplate code in the Elm world. Here’s what our basic Main.elm file looks like:

port module Main exposing (..)

import Json.Decode as Decode

port handle : (Request -> msg) -> Sub msg

port success : Response -> Cmd msg

port error : Message -> Cmd msg

type Msg
    = Input Request
    | Success Response
    | Error Message

type alias Request =
    { query : String
    }

type alias Response =
    { data : String
    , counter : Int
    }

type alias Message =
    { message : String
    }

handler : Model -> Request -> Cmd Msg
handler model request =
    success { data = request.query, counter = model.counter }

type alias Model =
    { counter : Int
    }

model : Model
model =
    { counter = 0
    }

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        Input request ->
            let
                effects =
                    handler model request

                newModel =
                    { model | counter = model.counter + 1 }
            in
                ( newModel, effects )

        Success payload ->
            ( model, success payload )

        Error message ->
            ( model, error message )

init : ( Model, Cmd msg )
init =
    ( model, Cmd.none )

subscriptions : Model -> Sub Msg
subscriptions =
    always <| handle Input

main : Program Never Model Msg
main =
    Platform.program
        { init = init
        , update = update
        , subscriptions = subscriptions
        }

That’s it. This program should do what is needed to echo some input back to the user agent, including in the response a counter that gets incremented each time our handler function is evaluated.

Now we have to deploy our experiment. I’ve been using a deploy_local.sh script that looks like this:

#!/usr/bin/env bash
./scripts/build_elm.sh && \
eval $(minikube docker-env --shell bash) && \
faas-cli build -f elm-faas.yml && \
sleep 1 && faas-cli remove -f elm-faas.yml && \
sleep 1 && faas-cli deploy -f elm-faas.yml && \
sleep 3 && ./scripts/patch_local_deployment.sh

Where build_elm.sh does this

#!/usr/bin/env sh
cd handler && elm-make Main.elm --output=main.js

And patch_local_deployment.sh does this

#!/usr/bin/env sh
kubectl patch deployment elm-faas --type merge --patch "
spec:
  template:
    spec:
      containers:
      - name: elm-faas
        image: function/elm:latest
        imagePullPolicy: Never
"

Now, every time we call ./scripts/deploy_local.sh we will have the latest version of our function up and running after a few seconds.

Let’s test our function over HTTP using a simple curl call like this:

$ time curl -v -H "Content-Type: application/json" --data "{\"query\": \"hello world\"}" http://$(minikube ip):31112/function/elm-faas

At this point you’ll notice that no matter how many times you call it, you receive the same counter value after a few hundred milliseconds, meaning that the watchdog is starting our Elm program anew with each request rather than reusing the process across multiple HTTP requests:

*   Trying 192.168.64.3...
* TCP_NODELAY set
* Connected to 192.168.64.3 (192.168.64.3) port 31112 (#0)
> POST /function/elm-faas HTTP/1.1
> Host: 192.168.64.3:31112
> User-Agent: curl/7.54.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 24
>
* upload completely sent off: 24 out of 24 bytes
< HTTP/1.1 200 OK
< Content-Length: 35
< Content-Type: application/json
< Date: Wed, 13 Sep 2017 23:21:51 GMT
< X-Duration-Seconds: 0.254634
<
{"data":"hello world","counter":0}
* Connection #0 to host 192.168.64.3 left intact
0.29 real 0.00 user 0.00 sys

Let’s make something more useful than a simple echo program. Let’s do some HTTP requests!

Just modify the Main.elm program so it looks like this (you can copy and paste it):

port module Main exposing (..)

import Json.Decode as Decode exposing (field)
import Json.Decode.Extra exposing ((|:))
import Http
import Task

port handle : (Request -> msg) -> Sub msg

port success : Response -> Cmd msg

port error : Message -> Cmd msg

type Msg
    = Input Request
    | Success Response
    | Error Message

type alias Request =
    { query : String
    }

type alias UserData =
    { id : Int
    , first_name : String
    , last_name : String
    , avatar : String
    }

type alias UserResponse =
    { data : UserData
    }

type alias Response =
    { data :
        { user : UserData
        , name : String
        , counter : Int
        }
    }

type alias Message =
    { message : String
    }

handler : Model -> Request -> Cmd Msg
handler model request =
    let
        getUserData =
            Http.get "https://reqres.in/api/users/2" decodeUserResponse
                |> Http.toTask

        getFavorite =
            Http.get "http://swapi.co/api/people/1/" decodeName
                |> Http.toTask

        decodeName =
            Decode.at [ "name" ] Decode.string

        decodeUserResponse =
            Decode.succeed UserResponse
                |: (field "data" decodeUserData)

        decodeUserData =
            Decode.succeed UserData
                |: (field "id" Decode.int)
                |: (field "first_name" Decode.string)
                |: (field "last_name" Decode.string)
                |: (field "avatar" Decode.string)

        onSuccess userRes lukeName =
            { data =
                { user = userRes.data
                , name = lukeName
                , counter = model.counter
                }
            }

        onResult result =
            case result of
                Ok res ->
                    Success res

                Err err ->
                    Error { message = toString err }
    in
        Task.attempt onResult <|
            Task.map2 onSuccess
                getUserData
                getFavorite

type alias Model =
    { counter : Int
    }

model : Model
model =
    { counter = 0
    }

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        Input request ->
            let
                effects =
                    handler model request

                newModel =
                    { model | counter = model.counter + 1 }
            in
                ( newModel, effects )

        Success payload ->
            ( model, success payload )

        Error message ->
            ( model, error message )

init : ( Model, Cmd msg )
init =
    ( model, Cmd.none )

subscriptions : Model -> Sub Msg
subscriptions =
    always <| handle Input

main : Program Never Model Msg
main =
    Platform.program
        { init = init
        , update = update
        , subscriptions = subscriptions
        }
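For readers more at home in JavaScript than Elm, the Task.map2 combination above plays roughly the role Promise.all plays in nodejs: run two lookups concurrently and merge their results. A sketch, with hypothetical `fetchUser`/`fetchName` stubs standing in for the two Http.get calls:

```javascript
'use strict'
// Rough JavaScript analogue of Task.map2 over two HTTP tasks. The two
// fetchers are stubs returning canned data; in the real Elm program
// these are Http.get calls against reqres.in and swapi.co.
const fetchUser = () => Promise.resolve({ data: { id: 2, first_name: 'Janet' } })
const fetchName = () => Promise.resolve('Luke Skywalker')

// Equivalent of `Task.map2 onSuccess getUserData getFavorite`.
const combined = (counter) =>
  Promise.all([fetchUser(), fetchName()]).then(([userRes, name]) => ({
    data: { user: userRes.data, name: name, counter: counter }
  }))

module.exports = combined
```

If either promise rejects, Promise.all rejects as a whole, which mirrors how a failure of either task surfaces as the Err branch of onResult in the Elm code.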

That’s it! If you hit the watchdog again with some requests, you’ll see something like this:

$ time curl -v -H "Content-Type: application/json" --data "{\"query\": \"hello world\"}" http://$(minikube ip):31112/function/elm-faas
* Trying 192.168.64.3...
* TCP_NODELAY set
* Connected to 192.168.64.3 (192.168.64.3) port 31112 (#0)
> POST /function/elm-faas HTTP/1.1
> Host: 192.168.64.3:31112
> User-Agent: curl/7.54.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 24
>
* upload completely sent off: 24 out of 24 bytes
< HTTP/1.1 200 OK
< Content-Length: 183
< Content-Type: application/json
< Date: Wed, 13 Sep 2017 23:42:34 GMT
< X-Duration-Seconds: 1.442548
<
{"data":{"user":{"id":2,"first_name":"Janet","last_name":"Weaver","avatar":"https://s3.amazonaws.com/uifaces/faces/twitter/josephstein/128.jpg"},"name":"Luke Skywalker","counter":0}}
* Connection #0 to host 192.168.64.3 left intact
1.46 real 0.00 user 0.00 sys

The next post will cover:

  1. How to reuse the same nodejs process using a new feature of the function watchdog called fast_fork.
  2. How to pass headers from Elm to the watchdog (only available in fast_fork mode at this point).
  3. What it would take to move most of the boilerplate currently in the nodejs world into Elm.

Stay tuned!