How I ‘cloned’ AWS Lambda in a day.

Necessity, they say, is the mother of invention. The necessity in this case was a serverless platform for a corporate environment.

I'm a bit short on time to write more than a gist, but I'm sure it could help a couple of engineers out there. So here we go.

I've spent the past months building infrastructure in a highly restrictive corporate environment. Having introduced a container cluster and a CI pipeline, we were then told that the developers would not be able to directly access any of the tools used to manage the container deployments.

On the other hand, the developers needed to launch short- and long-running jobs which could rapidly grow in number and become an operational support nightmare.

When I heard this, the first thing that came to mind was 'AWS Lambda would be a great fit'. That was not to be, hence this DIY serverless solution, built using the power of Jenkins and Docker Swarm.

So how does it go?

The ingredients are:

  1. A Docker Swarm: the base platform on which you host the containers.
  2. Jenkins: the management platform, providing secure remote access, authentication and job management.
  3. The Docker Remote API: what makes all of this possible, in combination with the Jenkins SSH plug-in.
  4. A set of standardised images into which to launch your Lambda function, such as a Python 3, Go or Node.js environment.

Once your Docker Swarm is up and running, you can add Jenkins as a container on the Swarm. The next step is to create a Jenkins Pipeline. Just as in other pipelines, it is likely you will push some files over from Git to the file system that your Docker engine runs from.

Most important is the last step of the pipeline, which looks similar to this:

docker -H tcp://your_docker_host:2376 --tlsverify run your_base_container

The above command, run from the Jenkins pipeline, connects to the remote Docker engine over TLS and launches your base container.

Now all you need to do is extend it with a couple of parameters to get your AWS Lambda clone:

  1. The entry-point script of the container.
  2. A JSON packet containing the parameters, passed to the entry point within a single argument.

So now the extended command looks something like this:

docker -H tcp://your_docker_host:2376 --tlsverify run --rm your_python3_image python3 $SCRIPT_TO_RUN $MY_PARAMETERS_IN_JSON

$SCRIPT_TO_RUN and $MY_PARAMETERS_IN_JSON are parameters defined on the Jenkins job which are then passed through to the Docker engine. To keep things flexible, $MY_PARAMETERS_IN_JSON is a single JSON packet that carries all the variables the script requires.
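
To make this concrete, here is a minimal sketch of what such a script could look like. The file name handler.py and the fields inside the JSON packet are hypothetical; the only contract is that the whole packet arrives as a single command-line argument.

# handler.py - hypothetical 'Lambda function' entry point
import json
import sys

def handler(event):
    # Illustrative only: the real fields depend on your job
    name = event.get("name", "world")
    print("hello " + name)

if __name__ == "__main__":
    if len(sys.argv) < 2:
        sys.exit("usage: python3 handler.py '<json packet>'")
    handler(json.loads(sys.argv[1]))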

The next step is to enable the trigger capability of Jenkins, which creates a URL that can be called to trigger the job from outside Jenkins. Jenkins not only provides user authentication for the trigger but also a job-specific token, giving you a level of security and traceability.
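
As a sketch of what an external caller might look like, using the third-party requests library and Jenkins' buildWithParameters endpoint (the host, job name, token and credentials below are all placeholders):

# trigger.py - call the Jenkins remote trigger URL with job parameters
import json
import requests

JENKINS = "https://your_jenkins_host"  # placeholder host
params = {
    "token": "JOB_TRIGGER_TOKEN",  # the job-specific trigger token
    "SCRIPT_TO_RUN": "handler.py",
    "MY_PARAMETERS_IN_JSON": json.dumps({"name": "world"}),
}
resp = requests.post(
    JENKINS + "/job/lambda-clone/buildWithParameters",
    params=params,
    auth=("ci-user", "api-token"),  # Jenkins user and API token
)
resp.raise_for_status()
print("Job queued at:", resp.headers.get("Location"))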

AWS Lambda also provides monitoring through CloudWatch. You can have your own five-minute equivalent by launching a Fluentd container with a Fluentd UI to route your console log output (you can try my lightweight Fluentd / Fluentd UI container here: https://hub.docker.com/r/balajibal/fluentd-ui/).

So once you have launched your Fluentd container, plugging in Fluentd logging is really easy: just add the logging driver directives to the launch command line.

docker -H tcp://your_docker_host:2376 --tlsverify run --rm --log-driver=fluentd --log-opt fluentd-address=tcp://yourfluentd_host:24224 your_python3_image python3 $SCRIPT_TO_RUN $MY_PARAMETERS_IN_JSON
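
The log driver captures everything the container writes to stdout and stderr. If you also want to emit structured events from inside the function itself, one option (an addition beyond the setup above, not part of it) is the fluent-logger Python package:

# Assumes 'pip install fluent-logger' in the base image
from fluent import sender

logger = sender.FluentSender("lambda-clone", host="yourfluentd_host", port=24224)
logger.emit("job.status", {"script": "handler.py", "status": "started"})
logger.close()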

Once you have the basic platform working, there are a number of Jenkins features you can leverage to make things even fancier. One of them is attaching a schedule to jobs: for instance, you can run a specific container at specific times of day to handle a thumbnailing job.
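
For example, setting the job's 'Build periodically' trigger to H 2 * * * (standard Jenkins cron syntax, where H lets Jenkins pick the exact minute to spread the load) would launch the thumbnailing container once a night. The timing here is just illustrative.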

That's all I have time for today. Hope it is of use to someone.
