Porting a Node/Express K8s Service to Lambda with Serverless: A Few Extra Lessons Learned

Bo Motlagh
United Effects™
Oct 16, 2018

I have very much enjoyed running my own Kubernetes cluster on GKE, and it has served me well. I've created many services for both my own projects and my clients, quickly spinning up Docker images while the platform handles the traditional concerns of load balancing and horizontal scaling. One aspect that has become less enjoyable over time is the cost. As my interests and projects grow, so do my resource needs, and consequently the number of nodes required. At 33 services and counting, the price is getting high.

I started looking at serverless solutions like Google Cloud Functions and Lambda to see if I could begin porting my Node services over to one of the platforms. This of course led me to serverless.com and ultimately to a great little package called serverless-http.

I’ve had a lot of fun playing with the Serverless Framework and would highly recommend it to anyone getting into FaaS systems. I’m not going to provide an overview of serverless.com; they do an amazing job of that themselves. Specifically, here are the relevant posts from their site that guided my exploration:

https://serverless.com/framework/docs/providers/aws/guide/quick-start/
https://serverless.com/blog/serverless-express-rest-api/

One thing you’ll notice in both of the example articles is that the targets they use are, understandably, simple services. For my POC I chose a slightly more advanced service, and this post covers everything beyond those simple cases that I needed to handle in order to achieve the same results they advertise.

Target

I have a basic text, HTML, and image content management microservice that I use in a lot of my solutions. It’s a Node service using Express.js middleware and a MongoDB backend on Google Cloud Platform. For the POC I decided to convert this service to a FaaS solution and deploy it to either Google Cloud Functions or Lambda.

Lambda vs Google

I’ve invested quite a lot in Google Cloud, so my first thought was that I’d use the serverless-http package to deploy the service there. Maybe I missed it in the documentation (or just got excited and didn’t read very carefully), maybe it wasn’t clear… but this wasn’t going to work. Very quickly I learned that the serverless-http package, and the function it creates, is not compatible with Google Cloud Functions. My next option was Lambda, which was fine for me since I have some experience with the platform. Honestly, I find the Lambda experience more user friendly than Google’s anyway. The only reason I went to Google first was that I didn’t want to make calls from AWS to GCP for my database, but I decided that wasn’t a big deal. serverless-http creates a Lambda-compatible function easily and without issue.
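For reference, the basic wrap really is small. Here’s a minimal sketch (the file and route names are illustrative, not taken from my service):

```javascript
// handler.js: a minimal serverless-http wrap (illustrative)
const serverless = require('serverless-http');
const express = require('express');

const app = express();
app.get('/health', (req, res) => res.json({ ok: true }));

// Lambda invokes this exported handler; serverless-http translates
// API Gateway events into Express req/res objects.
module.exports.handler = serverless(app);
```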

Startup and Mongo

The next step was to wrap my Express app while making sure my startup scripts still run first. If, like me, you first learned Node all those years ago using the express generator, you may still use the bin/* pattern for the startup script that actually starts your service. For me that’s bin/start.js, and it’s where I make sure my connection to Mongo is good before initiating the server. The example in the serverless documentation is a bit simpler than this, showcasing a basic single-page Express app wrapped by the serverless package. Keep in mind that when you wrap your app with serverless, you are simply creating an interface between the Lambda function handler and your app’s service code. In other words, you aren’t actually starting your service anymore. I didn’t want to lose the ability to run my service locally or in a container, so I created two startup files, slsapp.js and start.js. In my serverless yaml file I define slsapp.js as my handler, which in turn exports my serverless-wrapped Express app. Otherwise I can still start my service as before.
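Roughly, the split looks like the sketch below. The ./app module, the MONGO_CONNECTION variable, and the mongoose calls are assumptions for illustration; the point is that the handler connects to Mongo lazily and reuses the connection on warm invocations:

```javascript
// slsapp.js: Lambda entry point (a sketch; names are illustrative)
const serverless = require('serverless-http');
const mongoose = require('mongoose');
const app = require('./app'); // the same Express app bin/start.js boots locally

const handler = serverless(app);
let connected = false;

module.exports.handler = async (event, context) => {
  // Don't make Lambda wait on the open Mongo socket before returning
  context.callbackWaitsForEmptyEventLoop = false;
  if (!connected) {
    await mongoose.connect(process.env.MONGO_CONNECTION);
    connected = true;
  }
  return handler(event, context);
};
```

The serverless yaml then points the function’s handler at slsapp.handler, while bin/start.js keeps calling app.listen() for local and container runs.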

Custom Domain

The beauty of serverless is that once you’ve done the above two steps, you’re pretty much able to deploy code. In my case it wasn’t going to work until I did a few more things to handle environments, parameters, and the gateway, but it’s still pretty cool that the basic conversion is so quick. One thing you’ll notice is that your gateway will be spun up in AWS with a root path named after your yaml stage environment. If you follow the examples from the links, that will probably be yourgatewaypath.com/dev/{proxy+}, where proxy represents all of the paths your service expects and will handle. In Express you could probably make this work by accounting for the root path as a parameter, but that seemed non-trivial, and I felt it was easier to add a custom domain, which removes the issue altogether, meaning you won’t have to deal with the stage name as root. I won’t go into the details, but you can find the instructions here:

https://serverless.com/blog/serverless-api-gateway-domain/
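That post is built around the serverless-domain-manager plugin; the yaml ends up looking roughly like this (the domain name is a placeholder):

```yaml
# serverless.yaml (excerpt): map a custom domain so the stage name
# no longer appears as the root path
plugins:
  - serverless-domain-manager

custom:
  customDomain:
    domainName: content.example.com   # placeholder domain
    basePath: ''                      # serve from the domain root
    stage: ${self:provider.stage}
    createRoute53Record: true
```

You create the domain once with sls create_domain, and subsequent deploys attach the gateway to it.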

Content Uploads

If you recall, the particular target I chose is a CMS. This turned out to be more of a headache than I anticipated. My service allows image uploads using the multipart/form-data content type, and for a number of reasons we don’t need to dive into too deeply, the AWS API Gateway does not handle this content type as-is. The result was corrupted images being uploaded to my S3 bucket. I’m going to skip ahead to the fix:

1. You need to set your MIME types on the gateway to binary. You can do that manually or by using the “serverless-apigw-binary” plugin and configuring your yaml (see the first sketch after this list).

2. You need to upload the file in your code directly from the buffer rather than persisting it locally first. I do this using the multer package as middleware, configured for in-memory storage (second sketch below): https://github.com/expressjs/multer
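Here’s roughly what the serverless-apigw-binary configuration looks like; the list of types should match whatever your service accepts:

```yaml
# serverless.yaml (excerpt): tell API Gateway to treat these
# content types as binary instead of mangling them as text
plugins:
  - serverless-apigw-binary

custom:
  apigwBinary:
    types:
      - 'multipart/form-data'
      - 'image/jpeg'
      - 'image/png'
```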
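And a sketch of the buffer-based upload path. The route, field name, and bucket variable here are illustrative assumptions (aws-sdk v2 shown):

```javascript
// upload.js: stream the upload straight from memory to S3 (a sketch)
const express = require('express');
const multer = require('multer');
const AWS = require('aws-sdk');

const router = express.Router();
const s3 = new AWS.S3();
// memoryStorage keeps the file in req.file.buffer; nothing touches disk
const upload = multer({ storage: multer.memoryStorage() });

router.post('/content/image', upload.single('file'), async (req, res) => {
  try {
    await s3.upload({
      Bucket: process.env.CONTENT_BUCKET, // illustrative env var
      Key: req.file.originalname,
      Body: req.file.buffer,
      ContentType: req.file.mimetype
    }).promise();
    res.sendStatus(201);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

module.exports = router;
```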

Environments and Environment Variables

If you look at my serverless.yaml file, you’ll notice that I’ve abstracted most of it into a generic file that is populated by parameters derived from files under /.env.

I learned this approach from the serverless document below and expanded on it to handle my own environment variables, such as my Mongo connection string and other required secrets.

https://serverless.com/framework/docs/providers/aws/guide/functions/#environment-variables

Specifically, within my .env folder I maintain env.some-environment.json files, where “some-environment” is the name of an environment I want to configure. I then pass another environment variable to my serverless commands, SLS_ENV, which specifies which file to read in order to populate the yaml. For example, I could create an env.qa.json file along these lines:
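(The keys here are illustrative; yours would be whatever your serverless.yaml expects.)

```json
{
  "STAGE": "qa",
  "MONGO_CONNECTION": "mongodb://user:pass@host:27017/content",
  "CONTENT_BUCKET": "my-qa-content-bucket"
}
```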

I can then use it by running SLS_ENV=qa sls deploy. This populates my variables into the yaml file and deploys my Lambda function and gateway using those values.
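On the yaml side, the framework’s file variable syntax does the lookup; a sketch using the keys from the example above:

```yaml
# serverless.yaml (excerpt): pull values from the file chosen by SLS_ENV
provider:
  stage: ${file(./.env/env.${env:SLS_ENV}.json):STAGE}
  environment:
    MONGO_CONNECTION: ${file(./.env/env.${env:SLS_ENV}.json):MONGO_CONNECTION}
    CONTENT_BUCKET: ${file(./.env/env.${env:SLS_ENV}.json):CONTENT_BUCKET}
```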

SUCCESS

That’s pretty much it! I was able to get my service running in an environment, and I’ve started moving my projects from my QA environments over to it. It’s already allowing me to decrease my Kubernetes cluster size! I’ll likely be porting more of my services over. If you’d like to review the code, feel free to take a look:

https://github.com/UnitedEffects/UE-Content_Srvc

Disclaimer

While I may sometimes discuss the work I do, both on my own and for my employers, my views are my own and are not intended to represent my employers in any way.
