Microservices setup with AWS API Gateway, Serverless Framework and GitLab — Part 1


Iliya Iliev | Senior Developer at Limehome.

Recently, I was assigned the task of setting up all of Limehome’s services under one API, as ‘microservices’ (I will explain the term later in this article).

All the different services should remain independent from one another. Each should have three stages (production, staging and develop) that are properly deployed using GitLab’s CI/CD.

In this article, I will demonstrate how this setup was executed from scratch so that anyone can easily replicate it.

We will create two separate services: an Elastic Beanstalk application and an AWS Lambda function. We will hook both of them to a single shared AWS API Gateway. The gateway itself will be set up in a third project, so that we have better control over the whole infrastructure using the Serverless Framework.

What are ‘Microservices’?

Wikipedia: Microservices is a software development technique — a variant of the service-oriented architecture (SOA) structural style — that arranges an application as a collection of loosely coupled services.

Simply put, a single ‘microservice’ is a resource on an API (for example /users) deployed as its own service on a separate server. The graph below explains it visually:

So, let’s begin!

Step 1: Setting up an AWS Elastic Beanstalk Application:

Let’s create the server that handles the /users requests, for example. This will be our Users Service. We will use a boilerplate Express project for the sake of demonstration:

  1. Install Express Generator + Generate A Project
$ npm install -g express-generator
$ express --view=ejs

2. Let’s edit the template a little and run it.

  • Open views/index.ejs and edit it so that the page clearly identifies the Users Service (a minimal sketch follows this list).
  • In the base folder run the commands:
$ npm install
$ node ./bin/www
  • If you haven’t modified the process.env.PORT variable anywhere, open the browser at http://localhost:3000. You should see the edited index page.
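A minimal sketch of the edited views/index.ejs; the exact markup from the original gist is not reproduced here, the only goal is a page that clearly identifies the Users Service:

<!DOCTYPE html>
<html>
  <head>
    <title><%= title %></title>
  </head>
  <body>
    <h1><%= title %></h1>
    <p>Hello from the Users Service</p>
  </body>
</html>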

3. Now that we have our basic server set up, we need to deploy it to AWS Elastic Beanstalk. You can find full documentation on how to do it in the official AWS Elastic Beanstalk docs.

I will not go deeper into explaining how to deploy this to AWS Elastic Beanstalk. I will just give you a quick summary.

  1. In the AWS Console, create an Elastic Beanstalk environment (you can also do that with the CLI)
  2. Install the AWS EB CLI on your local machine
  3. Set up your AWS credentials (if you haven’t already)
  4. In the base folder of the Express application run
$ eb init

5. Follow the steps and run

$ eb deploy

After the deployment completes, open the link of your environment, which should look something like {environment-name}.{region}.elasticbeanstalk.com. You should see our basic server.

NOTE: For the different stages, you need to create different environments to deploy to. If you have develop, staging and production, three different environments with different URLs should be created.

Step 2: Setting up AWS Lambda function with Serverless Framework

Alright, now that we have our Users Service ready, let’s assume you want your ‘orders’ service to be handled by a Serverless Application.

You can check out this article by our colleague Matthias Werndle on how to set it up with NestJS + Serverless Framework.

We won’t be setting up a whole application here, just a single Lambda function, for the purpose of demonstrating how to deploy to AWS Lambda.

For this we need just two files — the lambda handler and serverless.yml.

  1. In a new project folder, create the two files:

2. lambda.js should look like this:
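A minimal sketch of lambda.js (the original gist is not reproduced here); it simply returns a JSON response confirming that the Orders Service Lambda answered:

// lambda.js - handler for the Orders Service Lambda.
// It receives the API Gateway proxy event and returns a simple JSON response.
module.exports.handler = async (event) => {
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      service: 'orders',
      path: event.path, // the path that API Gateway proxied to us
    }),
  };
};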

3. serverless.yml
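A sketch of the serverless.yml used here; the service name, runtime and region are placeholders, and the stage and proxy settings are explained just below:

service: orders-service              # placeholder name

provider:
  name: aws
  runtime: nodejs12.x                # assumption: any supported Node.js runtime
  region: eu-central-1               # assumption: use your own region
  stage: ${opt:stage, 'develop'}     # defaults to 'develop' unless --stage is passed

functions:
  app:
    handler: lambda.handler          # the handler exported from lambda.js
    events:
      - http:                        # proxy every path and method to this Lambda
          path: /{proxy+}
          method: any
      - http:
          path: /
          method: any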

A little explanation of how we did it at Limehome: we set the default value of the stage to ‘develop’. Basically, if we do not specify --stage={stage}, the Serverless Framework will deploy to the API Gateway for the ‘develop’ stage.

If the Serverless Framework Application stack for this stage has not been created already it will create one and deploy it automatically. (Serverless Framework usually deploys an API Gateway by default but we will modify that later.) This flag is very useful when you deploy with your CI/CD to different stages.

In the functions section, we proxy everything to our AWS Lambda function, which executes the handler in lambda.js.

4. Set up Serverless Framework on your machine:

You can find information on how to set up the Serverless Framework in its official documentation.

  • Install the package and check if it is installed:
$ npm install -g serverless
$ serverless

5. Deploy the Serverless Stack

$ serverless deploy

After the stack has been deployed, you will see in the console output the endpoints that were created. In our case it will be something like:
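(the API ID, region and stage below are placeholders; your output will differ)

endpoints:
  ANY - https://abc123xyz.execute-api.eu-central-1.amazonaws.com/develop/{proxy+}
  ANY - https://abc123xyz.execute-api.eu-central-1.amazonaws.com/develop/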

And if we follow this link (replacing {proxy+} with anything), we should see our service.

At this point, Serverless Framework has created a stack with a default API Gateway just for this function. Later on, we will edit the setup so that we use the shared gateway between all microservices.

Step 3: Setting up the shared AWS API Gateway

Now that we have both of our microservices ready, let’s create the main gateway.

In a new folder run:

$ npm init -y
$ npm install serverless serverless-apigw-binary

We will need serverless installed in the project (not only globally) for the CI/CD later on, and serverless-apigw-binary to configure our gateway for binary data.

The whole folder setup should look like this:
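A sketch of the layout, assuming only the files discussed in this article plus what npm init and npm install create (the folder name is a placeholder):

main-gateway/
├── config.yml
├── lambda.js
├── node_modules/
├── package.json
├── package-lock.json
└── serverless.yml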

config.yml: we will use this file to set up constants (e.g. URLs to the microservices).

Here we specify the URLs of our Elastic Beanstalk application for the different stages of our Users Service.
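A sketch of config.yml, keyed by stage so that the ${file(./config.yml):${self:provider.stage}} lookup shown below resolves to the right block (the Elastic Beanstalk URLs are placeholders for your own environments):

develop:
  USERS_SERVICE: http://users-develop.eu-central-1.elasticbeanstalk.com
staging:
  USERS_SERVICE: http://users-staging.eu-central-1.elasticbeanstalk.com
production:
  USERS_SERVICE: http://users-production.eu-central-1.elasticbeanstalk.com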

lambda.js:

This Lambda function is basically the same as the one we created for our Orders Service. I will add it to the base route so that we know everything is configured properly and we are at the right destination.
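A sketch of this lambda.js; like the Orders handler, it only confirms that the request reached the gateway’s base route (the message text is an assumption):

// lambda.js - handler for the base route of the main gateway.
// Returning a fixed message tells us the shared gateway itself is wired up correctly.
module.exports.handler = async () => {
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: 'Main API Gateway is up' }),
  };
};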

serverless.yml:

Let me explain in detail:

I could not find a way to reuse the default API Gateway created by the Serverless Framework. This is why, in the ‘Resources’ section of serverless.yml, we create a new API Gateway and a new resource with methods at the /users path, and point it to the URL from the config file we set up earlier.

custom:
  config: ${file(./config.yml):${self:provider.stage}}

...

          Uri: "${self:custom.config.USERS_SERVICE}"

At the end of the file, in Outputs, we output the restApiId and the rootResourceId with names that include the current stage. Later on, we will use these outputs to instruct other functions to use our new gateway.

As we do not want to use the default API Gateway but the custom one we created in Resources, we specify this in the provider options.
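Putting the pieces together, here is a sketch of what this serverless.yml could look like. The logical names (SharedApiGateway, UsersResource, UsersProxyMethod), the export names, the runtime and the region are assumptions; the essential parts are the custom config lookup, the Resources and Outputs sections, and the provider.apiGateway reference:

service: main-gateway              # placeholder name

plugins:
  - serverless-apigw-binary

provider:
  name: aws
  runtime: nodejs12.x              # assumption: any supported Node.js runtime
  region: eu-central-1             # assumption: use your own region
  stage: ${opt:stage, 'develop'}
  apiGateway:                      # use the gateway defined below instead of the default one
    restApiId:
      Ref: SharedApiGateway
    restApiRootResourceId:
      Fn::GetAtt: [SharedApiGateway, RootResourceId]

custom:
  config: ${file(./config.yml):${self:provider.stage}}
  apigwBinary:
    types:                         # content types the gateway should treat as binary
      - '*/*'

functions:
  main:
    handler: lambda.handler        # the base-route Lambda from lambda.js
    events:
      - http:
          path: /
          method: any

resources:
  Resources:
    SharedApiGateway:              # the shared REST API all microservices will attach to
      Type: AWS::ApiGateway::RestApi
      Properties:
        Name: shared-gateway-${self:provider.stage}
    UsersResource:                 # the /users path on the shared gateway
      Type: AWS::ApiGateway::Resource
      Properties:
        RestApiId: { Ref: SharedApiGateway }
        ParentId: { Fn::GetAtt: [SharedApiGateway, RootResourceId] }
        PathPart: users
    UsersProxyMethod:              # proxy any method on /users to the Elastic Beanstalk URL
      Type: AWS::ApiGateway::Method
      Properties:
        RestApiId: { Ref: SharedApiGateway }
        ResourceId: { Ref: UsersResource }
        HttpMethod: ANY
        AuthorizationType: NONE
        Integration:
          Type: HTTP_PROXY
          IntegrationHttpMethod: ANY
          Uri: "${self:custom.config.USERS_SERVICE}"
  Outputs:
    apiGatewayRestApiId:           # imported later by the other microservices
      Value: { Ref: SharedApiGateway }
      Export:
        Name: apiGateway-restApiId-${self:provider.stage}
    apiGatewayRootResourceId:
      Value: { Fn::GetAtt: [SharedApiGateway, RootResourceId] }
      Export:
        Name: apiGateway-rootResourceId-${self:provider.stage}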

Now if we run:

$ serverless deploy

and go to the URL given by the Serverless CLI, we should see the response of our base-route Lambda function. And if we go to /users, we should see the index page of our Users Service served from Elastic Beanstalk.

One thing remains: we need to configure our Orders Lambda function to use this API Gateway at /orders.

Let’s go back to our Orders Service serverless.yml and set it up.

We import the exported ‘Outputs’ from our main gateway deployment in provider -> apiGateway and then edit the event paths, since we want orders to be served at ‘/orders/something’:
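A sketch of the relevant changes in the Orders Service serverless.yml, assuming the export names from the gateway sketch above:

provider:
  name: aws
  runtime: nodejs12.x
  region: eu-central-1                 # must match the gateway's region
  stage: ${opt:stage, 'develop'}
  apiGateway:                          # attach to the shared gateway instead of creating a new one
    restApiId:
      Fn::ImportValue: apiGateway-restApiId-${self:provider.stage}
    restApiRootResourceId:
      Fn::ImportValue: apiGateway-rootResourceId-${self:provider.stage}

functions:
  app:
    handler: lambda.handler
    events:
      - http:
          path: /orders/{proxy+}       # everything under /orders goes to this Lambda
          method: any
      - http:
          path: /orders
          method: any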

$ serverless deploy

This will now delete the initial API Gateway created by the Serverless Framework and use the one we specified.

Now let’s test it out: opening /orders (or any path under it) on the shared gateway URL should return the response from our Orders Lambda function.

Well, that’s it for now! We now have a working shared API Gateway with two services that are independent of each other.

In Part 2, I will show you how to set up the GitLab CI/CD pipelines and configurations so that deployment to this setup is seamless.

Thanks for reading.

Best regards,

Iliya Iliev, Limehome
