Let’s Build A Gateway

Ray Kahn
5 min read · May 19, 2018


In monolithic applications, components communicate in memory. In a micro-services application, that communication happens over the network. This means we have to bring all of our services under one umbrella: a single point of entry that can access all of the exposed API endpoints.

Unbalanced deployment that must be fixed

As you recall, I stated that I would use an API gateway to create a single point of entry to the application. This gateway does not handle any requests itself; it only proxies them to the other micro-services, although it could be enhanced to handle concerns such as authentication, caching (Redis is a better fit there), or serving static requests/resources.

We previously covered the following:

A Word Of Caution!

An API gateway is not risk free. In fact, building one means another point of failure that we must monitor and manage. And if it is not properly designed and specified, it may very well become our biggest bottleneck.

The Gateway

Our gateway will need to be aware of the end-points created thus far by looking at our micro-services. To do that, the gateway application will need a NodeJS package that can query Docker and retrieve that information. dockerode is the npm package that will help us interact with Docker.

dockerode needs to be able to read a container’s metadata, which is set using the -l flag at the container’s creation time. This implies that we need to recreate all of our containers, since we did not set any label flag for them. The following command will need to change:

docker run --add-host manager1:192.168.99.100 --name doctors-service -p 3000:3000 -d doctors-service

To one with the -l flag (showing doctors-service):

docker run -l=msRoute='/provider' --add-host manager1:192.168.99.100 --name doctors-service -p 3000:3000 -d doctors-service

You may name your label anything you like. msRoute is the base URL of this particular micro-service; for services-service I will use ‘/service’ and for my uber-service ‘/uber/provider’. This implies that I will need to change all the declared APIs in my code as well.

As you recall, when we created our API endpoints for the micro-services we used different base URLs: /provider/:pid, /providers/:types/:…, /service/:pid/:sid, /services/:pid, etc. Consolidating the API endpoints to ‘/provider’, ‘/service’, and ‘/uber/provider’ is necessary, since we will have a single ‘-l’ label to declare the base URL (metadata that docker.js will read to create the proxy end-points) for each micro-service.

provider.js — API for doctors-service
service.js — API endpoints for services-service

Get Ready

Our Dockerfile is also different from the ones we have seen before.

FROM node:latest
ENV HOME=/home/nupp
COPY package.json npm-shrinkwrap.json $HOME/app/
COPY src/ $HOME/app/src
WORKDIR $HOME/app
RUN npm install --production
CMD ["npm", "start"]

The Code

Let’s look at our index.js, which is the entry point to our application.

Really, the only thing new here is the docker object that returns all the paths we declared using the -l flag. Let’s look at the docker.js code next. YOU MUST HAVE RESTARTED ALL YOUR SERVICES WITH THE “-l” FLAG AT THIS POINT.
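The [HPM] lines in the console output further down suggest the proxies are created with http-proxy-middleware. A minimal sketch of how index.js might wire the discovered routes into proxies; the helper name, host value, and express wiring are illustrative assumptions, not the exact code:

```javascript
// index.js (sketch) -- turn the routes hash (basePath -> host port)
// returned by docker.js into proxy targets, one per micro-service.
// proxyTargets and the host value below are illustrative assumptions.
function proxyTargets(routes, host) {
  return Object.keys(routes).map(path => ({
    path,                                      // e.g. '/provider'
    target: 'http://' + host + ':' + routes[path]
  }));
}

// The wiring would then look roughly like this (assumed, not verbatim):
// const express = require('express');
// const proxy = require('http-proxy-middleware');
// const app = express();
// for (const { path, target } of proxyTargets(routes, '192.168.99.100')) {
//   app.use(path, proxy({ target }));
// }
// app.listen(8080, () => console.log('API Gateway running on port: 8080.'));
```

Each entry in the hash becomes one proxy, which matches the three “Proxy created” lines in the output below (ports 3000–3002).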

Using the ‘dockerode’ module, we look at the services that are running, skipping the mongo and api services. I use a good old associative hash as a repository for the routes; you may use any object you find suitable.
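A minimal sketch of the core of docker.js under these assumptions: the container shape mirrors dockerode’s listContainers() output, and the function and variable names are my own, not the author’s:

```javascript
// docker.js (sketch) -- build the routes hash from running containers.
// Pure helper: map container descriptions to { msRoute: hostPort } entries,
// skipping containers (mongo, the gateway itself) that carry no msRoute label.
function buildRoutes(containers) {
  const routes = {};
  for (const c of containers) {
    const route = c.Labels && c.Labels.msRoute; // set via -l at `docker run`
    if (!route) continue;                        // mongo, api-gateway, etc.
    const port = c.Ports && c.Ports[0] && c.Ports[0].PublicPort;
    if (port) routes[route] = port;              // e.g. '/provider' -> 3000
  }
  return routes;
}

// With dockerode it would be wired roughly like this (assumed):
// const Docker = require('dockerode');
// const docker = new Docker({ host: '192.168.99.100', port: 2376 /*, certs */ });
// docker.listContainers((err, containers) => {
//   const routes = buildRoutes(containers);
//   // hand routes to index.js to create the proxies
// });

module.exports = { buildRoutes };
```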

Our config.js is a bit different, with docker config entries added and no mongoDB connection, since the gateway does not need to connect to a repository.
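A sketch of what such a config.js might contain; the exact keys, host, and cert path are assumptions based on the docker-machine setup used throughout this series:

```javascript
// config.js (sketch) -- docker connection settings instead of a mongo URL.
// All values are illustrative assumptions for the manager1 docker-machine.
module.exports = {
  docker: {
    host: '192.168.99.100', // manager1's IP, from docker-machine
    port: 2376,             // Docker daemon's TLS port
    certPath: '/certs'      // mounted via the -v flag in the start-up script
  },
  port: 8080                // the gateway's own listen port
};
```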

Run It Already

To test the code I will run the service as a stand-alone local server, not as a containerized micro-service, to ensure that everything is working as specified.

> node src/

Should result in the following output to your console:

[HPM] Proxy created: /  ->  http://192.168.99.100:3002
[HPM] Subscribed to http-proxy events:  [ 'error', 'close' ]
[HPM] Proxy created: /  ->  http://192.168.99.100:3001
[HPM] Subscribed to http-proxy events:  [ 'error', 'close' ]
[HPM] Proxy created: /  ->  http://192.168.99.100:3000
[HPM] Subscribed to http-proxy events:  [ 'error', 'close' ]
Connected to Docker: 192.168.99.100
Server started succesfully, API Gateway running on port: 8080.

And I check to make sure my proxies are working correctly:

Port 8080 responds

As you can see my API gateway is working. The next task is to containerize it.

eval `docker-machine env manager1`
docker build -t api-gateway .
docker run --name api-gateway -v /Users/raykahn/.docker/machine/machines/manager1:/certs --net='host' --env-file env -d api-gateway-service

My start-up script is different from the other start-up scripts that we have developed thus far.

  • -v flag: we give the api-gateway container read access to the Docker certificates directory on the virtual server at ‘/Users/raykahn/.docker/machine/machines/manager1’. The second part of this string is ‘:/certs’ (you may add a third optional part, also separated by ‘:’; see the docker volume flag). ‘:/certs’ mounts that directory at ‘/certs’, so that wherever ‘/certs’ is referenced within the container, it is actually accessing the host directory.
  • --net flag: binds the docker-machine IP to our api-gateway container. It attaches the container to the ports on the host machine, in this case ‘manager1’.

And starting the container will result in the following:

> ./start-service.sh
http://192.168.99.100:8080 is our API gateway

What’s Next

So far we have created multiple micro-services, as well as services that can communicate. We just finished our API gateway; not bad so far. The last piece of this “puzzle” is to make sure that all of our docker machines (VirtualBoxes) have all the micro-services deployed and running, i.e. balanced, unlike the image at the beginning of this blog. We will cover this last piece in the next blog.
