Kontena Blog

Rule your Microservices with an API Gateway: Part II

In the previous part of this blog post series I talked about Microservices and the purpose of API gateways.

We’re rather obsessed with microservices here at Kontena, so here are some other microservices related articles that may interest you: Event Sourcing Microservices with Kafka and how to implement Event-Driven Microservices with RabbitMQ.

Now, I’m going to show how to run API gateways in practice and set up and configure services.

As stated in my previous blog post, an API gateway provides a single, unified API entry point across one or more internal APIs. Rather than invoking different services, clients simply talk to the gateway.

The gateway enables support for mixing communication protocols and decreases microservice complexity by providing a single point to handle cross-cutting concerns such as authorization using API tokens, access control enforcement, and rate limiting.
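These cross-cutting concerns are easy to picture in code. Below is a minimal, self-contained Ruby sketch of just the rate-limiting part: a fixed one-minute window keyed by API token. All class and method names here are illustrative, not part of any real gateway's API.

```ruby
# Illustrative sketch of gateway-side rate limiting (fixed one-minute window).
class RateLimiter
  def initialize(limit_per_minute)
    @limit = limit_per_minute
    # Per-token window state: when the window started and how many requests it has seen.
    @windows = Hash.new { |h, k| h[k] = { start: nil, count: 0 } }
  end

  # Returns true if the client identified by api_token may proceed.
  def allow?(api_token, now = Time.now)
    w = @windows[api_token]
    # Start a fresh window on the first request or after 60 seconds.
    if w[:start].nil? || now - w[:start] >= 60
      w[:start] = now
      w[:count] = 0
    end
    w[:count] += 1
    w[:count] <= @limit
  end
end

limiter = RateLimiter.new(2)
puts limiter.allow?('token-a')  # => true
puts limiter.allow?('token-a')  # => true
puts limiter.allow?('token-a')  # => false (third request in the same minute)
```

A real gateway would keep this state in a shared store such as Redis so that all gateway instances enforce the same limit, but the window logic is the same.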

API gateways can be divided roughly into three categories:

Using Kontena Load Balancer as an API gateway

Kontena is a developer-friendly container and microservices management platform. Kontena Load Balancer is an L7 proxy based on HAProxy: a powerful, fully automated load balancer for any number of Kontena Stacks and Services.

You can install it as a ready-made Kontena Stack with the Kontena CLI:

kontena stack install kontena/ingress-lb

or use it with your own stack:

services:
  api-gateway:
    image: kontena/lb:latest
    ports:
      - 80:80

The Kontena Load Balancer does not do anything unless Kontena Services are linked to it. Any Kontena Service may be linked to the Kontena Load Balancer simply by adding a link with the name of the load balancer service. Load balancing options for a linked service may be configured via environment variables.

services:
  api-gateway:
    image: kontena/lb:latest
    ports:
      - 80:80
  web-1:  # the web service names here are illustrative
    image: nginx:latest
    links:
      - api-gateway
  web-2:
    image: nginx:latest
    links:
      - api-gateway

Typically with L7 proxies you can configure only load balancing rules, and other advanced configuration options are quite limited. Kontena goes a little further: for example, you can configure Basic Authentication for the Kontena Load Balancer.

If you are using other container management tools, check out Traefik for Docker Compose and the NGINX Ingress Controller for Kubernetes.
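For instance, a minimal Docker Compose setup with Traefik as the gateway might look like the sketch below (Traefik v1.x label syntax; the service names and routing rule are illustrative):

```yaml
version: "2"
services:
  api-gateway:
    image: traefik:1.7
    command: --docker            # read routing rules from container labels
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  images:
    image: nginx:latest
    labels:
      - "traefik.frontend.rule=PathPrefix:/images"
      - "traefik.port=80"
```

As with the Kontena Load Balancer, the backend service declares its own routing rules and the gateway picks them up automatically.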

Using Kong as an API Gateway

When an L7 proxy is not enough, traditional API gateways step in: they provide more functionality and configuration options for securing microservices. One out-of-the-box solution is Kong.

Kong is a microservice API gateway. Some of the popular features deployed through Kong include authentication, security, traffic control, serverless functions, analytics & monitoring, request/response transformations and logging.

With Kontena you can install Kong with a single command:

kontena stack install kontena/kong

The installer will deploy the Kong API gateway and an optional PostgreSQL database.

For other installation methods, please refer to the Kong installation guides.

Service Configuration

With the Kontena Load Balancer, configuration is very straightforward: you register services to the load balancer and define load balancing rules in Kontena Stack files, and Kontena updates the HAProxy configuration on the fly when changes are made. Unfortunately, this is not possible with Kong at the moment. For Kubernetes there is an ongoing effort to provide an official Kong Ingress Controller, which should automate a lot of this. Meanwhile, to configure APIs and plugins, you need to use Kong's Admin API. But you can still automate that.

With Kontena, you can create a post-start script that configures all the required settings when a service is deployed. Kong's Admin API is not exposed to the outside world, but services can still reach it over Kontena's internal network.

Such scripts can be, for example, basic bash scripts that execute curl requests against the Kong Admin API. However, bash scripts easily become messy and complicated, so I have created the Kong API client library for Ruby to make things easier.
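For comparison, the raw Admin API calls behind such a script look roughly like this (Kong 0.x `/apis` endpoint; the host, port and upstream address are illustrative):

```shell
# Register an API with Kong's Admin API
curl -X POST http://kong:8001/apis \
  -d "name=images" \
  -d "uris=/images" \
  -d "upstream_url=http://images-service:3000"

# Attach a rate-limiting plugin to that API
curl -X POST http://kong:8001/apis/images/plugins \
  -d "name=rate-limiting" \
  -d "config.minute=10"
```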

We use Ruby a lot internally and already run database migrations with Rake tasks, so why not register Kong APIs the same way?

namespace :kong do
  desc "Register Kong configurations"
  task register: :environment do
    api = Kong::Api.find_by_name('images')
    unless api
      api = Kong::Api.new(name: 'images')
      api.uris = ['/images']
      api.upstream_url = "http://#{ENV['KONTENA_STACK_NAME']}.#{ENV['KONTENA_GRID_NAME']}.kontena.local:3000"
      api.save
    end

    rate_limiting_plugin = api.plugins.find { |p| p.name == 'rate-limiting' }
    unless rate_limiting_plugin
      rate_limiting_plugin = Kong::Plugin.new(name: 'rate-limiting')
      rate_limiting_plugin.api = api
      rate_limiting_plugin.config = {
        minute: (ENV['RATE_LIMIT_PER_MINUTE'] || 10).to_i
      }
      rate_limiting_plugin.save
    end
  end
end
This task registers the images API with Kong and configures a rate-limiting plugin for it.

Then we can add a post-start hook to the stack file:

hooks:
  post_start:
    - name: register Kong APIs
      cmd: bundle exec rake kong:register
      instances: 1

So every time we deploy a new version of the service, it updates the Kong configuration related to that service.

You can check out the complete example in the GitHub repo.

Want to learn more?

Register for my upcoming webinar, Why Do Microservices Need an API Gateway?, to find out how an API gateway can provide a uniform interface and a central connection point for the microservices behind it, and how those services can then be handled dynamically.

Image Credits: Gate Road by Waqutiar Rahaman.

Originally published at blog.kontena.io on March 13, 2018.



Lauri Nevala

Cloud-native full-stack developer. Core developer of https://k8slens.dev