Introduction to Docker, AWS ECS, WSO2 APIM and WSO2 EI

Chinthaka Thennakoon
8 min read · Apr 17, 2018


Hello geeks! This is the first part of a blog series on application containerization with Docker and deployment to AWS EC2 Container Service (ECS). In this series, I am going to use two WSO2 products (WSO2 API Manager and WSO2 Enterprise Integrator) as the containerized services. In other words, these two products stand in for any two application services of yours that need to communicate with each other, as the following diagram shows.

1. Deployment Overview

This blog covers the concepts of containerization, Docker, AWS ECS, WSO2 APIM, and WSO2 EI that you need before going into any practical, hands-on session. The next blog post will be a complete user guide to containerizing these products and configuring them in AWS ECS. Let’s learn some concepts first. Shall we?

1. Containerization

If you need to run your application anywhere (physical, virtual, or cloud), on any machine, using minimal resources and without worrying about the host OS, then containerization is the best solution. Do not confuse this concept with virtualization. Virtualization is about changing the mindset from physical to logical: it means creating more logical IT resources, called virtual systems, within one physical system. VMs provide an environment with more resources than most applications need.

Containerization essentially virtualizes an operating system so that applications can be distributed across a single host without requiring their own virtual machine. This is done by giving an app access to a single operating system kernel. All containerized apps running on a single machine run on the same kernel.

2. Virtualization vs Containerization

Let’s talk about what makes containerization better than virtualization. Containerization makes applications portable by virtualizing CPU, memory, storage, and network resources at the OS level, creating isolated, encapsulated systems that share the host kernel. Application containerization works well with microservices and distributed applications, as each container operates independently of the others and uses minimal resources from the host. So we can run our application without bundling it, along with all its dependencies, into an entire VM. Containers need very little processing power, disk space, and memory to run.

2. Docker basics and commands

Docker is the most popular and common container engine in the tech world. There are some other container technologies apart from Docker, such as CoreOS rkt. So, shall we learn some basic terminology and commands of Docker?

Image: A read-only template for creating containers. Most of the time, one image is based on another image. For example, we can build an image that is based on an Ubuntu image, then install a web server and our application on top of it, along with the configuration changes needed to make the application run.
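As a rough sketch of that idea, a minimal Dockerfile could look like the following; the base image tag, the nginx web server, and the my-app directory are illustrative assumptions only:

# Start from an Ubuntu base image, install a web server, and add our application
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y nginx
COPY ./my-app /var/www/html/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]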

Container: A container is a runnable instance of an image. We can create, start, stop, move, or delete a container using the Docker commands. We can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.
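As a quick sketch, a typical container lifecycle with those commands could look like this (the image and container names are placeholders):

# create and start a container from an image, mapping host port 8080 to container port 80
docker run -d --name my-web -p 8080:80 my-web-image:v1
# stop and start the same container
docker stop my-web
docker start my-web
# create a new image from the container's current state
docker commit my-web my-web-image:v2
# remove the container when it is no longer needed
docker rm -f my-web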

Docker Daemon: Builds, runs, and manages Docker containers on a host machine. A daemon can also communicate with other daemons to manage Docker services.

Docker Client: The user interface for the Docker daemon. It accepts commands from the user and communicates with the Docker daemon.

Docker Registries: Repositories of images for users to upload or download. Registries can be public or private. The public Docker registry is called the Docker Hub.
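For instance, pulling a public image and pushing your own image to Docker Hub would roughly look like this (the repository names are placeholders):

# download an image from the public registry (Docker Hub)
docker pull ubuntu:16.04
# log in to Docker Hub, tag the local image with your repository name, and upload it
docker login
docker tag my-web-image:v2 your-dockerhub-user/my-web-image:v2
docker push your-dockerhub-user/my-web-image:v2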

Dockerfile: A Dockerfile is a text document that contains all the commands a user would call on the command line to assemble an image. Docker can build images automatically by reading the instructions from a Dockerfile.
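Assuming the Dockerfile sketched above sits in the current directory, assembling and checking the image takes just a couple of commands:

# build an image from the Dockerfile in the current directory and tag it
docker build -t my-web-image:v1 .
# list local images to confirm the build
docker images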

For more information, please follow the official Docker documentation [1]. If you don’t know the basic Docker commands, I would recommend reading the following post. It covers the main Docker commands like pull, push, run, delete, start, and stop in one page. These commands will be needed in my next blog post, which guides you through a hands-on session. Please take a few minutes and read it.

3. Introduction to AWS ECS

Okay, now we know what containerization is and how we can achieve it using Docker. These days, the cloud is the new norm for businesses from small to large scale, and we can manage our containerized applications in the cloud easily. For this purpose, AWS provides the EC2 Container Service (ECS). AWS ECS is a highly scalable, high-performance container orchestration service. This service eliminates the need for you to install and operate your own container orchestration software, manage and scale a cluster of virtual machines, or schedule containers on those virtual machines. You can read more about AWS ECS in its documentation [2]. Now let’s cover the basic components of AWS ECS, and then we can visualize how these components work with each other.

Cluster: A logical group of AWS EC2 instances on which we can place containers to run.
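With the AWS CLI, creating a cluster is a single command; the cluster name below is just an example (the EC2 instances are registered to the cluster separately):

aws ecs create-cluster --cluster-name demo-cluster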

Task definition: This is a blueprint that describes how a Docker container should launch. In other words, it is a point-in-time capture of the configuration for running an image, or, put differently, the recipe that ECS uses to run tasks in a cluster. It is a text file in JSON format that describes one or more containers (up to ten) and their configurations. We can describe the CPU and memory requirements, links between containers, networking, port settings, and data storage in a task definition.
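A minimal task definition for one container could look roughly like this; the family, container name, image URI, ports, and CPU/memory values are placeholders for illustration:

{
  "family": "my-web-task",
  "containerDefinitions": [
    {
      "name": "my-web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-image:v2",
      "cpu": 256,
      "memory": 512,
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "hostPort": 8080, "protocol": "tcp" }
      ]
    }
  ]
}

Saved as my-web-task.json, it can be registered with: aws ecs register-task-definition --cli-input-json file://my-web-task.json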

Task: This is the actual set of running containers, and it may include one or more containers. Simply put, it is an “instance” of a task definition.
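For example, a one-off task can be started directly from a registered task definition (the cluster and task definition names are the same placeholders as above):

# start one task from the latest revision of the task definition
aws ecs run-task --cluster demo-cluster --task-definition my-web-task --count 1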

Service: We can define long-running tasks from a task definition as services, which need to be available all the time, such as containerized back-end services. If any of the tasks should fail or stop for any reason, the Amazon ECS service scheduler launches another instance of your task definition to replace it and maintain the desired count of tasks in the service. A service definition includes the task definition, the number of tasks, and how the tasks should be distributed. Apart from these, services allow us to run tasks behind a load balancer, which distributes traffic across them.
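Creating such a service from the task definition above could look like the following sketch (the names and desired count are assumptions; load balancer options are omitted for brevity):

# keep two copies of the task running at all times
aws ecs create-service \
  --cluster demo-cluster \
  --service-name my-web-service \
  --task-definition my-web-task \
  --desired-count 2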

ECS agent: Manages the state of the containers on a single EC2 instance. Every instance in the cluster has an agent. The centralized ECS control plane communicates with the Docker daemon on each EC2 instance via this agent.

ECR: A fully managed Docker container registry. We can store, manage, and deploy container images with this service. It is a private, redundant, encrypted, and highly available registry service that acts as a repository for Docker images.
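The usual push workflow to ECR looks roughly like this; the repository name, region, and account ID are placeholders, and the login step differs slightly between AWS CLI v1 (aws ecr get-login) and v2 (shown below):

# create a private repository in ECR
aws ecr create-repository --repository-name my-web-image
# authenticate the local Docker client against ECR
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# tag the local image with the ECR repository URI and push it
docker tag my-web-image:v2 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-image:v2
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-image:v2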

Now, let’s get a better understanding about these components with a diagram.

3. ECS Components Interaction

Steps:

1. Build and tag your Docker image after performing all the necessary configurations, and push that image to the AWS ECR registry.

2. Create a task definition describing the Docker image and the container configurations that should run.

3. Create an ECS service by defining how many tasks should run with the configuration in step 2, along with the load balancer configuration.

4. The ECS service sends the task-start information to each ECS agent in the cluster (one agent per EC2 instance).

5. Each agent pulls the Docker images defined in the task definition (step 2).

6. The number of tasks defined in step 3 is started across the EC2 instances in the cluster. Each task will include all the containers defined in step 2 (a few commands to verify this are sketched below).
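Steps 1–3 correspond to the commands sketched in the previous section; steps 4–6 are handled by ECS itself. You can verify that the scheduler did its job with commands like these (the names are the same placeholders as before):

# confirm the service has reached its desired task count
aws ecs describe-services --cluster demo-cluster --services my-web-service
# list and inspect the tasks the agents started on the EC2 instances
aws ecs list-tasks --cluster demo-cluster
aws ecs describe-tasks --cluster demo-cluster --tasks <task-arn>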

4. WSO2 Enterprise Integrator (EI) and API Manager

WSO2 EI/ESB

You may have already heard or are already familiar with the term Enterprise Service Bus (ESB). For the benefit of those who don’t know what it is, an ESB,

is a software architecture model / middleware solution used for designing and implementing the interaction and communication between mutually interacting software applications in Service Oriented Architecture (SOA) [3].

If your company or organization has to communicate with several third-party systems that are totally different from each other, it is difficult to connect directly to each and every individual system. Message transformation and talking to different transport protocols can be tedious. Don’t worry, an ESB is here to rescue you from this. Sounds good, right? There are many ESB products from different companies like WSO2, IBM, and Oracle.

WSO2 ESB is a widely used ESB because of its rich capabilities, and it is trusted by large corporates across the globe. WSO2 ESB provides asynchronous message mediation, message identification and routing between applications and services, message flow across different transports (HTTP/S, JMS, TCP) and protocols (SOAP, REST), message transformation, secure and reliable communication, and service chaining.

Now, this WSO2 ESB product is pre-packaged as the Integration profile inside WSO2 Enterprise Integrator (EI) [4]. EI also comprises a few other WSO2 products as profiles, such as the Analytics profile (WSO2 DAS), the Message Broker profile (WSO2 MB), and the MSF4J profile for microservices.

Let’s take a look at what WSO2 EI’s ESB Profile can do for us in the diagram below.

4. WSO2 EI (ESB/Integration Profile)

WSO2 APIM:

API management is the process of designing, publishing, documenting, and analyzing APIs in a secure environment. Through an API management solution, an organization can guarantee that both the public and internal APIs they create are consumable and secure.

There are API management solutions from several companies, such as Apigee, Oracle, IBM, and WSO2.

WSO2 API Manager (APIM) is open source, and it helps you create, publish, store, version, govern, secure, and manage the life cycle of your APIs. You can read more about the key concepts [5] and get started quickly with WSO2 APIM [6] in the official WSO2 documentation.

WSO2 APIM comprises the following components: API Gateway (worker/manager), API Store, API Publisher, Traffic Manager, and Key Manager. Now let’s have a look at a high-level diagram of how these components interact with each other.

5. WSO2 APIM Components Interaction
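To make the component interaction a bit more concrete, here is roughly how a client consumes a published API at runtime, assuming a default standalone APIM installation (gateway HTTPS port 8243; the consumer key/secret, token, API context, and version are placeholders):

# exchange the application's consumer key/secret for an OAuth2 access token (issued by the Key Manager)
curl -k -d "grant_type=client_credentials" \
  -H "Authorization: Basic <base64 of consumer-key:consumer-secret>" \
  https://localhost:8243/token
# invoke the published API through the API Gateway with the issued token
curl -k -H "Authorization: Bearer <access-token>" \
  https://localhost:8243/myapi/1.0.0/orders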

Conclusion

Today, the cloud is the new norm in almost every industry. Companies move towards cloud solutions because the cloud provides infrastructure without the organization running its own servers: it eliminates server and data-center maintenance costs and extra power consumption, simplifies license management, helps meet location-related regulatory compliance requirements, and removes worries about data-center lease expiration.

Application deployment can be tedious because of high resource demands, and applications should also be able to run in any given environment. Containerized applications address these issues through their portability and scalability.

Docker is the most popular containerization engine, used by developers for its performance and compatibility. AWS ECS provides a highly scalable, high-performance cloud service for orchestrating Docker containers.

In my next post, I will write more about containerization with AWS ECS and how to deploy the two WSO2 applications in a containerized environment. It should serve as a guide for putting the concepts above into practice.

Stay tuned!

[1] https://docs.docker.com/engine/docker-overview

[2] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html

[3] https://www.innovativearchitects.com/KnowledgeCenter/business-connectivity/ESB-EAI-SOA.aspx

[4] https://docs.wso2.com/display/EI611/Introducing+the+Enterprise+Integrator

[5] https://docs.wso2.com/display/AM200/Key+Concepts

[6] https://docs.wso2.com/display/AM200/Quick+Start+Guide

About the Author

Chinthaka Thennakoon is a Software Engineer working on software product development and enterprise integration solutions.
