Software Architecture and Docker
Software architecture is not visible to the user, which makes it easy to overlook. But poor software architecture lets a program's cruft grow and makes the program hard to modify. High internal quality, in this case a solid software architecture, leads to faster delivery of new features.
What is software architecture?
The software architecture of a program is the structure of the system: the software elements, the properties of those elements, and the relationships between them. We can therefore say that a program's software architecture defines its structure and behaviour.
It is not concerned with defining all of the structure and all of the behaviour, though; only with the elements deemed significant. Significant elements are those with a long-lasting effect, such as the major structural elements, the elements associated with essential behaviour, and the elements that address significant qualities such as reliability and scalability. In general, the architecture is not concerned with the fine details of these elements.
There are many kinds of software architecture:
- Client-Server Architecture
- Object-Oriented Architecture
- Domain-Driven Architecture
- Onion Architecture
- Aspect-Oriented Architecture
- Service-Oriented Architecture
- Microservices Architecture
- Lambda Architecture
- Component-based Architecture
- Event-driven Architecture
Two-tier and Three-tier Architecture
Two-tier Architecture
This is where the client communicates directly with the server, with no intermediary. The system is divided into two parts: the client application and the database. The client sends a request to the server, which processes the request and sends back the data. In other words, the client handles both the presentation layer (the application interface) and the application layer (the logical operations), while the server handles the database layer.
Three-tier Architecture
Three-tier architecture is divided into three parts: the presentation layer (client tier), the application layer (business layer), and the database layer (data tier).
The presentation layer is the topmost layer of an application; it is the interface you see when using the software. This layer passes the information the user provides, through keyboard actions and mouse clicks, to the application layer.
The application layer is also called the business layer, because this is where we find the logic and functionality that processes the data received from the presentation layer and the database layer.
Finally, the database layer is the layer that stores data. It contains the methods that connect to the database and perform the required actions.
So what are examples of two-tier and three-tier architecture in real life?
An e-commerce site that you access on the web is an example of two-tier architecture: when you open it, you send a request directly to the server, and the server sends back the data.
On the other hand, an e-commerce app that you access on your smartphone is an example of three-tier architecture. When you open it, you are dealing with the presentation layer. When you take actions, such as tapping a button, they are passed to the application layer. And the part that stores the data behind those actions is the database layer.
What is Docker?
Docker is an open-source platform for developers to build, deploy, and run applications using containers. Docker provides operating-system-level virtualization and can be used to run servers, including web servers.
Docker packages an application together with its libraries and dependencies into an image; containers are then created from that image. Images can be copied and shared anywhere, which means the application can run on any PC that has Docker.
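For example, a minimal sketch of that workflow, assuming a project directory that contains a Dockerfile and using myapp as a placeholder image name:
docker build -t myapp .    # package the app and its dependencies into an image
docker run --rm myapp      # start a container from that image on any machine with Docker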
Docker vs Virtual Machine
But first of all, what is a virtual machine? A virtual machine is a system that acts exactly like a computer. It makes it possible to run what appear to be separate computers, with their own hardware, on a single machine. Each virtual machine needs its own underlying operating system.
Since both are forms of virtualization, what is the difference between Docker and a virtual machine?
As explained above, Docker only needs containers to run many applications, which makes it more efficient and faster to access. A virtual machine, by contrast, is much heavier and more difficult to maintain.
This explains why developers usually use Docker during development. In production, we can then deploy each application on its own server. Although… in practice it's also enough to just use Docker there too!
Here are the other differences between the two: operating system support, security, portability, and performance.
- Operating System Support
A virtual machine has its own guest operating system, while Docker containers share the host operating system. This is what makes a virtual machine heavier than Docker.
- Security
Since a virtual machine does not share an operating system with the host, it is strongly isolated from the host kernel, which makes it more secure than Docker. With Docker, if an attacker gains access to the shared kernel through one container, they can potentially compromise all the containers on that host.
- Portability
Docker containers are easy to port because they don't carry a separate operating system. A container can start immediately after being moved to a different host.
A virtual machine, on the other hand, has its own separate OS, so it is harder to port, and moving it takes a lot of time because of its size.
- Performance
Because Docker is so much lighter, it runs faster than a virtual machine.
So why should we use Docker?
- Docker enables more efficient use of system resources. Docker uses less memory than a virtual machine.
- Docker enables faster software delivery cycles. Docker makes it easy for developers to roll out new versions of software or even roll back to a previous version.
- Docker is lightweight, portable, and self-contained.
- Docker is excellent for a microservices architecture. Containers are perfectly suited to the microservices approach and to agile development processes.
Docker Architecture
The Docker daemon is a service that runs on our host operating system. The daemon exposes a REST API, and many different tools can talk to the daemon through this API. We use the Docker CLI as the command line tool to talk to the Docker daemon.
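For example, a minimal sketch of both paths, assuming a Linux host where the daemon listens on its default Unix socket:
docker version                                                       # the CLI talks to the daemon and prints both Client and Server info
curl --unix-socket /var/run/docker.sock http://localhost/version     # the same information fetched straight from the daemon's REST API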
Terms on Docker
To understand Docker, you should understand these terms first:
1. Docker Daemon
The Docker daemon is a service that lives on the host operating system. It is used to build, deploy, and run Docker containers. Users can't use the Docker daemon directly; to use it, they go through the Docker client (the CLI) as a middleman.
2. Docker Images
A Docker image is a read-only template. The template is essentially an OS, or an OS with applications installed on it. We can create many Docker containers from just one Docker image.
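For instance, a minimal sketch of starting several containers from one image; the image nginx and the names web1 and web2 are just placeholders:
docker pull nginx                   # download the read-only image
docker run -d --name web1 nginx     # first container from the image
docker run -d --name web2 nginx     # second container from the same image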
3. Docker Container
A Docker container can be thought of as a folder: a writable layer created by the Docker daemon on top of an image. A container can later be committed into a new Docker image, which can then be used to create new Docker containers.
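A minimal sketch of that cycle, with app1, app1-snapshot, and app2 as placeholder names:
docker run -d --name app1 nginx             # create a container from an image
docker commit app1 app1-snapshot            # turn the container's current state into a new image
docker run -d --name app2 app1-snapshot     # create a new container from that image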
4. Docker Registry
A Docker registry is a collection of Docker images, either private or public. We can access a Docker registry through Docker Hub, where we can push or pull our own images.
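For example, a minimal sketch of pushing and pulling an image, where yourname stands for a hypothetical Docker Hub account and myapp for your image:
docker login                           # authenticate against Docker Hub
docker tag myapp yourname/myapp:1.0    # name the image for the registry
docker push yourname/myapp:1.0         # upload it
docker pull yourname/myapp:1.0         # download it on any other machine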
5. Docker/Container Orchestration
Docker/container orchestration is the management of the container life cycle, especially in large and dynamic environments. Developers can use a container orchestration tool to control and automate many activities, such as the ones below (see the command sketch after this list):
- Starting and stopping containers.
- Exposing functionality from a container to a user or to other containers.
- Deploying a new version of a container with zero downtime.
- Automatic or manual cleanup.
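As a minimal sketch of those activities using Docker Swarm, Docker's built-in orchestrator; the service name web and the image tags are just placeholders:
docker swarm init                                                # turn this host into a swarm manager
docker service create --name web --replicas 3 -p 80:80 nginx    # start the service and expose it to users
docker service update --image nginx:1.25 web                    # roll out a new version; tasks are replaced one by one, so the service stays up
docker service rm web                                            # stop and clean up the service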
Docker has a client-server architecture
We can say that Docker is a client-server application. The daemon is the server, and the CLI is one of many clients; there are also many third-party clients. We can use a client to manage the different components the daemon controls, such as images, containers, networks, and data volumes.
Now that we understand Docker's terms, we can take a look at how the client talks to the Docker host:
- The client is where we run the various Docker commands. It can be installed on our own OS.
- The Docker host is a server that runs the Docker daemon. The client and the daemon don't have to run on the same machine (see the sketch after this list).
- The Docker registry is where the daemon finds and downloads Docker images.
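For example, a minimal sketch of pointing the local client at a remote daemon; remote-host and the port are placeholders, and the remote daemon must be configured to listen on TCP (ideally with TLS):
export DOCKER_HOST=tcp://remote-host:2375    # tell the CLI where the daemon lives
docker ps                                    # now lists containers running on the remote host
docker pull nginx                            # the remote daemon pulls the image from the registry
unset DOCKER_HOST                            # switch back to the local daemon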
Container Orchestration
Container orchestration is the automatic process of managing and scheduling the work of individual containers for microservices-based applications across multiple clusters. It focuses on managing the life cycle of containers and their dynamic environments.
Why do we need container orchestration?
Container orchestration is used to automate these tasks:
- Configuring and scheduling containers.
- Deployments of containers.
- Availability of containers.
- The configuration of applications in terms of the containers that they run in.
- Scaling of containers to equally balance application workloads across infrastructure.
- Allocation of resources between containers.
- Load balancing, traffic routing and service discovery of containers.
- Health monitoring of containers.
- Securing the interactions between containers.
How does container orchestration work?
First, configuration files tell the container orchestration tool how to network between containers and where to store logs.
Then, the orchestration tool schedules deployment of containers into clusters and determines the best host for the container. After a host is decided, the orchestration tool manages the lifecycle of the container. Container orchestration tools work in any environment that runs containers.
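As a minimal sketch with Docker's built-in orchestrator, assuming a docker-compose.yml that describes the services and using shop as a placeholder stack name:
docker stack deploy -c docker-compose.yml shop    # the tool reads the config file and schedules containers across the swarm
docker stack services shop                        # shows where the services landed and how many replicas are running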
Tools to manage, scale, and maintain containerized applications are called orchestrators. The most common orchestrators are Kubernetes and Docker Swarm.
Docker Swarm vs Kubernetes
Docker Swarm can run applications as containers, find existing container images built by others, and deploy a container on a laptop, a server, or in the cloud (public or private).
On the other hand, Kubernetes enables container clustering via a full container orchestration engine. Kubernetes uses declarative management, which keeps it from becoming overly complex, and it is open source, so it can run anywhere.
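A minimal side-by-side sketch, where web and nginx are placeholders:
docker service create --name web --replicas 3 nginx    # Docker Swarm: create a replicated service
kubectl create deployment web --image=nginx             # Kubernetes: create a deployment
kubectl scale deployment web --replicas=3               # Kubernetes: declare the desired number of replicas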
When do we use Docker Orchestration?
A simple containerized app or website that is used by a small number of users doesn't really need orchestration. But if you have a complex app with a number of functionalities, orchestration comes to the rescue. Here's when we use orchestration:
- When our app is complex. If our app involves more than two containers, it’s best to use orchestration.
- When our app has high demands for scaling and resilience. Orchestrators let us balance loads and spin up containers to meet demand. We declare the desired state of the system instead of manually coding every reaction to changing conditions.
- When you want to make the most of modern CI/CD techniques. Orchestration systems support deployment patterns such as blue/green deployments and rolling upgrades (see the sketch below).
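For example, a rolling upgrade with Docker Swarm might look like this; the image tag, timing, and service name are placeholders:
docker service update --image myapp:2.0 --update-parallelism 1 --update-delay 10s web
# replaces one task at a time, waiting 10 seconds between tasks, so the service stays available throughout the upgrade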
How to Set Up Docker
1. Download and install Docker Desktop
2. Test Docker version
Open a terminal and run
docker --version
3. Test hello world Docker image
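Run the following; the first time, Docker will pull the hello-world image from Docker Hub:
docker run hello-world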
4. List the hello-world image that you downloaded to your machine
docker image ls
5. List the hello-world container
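The hello-world container has already exited, so include stopped containers in the listing:
docker container ls --all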
Here are some of the Docker commands you can use:
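A few of the most frequently used ones, with angle brackets marking placeholders:
docker ps                         # list running containers
docker ps --all                   # list all containers, including stopped ones
docker images                     # list the images on your machine
docker pull <image>               # download an image from a registry
docker run <image>                # create and start a container from an image
docker stop <container>           # stop a running container
docker rm <container>             # remove a stopped container
docker rmi <image>                # remove an image
docker logs <container>           # show a container's output
docker exec -it <container> sh    # open a shell inside a running container (if it has sh)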
Conclusion
Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. By using Docker, you can significantly reduce the delay between writing code and running it in production.