Overview of history
Previously, we deployed our applications using multiple physical servers. To make things clearer: when we have an application that needs a web server, a database, and an application server to run, we use three separate hardware boxes to deploy it.
Problems with the 1st generation
- Proper maintenance is difficult and costly.
- More space is required to keep the hardware.
- Each hardware box needs a separate network.
- We need a separate operating system for each hardware box, which costs money, and we have to maintain each of them (ex: patches, updates).
- Wastage: we may not be able to use all of the capacity that each physical box provides. (ex: the application may not use 100% of the processing power of the web server)
The majority of the issues from the first generation were addressed in the second generation.
We moved to hypervisors in the 2nd generation. Here we use a single piece of high-performance hardware. On top of the hardware, we install a hypervisor. On top of the hypervisor, we create multiple virtual machines. Then, on the virtual machines, we install the various operating systems we need. Our applications can then run on top of those operating systems. Look at the image below,
Advantages of the hypervisor
- Maintenance is easy because we use a single hardware box.
- Less space required.
- No separate network needed.
- Less wastage, because multiple virtual machines together can use the full performance of the single hardware box.
Problems with the 2nd generation
- Since we are using multiple virtual machines, we must install multiple operating systems, which costs money, and we have to maintain each of them (ex: patches, updates).
- If we wish to add a new service, we must first construct a new virtual machine, which requires a lot of configuration, and we can’t run the service instantly because the operating system takes time to boot.
We moved to a container-based architecture to resolve the issues with the 2nd generation.
We have a single hardware box in the container-based design as well. Then, on top of the hardware, we install a single operating system. Then, on top of the operating system, we install the Docker engine. Then, on top of the Docker engine, we create numerous containers. Then, inside the containers, we run our applications. Look at the picture below,
Advantages of containers (Docker)
- Since we use a single operating system, maintenance is easy.
- We can add a new service by simply adding a container, and we can run the service instantly because no booting time is needed (the operating system is already up and running).
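As a quick illustration of the “no booting time” point, here is a sketch assuming Docker is installed; the container name `web` and the `nginx` image are just examples:

```shell
# Start an nginx web server as a container; no operating system
# boots here, so the service is up almost instantly.
docker run -d --name web -p 8080:80 nginx

# The server is already answering requests.
curl http://localhost:8080/

# Stop and remove the container when we are done.
docker rm -f web
```

Compare this with the 2nd generation, where adding the same service would mean creating and booting a whole new virtual machine.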
The difference between Docker and a container is the same as the difference between Coke and soft drinks 😃. Docker is a product name, whereas container is the generic term for the technology.
Docker is a project created by a company named dotCloud, which was later renamed Docker, Inc.
Docker is an open-source project which is developed in Google’s Go language.
What is Docker Engine?
The Docker Engine is the core component of the Docker architecture. It manages containers, images, builds, orchestration, security, etc. The Docker Engine follows a client-server architecture and is made up of the following sub-components.
- The Docker daemon: This is the server that runs on the host computer. It manages Docker images, containers, networks, etc. It is also known as dockerd.
- Command Line Interface (CLI): This is the client, which is used to enter Docker commands.
- A REST API: It supports interactions between the client and the daemon.
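A quick way to see this client-server split in practice; a sketch assuming Docker is installed and listening on its default Unix socket:

```shell
# The CLI (client) asks the daemon (server) for version details;
# the output has separate "Client" and "Server" sections.
docker version

# The same request sent straight to the daemon's REST API
# over the default Unix socket, bypassing the CLI.
curl --unix-socket /var/run/docker.sock http://localhost/version
```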
What are registries?
Registries are the locations where Docker images are stored. A registry can be a public Docker registry or a private Docker registry. The default registry for public Docker images is Docker Hub. You can also create and run your own private registry. You can pull Docker images from a registry, and you can push your customized Docker images back to the registry.
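A minimal sketch of working with registries, assuming Docker is installed; `registry.example.com/myteam` is a placeholder for your own private registry:

```shell
# Pull an image from the default public registry (Docker Hub).
docker pull alpine:3.19

# Re-tag the image so it points at a private registry.
docker tag alpine:3.19 registry.example.com/myteam/alpine:3.19

# Push the customized tag to that private registry
# (a prior `docker login registry.example.com` is assumed).
docker push registry.example.com/myteam/alpine:3.19
```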
What is the relationship between Dockerfile, Images, and Containers?
- Dockerfile: A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble a Docker image.
- Docker image: A Docker image can be compared to a template that is used to create Docker containers. We can create a Docker image by building the appropriate Dockerfile.
- Docker container: A Docker container is a running instance of a Docker image; it holds the entire package needed to run the application. When we run a Docker image, it creates a Docker container.
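A minimal sketch of the Dockerfile → image → container chain, assuming Docker is installed; the image name `hello-web` and the copied `index.html` are illustrative:

```Dockerfile
# Dockerfile: the recipe used to assemble the image.
FROM nginx:alpine
# Copy a static page into the web root (index.html is assumed
# to exist next to the Dockerfile).
COPY index.html /usr/share/nginx/html/index.html
```

```shell
# Build the Dockerfile into an image named hello-web.
docker build -t hello-web .

# Run the image; this creates (and starts) a container.
docker run -d --name hello hello-web
```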
What is Orchestration?
Orchestration is the process of bringing all of the containers together to achieve a common goal. The deployment, management, scaling, and networking of containers are all automated via container orchestration. Container orchestration is useful for businesses that need to deploy and manage hundreds or thousands of containers and hosts. Orchestrators are tools that perform orchestration for containerized applications, and the most popular examples are Kubernetes and Docker Swarm.
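As a small sketch of what an orchestrator automates, using Docker Swarm (assuming Docker is installed; the service name `web` is an example):

```shell
# Turn this host into a single-node Swarm cluster.
docker swarm init

# Ask the orchestrator to keep three replicas of an nginx service
# running; Swarm schedules the containers and replaces failed ones.
docker service create --name web --replicas 3 -p 8080:80 nginx

# Scale the service without managing individual containers.
docker service scale web=5
```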
Are Docker containers persistent?
Yes, by nature containers are persistent across restarts. Data and configuration we store in a container will not be destroyed if we shut down or restart that container. However, the data is lost when the container is removed, so for data that must outlive the container we use Docker volumes.
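A sketch of the difference between a container's own storage and a volume, assuming Docker is installed; the names `app`, `appdata`, and the `redis` image are examples:

```shell
# Data written inside the container survives a restart...
docker run -d --name app redis
docker restart app

# ...but it is gone once the container is removed.
docker rm -f app

# A named volume keeps data independent of any container's lifecycle.
docker volume create appdata
docker run -d --name app -v appdata:/data redis
```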
Can we migrate a legacy application to Docker?
Whether it’s a new or legacy application, it should fit a microservice architecture in order to get the full benefit of dockerization. Therefore, if we can convert the legacy application to fit a microservice architecture by modifying or rewriting it, we can migrate the application to Docker and get that full benefit.
Even if the legacy application does not fit a microservice architecture, you can still migrate it to Docker. However, you might not get the full benefit.
What is ‘Open Container Initiative’ (OCI)?
Following the launch of Docker Inc.’s Docker, some other organizations adopted the container concept. However, they discovered that it did not fulfil all of their requirements and specifications, and it also had a few architectural flaws. As a result, they began to implement a similar framework named “Rocket” (rkt). Because two different firms were taking two different paths toward almost the same goal, they decided to form a mutual agreement named the “Open Container Initiative” (OCI). The OCI was founded in 2015 by Docker and a few other industry leaders. The OCI defines the specifications for container development, so now the containers of an application should comply with OCI standards. As a result, container-based development became platform-independent and vendor-independent.
Keep Learning ❤️
Docker Explained - An Introductory Guide to Docker - DZone Cloud
Docker Architecture and its Components for Beginner