High-Level Overview of DevOps Tools:

Sania Iftikhar
7 min read · Feb 4, 2023


Hey! This is Sania Iftikhar, and today I’ll talk about DevOps tools. Let’s start! When we hear about DevOps, we come across a number of tools like Docker, Terraform, Ansible, Jenkins, CloudFormation, Kubernetes, Git, etc., and it can feel overwhelming to face so many tools and technologies to learn.

In this article, I'll try to demystify all these tools by relating them to a real-world problem.

It all starts with an IDEA! Let’s say you have an idea and you think it’s going to change the world: you are going to build a website that “books tickets to Mars” in advance, so that people don’t have to travel, wait in a queue, and pay too much. So what would you do?

An intelligent developer opens his/her favorite editor and starts implementing the idea. After a few hours, the first version of the product is ready and it’s time to share it with the world. At this point it runs only in your local environment and is not accessible once you shut down your computer. So you have to deploy it to a system that is never turned off, such as a physical server in a data center or a virtual machine in the cloud.

You copy the code and place it on the server. There you need to configure the system: install the exact required versions of all packages and libraries, configured the same way as in development.

Development Environment: Where you build your application.

Production Environment: The server where you host your application.

Now the application is accessible through an IP address, but an IP is not a user-friendly way to access a website. For this, you have to purchase a domain name and point it at your server.
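
Pointing the domain at the server is done with a DNS A record that maps the name to the server's IP. As a quick sanity check (the domain below is a hypothetical placeholder), you can verify that the record resolves:

$ dig +short bookticketstomars.example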

Over time, your website becomes famous and receives thousands of requests. To handle this properly, we follow this workflow:

1. Development -> where you write your code.

2. Build -> code in plain-text form cannot be run as an application by the end user; it has to be converted into an executable format (an .exe on Windows, a binary on Linux). This “build the code” step is handled by tools like Maven and Gradle.

Running the build script:
$ ./build.sh

3. Deploy -> the executable file is moved to the Production Environment; this is called the “deploy stage”.

And run: 
$ ./app

By this time you have a team that needs to collaborate on the code. For this there is:

Git vs GitHub:

Git helps all developers work on the same application at the same time and collaborate efficiently. Everyone “pulls the latest code” from the central hub using the git pull command, adds their own changes, and “pushes it back” using the git push command. The central hub is a cloud-based platform that serves as a central location for all the code. So:

Git is the underlying technology (version control), and GitHub is a publicly hosted Git-based central repository of code, where you configure projects, organizations, and users and define different access levels for different users. Other similar platforms are available, such as Bitbucket and GitLab.
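
A typical day-to-day Git flow looks like this (assuming the shared branch is named main; the commit message is just an example):

$ git pull origin main                 # get the latest code from the central hub
$ git add .                            # stage your changes
$ git commit -m "Add booking page"     # record them locally
$ git push origin main                 # share them with the team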

Now the development issues are sorted out by Git/GitHub, but we still have to manually upload the code to the production environment.

As everyone contributes, the code needs to be rebuilt with the latest changes, so building on a local system no longer works: an individual’s laptop may not have all the latest changes. For this, we move the build operation to a dedicated build server that gets the latest code (moved there manually from development, for now) and builds it into an executable format before moving it to production.

It’s good to catch bugs and errors before anything reaches the production stage, so we add a Test Environment: we manually copy the build to the test environment, and then manually move the executable to the Production Environment. Everything takes place manually, so to sort this out we have:

CI/CD — Continuous Integration & Continuous Delivery:

Tools like Jenkins, GitHub Actions, and GitLab CI/CD help automate this whole manual workflow. With a pipeline configured in one of these tools, every time you push code it is automatically pulled from the GitHub repository to the build server, built, and moved to Test and then to the Production Environment.
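
As a rough sketch, the pipeline simply automates the same commands we were running by hand; the test script name and server address below are hypothetical placeholders:

# What the pipeline runs on every push (sketch)
git pull origin main                      # fetch the latest code
./build.sh                                # build the executable
./run_tests.sh                            # run the test suite
scp ./app deploy@prod-server:/opt/app     # move the build to production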

With Git/GitHub and CI/CD pipelines in place, we have enabled our team to make changes to applications and get them to production seamlessly. However, it is still not all smooth.

Remember the dependencies, libraries, and packages that need to be installed in the exact same way, at the exact same versions, on the Build, Test, and Production servers? Every time a new version is needed, each server has to be updated and configured manually. And at this point:

Containers come in:

Containers help package the application and its dependencies into an image that can run on any system without worrying about the dependencies. During the build, you build a container image with the application and its dependencies packaged into it, and all other servers can then simply run a container from that image without installing and configuring the libraries themselves.

Docker:

Docker works with containers. With Docker, a developer creates a Dockerfile ($ vi Dockerfile) that specifies the application along with all its libraries and dependencies. During the build, this file is used to produce an image (tagging it myapp here as an example):

$ docker build -t myapp .

And to run that image on the Test and Production environments:

$ docker run myapp

The major functionality of a container is that it enables isolation between processes: each container is isolated, which allows you to run multiple containers, each with its own instance of the application, on the same server.
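
For example, two isolated instances of the same image can run side by side on one server (assuming the app listens on port 8080 inside the container):

$ docker run -d --name booking1 -p 8081:8080 myapp
$ docker run -d --name booking2 -p 8082:8080 myapp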

Let’s focus on the production side. As time passes, users are increasing, so we need more servers and need to run the application, as containers now, on all of them. How do we do this the right way, so that containers scale up when users increase and scale down when traffic decreases? And how do we ensure that a container that is destroyed is automatically brought back up?

At this point we have:

Container Orchestration — Kubernetes:

Kubernetes is a popular container orchestration platform. It helps you declare how containers should be deployed and ensures they are always in a running state. It automatically scales containers up and down, replaces containers that die, manages resources, and ensures optimal resource utilization.
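
For a rough idea of what this looks like (the deployment name and image are hypothetical), you can tell Kubernetes to run three replicas of the app and to scale automatically based on CPU load:

$ kubectl create deployment mars-tickets --image=myapp:v1
$ kubectl scale deployment mars-tickets --replicas=3
$ kubectl autoscale deployment mars-tickets --min=3 --max=10 --cpu-percent=80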

Managing the underlying infrastructure is now a very big challenge. Every time a new server is provisioned, it needs to be set up in the exact same way: the right resources, the right versions, storage attached, and probably other settings such as the Docker runtime and the necessary Kubernetes packages pre-configured. Doing this through a cloud console is very time-consuming and can lead to errors. So here:

Terraform comes in:

Terraform automates the provisioning and configuration of servers, irrespective of the cloud platform, into the exact same state. If someone changes the servers manually rather than through Terraform, Terraform changes them back, making sure the defined state is preserved. That state is defined in Terraform manifest files, which list the servers and their configurations, storage buckets, VPCs, and so on. These files are stored in the code repository and look like code, which is why this approach is called “Infrastructure as Code (IaC)”. Changes are made to the code, and then you run:

$ terraform apply

Terraform Template:

resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"

tags = {
Name = "Project VPC"
}
}
variable "public_subnet_cidrs" {
type = list(string)
description = "Public Subnet CIDR values"
default = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}

variable "private_subnet_cidrs" {
type = list(string)
description = "Private Subnet CIDR values"
default = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
}

Ansible:

Where Terraform is more of an infrastructure provisioning tool, Ansible is an automation tool that helps configure that infrastructure once it is provisioned. Ansible and Terraform overlap: both can be used to provision and automate infrastructure, and each has benefits of its own. Ansible is typically used for post-provisioning tasks like installing software on servers and configuring it. For example:

- name: Ansible template example
  hosts: myserver
  remote_user: ubuntu   # connect to the remote host as the ubuntu user
  become: true          # escalate privileges to write to /etc
  tasks:
    - name: Create the app.conf configuration file
      template:
        src: "~/ansible_template_demo/app.conf.j2"
        dest: "/etc/app.conf"
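
You would then run the playbook with (the inventory and playbook file names are placeholders):

$ ansible-playbook -i inventory playbook.yml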

Let’s Talk about the Maintenance of Servers — Prometheus:

We want to be able to monitor the infrastructure and take preventive steps: watch CPU utilization and memory utilization, and identify processes that cause high resource usage. At this stage we use Prometheus, which collects this information from the servers and stores it centrally.
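
Once Prometheus is running (on its default port 9090), the collected metrics can be queried over its HTTP API; for example, the built-in up metric shows which monitored targets are currently reachable:

$ curl 'http://localhost:9090/api/v1/query?query=up'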

Grafana:

We not only need to collect server metrics but also want to visualize them graphically. Grafana helps turn the data collected by Prometheus into charts and graphs.

Fig: DevOps Tools!

To summarize:

We start with:

  1. Idea
  2. Building it
  3. Deploying it
  4. Getting it out to end users fast
  5. Getting feedback
  6. Reviewing the feedback and brainstorming
  7. Coming up with new ideas and implementing them

Any code pushed now goes through the pipeline we defined above: it is automatically built, tested, and deployed, possibly with multiple deploys to production every day. After deployment the application is monitored, feedback comes in from end users, and this cycle repeats many times.

“And that is what DevOps is!”

“DevOps is a combination of people, processes, and tools that work in collaboration from the very start of an idea to its execution, to deliver high-quality software consistently!”

Fig: DevOps Pipeline
“Automation is key to unlocking the full potential of DevOps.” — Jez Humble

Thank you! :)
