Red Hat Blog: Internship Process and What I Learned During the Training Phase

Khyati Soneji
Mar 7, 2021

Hello everyone! As part of my final-semester study (an industrial experience), I'm interning at Red Hat. My internship began in the first week of January.

It's been two months since my internship began, and in this blog I'll cover the following:

How the training period at Red Hat works

Things I learned as part of the training

Note: I won't go into detail about the training period; this is only an overview of what the training period is like, what I learned, and what I did.

How the training period at Red Hat works

There are two different training periods: first, general training on different tools and technologies, and second, a deep dive into the tools and technologies related to your project/team.

In the first month, i.e. January, there was general training for all the interns in the same domain; for example, all the interns under the DevTools team were given the same training.

General Training period

As Red Hat is one of the companies heavily contributing to open source, we were given training on what open source is, Git, Golang (as most of the projects use Golang), JavaScript, ReactJS (frontend technologies, to get knowledge of UI development), Docker, Podman, Kubernetes, OpenShift, Tekton Pipelines, logging, DevTools, and Agile practices (important for teams to work more efficiently).

There are different teams at Red Hat, such as Storage, Middleware, Agile, Developer, and Fuse Engineering. I was on the Developer team, and hence in the general training we were trained only on the technologies and tools used by the teams under Developer Engineering.

Links (resources) to these topics are shared at the end.

This training period was for us to familiarize ourselves with the technologies used by the different teams under Developer Engineering (the DevTools team) at Red Hat. After that, you're asked to share the sub-team/area you're interested in, based on which the team you'll be working with is selected.

Team specific training period

I chose to be in the DevSecOps team (also called the Tekton Pipeline team), which works on Tekton Pipelines.

Note that multiple interns are selected for a project and all of them work together as a team.

During this training period, I was trained on the following topics, with practical implementations to understand how they work:

Docker

Kubernetes

OpenShift

Tekton Pipelines

Things I learned as part of the training

In this section, I'll share my understanding of different tools, the need for them, and the resources that helped me understand them.

I'll cover what CI/CD is, what Tekton Pipelines are, and the need for Tekton Pipelines in the next blog.

Before I share what Docker is, let's understand what containers are and why we need them.

What is a container?

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.

Why do we need containers?

Containers exist because they solve an important problem: how to make sure that software runs correctly when it is moved from one computing environment to another.

For example, if you run your tests on Python 2.7 while in production the code runs on Python 3, the results will be unexpected! And it's not just the runtime: the network topology and security policies might also differ between environments.
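To make the runtime-mismatch problem concrete, here is a tiny sketch (my own illustrative example, not from the training material) of one expression whose result depends on the Python version:

```python
# The same expression behaves differently across Python versions:
# under Python 3, "/" is true division; under Python 2, "/" on two
# integers is floor division.
result = 7 / 2    # 3.5 on Python 3, 3 on Python 2

# Explicit floor division behaves the same on both versions.
floored = 7 // 2  # 3 on both

print(result, floored)
```

A test suite that implicitly relied on the Python 2 behavior would pass locally and then misbehave on a Python 3 production host, which is exactly the class of surprise containers help avoid.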

How do the containers solve this problem?

A container consists of an entire runtime environment: an application, plus all its dependencies, libraries and other binaries, and configuration files needed to run it, bundled into one package. By containerizing the application platform and its dependencies, differences in OS distributions and underlying infrastructure are abstracted away.

What is Docker?

Docker is an open-source project that makes it easy to create containers and container-based apps.

Docker Architecture

Now, what are the different components to create a docker containerized application?

Dockerfile: it is a text file written in an easy-to-understand syntax that includes the instructions to build a Docker image. A Dockerfile specifies the operating system that will underlie the container, along with the languages, environmental variables, file locations, network ports, and other components it needs — and, what the container will actually be doing once we run it.
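As a concrete sketch, a minimal Dockerfile for a small Python web app might look like the following (a hypothetical example; the file names and port are placeholders, not from the training material):

```dockerfile
# Base image: the OS and language runtime the container is built on
FROM python:3.9-slim

# Directory inside the image where the app lives
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the rest of the application code
COPY . .

# Document the network port the app listens on
EXPOSE 8080

# What the container actually does once we run it
CMD ["python", "app.py"]
```

You would then build the image with `docker build -t myapp .` and launch a container from it with `docker run -p 8080:8080 myapp` (where `myapp` is a placeholder image name).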

Docker image: after the Dockerfile is created, you build it to produce a Docker image; hence we can say that the Dockerfile contains the set of instructions that tells the build how to create the image.

Docker image is a portable file containing the specifications for which software components the container will run and how.

Docker run: the run utility is used to launch a container; this container will be an instance of the Docker image.

You can learn more about Docker, Dockerhub, Virtualization vs Containerization here.

Another container engine is Podman, which is developed by Red Hat. You can learn about Podman, Podman vs Docker here.

Now, let's understand what Kubernetes is and why we need it.

What is Kubernetes?

Kubernetes is an orchestration tool for containerized applications. Starting with a collection of Docker containers, Kubernetes can control resource allocation and traffic management for cloud applications and microservices.

Simple K8s cluster for a Hello World program

Why do we need Kubernetes?

Containers do solve the problem of making sure software runs correctly across different systems/environments. But in the real world there are many containers, which raises different problems, like:

How would all of these containers be coordinated and scheduled? How do you seamlessly upgrade an application without any interruption of service? How do you monitor the health of an application, know when something goes wrong and seamlessly restart it?

This shows that containers are not easy to manage at volume in a real-world production environment. Hence, containers at volume need an orchestration system.

What does an Orchestration system do?

An orchestration system serves as a dynamic, comprehensive infrastructure for a container-based application, allowing it to operate in a protected, highly organized environment while managing its interactions with the external world.

Kubernetes is well suited to this task, which is one of the reasons it has become so popular.
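To make this concrete, here is a minimal Kubernetes Deployment manifest (a hedged sketch; the names and image are placeholders) that asks the cluster to keep three replicas of a container running, rescheduling and restarting them if they fail:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world            # placeholder name
spec:
  replicas: 3                  # Kubernetes schedules and maintains 3 copies
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: myrepo/hello-world:1.0   # placeholder container image
          ports:
            - containerPort: 8080
```

Applying this with `kubectl apply -f deployment.yaml` hands the "coordinate, schedule, restart" chores described above over to the cluster.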

Other orchestration systems are Docker Swarm, Mesos, and of course OpenShift.

You can learn about Docker Swarm and Mesos, and how they differ from K8s and from each other, here.

Now let's understand what OpenShift is and which issues it addresses that are not covered by Kubernetes.

What is OpenShift and how is it useful?

OpenShift is a cloud development Platform as a Service (PaaS) hosted by Red Hat. It’s an open-source, cloud-based, user-friendly platform used to create, test, and run applications, and finally deploy them on the cloud.

OpenShift can manage applications written in different languages. One of OpenShift's key features is extensibility, which helps users support applications written in other languages.

OpenShift helps organizations move their traditional application infrastructure and platform from physical and virtual media to the cloud.

OpenShift is a layered system in which each layer is tightly bound to the others, built on top of a Kubernetes and Docker cluster.

How to build and deploy a simple application on OpenShift

Why do we need OpenShift when we have Kubernetes?

Security

Kubernetes lacks integrated, out-of-the-box capabilities for authentication and authorization, which necessitates setting up authentication procedures, such as bearer tokens, manually. Moreover, traffic within a Kubernetes cluster is not encrypted by default.

OpenShift, on the other hand, has strict and well-defined security policies, and it offers an integrated server for easier authentication and authorization.

User experience

Another issue/difference is ease of use: the Kubernetes UI is complex and hence difficult to use, and you have to install the Kubernetes Dashboard and set up proxies, whereas OpenShift has a web console that lets users easily modify their clusters and also visualize them.

Continuous Integration/Continuous Delivery

Kubernetes does not have a comprehensive CI/CD solution. You have to pair it with tools like automated monitoring, testing, and CI servers to create an entire CI/CD pipeline.

OpenShift, meanwhile, has an integrated, certified Jenkins container that acts as a CI server.

I have not included all the differences between OpenShift and Kubernetes. You can find more differences between them here.

During the second training period, we made a simple app that can add data to and retrieve data from a database. We ran this app using containers, which helped us understand networks (for communication between different Docker containers) and docker-compose (when there are multiple networks, it gets hard to manually create and maintain each one).
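A docker-compose file along these lines (a hypothetical sketch; the service names, images, and ports are placeholders, not our actual setup) wires an app container and a database together on a shared network without creating the network by hand:

```yaml
version: "3.8"
services:
  app:
    build: .                  # build the app image from the local Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - db
    environment:
      DB_HOST: db             # containers reach each other by service name
  db:
    image: postgres:13        # placeholder database image
    environment:
      POSTGRES_PASSWORD: example

# docker-compose creates a default network that both services join,
# so the app can reach the database at the hostname "db".
```

Running `docker-compose up` then builds and starts both containers on that shared network.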

We also deployed the same app on a Kubernetes cluster, which helped us understand Services, Deployments, Secrets, Persistent Volumes, and Persistent Volume Claims, and then used the same app to build CI using Tekton Pipelines. I'll be going through those topics in my next blog.
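As a small taste of those Kubernetes objects (I'll cover them properly in the next blog), a Service manifest that exposes an app's Pods inside the cluster might look like this (a hedged sketch; the names and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service          # placeholder Service name
spec:
  selector:
    app: myapp                 # route traffic to Pods carrying this label
  ports:
    - port: 80                 # port other Pods use to reach the Service
      targetPort: 8080         # port the app container listens on
```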

Moreover, we were also given 150 questions to help us understand the different components and features of Kubernetes. You can find the list of commands and files to enable different features of Kubernetes here. I have included links to different topics, like multi-container pods and cron jobs, to help you understand what they are and why they are used.

If you want to get started with Docker/Podman, Kubernetes, and Tekton Pipelines, you can also use my code for reference. Please note this is just my understanding of different topics, and I'm not going into detail about the different components and features; there are plenty of resources available online to help you understand how these tools and their components work.

You can go through the Docker and Kubernetes videos on this channel; there are beginner courses with examples to help you understand as you build things yourself, as well as videos on advanced topics. You can also buy a Red Hat University subscription, which contains many practical courses to help you understand topics while you implement them. But please note that there are plenty of free resources available online.

I'll soon be writing another blog to cover other topics. Meanwhile, go through the links provided to understand these topics in detail, and if you have any questions/doubts, you can drop them in the comments; you can also share links that you found useful. Hope you find this helpful 🤗
