Red Hat Internship Experience / Part 1 of 2
New Journey at Red Hat
Working at a tech company for the first time, learning OpenShift, and making personal projects
I had the privilege of being an intern at Red Hat as part of my third-year Industry Placement module and I wanted to write about my experience working there. In this article I will be talking about my first two months at Red Hat, which involved getting used to working at a tech company for the first time, learning how to use OpenShift, and making some personal projects along the way.
This is Part 1 of 2. In Part 2, I wrote about my last four months at Red Hat; you should check it out too.
The First Day
All the interns started the first day in the boardroom, where we were given a rough outline of what we would be doing for the next six months. We were also told all the important information, like how to get our security clearance, when our one-to-one meetings would be, etc. The speakers were very nice, and you could tell they were compassionate and driven people. Afterwards, all the interns introduced themselves one by one and we all got to know each other.
One aspect I really liked was that we got to talk to last year’s interns (now full-time employees) and ask them any questions we had. This was helpful since they had been in our position not that long ago and were glad to share any advice or experience they had. Their group consisted of people working in several different areas, so when a question was asked it would usually be answered with a variety of unique perspectives, and it felt like a friendly environment where anyone could chip in.
Shortly after, we were led to our desks, where we received our work laptops and some free merch. The former interns helped us install Fedora 38 (or RHEL) and linked us to a set of onboarding documents containing answers to many common questions (e.g. recommended courses). These were very informative and I would go back to them many times throughout the internship.
If you’re curious, we were given ThinkPad P1 Gen 3s. These come with Thunderbolt 3 and worked seamlessly with the ThinkVision monitors at the office: all you had to do was plug a single cable into your laptop and you would get internet access, charging, and a much larger screen to work on. Unfortunately, the multi-monitor support was extremely finicky (not necessarily Lenovo’s fault, probably due to Linux/Wayland), so every morning was a struggle if you wanted that 😅.
Once we had everything set up, we got to meet our personal mentors. My mentor was Ahmed Abdalla, a Principal Software Engineer. He is a very nice guy, and we would both be working on the same team. However, it was recommended that I first complete an introductory course on Red Hat OpenShift, since it was a critical part of the team’s assigned project. We decided that the “Red Hat OpenShift Administration I: Managing Containers and Kubernetes (DO180)” course would be perfect; it was also recommended by the onboarding docs, and Red Hat graciously provided me with free access to their courses and labs.
Training
During the DO180 course I learned all the basics of using OpenShift: using the oc CLI, deploying and troubleshooting pods, provisioning storage, etc. The course was well designed and I had a smooth learning experience, especially thanks to the provided lab environment and exercises. I am a big advocate for hands-on learning, so this was a welcome feature of the course.
Kuby
I personally get bored just reading and doing exercises though, so I decided to work on a small project involving Kubernetes/OpenShift. My mentor suggested that I make a simple CLI for displaying Kubernetes resources, so that’s what I decided to do. I wrote the prototype in Kotlin first, since I had previously worked with it in college, but decided to rewrite and continue making it in Go, since Go is commonly used across Red Hat. I only had a tiny amount of experience with Go before, but thought this would be a great way to get more familiar with it.
Being a CLI for managing Kubernetes resources written in Go, this project was very much inspired by K9s, which is a great tool I would recommend you use. My project did take a slightly different approach when it comes to UI though, opting to use the excellent bubbletea and lipgloss packages.
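To give a rough idea of what working with these packages looks like, here is a minimal sketch of the Model/Update/View loop that bubbletea is built around, with lipgloss handling the styling. This is not Kuby’s actual code, and the resource names are made up:

```go
package main

import (
	"fmt"
	"os"

	tea "github.com/charmbracelet/bubbletea"
	"github.com/charmbracelet/lipgloss"
)

var titleStyle = lipgloss.NewStyle().Bold(true).Foreground(lipgloss.Color("205"))

// model holds the UI state: a list of resource names and a cursor.
type model struct {
	resources []string
	cursor    int
}

func (m model) Init() tea.Cmd { return nil }

// Update reacts to events (here: key presses) and returns the new state.
func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
	if key, ok := msg.(tea.KeyMsg); ok {
		switch key.String() {
		case "q", "ctrl+c":
			return m, tea.Quit
		case "up", "k":
			if m.cursor > 0 {
				m.cursor--
			}
		case "down", "j":
			if m.cursor < len(m.resources)-1 {
				m.cursor++
			}
		}
	}
	return m, nil
}

// View renders the current state as a string; bubbletea draws it for us.
func (m model) View() string {
	s := titleStyle.Render("Pods") + "\n"
	for i, r := range m.resources {
		prefix := "  "
		if i == m.cursor {
			prefix = "> "
		}
		s += prefix + r + "\n"
	}
	return s + "\n(q to quit)"
}

func main() {
	m := model{resources: []string{"api-server-1", "worker-2", "db-0"}}
	if _, err := tea.NewProgram(m).Run(); err != nil {
		fmt.Println("error:", err)
		os.Exit(1)
	}
}
```

What I like about this style is that the UI is just a pure function of the state: all you ever do is update the model and re-render it.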
Thanks to Kubernetes being written in Go, I was also able to use the official client-go package, which was a nice bonus. I thought that if I ever decide to write an operator in the future, this would hopefully help me gain some experience working with Kubernetes code.
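As a small illustration of why that is such a nice bonus: listing pods with client-go only takes a few lines. This is a minimal sketch (not Kuby’s actual code), assuming a kubeconfig in the default location:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig that oc/kubectl use (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List the pods in the "default" namespace, just like `oc get pods`.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		fmt.Printf("%s\t%s\n", pod.Name, pod.Status.Phase)
	}
}
```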
At the time, my team was also considering using Apache Kafka for what we were working on. I’ve always wanted to learn more about it, so I decided to complete the “Developing Event-driven Applications with Apache Kafka and Red Hat AMQ Streams (AD482)” course which was well written and included labs just like the OpenShift course.
One of the reasons I put off learning Kafka until this point, though, was that it is very intimidating to someone who knows nothing about it. There are all these concepts you must wrap your head around, like Topics, Partitions, Brokers, Producers, Consumer Groups, etc. I refuse to believe anybody can look at one of those architecture diagrams and have even a remote idea of what is going on.
Thankfully, the course explains all these complicated concepts step by step using many diagrams and videos, which makes it much more beginner-friendly. Getting to actually use each of the concepts while working on Quarkus applications over the span of the course also makes the information stick in your head a lot more easily.
Webmon
I wanted to work on an actual project (again) after completing the course and use my newly acquired Kafka skills, so after some deliberation with my mentor we decided on a basic uptime tracker for websites. This also involved some other tech I wanted to try out, such as Grafana (with data from PostgreSQL).
I decided to write the application in Kotlin since I have previous experience with it (is this déjà vu?), and because a large part of the Kafka source code is written in Java. This also gave me access to Kotlin’s coroutines which provide a great way to write asynchronous code, something that would definitely be essential when working with a platform for handling real-time data such as Kafka.
The code is split into two parts: the producer and the consumer. The producer code just loops over the URLs you pass in, sends a GET request to each one, and publishes the response’s status code and a timestamp to a Kafka topic (this loops forever). Meanwhile, the consumer code subscribes to that topic, continuously polls for new records, and inserts them into a database. Finally, there is a Grafana dashboard that displays the data using some cool-looking graphs.
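To make that split concrete, here is a rough sketch of the same flow. Webmon itself is written in Kotlin with the official Kafka client, so this Go version (using the segmentio/kafka-go package) is only an illustration of the shape; the broker address and topic name are made up:

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"

	"github.com/segmentio/kafka-go"
)

// produce checks each URL forever, publishing status code + timestamp.
func produce(urls []string) {
	writer := &kafka.Writer{Addr: kafka.TCP("localhost:9092"), Topic: "webmon.checks"}
	defer writer.Close()
	for {
		for _, url := range urls {
			status := 0 // 0 means the request itself failed
			if resp, err := http.Get(url); err == nil {
				status = resp.StatusCode
				resp.Body.Close()
			}
			value := fmt.Sprintf(`{"url":%q,"status":%d,"ts":%q}`,
				url, status, time.Now().UTC().Format(time.RFC3339))
			if err := writer.WriteMessages(context.Background(),
				kafka.Message{Key: []byte(url), Value: []byte(value)}); err != nil {
				fmt.Println("write failed:", err)
			}
		}
		time.Sleep(30 * time.Second)
	}
}

// consume polls the topic as part of a consumer group; the real consumer
// inserts each record into PostgreSQL instead of printing it.
func consume() {
	reader := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"},
		GroupID: "webmon",
		Topic:   "webmon.checks",
	})
	defer reader.Close()
	for {
		msg, err := reader.ReadMessage(context.Background())
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		fmt.Printf("got record: %s\n", msg.Value)
	}
}

func main() {
	go produce([]string{"https://example.com", "https://www.redhat.com"})
	consume()
}
```

Keying each message by URL means all checks for a given site land in the same partition, so the consumer always sees them in order.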
To Be Continued
That’s pretty much it for my first two months at Red Hat. There are a lot of things I couldn’t include in this article for the sake of brevity, such as all the great people I met during the two months, smaller personal projects, etc.
However, I will be writing about the actual work I did at Red Hat in a second article. There, I will explain my involvement in the Open Data Hub AI Edge PoC, including my contributions to the documentation and pipeline code (using OpenShift Pipelines).
Thank you for reading this article. Here’s a cute cat :)