Recently I began a project where I wanted to do local development on my Intel-based workstation and also deploy it to Kubernetes on my local Raspberry Pi cluster. After about 20 minutes, I got really tired of switching Dockerfile configs around to build for the different platform architectures every time I wanted to test a small change. I knew that larger projects had some sort of magic going on that figured out your platform and handed you the proper image. How could I do that for myself? It turns out that it’s not that hard to accomplish and automate. …
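The magic in question is a multi-arch manifest list: one image tag that resolves to the right architecture at pull time. With Docker's buildx plugin, a sketch of the workflow looks roughly like this (the registry name and tag are placeholders, not from the project above):

```shell
# create and switch to a builder that can target multiple platforms
docker buildx create --name multiarch --use

# build for amd64 (the workstation) and arm64 (the Pi cluster) in one shot,
# pushing the per-arch images plus a manifest list under a single tag
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag registry.example.com/myapp:latest \
  --push .
```

Kubernetes on the Pis and Docker on the workstation then both pull `myapp:latest` and each gets the image built for its own architecture.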
This is my last post on this Medium account. I’ve decided to move my tech writing back to a more traditional blog platform. I’ve settled on using Hugo and hosting it on Netlify.
If you’re interested you can find my continued work at https://jduncan.io.
Edit: I decided to bring tech content back to Medium. My farm/fun stuff will stay on https://jduncan.io. I decided I liked the idea of splitting the content off.
I’ve recently begun doing more work on ‘upstream’ or ‘standard’ Kubernetes as opposed to OpenShift. I’ve long had ways involving Ansible, bash, and probably dark magic to get OpenShift up and running for research purposes. But I didn’t have a lot of experience deploying ‘just’ Kubernetes. This post walks through what I needed to figure out to get a repeatable process up and running for my lab.
To help me get up to speed on the various processes I picked up a handful of Raspberry Pi 4’s (the 4GB models) and the accessories needed to run them. …
I recently moved to a Macbook Pro for my primary work laptop. I keep a Fedora 31 laptop handy, and I have a decent-sized home lab for doing Linux-y things. For browsing and surfing though, a Macbook is a pretty good experience.
My home lab is what caused me to have to dig this up. My home network is Ubiquiti hardware, and it automatically manages my internal DNS zones and hostnames. I ❤ it. On my Linux laptop, I would quickly configure dnsmasq and be done with it.
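On Linux, that quick dnsmasq setup amounts to a couple of lines of split-DNS config. A minimal sketch (the `home.lan` zone and gateway address are placeholders for your own internal zone, not my actual network):

```conf
# /etc/dnsmasq.d/homelab.conf
server=/home.lan/192.168.1.1   # send homelab queries to the Ubiquiti gateway
server=8.8.8.8                 # everything else goes to a public resolver
```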
But my work laptop automatically connects to my work VPN. Because my…
Note: This is me walking through my learning experience from a joint effort with Jared Hocutt.
Right around the time Red Hat was acquiring them, CoreOS announced the release of a new framework and development kit around a concept called “Operators”. Since then, Operators have evolved into one of the best ways to effectively manage a Kubernetes cluster as well as the applications deployed within it. But defining what an Operator is and how it improves your cluster can be hard to put your finger on. They do a great job of abstracting out the Kubernetes objects they…
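That abstraction usually starts with a CustomResourceDefinition, which teaches the cluster a new object type that the Operator then reconciles. A minimal sketch (the `PostgresCluster` kind and `example.com` group are made up for illustration):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: postgresclusters.example.com
spec:
  group: example.com
  names:
    kind: PostgresCluster
    plural: postgresclusters
    singular: postgrescluster
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
```

A user then creates a `PostgresCluster` object with `replicas: 3`, and the Operator handles the StatefulSets, Services, and Secrets underneath.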
Note: This post goes hand in hand with the Red Hat Summit breakout session I’m doing on May 9 at 1 pm.
The 2019 edition of Red Hat Summit is all about OpenShift. OpenShift 4.0, Operators, new container-based storage. All sorts of massive improvements for what I think is the best option in the industry to manage your applications for the next 5–7 years.
Our session (with Chris Green from the Microsoft Azure Government team) is about how to successfully deploy a production-grade instance of OpenShift in the Azure Government regions.
To accomplish this quickly we used the self-managed OpenShift…
Red Hat Summit is just over a week away. Amazing keynotes, great breakout sessions and panels, and more interactions between Red Hat, the associated vendor ecosystem, and their customers. Also, if you’re not prepared, it can wear you down into a quivering puddle of what used to be an IT professional. There is a great introvert’s guide to Red Hat Summit this year by Joe Brockmeier. But for people who don’t mind socializing, it’s way too easy to overdo it. Morning sessions when you’ve been out until 3 AM are just no fun at all.
So how do you balance…
You’ve gone through the learning curve and figured out how to build, deploy, patch, and upgrade Kubernetes on your own. That’s a lot of work, and you’ve undoubtedly built up a mountain of knowledge that’s going to be helpful for years to come. But are you done?
Joe Beda, one of the original developers of Kubernetes, calls it a toolbox. It’s not a finished product. You need to bolt components onto Kubernetes and integrate others into it before you’re ready to call your infrastructure a true application platform. But which products? …
I talk a lot about technical debt, and how paying it off is a smart decision in both the long and short term. I often focus these conversations on automating old, manual workflows (I work with many of my customers to bring Ansible successfully into their environments).
But technical debt can take on many forms:
Technical debt can be summarized as any workflow, process, code, or piece of hardware that consistently pulls attention away from fulfilling your or your team’s mission. That’s easy enough to understand. But it’s hard…
I’ve managed a small Python project for several years now called soscleaner. I created it when I was a TAM at Red Hat to make it easier for customers to upload sosreports and data sets to Red Hat support without including potentially sensitive information like hostnames and IP addresses. For lots of companies and government agencies, sharing information like this with outside vendors can be a violation of security standards, industry standards, or even federal law.
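The core idea is simple to sketch, even if the real project handles far more cases. This is not soscleaner’s actual code, just a toy illustration of consistent IP obfuscation (the `obfuscate_ips` name and the RFC 5737 documentation range are my choices here):

```python
import ipaddress
import re

def obfuscate_ips(text, mapping=None):
    """Replace each IPv4 address in text with a consistent fake address.

    The same real address always maps to the same fake one, so
    relationships in the data survive even though the values change.
    """
    mapping = {} if mapping is None else mapping
    ip_re = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
    base = int(ipaddress.IPv4Address("192.0.2.0"))  # RFC 5737 TEST-NET-1

    def repl(match):
        ip = match.group(0)
        if ip not in mapping:
            # hand out the next unused address from the documentation range
            mapping[ip] = str(ipaddress.IPv4Address(base + len(mapping)))
        return mapping[ip]

    return ip_re.sub(repl, text)

log = "host1 10.0.0.5 talked to 10.0.0.6; 10.0.0.5 replied"
print(obfuscate_ips(log))
# host1 192.0.2.0 talked to 192.0.2.1; 192.0.2.0 replied
```

Passing the same `mapping` dict across multiple files keeps the substitutions consistent over a whole sosreport-style data set.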
Linux Pro. F1 Geek. Dad. Wannabe Farmer. App Transformation & GCP @Google. Formerly VMW & Red Hat.