This series will allow me to share some history and experiences with you. I’ve been in and out of the development trenches for over 20 years. In my current role as a Distinguished Engineer at Capital One, I get to move up and down the gradients between hands-on tactical development and strategy work. My knowledge is fresh and pretty wide, and I want to share some thoughts and observations to help you make decisions from a well-informed platform.
Today, I wanted to do a writeup on tech leadership and helping your teams choose the right programming language…
In The Redis Swiss Army Knife - My Favorite Tool in My Engineering Utility Belt we covered Redis’ database, cache, and message broker capabilities, which are foundational to what makes Redis a great tool. Here, I want to build some recipes - cookbook style - to show common patterns or uses that I’ve leveraged in my work with Redis. This is all simplified code and diagrams; while I wouldn’t run it in production without proper error handling and review, it should be enough to get the point across.
I’ve wanted to write these two articles for a while because Redis is a tool I use over and over again. As a software engineer with over 20 years’ experience, I’ve used Redis continually since 2011; you could consider me a fanboy, if you will.
What is Redis? Per the Redis documentation, Redis is an “open source in-memory data structure store, used as a database, cache and message broker”.
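To make those three roles concrete, here’s a toy in-memory stand-in that mimics a handful of Redis commands purely for illustration. This is not Redis itself - real code would use a client like redis-py against a running server - and all the key and channel names are made up for the example:

```python
from collections import defaultdict

class MiniRedis:
    """A toy stand-in that mimics a few Redis commands to show the three roles."""

    def __init__(self):
        self._strings = {}                      # plain key/value (cache role)
        self._hashes = defaultdict(dict)        # structured records (database role)
        self._subscribers = defaultdict(list)   # channel -> callbacks (broker role)

    # Cache role: SET / GET
    def set(self, key, value):
        self._strings[key] = value

    def get(self, key):
        return self._strings.get(key)

    # Database role: HSET / HGETALL store structured records as hashes
    def hset(self, key, field, value):
        self._hashes[key][field] = value

    def hgetall(self, key):
        return dict(self._hashes[key])

    # Broker role: SUBSCRIBE / PUBLISH fan messages out to listeners
    def subscribe(self, channel, callback):
        self._subscribers[channel].append(callback)

    def publish(self, channel, message):
        for callback in self._subscribers[channel]:
            callback(message)

r = MiniRedis()
r.set("session:42", "alice")              # cache: fast key lookup
r.hset("user:42", "name", "Alice")        # database: a structured record
events = []
r.subscribe("signups", events.append)     # broker: listen on a channel
r.publish("signups", "user:42 created")   # broker: notify all listeners
```

The real commands (SET, GET, HSET, HGETALL, SUBSCRIBE, PUBLISH) behave along these lines, which is what lets one tool cover all three jobs.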
A friend of mine recently introduced me to the game Factorio. In order to keep stoking my addiction, I’m going to use Factorio as a tool to continue teaching about scaling technology.
“Yes honey, I swear it’s work!” — me
I’ve previously written about data pipelines and message routing. In one post I used a fun example with a Gorilla Hairstylist to demonstrate routing messages through Kafka, an (intentionally) dumb message bus. One reason we use a tool like Kafka is to enable scaling of applications. …
As data and applications continue to get larger and faster, sometimes we need to make the data readily available. Depending on the need, we may store, or cache, that data in different ways.
Today, I want to bring you, my readers, together and talk about the concept of extremely fast data access, using caches to back high-traffic APIs and message consumers/producers… to make cash. (Get it? Cache? Cash? Yeah, I did that.)
A primary reason to set up caching outside of your database is to reduce load within your database engine. While scaling is easier than ever in…
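The most common shape this takes is the cache-aside pattern: check the cache first and only query the database on a miss. Here’s a minimal Python sketch - the dict stands in for Redis, and `query_db` is a hypothetical expensive lookup, so the names are mine, not from any particular codebase:

```python
import time

cache = {}         # stand-in for Redis; entries are (value, expires_at)
TTL_SECONDS = 60   # how long a cached entry stays fresh
db_hits = 0        # counter so we can see the load reduction

def query_db(user_id):
    """Hypothetical 'expensive' database lookup."""
    global db_hits
    db_hits += 1
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    """Cache-aside: try the cache first, fall back to the database on a miss."""
    now = time.monotonic()
    entry = cache.get(user_id)
    if entry is not None and entry[1] > now:
        return entry[0]                       # cache hit: zero database load
    value = query_db(user_id)                 # cache miss: one database hit
    cache[user_id] = (value, now + TTL_SECONDS)
    return value

# Three reads, but only the first one touches the database.
get_user(7)
get_user(7)
get_user(7)
```

With Redis in place of the dict, the TTL would be handled by the server (e.g. SET with an expiry) rather than tracked by hand, but the load-shedding idea is the same.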
The year was 1998 and our VB 4.0 applications connected to databases using ODBC to handle a small amount of data. Since 2003, our web applications and today’s API-based application layers continue to follow fairly simple patterns to process low volumes of data. In 2018 we’re building microservices to handle streaming data at velocities and volumes that we couldn’t imagine 20 years ago. This is enabled by an extremely fast message bus offering throughput previously available only via batch/ETL tools. But all of this data can cause problems with our databases. …
The Big Data world is moving to large distributed systems that pass messages along a message bus. Sure, we’ve been making API calls and using enterprise service buses for what feels like forever; but by introducing Kafka we’ve changed the game. Kafka flips the enterprise service bus model inside out. Historically, we’d put the logic on the bus: the routing code deployed on the bus knew how to pass messages around.
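With Kafka, that inversion means the bus just stores messages in topics, and the smarts move out to the endpoints: each consumer reads the topic and applies its own logic. A conceptual sketch (pure Python, no broker; the topic and consumer names are hypothetical):

```python
from collections import defaultdict

# The "dumb" bus: topics are just ordered logs of messages.
# There is no routing logic here at all.
topics = defaultdict(list)

def produce(topic, message):
    """A producer appends a message to a topic; the bus doesn't inspect it."""
    topics[topic].append(message)

# The smarts live at the edges: every consumer reads the same log
# and decides for itself what is relevant.
def billing_consumer(messages):
    """Billing only cares about invoices."""
    return [m for m in messages if m.get("type") == "invoice"]

def audit_consumer(messages):
    """Audit keeps a copy of everything."""
    return list(messages)

produce("orders", {"type": "invoice", "amount": 100})
produce("orders", {"type": "shipment", "carrier": "UPS"})
```

Contrast this with the classic ESB, where the bus itself would have inspected each message and routed invoices to billing; here the bus never looks inside the envelope.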