I am fully aware that birthdays are just rituals. Functionally, they don’t really mean much at the end of the day; they just happen to be easy enough to use when we want to represent milestones and impactful events. And yet, I love these rituals. They are an opportunity to reflect and meditate on what has already passed — and in a way, they allow us to find some stillness in preparation for what is to come.

I’m turning thirty tomorrow, and I certainly don’t expect that I’ll feel much different tomorrow than I do today. But cumulatively, I feel like a much different person than I was when I started my twenties. I feel much more confident, sure-footed, firm, and grounded. It’s hard for me to pinpoint how this transformation came about; there was no single event that precipitated this change. Instead, I have to believe that it is the sum of the experiences over the last decade and what I learned from them that incrementally changed who I am. …


Logical time and Lamport clocks (part 2)

Throughout the course of this series, we’ve been learning time and again that distributed systems are hard. When faced with hard problems, what’s one to do? Well, as we learned in part one of this post, sometimes the answer is to strip away the complicated parts of a problem and try to make sense of things simply, instead.

This is exactly what Leslie Lamport did when he approached the problem of synchronizing time across different processes and clocks. As we learned in part one, he wrote a famous paper called “Time, Clocks, and the Ordering of Events in a Distributed System”, which detailed something called a logical clock, or a kind of counter to help keep track of events in a system. These clock counters were Lamport’s invention (and solution!) to the problem of keeping track of causally-ordered events within a system. …
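To make that idea a bit more concrete, here is a minimal sketch of such a counter in Python. It’s a hypothetical illustration of the counting rules Lamport describes (tick on local events and sends, take the max plus one on receives), not code from the paper or from this series:

```python
# Hypothetical sketch of a Lamport logical clock: each process keeps a
# counter, ticks it on local events and sends, and on a receive takes
# the max of its own counter and the sender's timestamp, plus one.

class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        """An event that happens entirely within this process."""
        self.time += 1
        return self.time

    def send(self):
        """Tick the clock and return the timestamp to attach to a message."""
        self.time += 1
        return self.time

    def receive(self, message_time):
        """Merge the sender's timestamp: take the max, then tick."""
        self.time = max(self.time, message_time) + 1
        return self.time


# Two processes, A and B, exchanging a single message:
a, b = LamportClock(), LamportClock()
a.local_event()        # A: 1
ts = a.send()          # A: 2, and the message carries timestamp 2
b.local_event()        # B: 1
b.receive(ts)          # B: max(1, 2) + 1 = 3
print(a.time, b.time)  # 2 3 -> the send is ordered before the receive
```

The resulting timestamps give causally-related events a consistent order, which is exactly the property the counters are meant to provide.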


Logical time and Lamport clocks (part 1)

Over the course of this series, we’ve seen many instances of how things can be more complicated than they seem. We saw this with failure, and we saw it with replication. More recently, we discovered that even the concept of time is more complex than we might have originally thought.

However, when the things that you thought you knew seem more convoluted than ever, sometimes the answer is to keep it simple. In other words, we can keep a problem simple by stripping out the confusing parts and trimming it down to its essentials. …


Ordering distributed events

One of the hardest things about distributed systems is that we often find ourselves needing to approach them very differently than other problems in computing. Distributed computing forces us to reevaluate how we’d approach even the simplest obstacles in a single system model.

We recently began exploring one such example of this when we took a closer look at ticking clocks and how unreliable they are in a distributed system! As we learned, there is no single global clock in a distributed system, which makes it hard to ever agree on what time it is. …
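To see why that’s a problem, here is a contrived sketch in Python (the five-second skew is invented purely for illustration) of two nodes timestamping a message with their own wall clocks:

```python
# Contrived sketch: node A's wall clock runs five seconds fast, so the
# "send" timestamp can land after the "receive" timestamp, even though
# the send necessarily happened first.

from datetime import datetime, timedelta, timezone

SKEW = timedelta(seconds=5)

def node_a_now():
    return datetime.now(timezone.utc) + SKEW  # A's clock is fast

def node_b_now():
    return datetime.now(timezone.utc)         # B's clock is accurate

sent_at = node_a_now()       # A stamps the message as it sends it
received_at = node_b_now()   # B stamps the message as it arrives

# Judged by wall clocks alone, the message "arrived" before it was "sent".
print(received_at < sent_at)  # True
```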


Ticking clocks in a distributed system

We often spend a lot of our lives blissfully unaware of how something works. Most of the time, this ends up being an okay thing, since we don’t really need to know how everything around us works. But there are some times when we realize just how much complexity has been hidden from us, abstracted out, tucked away neatly so that we never have to think about it.

I had one of these realizations recently when I discovered that I had never thought twice about a simple thing that I work with on an hourly basis: time! I’ve been using computers for the better part of my life, first as a consumer of technology and, later, as a creator of it. It wasn’t until I began learning about distributed systems (for this series!) …


Parsing through partitions in a distributed system!

When it comes to tech jargon, one thing seems to always hold true: everyone has a different opinion about what certain words mean. I realize this fact every once in a while; most recently, I came across it while trying to learn a new distributed systems concept.

The term “partition” is used a lot in distributed system courses and books, but there’s also a slew of other terms that get lumped into this category. Until recently, I thought I knew what the term meant in the context of a distributed system, but as it turns out, there was more to the story — and the word — than I ever realized! …


Redundancy and replication: duplicating in a distributed system

When it comes to programming, there are certain conventions, idioms, and principles that we run into and reference as a community quite often. One of those principles is the idea of “Don’t Repeat Yourself”, or DRY for short. I encountered this idea early on in my programming career, and it seemed pretty straightforward to me at the time: in order to maintain clean, concise code, it was important to ensure that we didn’t repeat the same lines or logic in our codebase.

But over the years of my career, I’ve learned and seen more, and realized that repetition is not so cut and dry (no pun intended)! Sometimes, it actually does make sense to repeat yourself rather than risk over-engineering or over-abstracting something unnecessarily. Sometimes, it makes sense to just duplicate a function or piece of logic and “copypasta” it into another file. …


Foraging for EVEN MORE fallacies of distributed computing!

So much of what makes distributed systems hard to contend with is the fact that, as a system grows, it changes. Furthermore, the things around the system — parts of the system itself, its dependencies, and the people who maintain it — are each capable of changing as well.

In part one of this series, we looked into the first four of the famous eight fallacies of distributed computing. Conveniently, those four fallacies all centered around the network, and the misconceptions and falsehoods that many developers can fall prey to when they are dealing with a distributed system. …


Foraging for the fallacies of distributed computing!

So much of computing is based on assumptions. We design systems operating on a set of assumptions. We write programs and applications assuming certain parts of their systems will work a certain way. And we also assume that some things can potentially go wrong, and we (hopefully) attempt to account for them.

One big issue with building computer-y things is that, even though we’re often dealing with complex systems, we aren’t always capable of reasoning about them on a big-picture level. Distributed systems are certainly a great example of this (you knew where I was going with this, didn’t you?). Even a “simple” distributed system isn’t so simple, because by definition it involves more than one node, and the nodes in the system have to communicate and talk to one another through a network. …


Weeding out distributed system bugs

As we’ve learned more and more about distributed systems, we’ve seen the many ways that things can go wrong. More specifically, we’ve seen that there are just so many possible scenarios in which a portion of a larger system can fail.

Since failure is inevitable in a system of any size, we ought to do ourselves a favor and better understand it. So far, we’ve been talking about different kinds of faults and failures in a fairly abstract sense. It’s time for us to get a little more concrete, however. Sure, we vaguely understand the different flavors and profiles of how things can go wrong within a system, and why they are problematic (yet inevitable). …

About

Vaidehi Joshi

Writing words, writing code. Sometimes doing both at once.
