Ticking Clocks in a Distributed System

Vaidehi Joshi
Oct 3, 2019 · 8 min read

We often spend a lot of our lives blissfully unaware of how something works. Most of the time, this ends up being an okay thing, since we don’t really need to know how everything around us works. But there are some times when we realize just how much complexity has been hidden from us, abstracted out, tucked away neatly so that we never have to think about it.

I had one of these realizations recently when I discovered that I had never thought twice about a simple thing that I work with on an hourly basis: time! I’ve been using computers for the better part of my life, first as a consumer of technology and, later, as a creator of it. It wasn’t until I began learning about distributed systems (for this series!) that I understood that I didn’t know as much as I thought about telling time, much less how the computers that we use every day end up deciding what time it is.

When it comes to distributed computing, time is a completely different beast. But before we can really get into time in distributed systems, we’ll first need to understand how individual machines track time. So let’s start learning about what exactly makes time so darn tricky.

Tracking time

Real-time clocks: a definition

So, how does a single machine keep track of the time? The answer is: with a clock! Now, this might seem obvious at first, but it gets a little trickier as we go on. A computer that needs to be able to tell the time will often have an internal clock that is built right into its hardware in the form of an integrated circuit. This circuit is often built right onto the motherboard. This small piece of hardware is known as a real-time clock, or RTC for short.

RTCs are particularly interesting because they come with an alternate power source (like a battery), which allows them to continue working even if a machine is powered off! This might seem obvious to us today, but the use of RTCs was a pretty significant milestone in the history of computing; early personal computers didn’t actually come with RTCs built-in, and they were added in later on. Now, we’ll find refrigerators and microwaves that have these little clocks built right in! That’s pretty wild if you think about it.

The RTC is responsible for keeping track of the current time, and we can think of it as one individual machine’s system clock. It’s important to note that this clock is specific to the “system” of the machine; in other words, any process or tasks or work that the machine does that relies on the time will inherently rely upon whatever time the system clock says it is.

So, how does this physical clock work, exactly? As it turns out, deep inside the integrated circuit is a crystal, which vibrates or oscillates; it’s known as a crystal oscillator.

Every computer has its own physical clock

Without getting into the nitty-gritty (read: physics) of how this works under the hood, all we really need to know for our purposes is that the vibrations of the crystal are captured and counted by the clock. As the crystal vibrates, the clock keeps count of each vibration in the form of ticks, and as it counts one tick after another, it keeps track of the time. Of course, this raises the question of where it keeps count of each tick that it records. The physical clock uses a binary counter circuit, a simple circuit that does nothing more than count in binary, in order to store these ticks. Conveniently, the binary counter circuit (the system clock) is where a machine derives its system time from. Indeed, just as a machine has a system clock, it also has its own notion of time that is based on that clock!
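We can sketch the tick-counting idea in a few lines of Python. The 32,768 Hz frequency below is the rate of a common watch-crystal oscillator; it's an assumption for illustration, not something specified in this post, and a real RTC does this counting in hardware rather than software.

```python
# A sketch of how a system clock derives time from oscillator ticks.
# 32,768 Hz is a typical watch-crystal frequency (assumed here for
# illustration); the binary counter simply accumulates ticks.

CRYSTAL_FREQUENCY_HZ = 32_768  # ticks per second

def ticks_to_seconds(tick_count: int) -> float:
    """Convert the binary counter's tick count into elapsed seconds."""
    return tick_count / CRYSTAL_FREQUENCY_HZ

# After one minute, the counter holds 60 * 32,768 ticks.
assert ticks_to_seconds(60 * CRYSTAL_FREQUENCY_HZ) == 60.0
```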

System time: a definition

An interesting thing about system time is that it always calculates time based on when the system clock began counting. This means that if we set the system clock on a new computer to be five minutes slow or two days fast, then our starting point for our system time — the “zero” of where we start our counting of time — will end up being either five minutes behind or two days ahead. In other words, we’ll be measuring “ticks” based on whatever date or time we set, not on what the actual time is.

Understanding how Unix time works

To help avoid some of the confusion of this problem, most machines follow some kind of convention when it comes to deciding what that starting point should be. For example, machines that have Unix operating systems have standardized around Unix time, which is a way of deciding what the “zero” of where we start counting time should be. In the case of Unix time, the starting point or “zero” is the start of the Unix epoch, or January 1st, 1970, at 00:00:00 UT (Universal Time). An epoch is meant to be arbitrary; it’s nothing more than an agreed-upon starting date and time for when we should start measuring time. My personal favorite epoch is the one for Microsoft Excel programs, which is 0 January, 1900! (For the epochly-curious, check out this extensive list.)
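The Unix-time convention is easy to see in code. The sketch below defines the epoch described above and counts seconds elapsed since that “zero”; the helper name `to_unix_time` is made up for this example, but the arithmetic is exactly what Unix time is.

```python
from datetime import datetime, timezone

# The Unix epoch: January 1st, 1970, at 00:00:00 Universal Time.
UNIX_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def to_unix_time(moment: datetime) -> int:
    """Seconds elapsed since the agreed-upon 'zero' (the Unix epoch)."""
    return int((moment - UNIX_EPOCH).total_seconds())

# The epoch itself is second zero...
assert to_unix_time(UNIX_EPOCH) == 0
# ...and one full day later is 86,400 seconds of Unix time.
assert to_unix_time(datetime(1970, 1, 2, tzinfo=timezone.utc)) == 86_400
```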

Out of sync, out of control

Computer clocks are not consistent

Since every machine can have its own notion of time, we can assume that two different machines will each have their own concepts of what time it is. But this is where things start to get a bit complicated: if two different machines have two different ideas of what time it is, how can we be sure that they are the same?

Unfortunately for us, computer clocks are not consistent.

We already know that every clock has a different idea of when it started counting — its “zero” — and that one clock could be inconsistent with another. However, it’s also worth mentioning that not all clocks are precise, and some are more precise than others. Over time, the precision of each “tick” of a clock really starts to have an impact on how that clock determines its time.

Every system clock counts differently!

For example, a typical quartz clock will drift: it will lose or gain approximately one second over the course of 11 or 12 days. This is due to a tiny imprecision that occurs as the clock measures a single second as it ticks. A single second may not seem like much, but over time, slight imprecisions really start to add up! The precision of a clock could be affected by temperature, location, the clock’s source of power, and even just how well it was constructed.
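A bit of back-of-the-envelope arithmetic shows how this adds up. Assuming a drift rate of one second every 11.5 days (splitting the 11-to-12-day range above; the exact rate varies clock to clock), a quartz clock left uncorrected is off by about half a minute after a year:

```python
# Rough drift arithmetic, assuming one second lost every 11.5 days.
# The rate is an assumption for illustration; real rates vary.

SECONDS_PER_DAY = 86_400
drift_per_second = 1 / (11.5 * SECONDS_PER_DAY)  # fractional error per second

days = 365
drift_after_a_year = days * SECONDS_PER_DAY * drift_per_second

# Roughly 31-32 seconds of accumulated error after one year.
assert 31 < drift_after_a_year < 32
```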

Clock drift and clock skew

This phenomenon of limited clock precision causing two clocks to count time differently is known as clock drift. Unfortunately, it is just a reality of any machine that needs to keep track of the time on its own. Because clock drift is so common, we often also find ourselves comparing two clocks that display two different times. This is known as clock skew, and it is the difference in time between two clocks.

In a perfect world where two clocks agreed on the time, we wouldn’t ever run into either of these concepts! In such a utopia, both clock drift and skew would be zero.

Alas, we do not live in such a world; we live in one where things are messy and…distributed. And so we must think of both of these things whenever we compare two different machines and their times.

No one clock to rule them all

Clocks and distributed systems

As we already know, in a distributed system, all the individual components are called nodes, and they are each autonomous, capable of performing their own work. We also learned that each node has its own notion of time, and keeps track of its own time internally. When we combine these two facts together, we arrive at one singular conclusion:

There is no one, true global clock in a distributed system.

Every node in the system has its own concept of time, and there is no centralized place for the nodes in a distributed system to figure out what time it actually is. And perhaps this wouldn’t matter if we didn’t care about time so much! We use time so frequently in computing, particularly in order to figure out when an event occurred, and which event happened before another.

How does having no global clock impact the way a system works?

So what happens when we don’t have a global clock in a distributed system? Well, for starters, we can’t actually know the real time that any two events occurred, or the order that they occurred in. This makes it very hard to figure out how two events might be scheduled in the future. It also makes a distributed system really difficult to debug, since we don’t know for sure whether one event occurred before the other!
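A toy example makes the ordering problem concrete. Here, two hypothetical nodes (the names and skew values are made up for illustration) each timestamp an event with their own clock; because one clock runs fast, the timestamps report the opposite of the true order:

```python
# Two events on two nodes with skewed clocks. Node B's clock runs
# 5 seconds fast (a hypothetical skew chosen for illustration).

true_time_a = 100.0   # node A's event actually happens at t = 100
true_time_b = 98.0    # node B's event actually happens at t = 98

skew_b = 5.0          # node B's clock is 5 seconds ahead

timestamp_a = true_time_a           # node A's clock is accurate here
timestamp_b = true_time_b + skew_b  # node B stamps with its fast clock

# B's event really happened first...
assert true_time_b < true_time_a
# ...but the recorded timestamps claim the opposite order!
assert timestamp_b > timestamp_a
```

With no global clock to appeal to, nothing in the timestamps alone tells us which record to trust.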

We’ll be talking more about time and the ordering of events in upcoming posts, and hopefully we’ll find some tried and true solutions for this problem. Until then, try not to look at your watch or think too much about what time it is.

