What every developer should know about time

Davide Briani

What time is it? It’s 9:30 in the morning.

But what does that mean? This isn’t true everywhere in the world, and who decided a day has 24 hours? What is a second? What is time?

What is time, really?

Time, as we understand it, is the ongoing sequence of existence and events that occur in a seemingly irreversible flow from the past, through the present, and into the future.

General relativity reveals that the perception of time can vary for different observers: the concept of what time it is now holds significance only relative to a specific observer. On your wristwatch, time advances at a consistent rate of one second per second. However, if you observe a distant clock, its rate will be influenced by both velocity and gravity: a clock moving faster than you will appear to tick more slowly.

Ignoring relativistic effects for a moment, physics provides a practical definition of time: time is “what a clock reads”. Observing a certain number of repetitions of a standard cyclical event constitutes one standard unit, such as the second.

Time units

Even in ancient times, two notable physical phenomena could be observed cyclically and helped quantify the passage of time:

  • The Earth’s orbit around the Sun results in evident, cyclical events, and defines the “year” as a unit of time.
  • The Sun crossing the local meridian, i.e. the time it takes for the Sun to return to the same angle in the sky. The interval between two successive instances of this event identifies the solar “day”.

Note that the solar day, as a time unit, is affected by both the Earth’s own rotation and its revolution around the Sun: since the Earth moves by roughly 1° around the Sun each day, the Earth has to rotate by roughly 361° for the Sun to cross the local meridian.

This is different from the “sidereal day”, which is the amount of time it takes for the Earth to rotate by 360°, since it is computed from the apparent motion of distant astronomical objects instead of the Sun.

However, the solar day is typically the unit of interest for us humans and our daily lives.

How many days in a year?

Although the Earth doesn’t rotate a whole number of times in a year, and hence the year cannot be subdivided into a whole number of days, humans have devised numerous calendar systems to organize days for social, religious, and administrative purposes.

Pope Gregory XIII implemented the Gregorian calendar in 1582, which has become the predominant calendar system in use today. This calendar supplanted the Julian calendar, rectifying inaccuracies by incorporating leap years, which are additional days inserted to synchronize the calendar year with the solar year. Over a complete leap cycle spanning 400 years, the average duration of a Gregorian calendar year is 365.2425 days. While the majority of years consist of 365 days, 97 out of every 400 years are designated as leap years, extending to 366 days.
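
The Gregorian leap-year rule is simple enough to express in a few lines; here’s a quick sketch in Python:

```python
def is_leap(year: int) -> bool:
    # A year is a leap year if divisible by 4,
    # except century years, which must also be divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 97 leap years per 400-year cycle:
# (365 * 400 + 97) / 400 = 365.2425 days per year on average.
assert sum(is_leap(y) for y in range(2000, 2400)) == 97
```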

Despite the Gregorian calendar being the de-facto standard, other calendars exist:

  • Islamic Calendar: A lunar calendar consisting of 12 months in a year of 354 or 355 days.
  • Hebrew Calendar: A lunisolar calendar used by Jewish communities.
  • Chinese Calendar: A lunisolar calendar that determines traditional festivals and holidays in Chinese culture.

How many seconds in a day?

We all know that there are 60 seconds in a minute, 60 minutes in an hour, and 24 hours in a day, right? As might be expected, these magic numbers aren’t a fundamental property of nature, but rather the result of two historical choices:

  • Dividing the day into 24 hours: This stems from ancient civilizations, where early cultures divided the cycle of day and night into a convenient number of units, likely influenced by the phases of the moon or other natural patterns. The ancient Egyptians used sundial subdivisions and split the cycle into two halves of 12 hours each: they already used a duodecimal system, likely influenced by the number of joints in our hands (excluding thumbs) and the ease of counting them. Twenty-four then became a common choice for counting hours.
  • Subdividing hours into 60 minutes and minutes into 60 seconds: This preference for base-60 systems comes from even earlier civilizations like the Sumerians and Babylonians. Their number systems were based on 60, again possibly due to the ease of counting on fingers and knuckles. This base-60 system was later adopted by the Greeks and eventually influenced our timekeeping.

Historically, dividing the natural day from sunrise to sunset into twelve hours, as the Romans also did, meant that the hour had a variable length that depended on season and latitude.

Equal hours were later standardized as 1/24 of the mean solar day.

Counting time

Historically, the definition of the second relied on the Earth’s rotation, but the variability of the latter made it inadequate for scientific accuracy. In 1967, the second was redefined using the consistent properties of the caesium-133 atom. Presently, a second is defined as the duration of 9,192,631,770 periods of the radiation corresponding to the transition between two hyperfine energy states of caesium-133.

TAI and atomic clocks

Atomic clocks utilize the definition of a second for precise time measurement.

An array of atomic clocks globally synchronizes time through consensus: the clocks collectively determine the accurate time, with each clock adjusting to align with the consensus, known as International Atomic Time (TAI).

The International Bureau of Weights and Measures (BIPM) located near Paris, France, computes the International Atomic Time (TAI) by taking the weighted average of more than 300 atomic clocks from around the world, where the most stable clocks receive more weight in the calculation.

Operating on atomic seconds, TAI serves as the standard for calibrating various timekeeping instruments, including quartz and GPS clocks.

GPS time

Specifically, GPS satellites carry multiple atomic clocks for redundancy and better synchronization, but the accuracy of GPS timing requires several corrections, including those for both Special and General Relativity:

  • According to Special Relativity, clocks moving at high speeds will run slower relative to those on the ground. GPS satellites travel at approximately 14,000 km/h, which results in their clocks ticking slower by about 7.2 microseconds per day.
  • According to General Relativity, clocks in a weaker gravitational field run faster than those closer to a massive object. The altitude of GPS satellites (about 20,200 km) causes their clocks to tick faster by about 45.8 microseconds per day.
  • The net effect is that the combined relativistic effects cause the satellite clocks to run faster than ground-based clocks by about 38.6 microseconds per day.

To compensate, the satellite clocks are pre-adjusted on Earth before launch to tick slower than the ground clocks.

Once in orbit, GPS satellites continuously broadcast signals containing the exact transmission time and their positions.

A GPS receiver on the ground picks up these signals from multiple satellites and compares the reception time with the transmission time to calculate the time delays.

Using these delays, the receiver determines its distance from each satellite and, through trilateration, calculates its own position and the current time.

Precise timekeeping on GPS satellites is thus necessary: an uncorrected clock error of 38.6 microseconds per day would accumulate into a positional error of roughly 11.4 km each day.
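
A quick back-of-the-envelope check of that figure in Python (a sketch of the arithmetic, not part of any GPS specification):

```python
c = 299_792_458   # speed of light in m/s; GPS signals travel at ~c
drift = 38.6e-6   # uncorrected relativistic clock drift, seconds per day

# A timing error maps to a ranging error of c * dt.
print(c * drift)  # ~11,572 m of error accumulated per day
```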

Notably, corrections must also consider that the elliptical satellite orbits cause variations in time dilation and gravitational frequency shifts over time. This eccentricity affects the clock rate difference between the satellite and the receiver, increasing or decreasing it based on the satellite’s altitude.

UTC and leap seconds

While TAI provides precise seconds, the Earth’s rotation is irregular: it gradually slows due to tidal acceleration.

On the other hand, for practical purposes, civil time is defined to agree with the Earth’s rotation.

Coordinated Universal Time (UTC) serves as the international standard for timekeeping. This time scale uses the same atomic seconds as TAI but adjusts for variations in the Earth’s rotation by adding or omitting leap seconds as necessary.

To determine if a leap second is required, universal time UT1 is utilized to measure mean solar time by monitoring the Earth’s rotation relative to the sun.

The variance between UT1 and UTC dictates the necessity of a leap second in UTC.

UTC thus has precise seconds and is always kept within 0.9 seconds of UT1 to stay synchronized with astronomical time: an additional second may be added to the last UTC minute of June or December to compensate.

This is why the second is now precisely defined in the International System of Units (SI) as the atomic second, while days, hours, and minutes are only mentioned in the SI Brochure as accepted units for explanatory purposes. Their use is deeply embedded in history and culture, but their definition is not precise: it refers to the mean solar day and cannot account for the inconsistent rotational and orbital motion of the Earth.

Local time

Although UTC provides a universal time standard, individuals require a “local” time that is consistent with the position of the sun, ensuring that local noon (when the sun reaches its zenith) roughly aligns with 12:00 PM.

In a specific region, coordinating daily activities such as work and travel is more convenient when people adhere to a shared local time. For instance, if I start traveling to another nearby city at 9:00 in the morning, I can anticipate that it will be 11:00 AM after two hours of travel.

On the other hand, if I schedule a flight to a destination far away and the airline informs me that the arrival time is 11:00 AM local time, I can generally infer that the sun will be high in the sky.

Time zones

Time zones are designed to delineate geographical regions within which a convenient and consistent standard time is utilized.

A time zone is a set of rules that define a local time relative to the standard incremental time, i.e. UTC.

Imagine segmenting the Earth into 24 sections, each approximately 15 degrees of longitude apart. This process would roughly define geographical areas that differ by an hour, which is a time range small enough so that people living within each area can agree on a local time.

For this purpose, the time at zero degrees longitude (the Prime Meridian) serves as the reference point, UTC+00:00. Each country or political entity declares its preferred time zone, identified by its deviation from UTC, which spans from UTC−12:00 to UTC+14:00. Typically, these offsets are whole-hour increments, though certain zones, such as those of India and Nepal, deviate by an additional 30 or 45 minutes.

This system ensures that different regions around the globe set their clocks according to their respective time zones.

There are actually more time zones than countries, even though time zones generally correspond to hourly divisions of the Earth’s geography.

At the boundaries of UTC offsets, the International Date Line (IDL) roughly follows the 180° meridian but zigzags to avoid splitting countries or territories, creating a sharp time difference where one side can be a whole day ahead of the other, e.g. when jumping from UTC−12:00 to UTC+12:00.

Some regions have chosen offsets that extend beyond the conventional range for economic or practical reasons. For example, Kiribati adjusted its time zones to ensure that all its islands are on the same calendar day. As a result, some areas of Kiribati observe UTC+14:00, which is why the UTC offset range extends from −12:00 to +14:00.

Daylight Saving Time

Additionally, some nations adjust their clocks forward by one hour during warmer months. This practice, known as Daylight Saving Time (DST), aims to synchronize business hours with daylight hours, potentially boosting economic activity by extending daylight for commercial endeavors. The evening daylight extension also holds the promise of conserving energy by reducing reliance on artificial lighting. Typically, clocks are set forward in the spring and set back in the fall.

DST is less prevalent near the equator due to minimal variation in sunrise and sunset times, as well as in high latitudes, where a one-hour shift could lead to dramatic changes in daylight patterns. Consequently, certain countries maintain the same local time year-round.

Nigeria always observes West Africa Time (WAT), that is UTC+1. Japan observes Japan Standard Time (JST), that is UTC+9.

Other countries, instead, observe DST during the summer by following a different UTC offset. That’s why Germany and other countries in Europe observe both Central European Time (CET, UTC+1) and Central European Summer Time (CEST, UTC+2) during the year, and that’s how CEST got its name.

Interestingly, while the United Kingdom follows Greenwich Mean Time (GMT) UTC+0 in winter and British Summer Time (BST) UTC+1 in summer, Iceland uses GMT all year.

Time zone databases

Conceptually, a time zone is a region of the Earth that shares the same local time rules, which can include both a standard time and a daylight saving time; these correspond to local time offsets with respect to UTC.

These conventions and rules change over time, which is why time zone databases exist and are regularly updated to reflect historical changes.

A standard reference is the IANA time zone database, also known as tz database or tzdata, which maintains a list of the existing time zones and their rules.

On Linux, macOS and Unix systems, you can navigate the timezone database at the standard path `/usr/share/zoneinfo/`.

Here’s a breakdown of the essential elements for navigating time zone databases, with a short sketch after the list:

  • The “time zone identifier” is typically a unique label in the form of Area/Location, e.g. America/New_York.
  • “Time zone abbreviations” are short forms used to denote specific time zones, often indicating whether it is standard or daylight saving time. For example, CET (Central European Time) corresponds to UTC+01:00, and CEST (Central European Summer Time) corresponds to UTC+02:00.
  • “Standard times” are the official local times observed in a region when daylight saving time is not in effect. For example, Central European Time (CET, UTC+01:00) is the standard time for many European countries during the winter months.
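
As an illustration of these elements, here’s a minimal Python sketch using the standard `zoneinfo` module, which reads the IANA database installed on the system:

```python
from datetime import datetime
from zoneinfo import ZoneInfo, available_timezones

# Time zone identifiers follow the Area/Location convention.
print("America/New_York" in available_timezones())  # True

# The same identifier yields different abbreviations and offsets
# depending on the date, because of daylight saving time.
berlin = ZoneInfo("Europe/Berlin")
winter = datetime(2024, 1, 15, 12, 0, tzinfo=berlin)
summer = datetime(2024, 7, 15, 12, 0, tzinfo=berlin)
print(winter.tzname(), winter.utcoffset())  # CET 1:00:00
print(summer.tzname(), summer.utcoffset())  # CEST 2:00:00
```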

Time formats

For all practical purposes, UTC serves as the foundation for tracking the passage of time and as the reference for defining a local time.

However, all would be futile without a standardized representation of time that allows for consistent and accurate recording, processing, and exchange of time-related data across different systems.

Time formats are crucial for ensuring that time data is interpretable and usable by both humans and machines.

Some common formats have become the de-facto standards for interoperable time data.

ISO 8601, RFC 3339

ISO 8601 is a globally recognized standard for representing dates, times, and durations. It offers great versatility, accommodating a variety of date and time formats.

ISO 8601 strings represent date and time in a human-readable format with optional timezone information. It uses the Gregorian calendar, even for dates that precede its introduction.

Dates use the format `YYYY-MM-DD`, such as `2024-05-28`. Times use the format `HH:MM:SS`, such as `14:00:00`. Combined date and time can be formatted as `YYYY-MM-DDTHH:MM:SS`, for example `2024-05-28T14:00:00`. ISO 8601 strings can include timezone offsets, for example `2024-05-28T14:00:00+02:00`. `Z` is used to indicate Zulu time, the military name for UTC, for example `2024-05-28T14:00:00Z`.

ISO 8601 is further refined by RFC 3339 for use in internet protocols.

RFC 3339 is a profile of ISO 8601, meaning it adheres to the standard but imposes additional constraints to ensure consistency and avoid ambiguities:

  • Always specifies the time zone, either as Z for UTC or an offset. Example: `2024-05-28T14:00:00-05:00`.
  • Supports fractional seconds, such as in `2024-05-28T14:00:00.123Z`.
  • Fractional seconds, if used, must be preceded by a decimal point and can have any number of digits.
  • Does not allow the truncated formats of ISO 8601, such as `2024-05` for May 2024.

Both standards are not limited to representing only UTC times, as they can also express times that occurred before the introduction of UTC. Practically, they are most often used to represent UTC timestamps.

Since leap seconds are added to UTC to account for irregularities in the Earth’s rotation, ISO 8601 allows for the representation of the 60th second in a minute, which is the leap second.

For example, the latest leap second occurred at the end of 2016, so `2016-12-31T23:59:60+00:00` is a valid ISO 8601 time stamp.
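
In practice, here’s how a typical standard library handles these formats; a minimal Python sketch (note that Python’s parser accepts the trailing `Z` only since version 3.11):

```python
from datetime import datetime, timezone

# Parse an RFC 3339 timestamp with an offset...
ts = datetime.fromisoformat("2024-05-28T14:00:00+02:00")
print(ts.astimezone(timezone.utc).isoformat())  # 2024-05-28T12:00:00+00:00

# ...but the leap second is rejected by most libraries,
# Python's datetime included, despite being valid ISO 8601.
try:
    datetime.fromisoformat("2016-12-31T23:59:60+00:00")
except ValueError as e:
    print("rejected:", e)
```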

Unix time

Unix time, or POSIX time, is a method of keeping track of time by counting the number of non-leap seconds that have passed since the Unix epoch. The Unix epoch is set at 00:00:00 UTC on January 1, 1970.

The Unix time format is thus a signed integer counting the number of non-leap seconds since the epoch.

For example, the Unix timestamp `1653436800` represents a specific second in time, that is 00:00:00 UTC on May 25, 2022.

For dates and times before the epoch, Unix time is represented as negative integers.

Unix time can also represent time with higher precision using fractional seconds (e.g., Unix timestamps with milliseconds, microseconds, or nanoseconds).

Note that Unix time is timezone-agnostic.
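
A short sketch of this property in Python (`Asia/Tokyo` is just an example zone):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# One Unix timestamp denotes one instant, regardless of time zone.
ts = 1653436800
print(datetime.fromtimestamp(ts, tz=timezone.utc))            # 2022-05-25 00:00:00+00:00
print(datetime.fromtimestamp(ts, tz=ZoneInfo("Asia/Tokyo")))  # 2022-05-25 09:00:00+09:00
```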

In addition, Unix time does not account for leap seconds, so it does not have a straightforward way to represent the 60th second (leap second) of a minute.

According to the POSIX.1 standard, Unix time handles a leap second by repeating the previous second, meaning that time appears to “stand still” for one second when fractional seconds are not considered. However, some systems, such as Google’s, use a technique called “leap smear”: instead of inserting a leap second abruptly, they gradually adjust the system clock over a longer period, such as 24 hours, to distribute the one-second difference smoothly. As a result, certain Unix timestamps can be ambiguous: 1483228800 might refer to either the start of the leap second (2016-12-31 23:59:60) or one second later, at its end (2017-01-01 00:00:00).

Note that every day in Unix time consists of exactly 86400 seconds. On the other hand, when leap seconds occur, the difference between two Unix timestamps is not equal to the true duration in seconds of the interval between them. Most applications, however, don’t require this level of accuracy, so Unix times are handy to compare and manipulate with basic additions and subtractions.

Clocks and synchronization

From atomic clocks to individual computers, time is synchronized through a hierarchical and redundant system using GPS, NTP, and other methods to ensure accuracy.

GPS satellites carry atomic clocks and broadcast time signals. Receivers on Earth can use these signals to determine precise time.

In Network Time Protocol (NTP), servers synchronize with atomic clocks via GPS or other means, and form a hierarchical distribution network of time information.

Radio stations, such as WWV/WWVH, can also broadcast time signals that can be picked up by radio receivers and used for synchronization.

Hardware clocks

Computers have hardware “clocks”, or “timers”, that synchronize all components and signals on the motherboard and are also used to track time.

Typically, modern clocks are generated by quartz crystal oscillators of about 20 MHz: an oscillator on the motherboard ticks the “system clock”, and other clocks are derived from it by multiplying or dividing its frequency with phase-locked loops.

For example, the CPU clock is generated from the system clock and has the same purpose, but is only used on the CPU itself.

Since the clock determines the speed at which instructions are executed, and the CPU needs to perform more operations per unit of time than the motherboard, the CPU clock runs at higher frequencies: e.g. 4 GHz for a CPU core.

Some computers also allow the user to change the multipliers in order to “overclock” or “underclock” the CPU. Generally, this is used to speed up the processing capabilities of the computer, at the expense of increased power consumption, heat, and unreliability.

OS clock

Backed by hardware clocks, the operating system manages different software counters, still called “clocks”, such as the “system clock” which is used to track time.

The OS configures the hardware timers to generate periodic interrupts, and each interrupt allows the OS to increment a counter that represents the system uptime.

The actual time is derived from the uptime counter, typically starting from a predefined epoch (e.g., January 1, 1970, for Unix-based systems).

Real Time Clock

Modern computers have a separate hardware component dedicated to keeping track of time even when the computer is powered off: the Real-Time Clock (RTC).

The RTC is usually implemented as a small integrated circuit on the motherboard, powered by a small battery such as a coin-cell battery. It has its own low-power oscillator, separate from the main system clock.

The operating system reads the RTC during boot through standard communication protocols like I2C or SPI, and initializes the system time.

The OS periodically writes the system time back to the RTC to account for any adjustments (e.g., NTP updates).

Although a system clock is more precise for short-term operations due to high-frequency ticks, an RTC is sufficient for keeping the current date and time over long periods.

NTP

All hardware clocks drift due to temperature changes, aging hardware, and other factors.

This results in inaccurate timekeeping for the operating system, so regular synchronization of system time with NTP servers is needed to correct for this drift.

NTP adjusts the system clock of the OS in one of two ways:

  • By “stepping”, abruptly changing the time.
  • By “slewing”, slowly adjusting the clock speed so that it drifts toward the correct time: hardware timers continue to produce interrupts at their usual rate while the OS alters the rate at which it counts hardware interrupts.

Slewing should be preferred, but it’s only applicable for correcting small time inaccuracies.

NTP servers also provide leap second information, and systems need to be configured to handle these adjustments correctly, often by “smearing” the leap second over a long period, e.g. 24 hours.

Different OS clocks

The operating system generally keeps several time counters for use in various scenarios.

In Linux, for instance, `CLOCK_REALTIME` represents the current wall clock or time-of-day. However, other counters like `CLOCK_MONOTONIC` and `CLOCK_BOOTTIME` serve different purposes:

  • `CLOCK_MONOTONIC` remains unaffected by discontinuous changes in the system time (such as adjustments by NTP or manual changes by the system administrator) and excludes leap seconds.
  • `CLOCK_BOOTTIME` is similar to `CLOCK_MONOTONIC` but also accounts for any time the system spends in suspension.

For example, if you need to calculate the elapsed time between two events on a single machine without a reboot occurring in between, `CLOCK_MONOTONIC` is a suitable choice. This is commonly used for measuring time intervals, such as in code performance benchmarking.
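
For example, here’s a minimal sketch in Python on Linux, where these clocks are exposed directly:

```python
import time

# Wall clock: can jump backward or forward when NTP steps the time.
print(time.clock_gettime(time.CLOCK_REALTIME))

# Monotonic clock: only moves forward, from an arbitrary starting point.
print(time.clock_gettime(time.CLOCK_MONOTONIC))

# Benchmarking with the monotonic clock, immune to clock steps.
start = time.monotonic()
total = sum(range(1_000_000))  # arbitrary workload
elapsed = time.monotonic() - start
print(f"took {elapsed:.6f} s")
```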

Best practices

Storing universal time

For many applications, such as in log lines, time zones and Daylight Saving Time (DST) rules are not directly relevant when storing a timestamp: what matters is the exact moment in (universal) time when the event occurred, i.e. the relative ordering of events.

When storing timestamps, it is crucial to store them in a consistent and unambiguous manner.

Stored timestamps, as well as time calculations, should then be in UTC, i.e. without a local offset. This avoids errors due to local time changes such as DST.

Timestamps should be converted to local time zones only for display purposes.

As for formatting, Unix timestamps are the preferred time format since they are simple numbers, are space-efficient, and are easy to compare and manipulate directly.

UTC timestamp strings as defined in RFC 3339 are useful when displaying time in a human-readable format.
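
Putting these recommendations together, a minimal Python sketch (the zone `Europe/Berlin` is just an example):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Store the instant as a Unix timestamp: unambiguous and easy to compare.
event_ts = int(datetime.now(tz=timezone.utc).timestamp())

# Format as RFC 3339 in UTC for human-readable output, e.g. log lines.
utc_string = datetime.fromtimestamp(event_ts, tz=timezone.utc).isoformat()

# Convert to a local time zone only at the display boundary.
local = datetime.fromtimestamp(event_ts, tz=ZoneInfo("Europe/Berlin"))
print(utc_string, local.isoformat())
```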

Time zones and DST become relevant when converting, displaying, or processing timestamps in user-facing applications, especially when dealing with future events.

Here are some considerations for handling future events:

  • Store the base timestamp of the event in Unix time. This ensures that the event’s reference time is unambiguous and consistent.
  • Convert the stored UTC timestamp to the local time zone when displaying the event to users. Ensure that the conversion respects the UTC offset and DST rules of the target time zone.
  • Keep in mind that time zone rules change from time to time, for instance when a country stops observing DST, and the UTC timestamp would then be represented by a different future local time.

Time zone databases, like the IANA time zone database, are regularly updated to reflect historical changes. Applications must use the latest versions of these databases to ensure accurate timestamp conversions, correctly interpreting and displaying timestamps.

By all means, use established libraries or your language’s facilities to do timezone conversions and formatting. Do not implement these rules yourself.

Storing local time

Time zone databases must be updated because time zone rules and DST schedules can and do change. Governments and regulatory bodies may adjust time zone boundaries, introduce or abolish DST, or change the start and end dates of DST.

This is especially relevant if the application user is more interested in preserving the local time information of a scheduled event than its UTC time.

Indeed, while storing UTC time can help you track when the next solar eclipse will happen, imagine scheduling a conference call for 1 PM on some day next year and storing that time as a UTC timestamp. If the government changes or abolishes its DST, the timestamp still refers to the instant that was formerly going to be 1 PM on that date, but converting it back to local time would yield a different result.

For these situations storing UTC time is not ideal. Instead, calendar apps might store the event’s time as three separate values: a date, a time-of-day, and a time zone; the UTC time can be derived when needed, applying the timezone rules in force at the moment of derivation.
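
A sketch of that idea in Python, with hypothetical field names:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Hypothetical calendar record: wall-clock fields plus a zone identifier.
event = {"date": "2026-03-14", "time": "13:00", "zone": "Europe/Berlin"}

# Derive the UTC instant on demand, using the tz rules in force today;
# if the rules change, re-deriving yields the updated instant.
local = datetime.fromisoformat(f"{event['date']}T{event['time']}")
local = local.replace(tzinfo=ZoneInfo(event["zone"]))
print(local.timestamp())
```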

However, it should be considered that converting local time to UTC is not always unambiguous: because of DST, some local timestamps may happen twice, e.g. when setting the clock backward, and some local timestamps may never happen, e.g. when setting the clock forward and skipping an hour.
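
Python models this ambiguity with the `fold` attribute (PEP 495); a short sketch using the night DST ended in `Europe/Berlin`:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

berlin = ZoneInfo("Europe/Berlin")

# On 2024-10-27, clocks went from 03:00 back to 02:00, so 02:30
# local time happened twice; `fold` selects which occurrence.
first = datetime(2024, 10, 27, 2, 30, tzinfo=berlin, fold=0)
second = datetime(2024, 10, 27, 2, 30, tzinfo=berlin, fold=1)
print(first.utcoffset(), second.utcoffset())  # 2:00:00 1:00:00
```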

Ideally, invalid or ambiguous times should not be stored in the first place, and stored times should be validated when new timezone rules are introduced.

Additional challenges arise with:

  • Recurrent events, where changes to timezone rules can alter offsets for some instances while leaving others unaffected.
  • Events where users come from multiple timezones, necessitating a single coordinating time zone for the recurrence. However, offsets must be recalculated for each involved time zone and every occurrence.

For exploring the topic of recurrent events and calendars further, RFC 5545 might be of interest since it presents the iCalendar data format for representing and exchanging calendaring and scheduling information.

For instance, it also describes a grammar for specifying “recurrence rules”, where “the last work day of the month” can be represented as “FREQ=MONTHLY;BYDAY=MO,TU,WE,TH,FR;BYSETPOS=-1”. Note that recurrence rules may generate instances with an invalid date or a nonexistent local time, which must still be accounted for.
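
As a sketch, the third-party python-dateutil library can expand such a rule (the dates below assume a `dtstart` in January 2024):

```python
from datetime import datetime

from dateutil.rrule import rrule, MONTHLY, MO, TU, WE, TH, FR

# "The last work day of the month":
# FREQ=MONTHLY;BYDAY=MO,TU,WE,TH,FR;BYSETPOS=-1
rule = rrule(MONTHLY, byweekday=(MO, TU, WE, TH, FR), bysetpos=-1,
             dtstart=datetime(2024, 1, 1), count=3)
for occurrence in rule:
    print(occurrence.date())  # 2024-01-31, 2024-02-29, 2024-03-29
```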

Parsing and formatting time

Different time formats serve various purposes in timekeeping, incorporating concepts like date and time, universal time, calendar systems, and time zones. The specific format needed depends on the use case.

For example, using the naming convention from the W3C guidelines, “incremental time” like Unix time measures time in fixed integer units that increase from a specific point. In contrast, “floating time” refers to a date or time value in a calendar that is not tied to a specific instant. Applications use floating time values when the local wall time is more relevant than a precise timeline position. These values remain constant regardless of the user’s time zone.

Floating times are serialized in ISO 8601 formats without local time offsets or time zone identifiers (like Z for UTC).

However, many programming libraries make it too easy to deserialize an ISO 8601 string into an incremental time (e.g. Unix time) aligned with UTC, leading to errors caused by implicitly assumed timezone information.
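
A sketch of this pitfall in Python:

```python
from datetime import datetime, timezone

# A floating time: no offset, not tied to a specific instant.
naive = datetime.fromisoformat("2024-05-28T14:00:00")

# .timestamp() silently interprets it in the machine's local zone,
# so the resulting Unix time differs from host to host.
print(naive.timestamp())

# Be explicit instead, when the value is known to be UTC.
print(naive.replace(tzinfo=timezone.utc).timestamp())
```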

Programmers should carefully consider the intended use of time information to collect, parse, and format it correctly. Here are some guidelines for handling time inputs:

  • Use UTC or a consistent time zone when creating time-based values to facilitate comparisons across sources.
  • Allow users to choose a time zone and associate it with their session or profile.
  • Use exemplar cities to help users identify the correct time zone.
  • Consider the country as a hint, since most have a single time zone.
  • For data with local time offsets, adjust the zone offset to UTC before performing any date/time sensitive operations.
  • For time data not related to time zones, like birthdays, use a representation that does not imply a time zone or local time offset.
  • Since user input can be messy, avoid strict pattern matching and use a “lenient parse” approach: https://unicode.org/reports/tr35/tr35-10.html#Date_Format_Patterns
  • Be mindful of how libraries parse user input, for instance using the correct calendar: https://ericasadun.com/2018/12/25/iso-8601-yyyy-yyyy-and-why-your-year-may-be-wrong/

Distributed systems

Timekeeping and clocks are rarely as reliable and consistent as we would like.

In theory, we have the means to agree on the notion of what time it is now: we know how to synchronize computer systems worldwide.

Nor is the real problem that information takes time to transfer between systems; it’s the fact that everything can break.

What makes it hard to build distributed systems that agree on a specific value is the combination of them being asynchronous and having physical components subject to failures.

Certain results provide insights into the challenges of achieving specific properties in distributed systems under particular conditions:

  • The “FLP result”, from a 1985 paper, establishes that in asynchronous systems no deterministic distributed algorithm can guarantee consensus if even a single process may fail. This is because, with unbounded processing and message delays, it’s impossible to determine whether a process has crashed or is merely slow.
  • The “CAP theorem” states that any distributed system can achieve at most two of the following three properties: Consistency (every read sees the most recent write), Availability (every request receives a response), and Partition Tolerance (the system continues to function despite network partitions). In unreliable networks, it’s impossible to guarantee both consistency and availability simultaneously.
  • The “PACELC theorem,” introduced in 2010, extends the CAP theorem by highlighting that even in the absence of network partitions, distributed systems must make a trade-off between latency and consistency.

The net outcome is that in theory having a single absolute value of time is not meaningful in distributed systems. In practice, we can certainly plan for reducing the margin of error and de-risking our applications.

Here are some further considerations that can be relevant:

  • Ensure all nodes in the distributed system synchronize their clocks using NTP or similar protocols to minimize time drift between nodes. Use multiple NTP servers for redundancy and reliability.
  • Ensure that NTP servers and clients are configured to handle leap seconds correctly.
  • Consider also using GPS to provide an additional layer of accuracy.
  • Besides NTP, evaluate Precision Time Protocol (PTP) for applications that require nanosecond-level synchronization.
  • Account for network latency, for example by estimating round-trip times (RTTs), to avoid discrepancies caused by delays in message delivery.
  • Ensure timestamps have sufficient precision for the application’s requirements. Millisecond or microsecond precision may be necessary for high-frequency events.
  • Ensure that all nodes log events using UTC to maintain consistency in distributed logs.
  • Use distributed databases that handle time synchronization internally, such as Google Spanner, which uses TrueTime to provide globally consistent timestamps. This is mostly relevant for applications that require precise transaction ordering, such as financial systems or inventory management.
  • Consider using logical clocks (e.g., Lamport Timestamps or Vector Clocks) to order events in a causally consistent manner across distributed nodes, independent of physical time (see the sketch after this list).
  • Consider Hybrid Logical Clocks (HLC) to provide a more reliable timestamp mechanism that combines physical and logical clocks.
  • Evaluate the use of Conflict-free Replicated Data Types (CRDTs) to avoid the need for time or coordination when performing data updates.
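
To make the logical clock idea concrete, here’s a minimal Lamport clock sketch in Python (not tied to any particular library):

```python
class LamportClock:
    """Orders events causally without relying on physical time."""

    def __init__(self) -> None:
        self.counter = 0

    def tick(self) -> int:
        # Called for every local event.
        self.counter += 1
        return self.counter

    def send(self) -> int:
        # Attach the returned value to an outgoing message.
        return self.tick()

    def receive(self, remote: int) -> int:
        # Merge the sender's counter on message receipt.
        self.counter = max(self.counter, remote) + 1
        return self.counter
```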

Fun facts

  • UTC was formerly known as Greenwich Mean Time (GMT) because the Prime Meridian was arbitrarily designated to pass through the Royal Observatory in Greenwich.
  • The acronym UTC is a compromise between the English “Coordinated Universal Time” and the French “Temps Universel Coordonné.” The international advisory group selected UTC to avoid favoring any specific language. In contrast, TAI stands for the French “Temps Atomique International.”
  • Leap seconds need not be announced more than six months in advance, posing a challenge for second-accurate planning beyond this period.
  • NTP servers can use the leapfile directive in ntpd to announce an upcoming leap second. Most end-user installations do not implement this, relying instead on their upstream servers to handle it correctly.
  • Leap seconds can theoretically be both added and subtracted, although no leap second has ever been subtracted.
  • The 32-bit representation of Unix time will overflow on January 19, 2038, known as the Year 2038 problem. Transitioning to a 64-bit representation is necessary to extend the date range.
  • The year 2000 problem: https://en.wikipedia.org/wiki/Year_2000_problem
  • Atomic clocks are extremely precise. The primary time standard in the United States, the National Institute of Standards and Technology (NIST)’s caesium fountain clock, NIST-F2, has a time measurement uncertainty of only 1 second over 300 million years.
  • Leap seconds are set to be phased out by 2035, preferring a slow drift of UTC away from astronomical time over the complexities of managing leap seconds.
  • In 1998 the Swatch corporation introduced the “.beat time” for their watches, a decimal universal time system without time zones: https://en.wikipedia.org/wiki/Swatch_Internet_Time
  • February 30 has been a real date at least twice in history: https://www.timeanddate.com/date/february-30.html
  • Year 0 does not exist: https://en.wikipedia.org/wiki/Year_zero

Further reads

What was discussed here is just the tip of the iceberg: a brief overview of the topic.

Further resources on timekeeping, its challenges, and its edge cases are worth exploring to deepen your understanding.
