Living at the edge, safely

LightKone European Project
7 min read · May 6, 2019


The cloud was about server-based communication and content delivery. But, as Peter van Roy put it in his recent post, “the edge is becoming the centre”. Edge computing promises fast response, high availability, and keeping private data where it belongs: in the users’ hands. With the growth of mobile, IoT and 5G, more and more interesting things will be happening at the edge.

The managed cloud infrastructure continues to have a core role. Data centres have high-bandwidth storage, communication, and computation capabilities. Beyond centralised data centres, the infrastructure comprises decentralised points-of-presence, micro-datacentres in 5G towers, and edge gateways. Together, these form the “infrastructure edge”, as opposed to the “far edge” of low-powered, unmanaged user devices and IoT sensors in the field.

Currently, the computing models offered at the different ends of the spectrum are vastly different. In the centre, applications are typically built around a shared database. At the far edge, the model is to react to notifications of local events and of events from the cloud. Far-edge applications also need data, but event-driven frameworks typically don’t have a proper shared data model. Conversely, data-based programming frameworks often provide notifications as an afterthought. There is no good reason for this disconnect. Developers need access to the full power of distributed computing; they need a common programming model across the whole spectrum, forming a programming continuum.

The return of CAP

The most important ingredient of a continuum model is availability. An edge device needs to do its job on its own, without asking for permission from someone else: What if that other guy is busy? What if it’s actually waiting itself? What if the network is slow or disconnected? No, my device shouldn’t have to wait.

The next important ingredient is sharing; as explained in this interesting post, edge computing has a distributed data problem. Take the example of a multiplayer game: I want immediate feedback from my game device, but I also need to observe the game universe around me.

Unfortunately, availability and sharing don’t mix well. Remember the excitement around highly-available NoSQL databases, like Riak or Cassandra? Some companies found out the hard way (data corruption!) that forgoing consistency is not such a great idea. Recently, the pendulum has been swinging in the other direction, and providers have been pushing for strong consistency, e.g., Spanner or Azure. This buys you peace of mind, but at a heavy price in terms of availability, response time, and resource consumption.

You probably know about CAP. To summarise: when the network can break (which is unavoidable), it’s impossible to provide both strong consistency (everybody sees the same updates in the same order) and availability (the application can always read and write its data). Either the service is unavailable at times and users are unhappy, or data diverges at times and developers are in trouble: if I can’t know what the current state is, how can I develop a correct application? The usual ad-hoc, trial-and-error approach is not an option: once your data has been corrupted, it’s too late for debugging.

So, here’s the conundrum: we need availability, but we need consistency. Is there a way out?

Just-Right Consistency

Let’s step back for a moment. Consistency means that the database provides help so that the application can keep its data correct. To be a bit more precise, the shared data is expected to satisfy some application-specific properties, called its “invariants”.

Let’s take an example. Suppose Alice, Bob and Carole are roaming the streets, selling tickets to my rock concert. Let’s call the count of available seats x. When Alice or Bob sells a seat, the app decreases x by 1. If the theatre has 100 seats, then x goes 100, 99, 98, …, down to zero. The counter x must not become negative, otherwise I’d have unhappy customers and get in trouble with the fire department. Thus, the ticket application’s invariant is “x ≥ 0.”

Under strong consistency, this is easy. Before selling, the vendor checks x. If x=0, the last ticket has been sold; end of story. If x≥1, the vendor decrements x by 1, and we have a sale. To do this, the vendors might check with a central site storing the value of x; or each vendor might have her own copy or replica, and synchronise every update with the others. Either way, it’s slow, and not available under partition.
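To make the strongly consistent version concrete, here is a minimal sketch in Python. There is a single authoritative copy of x, and the lock stands in for whatever global synchronisation a central site or a replica protocol would actually provide; all names are invented for illustration.

```python
import threading

class CentralTicketCounter:
    """One authoritative copy of x; every sale synchronises on it."""

    def __init__(self, seats):
        self.x = seats
        self.lock = threading.Lock()  # stands in for global synchronisation

    def sell_one(self):
        with self.lock:       # check and decrement atomically
            if self.x >= 1:   # precondition: at least one seat left
                self.x -= 1
                return True   # sale succeeded
            return False      # sold out: invariant x >= 0 preserved

box_office = CentralTicketCounter(100)
assert box_office.sell_one()  # 99 seats remain
```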

At the availability end of the spectrum, we can let each of Alice, Bob and Carole operate independently on their own replica of x, and sell tickets without asking for permission. But then, both Bob and Carole might inadvertently sell the last seat twice, messing up the invariant.

The Just-Right Consistency approach is to be as available as possible, and to synchronise only when strictly necessary to enforce the invariant. Suppose x is to be increased, because the concert is rescheduled to a theatre with 150 seats. Since adding 50 to x can never violate x≥0, incrementing does not need to synchronise.

In the case of decrements, as long as x is far from zero, there is no need to synchronise either. Let’s equip Bob with a stock of 10 tickets (decreasing x by 10); as long as he has stock remaining, he can sell autonomously. If he sells out, only then does he need to synchronise again.
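This “local stock” idea is often called escrow. Here is a hedged sketch of how it could look: the shared pool hands out batches of decrement rights, and a vendor only synchronises (the refill call) when the local stock runs out. EscrowCounter, Vendor and refill are invented for illustration, not any real API.

```python
class EscrowCounter:
    """The shared pool of tickets not yet handed out as local stock."""

    def __init__(self, total):
        self.remaining = total

    def refill(self, amount):
        """The only synchronisation point: grant stock from the pool."""
        granted = min(amount, self.remaining)
        self.remaining -= granted
        return granted

class Vendor:
    def __init__(self, pool, batch=10):
        self.pool, self.batch, self.stock = pool, batch, 0

    def sell_one(self):
        if self.stock == 0:                            # out of local stock:
            self.stock = self.pool.refill(self.batch)  # must synchronise
        if self.stock == 0:
            return False       # globally sold out
        self.stock -= 1        # purely local, always available
        return True

pool = EscrowCounter(100)
bob = Vendor(pool)
print(sum(bob.sell_one() for _ in range(12)))  # 12 sales, only two refills
```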

Components of Just-Right Consistency

To implement Just-Right Consistency (JRC), a number of components are helpful.

Conflict-Free Replicated Data Types (CRDTs)

To be available, the application must be able to update replicas independently. The system propagates the updates between replicas, and ultimately we want them to converge. How can we make this happen?

In the ticket application, Alice, Bob and Carole are all updating x concurrently. Fortunately, incrementing and decrementing a counter are “commutative” operations; as long as every replica eventually applies every operation, replicas converge to the same value; order does not matter. The concept of a CRDT generalises this commutativity insight to more advanced data types. A CRDT supports concurrent updates and merges their effects in a deterministic manner, without requiring synchronisation. For instance, a CRDT set supports concurrently inserting and removing elements; a CRDT-based text editor merges concurrent edits from different users. CRDTs are the quintessential data model for availability.
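To make the counter example concrete, here is a minimal state-based PN-counter in Python (an illustrative sketch, not Antidote’s implementation). Each replica records how much every replica has incremented and decremented; merging takes per-replica maxima, which is commutative, associative and idempotent, so replicas converge regardless of delivery order.

```python
class PNCounter:
    """A state-based increment/decrement counter CRDT."""

    def __init__(self, replica_id):
        self.id = replica_id
        self.P = {}  # replica id -> total increments at that replica
        self.N = {}  # replica id -> total decrements at that replica

    def increment(self, n=1):
        self.P[self.id] = self.P.get(self.id, 0) + n

    def decrement(self, n=1):
        self.N[self.id] = self.N.get(self.id, 0) + n

    def value(self):
        return sum(self.P.values()) - sum(self.N.values())

    def merge(self, other):
        # Per-replica maximum: deterministic, order-independent.
        for rid, v in other.P.items():
            self.P[rid] = max(self.P.get(rid, 0), v)
        for rid, v in other.N.items():
            self.N[rid] = max(self.N.get(rid, 0), v)

a, b = PNCounter("alice"), PNCounter("bob")
a.increment(100)                 # Alice's replica learns of the 100 seats
b.merge(a)                       # ...and so does Bob's
a.decrement(3); b.decrement(5)   # concurrent, unsynchronised sales
a.merge(b); b.merge(a)           # exchange state in any order
assert a.value() == b.value() == 92  # replicas converge
```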

Transactional causal consistency

Many invariants depend on the application performing operations in a certain order. Security is the typical example. Consider a social network where you can view photos of your friends only. Alice has a photo that she doesn’t want Bob to see. So she first unfriends Bob, then posts the photo. However, in many databases, the photo may arrive at Bob’s replica before the unfriend, and Bob sees the photo even though security says otherwise. To avoid this anomaly, the database should guarantee “causal consistency” (CC). Under CC, if a process can observe an update, it can also observe all preceding updates. Thus, any replica that receives the photo would also observe that Alice and Bob are not friends.
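One common way to enforce CC is to tag each update with a vector clock and delay delivery until all of its causal predecessors have been applied. Here is a toy sketch of the deliverability check; this is one possible mechanism under simplified assumptions, not how any particular database implements it.

```python
def causally_ready(local_clock, msg_clock, sender):
    """A message is deliverable once this replica has applied everything
    its sender had applied, plus all of the sender's earlier messages."""
    for site, t in msg_clock.items():
        expected = local_clock.get(site, 0) + (1 if site == sender else 0)
        if t > expected:
            return False  # a causal predecessor is still missing
    return True

bob = {"alice": 0}                        # Bob has seen nothing from Alice
unfriend, photo = {"alice": 1}, {"alice": 2}
assert causally_ready(bob, unfriend, "alice")   # the unfriend can be applied
assert not causally_ready(bob, photo, "alice")  # the photo must wait for it
bob["alice"] = 1                          # after applying the unfriend...
assert causally_ready(bob, photo, "alice")      # ...the photo is deliverable
```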

Other invariants require two or more updates to be performed “atomically”. For instance, consider a social network where friendship relations must be mutual. Then, when Alice makes Bob her friend, this also makes Bob Alice’s friend; and vice-versa for removing friendship. To make this work, the database must ensure that, if a process observes an update, it also observes those grouped with it.

Transactional Causal Consistency (TCC) combines CC and atomicity. A possible implementation of TCC is to maintain versions of every data item, and to let the application observe only mutually-compatible versions of different items. A replica can do this locally, without synchronising; therefore, TCC is compatible with availability.
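Here is a toy sketch of that multi-version idea: each item keeps timestamped versions, and a transaction reads every item as of a single snapshot time, so it can never observe half of the unfriend/photo pair. The data and timestamps are invented for illustration.

```python
# Each item maps to a list of (timestamp, value) versions.
versions = {
    "friends(alice,bob)": [(0, True), (5, False)],  # unfriend at t = 5
    "photo_visible":      [(0, False), (6, True)],  # photo posted at t = 6
}

def read(item, snapshot):
    # Latest version no newer than the snapshot; purely local, no sync.
    return max(v for v in versions[item] if v[0] <= snapshot)[1]

# Any snapshot that shows the photo (t >= 6) also shows the unfriend,
# so no transaction can see the photo while Bob is still a friend.
assert read("photo_visible", 6) and not read("friends(alice,bob)", 6)
```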

Precondition stability

Let’s return to the concert example. Remember that to deliver a ticket, the app first checks the precondition that x≥1, before decrementing x. In a strongly-consistent system, this is sufficient to ensure that the invariant is never violated. However, in the absence of synchronisation, another replica might change x, negating the precondition check.

Note that incrementing x will never negate the precondition. The converse is also true: decrements do not negate the precondition of increments, because increments don’t have one! The preconditions of decrement and increment are stable with respect to each other; therefore, these operations do not need to synchronise mutually.

On the other hand, suppose that Alice observes that x=1; precondition x≥1 is verified. Before Alice manages to decrement x, Bob also observes x=1 and delivers a ticket. Unbeknownst to Alice, precondition x≥1 is now false. The only way to stop this from happening would have been to synchronise decrements with one another.
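Reusing the PNCounter sketch from above, the race plays out like this (illustrative only):

```python
seed = PNCounter("box-office")
seed.increment(1)                    # one seat left, fully replicated
alice, bob = PNCounter("alice"), PNCounter("bob")
alice.merge(seed); bob.merge(seed)   # both replicas observe x = 1

if alice.value() >= 1:               # Alice's precondition check passes...
    alice.decrement()
if bob.value() >= 1:                 # ...and, concurrently, so does Bob's
    bob.decrement()

alice.merge(bob)
print(alice.value())  # -1: the invariant x >= 0 is violated
```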

We are not synchronising decrements on a whim, or according to some preconception of consistency. On the contrary, the application requirement x≥0 absolutely imposes it. Fundamentally, in the absence of local stock, decrements need to synchronise and cannot be available.

This leads to another conundrum: how does the developer know when to synchronise? Too much synchronisation is costly, but not enough is dangerous. Fortunately, this kind of situation can be reliably detected at design time by CISE static analysis, removing opportunities for error.

Antidote

And finally, a sales pitch: our Antidote database and related tools natively support CRDTs, TCC and CISE. Antidote is designed for a geo-distributed cloud and for the infrastructure edge. It provides a flat shared database model. It implements operation-based CRDTs above a TCC networking layer, and is designed for full replication in the cloud.

A Continuum model

This Just-Right Consistency approach is essential to a continuum model: it supports availability and, at the same time, correctness under partition.

As mentioned earlier, current frameworks are different in the core (database access) and at the edge (reactive programming). There is no good reason for these separate silos. CRDTs and TCC provide a clean and common model. Every process has an internally-consistent view of the world. If a process observes some event (an update or a notification for instance), it also observes all preceding and grouped events.

Thus, a Continuum model would be similar to Antidote, but extended to the far edge. Providing the same abstractions at the far edge represents a significant challenge. For instance, a TCC communication layer is not realistic at far-edge scale. Then again, maybe it’s not necessary, with state-based CRDTs. This restricts the view of a process to a single object, but that may be OK, since a far-edge application is likely to have a small data footprint, and to be interested in a partial view only. Another challenge is ensuring that crossing the boundary between different implementations remains seamless.

Implementing the continuum model, based on CRDTs, on TCC and on combining events with data, and providing the same abstractions seamlessly from the core cloud to the far edge, is a great challenge for future projects.

By Marc Shapiro, Université Pierre et Marie Curie


LightKone European Project

LightKone is a European project, started in 2016, addressing general-purpose computation on edge networks.