The Guifi.net network has grown over the past 15 years as a technological, social, and economic project that today provides Internet access to more than 80,000 people. The infrastructure was initially built with WiFi radio links and now also employs fibre-optic links to reach thousands of households. The monitoring system currently in place is lagging behind the network's evolution, requiring manual intervention and exposing a number of single points of failure. …


With the growing adoption of the Internet of Things (IoT), connected devices have penetrated every aspect of our lives, from health and fitness, home automation, automotive, and logistics to smart cities and industrial IoT.

It is therefore only natural that IoT, connected devices, and automation find their application in agriculture, where they stand to improve nearly every facet of it.

Farming has seen a number of technological transformations in recent decades, becoming more industrialized and technology-driven. …


One technology used by many partners in the LightKone project is Erlang and its standard library OTP, because of its excellent properties for developing distributed systems all the way down to the light edge. Even though Erlang/OTP has built-in distribution mechanisms, these do not always scale to the number of nodes required to build large-scale edge computing solutions. In the LightKone project we therefore try to enhance and experiment with the Erlang/OTP distribution functionality in order to meet our requirements. …


Modern enterprises are realizing that their data is pure gold, but like gold buried deep in the earth, data is not worth much if it cannot be accessed. Unlike gold, where scarcity increases value, data becomes more valuable with quantity and completeness. Today's challenge is to safely store vast quantities of data, to make it readily accessible, and to keep track of it.

At the same time, global enterprises are dealing more and more with naturally geo-distributed datasets. This is the result of a number of trends:

  • Enterprises operate multiple data centers across the globe to improve user-perceived latency and ensure resilience and regulatory…


Despite service providers' advancements in connectivity for user devices, mobile devices will, by their very nature, remain subject to frequent periods of disconnection. For the user to be able to interact with an application during such periods, the application must store its data on the device, so that users can work with their local copy. In addition, local updates need to be preserved and delivered to the server once the connection is re-established.

But what happens if this data has already been modified in the meantime? Such modifications can stem, for example, from another user working on shared data, or even from the same user on a different device. Typically, implementing conflict resolution for concurrent updates on application data is left to the programmer. The demand for offline support in native, mobile, and web applications has therefore led to many ad-hoc solutions that often do not provide well-defined consistency guarantees. …
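To make the problem concrete, here is a minimal sketch (in Python, purely illustrative; the post itself contains no code) of one such common ad-hoc strategy: last-writer-wins merging with per-field timestamps.

```python
def lww_merge(local: dict, remote: dict) -> dict:
    """Last-writer-wins merge of two offline replicas of the same record.

    Each replica stores, per field, a (value, timestamp) pair. On
    reconnection, the field version with the newest timestamp wins;
    concurrent edits to *different* fields are both preserved.
    """
    merged = {}
    for field in local.keys() | remote.keys():
        candidates = [r[field] for r in (local, remote) if field in r]
        merged[field] = max(candidates, key=lambda vt: vt[1])
    return merged

# Hypothetical example: the same note edited offline on a phone and a laptop.
phone  = {"title": ("Groceries", 1700000100), "body": ("milk, eggs", 1700000100)}
laptop = {"title": ("Groceries", 1700000050), "body": ("milk, eggs, bread", 1700000200)}

merged = lww_merge(phone, laptop)
print(merged["title"])  # ('Groceries', 1700000100) -- the phone's edit is newer
print(merged["body"])   # ('milk, eggs, bread', 1700000200) -- the laptop's edit is newer
```

Note that if both devices had edited the same field concurrently, one of the edits would silently be lost; this is exactly the kind of weakly specified behaviour that such ad-hoc solutions tend to exhibit.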


Data replication is a key technique for building highly available (distributed) systems. Unfortunately, as stated in the CAP theorem, in systems that are prone to network partitions (i.e., any distributed system), it is impossible to provide strong consistency (everybody sees the same updates in the same order) and availability (the application can always read and write its data) at the same time. Furthermore, providing strong consistency requires coordination among the nodes replicating the data, which penalizes the latency of operations. This tradeoff applies both to systems that run completely in the cloud and to systems that extend to the edge.

As discussed in Marc Shapiro’s recent post, database consistency is just a means for creating applications that are correct. What application developers need is for the system to guarantee that application-specific properties, called “invariants”, hold at all times. Explicit consistency defined an alternative consistency model that, instead of (only) restricting the allowed order of operation execution, stated that the system is free to execute operations in any order as long as the application invariants are maintained at all times. …
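One way numeric invariants can be preserved without per-operation coordination is an escrow/reservation scheme. The toy Python sketch below is my own illustration of that general idea, not the mechanism described in the post; it assumes a single invariant, "stock ≥ 0", and hypothetical replica names.

```python
class StockReplica:
    """Toy replica enforcing the invariant 'stock >= 0' without coordination.

    Each replica is granted a share of the global stock ("rights").
    Decrements that fit within the local share are provably safe and can be
    applied immediately; larger ones would require borrowing rights from
    peers, which is where coordination (and latency) comes back in.
    """

    def __init__(self, name: str, local_rights: int):
        self.name = name
        self.local_rights = local_rights  # units this replica may sell on its own

    def sell(self, quantity: int) -> bool:
        if quantity <= self.local_rights:
            self.local_rights -= quantity
            return True   # invariant preserved, no coordination needed
        return False      # would risk overselling: must coordinate first


# Hypothetical global stock of 10 units split between two data centres.
eu, us = StockReplica("eu", 6), StockReplica("us", 4)
print(eu.sell(5))  # True: safe using only local rights
print(us.sell(5))  # False: would need to borrow rights from 'eu'
```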


The cloud was about server-based communication and content delivery. But, as Peter van Roy put it in his recent post, “the edge is becoming the centre”. Edge computing promises fast response, high availability, and keeping private data where it belongs: in the users’ hands. With the growth of mobile, IoT, and 5G, more and more interesting things will be happening at the edge.

The managed cloud infrastructure continues to have a core role. Data centres have high-bandwidth storage, communication, and computation capabilities. Beyond centralised data centres, the infrastructure comprises decentralised points-of-presence, micro-datacentres in 5G towers, and edge gateways. …


The “edge” of the Internet consists of the numerous heterogeneous, loosely coupled nodes situated at the logical extreme of the Internet. The edge is increasing in relevance with respect to data centers. The current situation, in which data centers are effectively in control of the Internet, is becoming unstable. The Internet is evolving toward a new equilibrium in which the role of the edge will be much more important. One could say that “the edge is becoming the center”. Here are five points that illustrate this trend:

  1. The edge is growing exponentially in data generation, data storage, and computation abilities, while the cloud (i.e., data centers) shows no such growth. Many studies (easy to find by Googling) show that the number of Internet of Things devices, which are part of the edge, has been growing exponentially for the last decade, and they predict that this will continue for the foreseeable future. While the number of mobile phones is already flattening out at several billion (basically one phone per person on Earth), the number of edge devices continues to grow and is projected to reach hundreds of billions or more. A keynote talk at NetFutures 2017 predicted more than 1000 devices per person by 2027; this may be optimistic, but the error is in the year, not in the number of devices! Within a few years the bulk of the Internet’s storage and computation ability will be at the edge, and data centers will become small relative to the edge. …


Both distributed aggregation and replication for high availability (yes, I am thinking of CRDTs) are techniques that can help tackle geo-replication, offline operation, and edge/fog computing. Distributed aggregation shares many properties with CRDT-style convergent replication, but they are not the same concept. I have witnessed this difficulty in separating the two concepts in many settings, and this prompted me to attempt a clarification.

The main difference is that in replication there is an abstraction of a single replicated state that can be updated at the multiple locations where a replica is present. This state is not owned by any given replica, but any replica can evolve it by applying operations that transform the shared state. This notion applies in both strong-consistency and high-availability settings; the difference is that in highly available replication the replicas are allowed to diverge and later reconcile. Another factor is that the operations that lead to state changes are often the result of the activity of an external user interacting with the system, e.g. adjusting the target room temperature up by 2 degrees. …
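A small, purely illustrative sketch (in Python; the class and helper names are mine, not from the post) may help contrast the two: a replicated counter that every replica can update and merge, versus an aggregation over readings that each node owns exclusively.

```python
class GCounter:
    """Replicated grow-only counter (a simple state-based CRDT).

    Every replica increments only its own entry of the map; merge takes the
    per-replica maximum, so replicas converge to the same state regardless
    of the order in which they exchange it.
    """

    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.counts = {replica_id: 0}

    def increment(self, n: int = 1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def merge(self, other: "GCounter"):
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

    def value(self) -> int:
        return sum(self.counts.values())


def aggregate_max(readings: dict) -> float:
    """Distributed aggregation, in contrast: each node owns its own reading;
    there is no shared replicated state, only a summary computed over all
    readings (here, the maximum temperature reported by any sensor)."""
    return max(readings.values())


# Replication: two replicas of one logical counter converge after merging.
a, b = GCounter("a"), GCounter("b")
a.increment(3); b.increment(2)
a.merge(b); b.merge(a)
assert a.value() == b.value() == 5

# Aggregation: per-node data is summarised, never merged into a shared state.
print(aggregate_max({"sensor-1": 19.5, "sensor-2": 21.0}))  # 21.0
```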
