TL;DR: true decentralization is harder than you think.
The word organization simply means that things have come together in some manner other than randomly.
Organization means that people or things have arranged themselves to interact in one way or another. Marriage is an organization. Learning is an organization; the sun and the moon are organizations. I’m writing this commentary and you’re reading it: that is the organization we’ve agreed on, and without it the exchange we’re engaged in would not be possible. Webster’s dictionary defines organization as “the structure or arrangement of related or connected items”, for example the spatial organization of cells.
The most fundamental form of organization is language, and language is made of words. Take the word “order” as an example. Webster’s dictionary says that “order” means “the arrangement or disposition of people or things in relation to each other according to a particular sequence, pattern, or method”, for example when someone files a stack of business cards in alphabetical order. So words are made of other words, and if we check the dictionary definitions of those words, we find that they too are made of other words, ad infinitum. In practice, there is never a point at which a word stops depending on other words. If you doubt this, try to find a word whose meaning does not depend on other words.
If you are staring at a blank wall, it looks like a blank wall, a thing that exists in itself as a blank wall. But if you look carefully enough, there are countless things on the surface of the wall: dust, and things smaller than that. And if you break the wall down, you can see that it is made of other things. The case with things turns out to be the same as with words: they are always made of other things and can never, under any circumstance, stand independently of everything else. Just as the word camera is made of other words, a camera is made of other things.
Depending on how you look at a thing, for example whether or not you are using a high-powered microscope, you will see different things that it is made of. Imagine waking up from a year-long coma with total amnesia: you remember nothing, no language, nothing at all. The lights are bright and you have no recollection of anything. In that situation the blank white wall may seem pretty random.
Organization is very important; without its many forms we cannot make sense of anything. And as a by-product of organization, a degree of centralization seems to necessarily follow.
Decentralization vs. centralization is mathematically the same problem as randomness vs. order.
Let’s say we have 100 integers, and we say they are 100 random integers, or 100 randomly picked integers. Guessing which integers those 100 may be is impossible. Because there are infinitely many integers, it is not even possible to know the odds.
If we change this a little, so that you now know the integers are all less than one hundred billion, it is still very hard, but no longer impossible. We can also now very precisely figure out the odds of the guesses we need to make.
If we do the same with numbers below 1,000, we could get quite a few right, and we know we have pretty good odds of doing so. Guessing all of them would still be hard.
To conduct this experiment, we’ve formed an “organization”. An organization with one very simple function: guess sets of randomly picked numbers from given ranges as accurately as possible. The organization also has a very simple structure, and its outcomes (how good the guess can be) depend only on the extent of randomness presented in the challenge. The relation between the total universe (N) and the sample (n) is the sole determinant of the degree of randomness in the problem. Once the ratio n/N reaches 1, meaning that in the case of guessing 100 numbers the range is exactly a sequence of 100 integers, the randomness is gone and there is certainty about the answer.
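The odds in this guessing game can be sketched in a few lines of Python. This is a minimal illustration under one simplifying assumption: the challenge is to guess the entire set of 100 distinct integers in a single attempt, with order ignored.

```python
from math import comb

def odds_against(N, n):
    """One-in-how-many odds of guessing, in a single attempt, the
    entire set of n distinct integers drawn from the range 1..N."""
    return comb(N, n)

# As the universe N shrinks toward the sample n, certainty rises.
for N in (10**9, 1000, 100):
    print(f"N={N}: 1 in a {len(str(odds_against(N, 100)))}-digit number")
```

When N equals n, the odds collapse to 1 in 1: the certainty the experiment arrives at.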
What we call randomization is actually based on taking a range of numbers and then picking one or more numbers out of it by some means. Both the “range” and the “picking” constitute a form of ordering (reduced entropy) away from randomness. This means that, both philosophically and mathematically, the process of “randomization” necessarily results in the opposite of randomness, i.e. order.
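You can see this directly in how computers “randomize”. A short Python sketch: seeding the generator exposes the underlying order (the variable names are just illustrative).

```python
import random

# "Randomization" in practice: take an ordered range (0..999), then
# pick from it by a deterministic procedure. Seeding the generator
# makes the hidden order explicit: same seed, same "random" picks.
rng_a = random.Random(42)
rng_b = random.Random(42)

picks_a = [rng_a.randrange(1000) for _ in range(5)]
picks_b = [rng_b.randrange(1000) for _ in range(5)]

assert picks_a == picks_b  # pure order masquerading as randomness
```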
In short summary: the more order there is in an organization, the more likely the organization is to achieve its goals, simply because it has a greater ability to predict outcomes. This is why organizations tend to gravitate towards increased order one way or another. It is also why individuals tend to form organizations in the first place.
The Secret History of Mathematics
Mathematics is an example of order and organization. So does using arithmetic create centralization in the world?
The origins of the arithmetic method are found in the marketplaces of modern-day Syria some 5,000 years ago. The base-60 system was devised specifically to facilitate trade. What kind of effect did introducing such a system have on the culture back then? Today we have far surpassed simple arithmetic operations. We have tools like regression modeling, which was famously devised for eugenics, the field of “science” studying the improvement of the human species through genetic manipulation.
The most startling example of consequences that flowed, in some way, from the original introduction of formal arithmetic is RAND’s Games of Strategy, where some of the leading thinkers of the time came together to devise a “winning” nuclear-era strategy for the US against the “Soviet threat”. Based on very simple mathematics, RAND’s mathematicians found that paranoia was the key to unlocking positive outcomes under difficult conditions. They suggested that by assuming the worst about the way others would behave, the resulting outcomes would be “optimal” against the set goals.
More than any other single theory, over the last 50 years Game Theory has helped to explain and influence behaviour in virtually every field of commerce and science. Bonus schemes, performance reviews, national healthcare, and so many other things influencing everyone’s lives today rely heavily on game theory models. These are all examples of non-cooperative games, referred to in Game Theory as “zero-sum” games: in short, games in which one party can win only at the expense of others. In the context of Game Theory, events are reduced to games, and decision makers to participants in those games.
Zero-sum games are psychologically difficult for the participants. There is the idea that you have to win, and because you know winning takes place at the expense of others, there is suspicion towards other participants. This is where participants become opponents, and the game becomes non-cooperative. When we evaluate these basic points, it is clear that under the cognitive conditions present in non-cooperative games, it is very hard to make decisions other than those focused on meeting individual short-term goals.
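A toy payoff matrix makes the “assume the worst” logic concrete. This is a hypothetical two-player zero-sum game with made-up numbers, not any specific RAND model:

```python
# Row player's payoffs; the column player receives the negation, so
# every cell sums to zero: one side's gain is the other's loss.
payoffs = [
    [3, -1],
    [0,  2],
]

# Maximin: assume the opponent inflicts the worst case on you,
# then pick the row whose worst case is least bad.
worst_cases = [min(row) for row in payoffs]
maximin_row = max(range(len(payoffs)), key=lambda i: worst_cases[i])
print(worst_cases, maximin_row)
```

Here the worst cases are -1 and 0, so paranoia picks the second row: not the row with the biggest possible win, but the one with the least bad guaranteed outcome.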
Right now industries, governments, people, and too often even NGOs are trapped in a cycle of conduct that aims at short-term gain, even to the detriment of others. As a result, the way our computer algorithms work and the way our network topology is structured is more or less based on zero-sum game theory.
In contrast, with a cooperative (positive-sum) mental process, cognition is based on the idea that it’s ok to lose. In other words, the participant accepts the possibility of a loss as one acceptable outcome. This should not be too hard, especially given that, retrospectively speaking, we are ok with not having won every time. The difference is that when our intention is based on being ok with losing, the mental process can produce cooperative ideas and behaviours. Without the correct intention, it is very hard or impossible for the practitioner to maintain proper conduct.
The Perils of (pseudo) Decentralization
In number theory there is something referred to as “pseudo-randomness”. The “pseudo” is there because true randomness turns out to be impossibly hard to produce.
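A classic example is the linear congruential generator: each “random” number is computed deterministically from the previous one. The sketch below uses the well-known Numerical Recipes parameters; any other common parameter set would do.

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: x' = (a*x + c) mod m.
    Every "random" output is fully determined by the last one --
    pure order wearing the mask of randomness."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen_a, gen_b = lcg(7), lcg(7)
assert [next(gen_a) for _ in range(5)] == [next(gen_b) for _ in range(5)]
```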
In the light of what we have discussed here, it makes complete sense to argue that randomness is impossible to produce, as it is the antithesis of the very idea of production. Any form of production requires organization, and when organization is added to a given set (of numbers), the set in question necessarily loses part of its randomness. Maybe this same distinction is important in terms of decentralization?
It seems that the only way for something decentralized to manifest, without any ‘pseudoness’, is to just let things happen without any intervention at all. In the words of the I-Ching, the ancient Chinese book of wisdom, we have to “let nature take its way”. If a group of people in a highly isolated environment all just let things happen without any intervention, that in itself seems to be the only way for something decentralized to manifest. Even that is wrong, because the truly decentralized state is the state we don’t even observe. This relates to quantum entanglement.
With effort, the best we can achieve is a high degree of pseudo decentralization. Is this bad news? I don’t think so.
Take the African internet routing table, for example. A few years ago the ASNs were almost 100% owned by Orange in French-speaking areas and by Tata in English-speaking areas. If, instead of being centralized outside of Africa, the ASNs were centralized inside Africa, that would seem like great progress. Another example is how Facebook is courting the bottom of the pyramid with its internet.org “outreach” program, and how India is not buying into it, as they clearly see the threat of outsourced centralization. This could make a huge difference in how internet equality develops in India long into the future. An example everyone should be able to relate to: instead of buying the products we’re used to and feel comfortable with, we could actively find ways to vote with our money in support of local production over highly centralized mass production by regional and global entities.
A lot of the most hyped pseudo-decentralization efforts manifest in a form that would be better characterized as recentralization. It is important to understand clearly that, because absolute decentralization is not available for us to produce, it will always be a question of the degree of recentralization we can avoid in our efforts to decentralize something. The simplest way to understand this is that any change away from centralization constitutes decentralization.
There is definitely room for big, audacious decentralization research and innovation. Larry Roberts, the principal scientist at ARPA in the 1960s and the father of the decentralized network topology, told how J.C.R. Licklider came to his office one day and told him about the intergalactic network he had been thinking about. Describing the function of such a network, not unlike the web we have today, Lick said he had no idea how to build it, but thought it was important for Larry to get ARPA involved in building it. Based on Lick’s vision, and his seminal 1960 paper Man-Computer Symbiosis, a few years later Doug Engelbart showed in his “mother of all demos” a personal connected workspace that still looks current today. We now live in a world very much like the one envisioned by Lick, Larry Roberts and Doug Engelbart, together with others, over 50 years ago. Frankly speaking, it seems that we are lacking in vision compared to our 1950s and 1960s computer science research counterparts. I think we might even be outright lost.
Going back to the original decentralized network design: why did things turn out so centralized? What can be changed in the original design in terms of topology and other fundamentals? Their money came from the US military; how much did that ultimately impact the design? Context is very important in technology innovation.
A fork (for eating) is a great example of technology. So is the way fire is used to heat older houses, and electricity newer ones.
If I give you a fork and no spoon to eat a bowl of watery soup, the fork is not very useful. If you live in a tropical location, you would probably use a fireplace for reasons other than heating, and electric heating not at all; a single glass of clean water would be far more useful than electric heating. If we introduce forks to people who only eat watery soups, it will not be a successful technology innovation. Without appropriate context for where an innovation is introduced, it has no relevance, and therefore it will be adopted poorly or not at all.
Impact is a result that follows from both function and context. Consider limited internet access, where a community’s initial concern would not be the latest in modern internet technology (e.g. Snapchat) but basic functions like messaging, weather, a marketplace, and simple self-publishing. When I’ve talked about various features with people in remote Himalayan locations, for example a town of 30,000 people with many telecommunications problems, being able to make phone calls and send messages for free (using SIPS) probably got more “votes” than all the other features combined. In the same area, in a stand-alone installation in a school, the call and messaging feature would have no value at all. Sometimes a given feature might seem obvious, but the way it should be implemented varies greatly depending on the community. Maps, for example, are arguably useful for anyone; but while most map use on the mainstream internet is tactical day-to-day navigation, most map use in rural settings is strategic (planning a trip, etc.).
One of the hardest lessons for technology innovators to learn seems to be the difference between the “cutting” and “bleeding” edges. Where the first is mostly focused on self-gratification, both sensual (fame, etc.) and material (money, etc.), the second is solely concerned with improving society. Unfortunately, many innovators who have a genuine will to improve society do not clearly understand this point and end up making contributions that are more cutting-edge than anything else.
One example of context is how decentralization solutions can be combined to create what could be referred to as super-decentralization.
To learn more about the intersection of autonomous machines and decentralization, I highly recommend reading Daniel Suarez’s two book series Daemon and Freedom.
Searching for Very High Degree of Pseudo Decentralization
Bitcoin is currently the best example of some of the known risks associated with pseudo decentralization.
Because people have limited cognitive energy, time and other resources, there can only be so many “decentralized” alternative currencies that the masses can eventually adopt. If those currencies are ones with a low level of pseudo decentralization that merely seemed great initially against the stark contrast of the old monetary system, then that is not a good outcome for the world by any means.
Mindful of the challenges, I’m very interested in finding ways to create organizations that manifest very high degrees of pseudo decentralization. Maybe because I’ve “played” with computers since ’82, I’m looking for such organizations in computer networks. Needless to say, networked computing environments have become a standard for early scientific experimentation, and with the help of the web browser such an experiment is never far from the rapidly growing 3-billion-strong internet population of the world. If it is indeed possible to create something like that, i.e. no-hierarchy networked computer systems, at the present time I have no idea how the theoretical benefits of such a system would translate into positive outcomes for people and society.
Such a high-degree pseudo-decentralized system must meet some seemingly simple, but mathematically very hard to construct, requirements:
- 100% void of any hierarchy
- 100% void of any form of discrimination
- 100% transparent
Because it is a computer system, it has to facilitate always-on functioning with minimal technological (including energy) overhead, and at all times remain sufficiently redundant. It has to be secure, and the user’s privacy has to be guaranteed. I don’t think 100% redundancy, as is the case with bitcoin, is necessarily a good idea for decentralization. It is just the simplest and dumbest way to make it appear as though there is decentralization. Contrary to the hype, proof-of-work based blockchain seems to be the most simplistic way possible to tackle the problem of decentralization: if a monkey came up with a way to do it, bitcoin-style blockchain is what it would come up with. Equality is not about having an exact replica of what everyone else has, but about having equal access, together with everyone, to everything. 100% redundancy, i.e. the last node standing still has a full record, is the dumbest way to “solve” this problem.
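The gap between full replication and merely sufficient redundancy is easy to put in numbers. This is a back-of-the-envelope sketch assuming an idealized k-of-n erasure code; the function names are mine, not from any particular system.

```python
def replication_overhead(nodes):
    """Full replication, bitcoin-style: every node stores everything."""
    return nodes  # total stored = nodes x the data size

def erasure_overhead(n, k):
    """k-of-n erasure coding: any k of n fragments reconstruct the data."""
    return n / k  # total stored = n fragments, each 1/k of the data size

print(replication_overhead(10_000))  # full replication across 10,000 nodes
print(erasure_overhead(30, 10))      # 3x overhead, survives losing 20 fragments
```

Full replication across 10,000 nodes stores the data 10,000 times over; a 10-of-30 scheme stores it 3 times over while still tolerating the loss of 20 fragments. Equal access does not require identical copies.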
I hope that this article helps those working in the field to look harder, and to look deeper, while being (painfully) aware of the limitations we have in solving this problem.
I wish all of you the best of luck.
In case you want to continue the conversation, I suggest doing it here.
If you liked the post, why not hit the share button.
Or connect on twitter @mikkokotila