The Decentralised Digital World of the Future

Toby Simpson
Mar 29, 2019 · 13 min read


I wonder what sort of world is through the door? Well, let’s go and see.

If there is one thing that science fiction has taught us, it’s that future digital worlds are terrifying places where human beings go to waste away in some kind of hedonistic virtual playground. But that is because we always see them from a human perspective: they are places we go, they may permit “magic” (in that you can do things that the real world disallows, such as float, fly and teleport), but they are essentially three-dimensional, a metre is a metre, and if we see digital entities they are robots, droids and other artificial constructs that inhabit that space along with us.

The Fetch.AI digital world is inside out — a complete opposite to “human first”. Instead, we present a world that is “machine first”, optimised so that machines can get things done on our behalf, which in turn better serves the humans who need big, complex problems solved. In this kind of space we can afford to bend reality much further: we do not need to restrict ourselves to the obvious magic and special powers, a metre does not have to be a metre, and we can show a world in many more than three dimensions. This is a space that can be explored and navigated semantically, geographically, or by economic features.

For the digital inhabitants of the Fetch.AI world, things that are close are not close because they are near, they are close because they are useful. We call our digital inhabitants Autonomous Economic Agents, or just “agents” for short, and we have created a number of technological building blocks designed to deliver decentralised navigation. Here are just three ways in which agents can navigate and make sense of their space in order to find what they want or deliver what they have:

  1. N-dimensional spatial exploration. The nodes on the Fetch.AI network are connected together on more than one layer. These layers provide an underlying geography of the network which allows agents to move from node to node to end up geographically where they wish to be. These layers also permit navigation by infrastructure decision points such as airports.
  2. Semantic Teleportation and Vision. Fetch.AI uses learning to dynamically assign nodes on the network to handle specific subjects. The models that govern this are certified by the underlying permanent ledger. Agents can use this to position themselves on nodes that cover related or relevant subject areas.
  3. Climbing or descending artificial chemical gradients. Similar to some path-finding algorithms, nodes are able to broadcast artificial “chemicals” that decay the further they get from their source, delivering gradients that can be climbed or descended to move towards or away from specific things. These are linked to semantic information, service availability (prediction markets, optimisation or planning functionality, etc.), agent population and more.

None of these navigational methods has to be used in isolation. Agents can combine them in different ways in order to position themselves effectively on the network and find the audience they need, whether to buy from or to sell to. Collectively, we call the interface to this digital world the Open Economic Framework, or OEF for short. It is the OEF that provides the gateway through which agents connect to, explore, do business in and disconnect from the Fetch.AI environment. It is a unique space that would look strange and alien to humans, but it presents a highly optimised, machine-readable home for digital entities, one that adapts in real time so that they are able to work effectively, alone or together.

In this article, we look at the three primary navigation methods provided to agents and touch on the exciting technologies that are used to deliver them.

1. N-dimensional spatial exploration

Care of, and thanks to, the awesome Threepwood at b3ta

Fetch.AI connects nodes together on several layers. Traditionally, nodes would connect to other nodes in a network based on network discovery, performance and other factors. There is no defined geography to this: whilst you can build an image of the network by looking at the various connections, it is not possible to walk north, or towards Canada, on such a structure in any meaningful way. Likewise, you can’t stand at one node and capture other nodes that sit in a cone bearing south from your location out to a range of 100km. When it comes to a network that is enabling agent discovery, these concepts become important. Many of the actions we take in life relate to position and direction and Fetch’s n-dimensional spatial organisation makes this possible.

Fetch’s node connections are layered. As well as the typical network based connections, there are also spatial ones based on declared location. It is possible to walk around the network in order to find nodes that are closest to a specific location. This means that if you’re in London and you wish to find agents that are in New York, you can take the minimum number of hops on a decentralised peer-to-peer network to get there. It also means that if you’re setting up a node on the network, you’re able to position yourself where there is agent demand: perhaps a second node in London, or one that focusses on the Heathrow Airport area, encouraging agents that represent value (or desire it) in that area to move in that direction.
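To make the geographic layer concrete, here is a minimal Python sketch, illustrative only and not Fetch.AI code: nodes declare a position and a list of geographic peers, and a walk simply hops to whichever neighbour is closest to the target until no neighbour gets it any closer. The node names, coordinates and peer lists are made up for the example.

```python
import math

# Hypothetical sketch: nodes declare a location and keep per-layer peer lists.
# A greedy "walk" hops to whichever geographic neighbour is closest to the target.
NODES = {
    "london-1":   {"pos": (51.51, -0.13), "geo_peers": ["london-2", "paris-1"]},
    "london-2":   {"pos": (51.47, -0.45), "geo_peers": ["london-1", "new-york-1"]},
    "paris-1":    {"pos": (48.86,  2.35), "geo_peers": ["london-1"]},
    "new-york-1": {"pos": (40.71, -74.0), "geo_peers": ["london-2"]},
}

def distance(a, b):
    return math.dist(a, b)  # crude planar distance, fine for a sketch

def walk_towards(start, target_pos):
    """Hop from node to node, always picking the neighbour nearest the target."""
    current = start
    while True:
        neighbours = NODES[current]["geo_peers"]
        best = min(neighbours, key=lambda n: distance(NODES[n]["pos"], target_pos))
        if distance(NODES[best]["pos"], target_pos) >= distance(NODES[current]["pos"], target_pos):
            return current  # no neighbour gets us closer: we have arrived
        current = best

print(walk_towards("paris-1", (40.71, -74.0)))  # -> new-york-1, via London
```

A real network would use proper great-circle distances, more layers and far better routing, but the “always hop closer” walk is the essence of using the spatial layer to end up near New York.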

A Fetch.AI network of black and white agents navigating the UK’s cities (red) and airports (yellow), without revealing their own location or their goal to the infrastructure nodes. Solid lines are the agents’ connectivity, dashed lines are their “secret” goal locations: this is real, it works, and it’s a screenshot from running code.

Such a method of connection is dangerous if it is not done correctly. Indeed, if it is the primary method of network connection then it is easy to “island” vast areas of the network — cutting them out and isolating them without the individuals realising it. This is far harder in an n-dimensional world as there are several entirely independent layers of connection for entirely different purposes. In the case of Fetch.AI, we define network-based connections, geographic ones and infrastructure ones but there is absolutely no reason why there might not be many more of these, each providing a filter for movement to make one’s journey even more efficient.

2. Semantic Teleportation and Vision

Buckle up, folks, this is going to be one hell of a ride!

Fetch.AI offers agents the opportunity to explore and view the world semantically, or, to put it another way, by content. There is a family of AI technologies that vectorise data: they take input data, such as words, text and images, and turn it into a mathematical set of coordinates. The result is referred to as an embedding, and in this new, reduced space, things that are close together tend to be related. One of the common examples of this is document-to-vector (doc2vec). Pieces of text can be fed into the system and you can see which documents are similar, or related, simply by looking at the results. In this way, one can determine similarity and relationship without knowing the details: you can tell that two documents are semantically similar without having to know that the category is animals, or history. This is an extraordinarily powerful concept as it requires no prior subject knowledge in order to find related material: you take what you’re interested in, vectorise it, gather everything within range and voilà, you have the things that are closely matched. In the same way, adapted variants of things like code2vec can be used in architectures like Fetch.AI to find similar smart contracts. You can, for example, take a token contract and find all similar token contracts on the ledger¹.
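As a toy illustration of “close in embedding space means related” (hand-rolled for this article, nothing to do with Fetch.AI’s actual models), imagine the vectorisation step has already happened and each document is just a small tuple of coordinates:

```python
# Toy illustration: pretend some embedding step has already turned documents
# into vectors; closeness then reveals relatedness without anyone labelling
# the subject "animals" or "history".
import math

EMBEDDINGS = {
    "zebra care guide":        (0.9, 0.8, 0.1),
    "horse riding basics":     (0.8, 0.9, 0.2),
    "history of the railways": (0.1, 0.2, 0.9),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

query = (0.85, 0.85, 0.15)  # vectorised version of whatever you're interested in
for doc, vec in sorted(EMBEDDINGS.items(), key=lambda kv: -cosine(query, kv[1])):
    print(f"{cosine(query, vec):.3f}  {doc}")
```

The two animal-ish documents come out on top without the word “animal” appearing anywhere in the code, which is the whole point.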

This technology is not restricted to just text. It can be used for images, too. You can take an image of a number, feed it into such a system and figure out what it is just by where it positions itself: “yes, this is probably a 4, but it does look a touch like a 9.”

Well, that’s great: I can pop an image of a number in and find out what it is likely to be, and also which numbers I am close to in style and shape². But that isn’t all it does. This kind of technology means that instead of having a deep neural network that can tell you if an image is a dog, you can have a system that positions your drawing, picture or model of a mystery animal with animals that are similar. You can then capture a circle around your position to figure out what your image is likely to be and what it is similar to. So that poorly drawn zebra? Yes, it’s probably a zebra, but it is certainly very much like a horse, donkey or pony, a bit like a giraffe, and so on. Or, if you start at zebra and look towards, say, monkey, what animals do you intersect? Effectively, we’ve built a really cool animal approximation machine that’s spectacularly good at telling you what yours is, and when it misses, the ones close by will almost certainly be correct. Now extend this further: do the same thing with subject areas. Transportation. Hospitality. Healthcare. Interested in the markets that might be relevant to you? Look around you in semantic space and see. Interested in the business opportunities from transportation towards healthcare? Stand at one point, look towards another, see what you intersect. There is absolutely no reason why such a system needs to be restricted to numbers, animals or any other specific subject: you can pump it with information and end up in a reduced-dimensionality world where you are near to things that are similar to you. And this is what we call our semantic dimension.
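Those two tricks, capturing a circle around your position and standing at one point while looking towards another, reduce to very simple geometry once everything lives in the same embedding space. A rough sketch, with entirely made-up 2D coordinates standing in for real embeddings:

```python
# Sketch only: given points already placed in a shared embedding space,
# "capture a circle" is a radius query, and "look from zebra towards monkey"
# is sampling points along the line between their two embeddings.
import math

ANIMALS = {
    "zebra":   (1.0, 1.0), "horse": (1.1, 0.9), "donkey": (0.9, 1.1),
    "giraffe": (1.4, 1.3), "monkey": (3.0, 3.0), "lemur":  (2.6, 2.8),
}

def within(radius, centre):
    return [a for a, p in ANIMALS.items() if math.dist(p, centre) <= radius]

def look_towards(src, dst, steps=5, radius=0.5):
    seen = []
    for i in range(steps + 1):
        t = i / steps
        point = tuple(s + t * (d - s) for s, d in zip(ANIMALS[src], ANIMALS[dst]))
        seen.extend(a for a in within(radius, point) if a not in seen)
    return seen

print(within(0.5, ANIMALS["zebra"]))    # the zebra's nearest lookalikes
print(look_towards("zebra", "monkey"))  # what you intersect along the way
```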

In a decentralised world, though, there’s a catch. If you distribute the semantic data across the entire network, you have a lot of data to synchronise. That’s all traffic, and meanwhile there’s transactional data, agents moving about and doing their thing, and all the network management using the vast majority of those pipes for network-critical operations. Synchronising a giant neural network and all its associated gubbins is going to cost. On Fetch.AI, we do this by assigning captured areas of the vectorised space to individual OEF nodes. Animals, and associated subjects, may be on OEF node X and transportation on node Y. When you generate your embedding, the OEF’s shared advanced semantic index will tell you which nodes you should be talking to: probably nodes X, W and T. You can then connect directly to these in order to capture the agents you’re interested in. This is an entirely dynamic, constantly shifting process, as the learning models that do this update continuously as the requirements of the network’s users change. This process of establishing where you sit in semantic space and then leaping to the node you should be at is a concept we call Semantic Teleportation. It can, of course, be combined with other network exploration methods to refine a search further (e.g., in the area of Paris, with the subject area of healthcare).
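A minimal sketch of how that teleportation step might look from an agent’s point of view, assuming, purely for illustration, that each OEF node is responsible for a region of the embedding space summarised by a centroid; the node names and coordinates here are invented:

```python
# Hedged sketch of the "semantic teleportation" idea: regions of the embedding
# space are assigned to OEF nodes, so an agent's embedding tells it which
# node(s) to connect to. Node names and centroids are made up.
import math

NODE_CENTROIDS = {
    "oef-node-X": (1.0, 1.0),   # roughly "animals and related subjects"
    "oef-node-Y": (5.0, 0.5),   # roughly "transportation"
    "oef-node-W": (1.5, 1.8),
}

def teleport_targets(embedding, k=2):
    """Return the k OEF nodes whose regions sit closest to this embedding."""
    ranked = sorted(NODE_CENTROIDS, key=lambda n: math.dist(NODE_CENTROIDS[n], embedding))
    return ranked[:k]

my_embedding = (1.2, 1.4)             # produced locally from the agent's own data
print(teleport_targets(my_embedding))  # -> ['oef-node-X', 'oef-node-W']
```

Note that the agent only hands over the coordinates, not the data that produced them, which is the privacy point made below.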

Fetch.AI performs this using partially or fully populated data models. It means that if you’re looking for a certain kind of weather data, but you’re not sure how it will have been advertised, you will be able to teleport yourself directly to a field full of people who are most likely to be providing information similar to what you’re after. As an advertiser of data, with value to deliver, this is also particularly exciting because you can create this position in semantic space based on the actual data you are delivering and not just the model that describes it. It also affords agents the ability to explore by subject: teleport to an approximate area and stroll around looking for other opportunities that might be relevant. But here’s the kicker: you’re doing this without revealing the data itself. Now clearly, if the OEF (i.e., the actual node itself) is performing this work and calculating the semantic position, then it is doing so with the data, and this, particularly for a naughty node, is a potential privacy risk. Whilst the node can provide this service, in most scenarios we see the agent itself performing this calculation to position itself in semantic space and delivering what we call “the dimensional reduction” directly to the node: i.e., “put me here, but I won’t tell you why”.

3. Artificial Chemical Gradients

Little program I knocked out the other night to show single-layer diffuse navigation. Drop us a line (info at fetch.ai, subject “Toby’s program thingie”) if you have a Mac and fancy a copy — it’s a small, native C++/Objective C app, no warranty, no polish, your own risk, blah blah blah…

In computer games, one of the many ways in which we achieve path-finding (how to find the best way from A to B in a complex environment) is through a mechanism where the desired destination acts as an emitter of a large number. This is like a tap turned on with water pouring out of it: the further away you get from the tap, the less water there is. You can see this literally in Minecraft, if you’re curious. Meanwhile, some of the surrounding areas are more porous than others and absorb the water faster, whereas others do not and the water goes further. If you do this with numbers, you start with a big one and the further you move from the tap, the lower the number gets. In its simplest form, this is implemented by dividing the world into a grid, where each square calculates its current number by looking at surrounding squares. In the absence of something topping them up, the numbers decay to zero, but somewhere (either as a stationary or moving target) there are one or more emitters with a nice large number attached to them, pouring more into the network.

The net result of this simple mechanism, when it is gradually iterated out, is that if you start anywhere in the grid and move towards an increasing number, then no matter what you do, each step you take puts you closer to where you wish to be — guaranteed. The larger the increase in number, the better the route you’re taking, and by looking at adjacent grid squares you can figure out how secure, or stable, your route is: the thinner the path, the more likely a blockage is. It works in real-time, too, because the emitters are attached to the destination and each square can change its permeability depending on whether it is suddenly passable or not. By treating each grid square as an autonomous cell (in a cellular-automata kind of way), the whole system self-corrects and scales wonderfully. With these recalculations taking place over time, you get some amazing outward behaviour, especially if, say, you’re trying to get a horde of Roman soldiers or a herd of stampeding buffalo across a bridge.
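For the curious, here is a tiny Python sketch of that mechanism (a toy, not the Fetch.AI implementation): an emitter keeps topping up one cell, every cell repeatedly takes a decayed look at its neighbours, and a walker gets to the emitter simply by stepping onto whichever adjacent cell holds the biggest number.

```python
# Sketch of the diffusion idea on a tiny grid: an emitter tops its cell up each
# tick, every cell averages its neighbours and decays a little, and a walker
# climbs the gradient by always stepping onto the largest neighbouring value.
W, H = 8, 6
DECAY = 0.85
EMITTER = (7, 5)          # the destination keeps pouring a big number in
grid = [[0.0] * W for _ in range(H)]

def neighbours(x, y):
    return [(nx, ny) for nx, ny in ((x-1, y), (x+1, y), (x, y-1), (x, y+1))
            if 0 <= nx < W and 0 <= ny < H]

def step_world():
    global grid
    new = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            ns = neighbours(x, y)
            new[y][x] = DECAY * max(grid[y][x], sum(grid[ny][nx] for nx, ny in ns) / len(ns))
    new[EMITTER[1]][EMITTER[0]] = 100.0   # the tap keeps running
    grid = new

def climb(pos):
    """Move to the adjacent cell with the highest value (towards the emitter)."""
    return max(neighbours(*pos), key=lambda p: grid[p[1]][p[0]])

for _ in range(50):
    step_world()

walker, path = (0, 0), [(0, 0)]
for _ in range(W * H):                    # safety bound for the sketch
    if walker == EMITTER:
        break
    walker = climb(walker)
    path.append(walker)
print(path)
```

In a fuller version each cell would carry its own permeability, so paths bend around blocked or “porous” regions, and moving the emitter simply re-forms the gradient on its own.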

This kind of path-finding is not restricted to grids, of course, as cells do not need to be connected in any specific way. They can, for example, look more like the road network between population centres. And this is where it gets interesting: if we treat each node on the peer-to-peer network as a cell, and give it some rules on how to update “its number” in the route-finding system, you get a novel way of navigating the network: you can walk towards or away from what you want, even if what you want is a moving target and cells vanish, appear or change their permeability with little or no notice.
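The same sketch translates to an arbitrary peer graph almost unchanged; a node only ever needs to look at its current neighbours, so links can appear and disappear between ticks. Again, this is a made-up topology and a deliberately simplified update rule, not anything Fetch.AI-specific:

```python
# Same idea off the grid: each node in an arbitrary peer graph updates its
# level from its current neighbours, so the gradient keeps pointing at the
# emitter even if links appear or disappear between ticks.
DECAY = 0.8
PEERS = {   # made-up topology; edit it between ticks and the gradient adapts
    "a": ["b"], "b": ["a", "c", "d"], "c": ["b", "e"], "d": ["b", "e"], "e": ["c", "d"],
}
EMITTER, STRENGTH = "e", 100.0
level = {n: 0.0 for n in PEERS}

def tick():
    global level
    new = {n: DECAY * max([level[p] for p in PEERS[n]] + [level[n]]) for n in PEERS}
    new[EMITTER] = STRENGTH
    level = new

for _ in range(10):
    tick()

node = "a"
while node != EMITTER:                     # climb the gradient towards "e"
    node = max(PEERS[node], key=level.get)
    print("hop to", node)
```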

So why do we refer to them as chemical gradients? One of the fields that we have experience with is using an artificial biochemistry as an analogue computer. Specifically, such technologies have been used to create interesting, consistent and believable behaviour in artificial characters in gaming environments, and also in agents for other purposes that are able to adapt to surprises. The outward appearance of such systems is a lot like chemical messaging in nature, and when coupled with other metaphors from biology, such as reactions, plus emitters and receptors to interface with the outside world, a powerful environment for intelligence is created. It is this kind of environment that we wish to provide to our agents.

For gaming on the Fetch.AI network this opens a bunch of interesting doors, as it makes “hide-and-seek”, with staking, really exciting (armed with some cool smart contracts, it’ll cost you to be found, and reward you handsomely to either find, or be found last). But games³ are not why it exists: it’s there to provide a third way of navigating the network, that can be combined with n-dimensional spatial exploration and semantic teleportation to make even the most complex journey look easy.

Agents are not restricted to exploring the digital world in these ways. Indeed, they do not even have to treat it as a world at all: they can merely connect, advertise what they have or want, and wait for suitors to be brought to them. For those that wish to actively explore the space around them looking for new markets and opportunities, Fetch.AI provides interaction methods for just that. These, coupled with the ability to see, search and filter by content with no requirement for the network to have prior knowledge of that content, are particularly interesting.

This is a new world. It’s one that provides machines with abilities that only movies can confer upon us mere mortals: imagine if you could teleport yourself magically into a room full of just the restaurants you like that have a perfect table available. Imagine if all you had to do was think of approximately what you wanted, and you’d suddenly be surrounded by relevant possibilities. Imagine if you could hop, skip and jump towards your destination, seeing interesting and relevant things along the way. Imagine if you could smell the optimal holiday and walk blindfolded right at it, touching car hire, hotel transfers and the perfect flights just by reaching out. And it is this, all of this, that Fetch.AI gives to its inhabitants: autonomous economic agents that couldn’t live in a more optimal world if you spent a month of Sundays trying to perfect the dream.

-

[1] — I hate to pop spoilers in, but you may find the next couple of months illuminating with regards to such things…

[2] — There are many applications in which this is used: if a card number or some other optical recognition fails (many of these types of numbers have magic check digits to establish errors), then such systems can be used to rapidly try the best-guess approach to auto-correction.

[3] — I’d be delighted to discuss Fetch.AI’s chemical gradients coupled with n-dimensional spatial organisation as a gaming environment, but there is a condition attached: it has to involve a bottle (or two) of decent red wine.
