Fetch.AI: Autonomous agents connected within an open economic framework on a novel DLT

Paradigm · Feb 25, 2019

Introduction

Fetch is a decentralised digital representation of the world in which autonomous software agents perform useful economic work. This means that they can perform tasks, such as delivering data or providing services, and are rewarded with a digital currency for their efforts — the Fetch Token. These agents can be thought of as digital entities: life-forms that are able to make decisions on their own behalf as well as on behalf of their stakeholders (individuals, private enterprises and governments for example).

Fetch’s digital world is exposed to agents via its Open Economic Framework (OEF) and is underpinned by unique smart ledger technology to deliver high performance, low cost transactions. The ledger delivers useful proof-of-work that builds market intelligence and trust over time — growing the value of the network as it is used. Fetch can be neatly interfaced to existing systems with minimal effort, allowing it to take advantage of the old economy whilst building the new: plug existing data into Fetch and watch markets spontaneously form from the bottom up.

Technology

The scaling of distributed ledger technologies (DLTs) has been the subject of intense innovation ever since blockchains became established as secure, reliable mediums for economic exchange. It is clear that for any ledger to be adopted as a future standard, it must be capable of scaling to accommodate the millions of transactions that arise from widespread deployment of IoT-enabled devices.

The origin of the limited throughput of Bitcoin and other conventional DLTs is the sequential organisation of blocks in a chain. This means that all full processing nodes must keep a copy of the ledger and that blocks must be distributed across the peer-to-peer network in their entirety. The principal novelty of the Fetch ledger is that it allows transactions to be distributed and processed in parallel to enable scaling of its throughput in proportion to the computational power of the most powerful processing nodes on the network.

Although the serial nature of blockchains limits their throughput, it is also crucial to one of their most important features. While most abuses of distributed payment systems can be prevented by cryptographic techniques, it is unclear how to prevent an attacker from modifying the transactions recorded on the blockchain. The serial ordering of blocks in a chain means that the famous double-spending attack, which essentially involves altering the temporal ordering of two conflicting events, is difficult to accomplish. To succeed in inserting a later conflicting event into the global consensus, the attacker must re-write the entire history of the ledger that has been recorded since the earlier event, which becomes more difficult as time progresses. To retain the security of Bitcoin, modern ledgers must therefore be able to ensure the strict and immutable temporal ordering of transactions that they record.

Ledgers based on directed acyclic graphs (DAGs) offer increased parallelism by allowing events, which can be individual transactions or transaction blocks, to be recorded with more general connectivity to existing events stored on the ledger. The advantage of these data structures is that they can scale to record an arbitrarily large number of events as they arrive asynchronously across a distributed network.

The disadvantage of using DAGs to record transactions is that they do not provide a clearly defined ordering of the events that they record. This makes them unsuitable as platforms for smart contracts, as it complicates detection of double-spending attacks and can lead to variability in the times that transactions take to reach finality. The Fetch ledger system uses a DAG as part of its consensus mechanism, but also maintains a strict ordering of transactions, thereby combining the advantages of blockchains and DAG-based ledger systems.

Double-spending attacks rely on submitting “conflicting” transactions; that is, transactions that attempt to read or modify the same resource at the same time. The presence of such a conflict is known as a data race condition. Double spending can be prevented by executing transactions in a strict, sequential order, thus ensuring that access to any given resource is strictly sequential. On a distributed system, this ordering of transactions must be identical across nodes that replicate the process: otherwise, the state of a resource can become inconsistent across nodes, which is the objective of a double-spending attack.
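To make the notion of a conflict concrete, here is a minimal Python sketch. It assumes that each transaction simply declares the set of resource addresses it reads or modifies; the real Fetch transaction format is of course richer than this:

```python
def conflicts(tx_a_resources, tx_b_resources):
    """Return True if the two transactions share a resource (a data race)."""
    return not tx_a_resources.isdisjoint(tx_b_resources)

# Two payments with no common sender or recipient can run in parallel:
assert not conflicts({"alice", "bob"}, {"carol", "dave"})
# Two payments spending from the same account must be ordered sequentially:
assert conflicts({"alice", "bob"}, {"alice", "eve"})
```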

The Fetch ledger relaxes the requirement on sequential execution by partitioning resources into mutually disjoint resource groups. Transactions that affect resources from different groups are then handled by separate resource lanes. Resource lanes are a novel component of the Fetch ledger architecture. A complete ordering of transactions is defined within, but not across, lanes.

The ledger enables transactions between resources belonging to different groups by entering each transaction that involves multiple resource groups into all of the appropriate resource lanes. This serves as a cross-lane synchronisation mechanism that is resolved by a novel block organisation algorithm. Transactions that do not belong to the same lane affect resources belonging to different groups, and thus can safely be executed in parallel. These features allow the ledger to scale its throughput to accommodate an arbitrary number of transactions.

Grouping of Resources into Lanes
The term resource is used to refer to any object that has a mutable state and a unique address. In the classical use case of monetary transactions, the resources hold integers that represent currency, analogous to a bank account balance. In any ledger, a transaction will affect at least two different resources. For accounting purposes, these would be the sender and recipient of the transaction. In this context, a pair of transactions that do not have a common sender or recipient cannot, by definition, arise from a double-spending attack and can therefore safely be executed at the same time (in parallel). This feature is exploited in the Fetch ledger by partitioning the set of all resources into mutually disjoint subsets, called resource groups.

The ledger enters transactions involving resources that are drawn exclusively from one of these groups into a novel architecture component that the Fetch team refers to as a resource lane (RL). The ledger system defines a strict time-ordering of the transactions belonging to any given lane. Cross-lane transactions, which involve resources from two or more lanes, are recorded in all of the relevant RLs. These cross-lane transactions enable monetary exchange between distinct resource groups and provide a mechanism for synchronising events between the RLs. A diagram of transactions organised into RLs is shown in Fig. 1. This diagram also shows compatible transactions (i.e. not involving the same resource groups) arranged into block slices.

Figure 1: Resource lane concept. Dashed horizontal lines represent RLs. Vertical cyan bars denote transactions that involve resources (magenta circles) from one or more lanes. Groups of compatible transactions are arranged into block slices, demarcated by vertical lines, and can be executed simultaneously. For example, in the first block slice, the transaction involving resource groups 1 and 2 can be executed at the same time as the transaction that involves groups 3 and 5. The bold vertical lines represent the putative boundaries of blocks that are to be entered into the blockchain. Each block contains a fixed number of slices, referred to as the slice number. The lane number, which specifies the other dimension of the block, doubles after the boundary of the second block, leading to a concomitant doubling of transaction throughput.
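The grouping and lane-assignment logic can be sketched as follows. The hash-based mapping from resource addresses to groups is an assumption made for illustration; the article does not specify how resources are partitioned:

```python
import hashlib

LANE_NUMBER = 4  # illustrative; the real lane number is a tunable network parameter

def resource_group(address):
    # Assumed mapping: hash the address into one of LANE_NUMBER disjoint groups.
    digest = hashlib.sha256(address.encode()).digest()
    return int.from_bytes(digest[:8], "big") % LANE_NUMBER

def lanes_for_transaction(resources):
    # A transaction is entered into every lane whose resource group it touches;
    # cross-lane transactions therefore appear in two or more lanes.
    return {resource_group(r) for r in resources}

print(lanes_for_transaction({"alice", "bob"}))  # one lane if the groups coincide, else two
```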

RLs serve a similar purpose to sharding in conventional databases, and reduce the minimum storage requirements on the smallest processing node on the network. An important difference, compared to conventional sharding schemes, is that a transaction may be entered into several RLs, depending on the resources it uses. An advantage of this design is that independent peer-to-peer networks can be created for each RL. This provides a means of scaling the ledger’s execution and transaction distribution rates, since the lane number, i.e. the number of RLs, can be adjusted according to the transaction load.

A strict temporal ordering of transactions, including simultaneous execution of non-conflicting events, is achieved by entering transactions into a novel block structure. These blocks are connected in series and the overall organisation constitutes a blockchain.

Architecture
We first introduce the block data structure used in the Fetch ledger, and then describe the software architecture that is run on Fetch ledger nodes.

Figure 2: Illustration of the Fetch Blockchain. The leaves of the Merkle tree each reference a list of transactions rather than a single transaction. These lists are referred to as block slices. For a block to be valid, it is required that no two transactions inside a given slice involve the same resource group.

Block Data Structure
The block data structure used in the Fetch ledger is similar in most respects to that used in Bitcoin and other blockchains. Blocks consist of a header and a body, where the body is a Merkle tree that references a number of transactions. The header contains the block hash and a hash pointer to the previous block, so that, collectively, the block headers form a cryptographically secured linked list (the blockchain). The block’s body is referenced from the header by the Merkle hash root. The only important way in which Fetch block headers differ from Bitcoin block headers is in the method used for representing and storing a proof of work. The only consequence on the block architecture is that the headers contain a “proof” field which holds a digital signature. Note that the Fetch ledger design is equally compatible with traditional proof-of-work and proof-of-stake protocols.

The novel aspects of the block data structure which are designed to make the ledger more scalable are found in the block body. Whereas the leaves of the Merkle tree used in a conventional blockchain each reference a single transaction, each leaf of the Merkle tree inside a Fetch ledger block references a list of transactions. The Fetch team refers to these transaction lists as block slices. This is illustrated in Figure 2.

In order for a block slice to be considered valid, it is required that no two transactions inside a given slice involve resources belonging to the same resource group. This guarantees that no race condition is present among the transactions inside the same slice, and it is therefore safe to execute these transactions in parallel.
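This validity rule is simple enough to express directly. In the sketch below, each transaction is represented only by the set of resource groups it touches; this is illustrative rather than Fetch’s actual implementation:

```python
def slice_is_valid(block_slice):
    """block_slice: list of per-transaction resource-group sets. Valid only if
    no resource group appears in more than one transaction of the slice."""
    seen = set()
    for groups in block_slice:
        if seen & groups:      # a group already touched earlier in this slice
            return False
        seen |= groups
    return True

# Transactions touching groups {1, 2} and {3, 5} may share a slice (cf. Fig. 1)...
assert slice_is_valid([{1, 2}, {3, 5}])
# ...but adding another transaction that touches group 2 makes the slice invalid.
assert not slice_is_valid([{1, 2}, {3, 5}, {2, 4}])
```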

Because no two transactions in the same block slice can affect the same resource group, the number of transactions in a slice can never exceed the number of resource groups. Each resource group is managed by a separate resource lane. The number of resource lanes is therefore necessarily the same as the number of resource groups. The Fetch team refers to this number as the lane number. The lane number constitutes one of two key parameters that can be adjusted to relieve different bottlenecks as the transaction load increases. The second key parameter is the number of block slices per block, which Fetch refers to as the slice number. All blocks are required to contain this fixed number of slices, and are otherwise considered invalid.

The block size is defined as the product of the lane number and the slice number. This corresponds to the maximum number of transactions that can be packed into a single block. This maximum is reached if each transaction in the block involves resources from only one group, and each slice contains one transaction per resource group. Increasing either one of the two key parameters increases the expected number of transactions per block, but they have differing effects on throughput. The slice number controls the number of transactions that can be placed inside a given block in a more deterministic manner than the lane number. This parameter can therefore be tuned to the rate at which blocks can be generated by the consensus mechanism and then synchronised across the network. The lane number controls the number of transactions that can be executed in parallel to maintain the state database. In summary, the slice number will be tuned to the block creation rate, while the lane number will be tuned to transaction volumes.
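A quick worked example, with parameter values assumed purely for illustration, makes the arithmetic concrete:

```python
lane_number = 16   # parallel execution width, tuned to transaction volume
slice_number = 4   # slices per block, tuned to the block creation rate

# Maximum transactions per block, reached only when every transaction is
# single-lane and each slice holds one transaction per resource group.
block_size = lane_number * slice_number
print(block_size)                      # 64

# Doubling the lane number (as in Fig. 1) doubles the ceiling to 128.
print(2 * lane_number * slice_number)  # 128
```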

Node Architecture
We will now describe how the different parts of the block data structure are stored and maintained across different components of the ledger node architecture. This architecture consists of three layers, illustrated in Figure 3. The lowest layer maintains the main chain, and, as such, is the layer responsible for maintaining temporal ordering. On top of this main chain layer, the storage and execution unit (SEU) maintains transactions and resource states. In this sense, it is the layer responsible for the “contents” of the ledger system. Finally, the wallet interface provides a means for submitting new transactions to the ledger system in order to make changes to resource states.

Main Chain
The main chain layer is made up of components of two different types. The blockchain component maintains the linked list of block headers. Each header contains a Merkle hash root which references the list of block slices. These block slices are stored by the remaining components of the main chain layer, which Fetch refers to as the slice keepers. It should be noted that the block slices contain hash pointers to transactions, and that it is the next layer of the architecture that is responsible for recording the transactions themselves.

The number of slice keepers is equal to the slice number. Each slice keeper is associated with a certain slice index. For any given slice index j, the slice keeper associated with this index maintains a time-ordered list of slices with index j, each belonging to a different block.

Figure 3: Architecture of a Fetch ledger node.

Each of the components of the main chain layer is connected to a separate network: the blockchain component is part of a network together with all the blockchain components of all the nodes in the Fetch ledger system. And, for each slice index j, the “slice j keeper” is part of a network involving all “slice j keepers” from all nodes. The components of each type synchronise their data across Fetch ledger nodes using these networks.

Each slice keeper maintains two data structures: one to store the slices, and one to keep track of their ordering. The first of these is called the slice store. This is a map with slice hashes as keys and the actual slices as values. The latter is called the slice sub-chain, and consists of a linked list of slice hashes. Slice keepers rely on the linked list of block headers, stored by the blockchain component, to build their respective slice sub-chains.
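A minimal sketch of a slice keeper’s two data structures might look like the following; the class and field names are illustrative, only their roles come from the text:

```python
from dataclasses import dataclass, field

@dataclass
class SliceKeeper:
    slice_index: int
    slice_store: dict = field(default_factory=dict)    # slice hash -> list of tx hashes
    slice_subchain: list = field(default_factory=list) # slice hashes in block order

    def record(self, slice_hash, tx_hashes):
        # The ordering follows the linked list of block headers maintained
        # by the blockchain component.
        self.slice_store[slice_hash] = tx_hashes
        self.slice_subchain.append(slice_hash)
```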

Storage and Execution Unit
This layer manages the state of the resources by tracking and executing transactions. It is made up of components of two different types: resource lanes and contract executors.

Each resource lane maintains three data structures, which Fetch calls the state shard, the transaction store, and the sidechain. Recall that each resource lane maintains the state of one of the resource groups. In fact, each resource lane stores the node’s local copy of the resource group whose state it maintains. This is the state shard: a key-value store with resource addresses as keys and the resources’ states as values.

The remaining two data structures are used to maintain the history of transactions affecting the resources in the state shard. The transaction store holds the transactions, indexed by their respective hashes. The sidechain is a linked list of transaction hashes: this is where the ordering of transactions belonging to the lane is defined. Resource lanes each build their respective sidechains based on the ordering of the block slices, which is defined by the main chain layer. Each resource lane with a given lane ID is part of a separate network that connects all resource lanes with this same ID, each running on a separate ledger node. The lanes use these networks to synchronise transactions and their ordering across nodes.
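The three lane data structures can be sketched in the same style; again, the names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class ResourceLane:
    lane_id: int
    state_shard: dict = field(default_factory=dict)        # resource address -> state
    transaction_store: dict = field(default_factory=dict)  # tx hash -> transaction
    sidechain: list = field(default_factory=list)          # ordered tx hashes for this lane

    def record_transaction(self, tx_hash, tx):
        # The ordering itself derives from the block slices defined by the
        # main chain layer; here we simply append.
        self.transaction_store[tx_hash] = tx
        self.sidechain.append(tx_hash)
```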

Transactions can be thought of as instructions to make specific changes to resources held in the state shards. These instructions are executed by the contract executors. Since only transactions belonging to different lanes can be executed in parallel, the number of contract executors is no higher than the lane number.

The executors work in parallel to execute all transactions referenced from a given block slice. The block slices themselves are processed sequentially, according to the complete ordering of slices defined by blocks’ Merkle trees and by the linked list of block headers. When a given slice is next in line, each contract executor pulls a copy of this slice from the corresponding slice keeper in the main chain layer. A simple index-based scheme determines which executor will execute which transaction. As the number of transactions referenced from a slice can be lower than the lane number (due to the presence of cross-lane transactions), some of the executors may be idle during the execution of a particular block slice. It remains to be decided whether contract executors will persist even when they are idle, or be spawned and retired as required.

Having been assigned a specific transaction, a given executor connects to each resource lane that the transaction belongs to. It pulls the actual transaction from the transaction store of one of these lanes. The executor then pulls the resources affected by the transaction from the respective state shards, and determines whether the transaction is executable (that is, whether it is consistent with the current resource states). If so, it executes the transaction, and then pushes the resulting modified resource states to the respective state shards.
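The following toy version condenses this workflow into a single sequential loop. Because a valid slice never contains two transactions from the same resource group, the iterations are independent and could safely run in parallel; the dict-based transaction format is invented for illustration:

```python
def execute_slice(block_slice, state_shards):
    for i, tx in enumerate(block_slice):       # executor i takes transaction i
        shard = state_shards[tx["lane"]]       # state shard of the tx's lane
        states = {r: shard.get(r, 0) for r in tx["resources"]}
        if tx["check"](states):                # executable given current states?
            shard.update(tx["apply"](states))  # push modified states back

# Toy transfer of 5 tokens from alice to bob inside lane 0:
shards = {0: {"alice": 10, "bob": 0}}
transfer = {
    "lane": 0,
    "resources": ["alice", "bob"],
    "check": lambda s: s["alice"] >= 5,
    "apply": lambda s: {"alice": s["alice"] - 5, "bob": s["bob"] + 5},
}
execute_slice([transfer], shards)
print(shards[0])  # {'alice': 5, 'bob': 5}
```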

Wallet interface
This topmost layer of the node architecture is responsible for providing a way of submitting new transactions to the Fetch ledger system. This involves three simple tasks: exposing an Application Programming Interface (API), performing basic checks on incoming transactions to filter out invalid submissions, and routing transactions to the appropriate lane(s) in the SEU. The first of these tasks is carried out by the wallet API component, and the latter two are carried out by the pre-evaluation unit.

The wallet API consists of a standard HTTP interface with support for WebSockets. This enables the wallet interface to interact with a variety of different devices and applications that host users. These could include weakly powered devices such as Arduino boards and Raspberry Pis, as well as smartphones and desktop computers. The HTTP API also has the advantage of being programming-language agnostic, which facilitates third-party application development.
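A hypothetical client-side submission might look like the sketch below. Note that the endpoint path, port and payload fields are invented for this example; the article confirms only that nodes expose an HTTP wallet API:

```python
import json
import requests  # third-party HTTP client

NODE_URL = "http://localhost:8000/api/wallet/submit"  # hypothetical endpoint

tx = {
    "from": "agent-1-address",
    "to": "agent-2-address",
    "amount": 5,
    "signature": "...",  # elided; a real submission would be signed
}
response = requests.post(NODE_URL, data=json.dumps(tx),
                         headers={"Content-Type": "application/json"})
print(response.status_code)  # the pre-evaluation unit would reject invalid txs
```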
Source

TECHNOLOGY IN THREE LAYERS

Fetch actively puts value-generating agents in contact with those that require it. The world is dynamically reorganised to remove friction from the process. Trust and reputation information is provided to allow users and agents to transact with the least risk. The underlying ledger delivers a digital currency and a decentralised transaction system capable of scaling more effectively than alternatives, so that it can support many tens of thousands of transactions per second at virtually no cost. Fetch enables improved utilisation of IoT devices and brings data marketplaces to life. To deliver this, three key layers of technology were developed:

Figure 4: Fetch’s three layers. Layer 1 is the autonomous economic agents, AEAs, which live in the environment provided by layer 2, the OEF. Underpinning the OEF is the ledger that ensures the integrity of the global truth on the decentralised network and feeds the learning that provides trust, reputation and network intelligence. Layers 2 and 3 form a node. Fetch’s peer-to-peer network is made up of many such nodes connected to each other in different ways.

Machine learning and intelligence are supported at all three levels.

OEF: DECENTRALISED LIFE-SUPPORT FOR AUTONOMOUS AGENTS

The Open Economic Framework (OEF) provides life-support for autonomous software agents. These can be thought of as digital entities that are able to make their own decisions. They exist in a digital world that dynamically reorganises itself to present the optimal environment for agents to operate in. Primarily, this means rearranging space to place agents that can provide value “close” to agents that are searching for that value. Each agent therefore “sees” a different world from its perspective: a world that is reorganised for its perceived and declared needs. The OEF is the high-level node functionality: the layer on top of the raw protocol and ledger that delivers this environment and all the other operations agents need in order to go about their day-to-day work.

Figure 5, below, shows the OEF’s internal organisation and structure:

Figure 5: Multiple OEF nodes collectively make up the decentralised environment. The two key APIs are shown — one for the AEAs to access their services and connect to/exist in the world, and one for the peer-to-peer protocol, which carries both high-level and low-level commands. High-level commands include operations such as agent transfer, exploration, search and discovery, whereas low-level commands include operations for consensus, peer trust, digital currency, transactions and protocol control.

The OEF’s primary API, as exposed to agents, supports a number of base-level commands. Some of these commands are free; others carry a small token cost. Other commands require what Fetch terms a “trolley token” — a small deposit in Fetch tokens that is refunded if the operation is gracefully completed. Operation costs are decoupled from the Fetch token in a similar way to “gas” in the Ethereum network. Thus there is a need to convert Fetch tokens to an operational fuel before commencing an operation. The node that performs the conversion receives the Fetch tokens in exchange for providing that service.

Agents need not restrict themselves to one node. Indeed, it is a wise move for an agent to have more than one footprint in the Fetch world. They can do this by registering on multiple nodes. This provides them with additional protection against failure, better trust information (as it comes from multiple sources) and higher performance as well as the ability to be in more than one place in the digital world at the same time.

Trolley tokens are managed in a smart contract on the system. These are refundable Fetch tokens that are required for some operations. The token is automatically refunded when the operation is complete or when the other party involved fails ungracefully. This is the “skin in the game” requirement for connecting to and using the OEF’s network features.

ENABLING INTELLIGENCE, DEPLOYING INTELLIGENCE
Machine intelligence and learning exist in all three layers. The OEF and Ledger form the primary Fetch protocol: they operate on each and every node in the system and provide the environment in which the agents live. The agent layer is up to third parties to dictate and create: Fetch presents a digital environment exposed via an API which provides opportunity for economic gain if utilised effectively. This effective utilisation, for scale and cost reasons, is best done digitally and without human intervention. This incentivises digital intelligence at the AEA level. At the protocol level, intelligence and learning are used to provide four layers of trust information:

  1. Trust in how normal any given transaction is
  2. Trust in the information received from other nodes on the network
  3. Trust in the parties involved based on their history
  4. Evolving market and data intelligence

These collectively provide reassurance to users about the trustworthiness of any given transaction. The ledger’s useful proof-of-work (uPoW) generates this information as nodes place transactions and transaction information onto it. This data grows in value over time as it is built from a larger and larger sample size. The Fetch team can say with confidence that Fetch is a truly intelligent protocol: it uses this wealth of information to restructure itself dynamically in order to best suit its users, as well as to provide those users with information that allows them to manage their risks and transact at the most cost-effective rates possible.

Market intelligence is a particularly interesting aspect of Fetch. Over time, it learns more about what kinds of markets interact with others, under what conditions and which ones overlap with others. This data has previously been held in proprietary silos by very large online markets such as Amazon and eBay, but for the first time will be available publicly. This hugely valuable information becomes accessible to participants in the marketplace, enables smarter market structures and gives agents an additional layer of information to leverage in order to maximise opportunistic value use and increase utilisation of data and services.

Fetch supports, rewards and encourages individual agents’ intelligence whilst constructing a collective super-intelligence to support all users of the network.

ANATOMY OF AN AEA

The base level AEA (Autonomous Economic Agent) is a software entity that is able to perform actions without external stimulus. This makes it truly autonomous: it acts on its own behalf, not just on someone else’s behalf. All AEAs must contain a unique identifier which comes from the agent’s “wallet”. This allows it to send and receive Fetch tokens. All AEAs must also maintain a list of nodes they are registered with and some basic statistics, all of which is signed and publicly verifiable. This can be seen in Figure 6.

Figure 6: Here we see the agent’s relationship with the “other worlds”: the OEF, which provides its life-support and environment, and the real world. In this latter world, agents can connect to sensors and data, represent people or provide connections between the old economy and the new, e.g., a ticket booking system.

Agents are able to be connected to more than one node at once. They can do this for security and redundancy reasons and also to be in more than one place at one time (in any of the three dimensions: geographical, economic or network). Node registration is required in order to participate. Registration requires what the Fetch team refers to as a “trolley token”: a token deposited in a smart contract that is refunded in one of two cases: 1) a graceful unregister with no pending or in-progress transactions, or 2) the node fails.

Trolley tokens ensure that there is a token requirement for taking part in the network and encourage good behaviour. They also attach a cost to malicious agents: large-scale attacks cost tokens, and all of these are likely to be lost.
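Putting these pieces together, a bare-bones AEA skeleton consistent with Figure 6 and the registration flow above might look like this; the field names and methods are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class AEA:
    # In the real system the node list and statistics are signed and
    # publicly verifiable; that machinery is omitted here.
    wallet_address: str                        # unique identifier from the wallet
    registered_nodes: list = field(default_factory=list)
    stats: dict = field(default_factory=dict)

    def register(self, node_id):
        # On the real network, registration requires a trolley-token deposit
        # that is refunded on a graceful unregister or if the node fails.
        self.registered_nodes.append(node_id)

agent = AEA(wallet_address="fetch1...")
agent.register("node-A")
agent.register("node-B")  # multiple footprints: redundancy, trust, ubiquity
```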

AGENT TYPES

AEAs come in many flavours, and the systems that support them deliberately encourage and reward improved digital intelligence. Whilst there is no such thing as a “typical” AEA, the Fetch team envisions at least five general application areas, although many AEAs will be made up of varying combinations of the types shown below, as there is often significant overlap:

  1. Inhabitants — these are AEAs paired to hardware that exist in the real world. These may be cars, drones, sensors, cameras, mobile phones and computers. AEAs do not control these devices, they exist “inside” them as controllers/operators/drivers: a digital version of what would otherwise be the human component. E.g. an AEA in a self-driving car does not drive the car; it tells the car where to drive.
  2. Interfaces — these provide an interface between the old and the new economy. The Fetch team calls these API AEAs. They allow AEA entities to work with and leverage elements of the conventional economy, such as ticket sales, and can be thought of as facilitation agents.
  3. Pure software — these are pure AEAs that exist in the digital space only. They explore, negotiate with, and find new ways of serving their stakeholders. These are teams of entities that work to organise, schedule and arrange other AEAs attached to hardware and interfaces to provide complete solutions.
  4. Digital data sales agents — these are a specific class of pure software agents that attach to data sources in data marketplaces and go out into the Fetch world to extract value from that data. This is a solution to what is seen as the number one problem of the data industry: data does not sell itself.
  5. Representative — AEAs that represent an individual and act as their interface to the Fetch network acting as a “digital butler”. Their learning systems are involved with understanding preferences and tolerance for change whilst initiating autonomous requests to fulfil the requirements of the owner.

Figure 7: Five different major classes of autonomous economic agents: those that work entirely internally to the Fetch network and those that interface to external data or hardware entities, or represent human users.

DOING BUSINESS

Fetch is an effective, active brokering agency for putting agents who have value in contact with those that need it. It allows those agents to passively or actively be involved in the delivery or search, but particularly rewards active participation through its multi-dimensional decentralised virtual world. The simplest conversation that two AEAs, connected to the same node, can have with the OEF in order to do business with each other is illustrated in Figure 8, below. This shows the minimum message exchange between two agents in order to conduct a simple transaction. It does not show active exploration of the network or what happens when multiple agents are able to deliver the value to the agent that requires it.

Figure 8: In this example, agent 1 is delivering data X to agent 2, which requires it. It is a simple search with a simple escrow transaction via the node to ensure that data item X and the payment (T) are both provided before the exchange occurs. The ledger gets five key pieces of information: source, target, value, action and location (STVAL).
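A toy version of the escrow logic in Figure 8 might look as follows; all names are invented, and the real node-mediated exchange is of course more involved:

```python
class NodeEscrow:
    # Release happens only once both the data item X and the payment T are held.
    def __init__(self):
        self.data = None
        self.payment = None

    def deposit_data(self, x):
        self.data = x           # agent 1 delivers data X

    def deposit_payment(self, t):
        self.payment = t        # agent 2 deposits T tokens

    def settle(self):
        if self.data is not None and self.payment is not None:
            return {"to_agent_2": self.data, "to_agent_1": self.payment}
        return None             # nothing is released until both sides commit

escrow = NodeEscrow()
escrow.deposit_data("weather-reading-X")
escrow.deposit_payment(3)
print(escrow.settle())
```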

The agent that is searching can refine the search by either the geographic or economic dimension to reduce the number of hops (and thus the cost) of conducting a relevant search. Figure 9 shows three different searches: a simple network search, a geographic search and an economic search. The risk with simple, unconstrained network searches is that the search hops will exceed the available tokens for the search before finding anything relevant. The agent can make better use of their exploration tokens by restricting the search with geographic and/or economic conditions rather than just the network’s general organisation.

Figure 9: A-E represent individual nodes. If an AEA is searching for a hill of beans, it helps to constrain the search by where this hill needs to be or by the beans themselves (the geographic or economic dimensions). This increases effectiveness, reduces the cost of the search and avoids endless exploration of irrelevant nodes.
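The effect of constraining a search can be sketched as a simple filter over node metadata; the record format and the filtering predicate below are invented for illustration:

```python
nodes = [
    {"id": "A", "location": "cambridge", "market": "weather"},
    {"id": "B", "location": "cambridge", "market": "transport"},
    {"id": "C", "location": "london",    "market": "weather"},
]

def constrained_search(nodes, location=None, market=None):
    for node in nodes:
        if location and node["location"] != location:
            continue   # outside the geographic constraint: skip, saving a hop
        if market and node["market"] != market:
            continue   # outside the economic constraint
        yield node["id"]

print(list(constrained_search(nodes, location="cambridge", market="weather")))  # ['A']
```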

THE FETCH SMART LEDGER

Fetch has specific requirements related to its unique architecture. Its ledger and underlying network need to support the decentralised virtual world with a large volume of low-value transactions, and the ability to compress the distant past without undermining the integrity of data. The ledger also needs to retain information about each transaction over and above that stored in traditional transaction systems, due to the requirements of the useful proof-of-work (uPoW). Fetch’s unique approach is therefore neither a blockchain nor a DAG; it is a blend of both data structures and incorporates many other innovations.

The technologies presented here are the working foundations upon which Fetch is building. Fetch is working on a number of further innovations in this field which will enable a further generational leap in the ledger’s capabilities, performance and stability. Furthermore, Fetch anticipates new ways in which it can structure the underlying digital world and present knowledge and insight to the users of the network. As these new developments continue to be prototyped, trialled and developed, the technical documentation will be updated accordingly.

THE LIMITATIONS OF EXISTING METHODOLOGIES

Traditional blockchain architectures lend themselves to intelligence-driven decentralised environments for agents because the nodes are largely static, remain on-line for extended periods of time and are capable of processing significant amounts of information.

A major limitation of blockchains is that blocks are added sequentially, so that only a single block can reference the previous block in the chain. This serial architecture is inefficient and greatly limits the rates at which transactions can be processed. Payment channel systems such as Lightning or Raiden offer a way of providing scalability to blockchain solutions by taking large numbers of small payments off-chain. These systems can increase speed and decrease the cost of certain types of transactions but do not provide a general solution to the scalability issue.

In many unpermissioned ledger systems, proof-of-work serves as a means to control the block generation rate and to ensure significant computational power is associated with the generation of blocks, thus ensuring the fidelity of the network. The advantage of the conventional hash-based puzzle is that it is easy to verify and stochastic, so that a miner’s success is proportional to his or her computational power. The disadvantage of this approach is that large amounts of energy are used in calculations that have no purpose other than securing the integrity of the blockchain. The difficulty of solving the hash-puzzles also encourages the formation of mining pools, which decrease the system’s decentralisation and therefore its security. The Fetch ledger is designed to overcome these various limitations and meet all of the requirements of Fetch’s digital economy.

LEDGER REQUIREMENTS

  • Scalability — many millions of agents will be working alone or in groups to provide solutions for themselves and for other stakeholders. Without unconfined scalability, this will not be possible.
  • Stability — for an economic system to be useful, it is necessary to have a means of trading that ensures price stability. An important aspect of achieving stability is to separate fast-moving tokens from slow-moving ones. The Fetch team believes that achieving this is crucial to the creation of a healthy marketplace.
  • Useful economic work — the original Bitcoin protocol uses proof-of-work to protect against consensus attacks such as double spends. This is a powerful idea, but the Fetch team believes that, rather than solving a puzzle with no other benefit, the computational power should be used to solve relevant problems and thereby empower the economy.
  • Risk and trust information — the network should provide trust, reliability, reputation and network intelligence information to allow users of the network to access the information they need to conduct business effectively and efficiently.
  • No loss of individual transaction data on the ledger’s near past — the machine learning systems Fetch uses require transaction information to be stored without rolling-up: each individual transaction’s source, target, value, time, location and action from the near past is needed, but as time progresses it becomes increasingly important to be able to compress data in proportion to its age.

Solving these issues is the key value proposition of Fetch’s ledger.

USEFUL PROOF-OF-WORK (UPOW)

Fetch’s useful proof-of-work will involve the packaging of general-purpose computing problems into proof-of-work packages. These problems allow processing nodes with less computational power to occasionally earn block rewards. Verification of the subproblems will be carried out by nodes that “lost” the race for solving the problem, with a smaller reward provided for these verification steps. Fetch will also incorporate PoW difficulty that is tuneable in relation to the transaction fee, so that nodes with low computational power can earn rewards by registering low-value transactions into the ledger. This distributed computing platform will be used to train machine learning (ML) algorithms, and will ensure the integrity of the network by, for example, assessing trust in the validity of transactions and the ledger itself.
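As a rough illustration of difficulty tuned to the transaction fee, consider the sketch below; the linear rule is an assumption for illustration, not Fetch’s published formula:

```python
BASE_DIFFICULTY = 1_000   # hash-attempt budget assumed for a minimal-fee tx

def required_difficulty(fee, min_fee=0.01):
    """Scale the puzzle difficulty with the transaction fee, so low-power
    nodes can still earn rewards by registering low-value transactions."""
    return int(BASE_DIFFICULTY * max(fee, min_fee) / min_fee)

print(required_difficulty(0.01))  # 1000: cheap transaction, easy puzzle
print(required_difficulty(1.00))  # 100000: valuable transaction, harder puzzle
```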

INTELLIGENCE ABOUT THE MARKETS FOR THE MARKETS

The Fetch network builds market intelligence, trust information and node reputation information in order to provide the users of the network with access to the information they need to maximise either their ability to find value or to provide it to those that seek it. This, combined with the three dimensions of spatial organisation (geographical, economic and network) means that data can be applied in ways that have never before been imagined. With the network delivering general-purpose computing, the data that is available can be transformed to deliver new insights and knowledge. Collectively, the network’s inhabitants learn new correlations between requirements and how to deliver them. It is precisely this sort of emergent network intelligence that, for example, allows agents to establish climate and travel conditions from otherwise non-obvious sources such as the use of washers and wipers in all vehicles that are on the road.

MACHINE LEARNING AND INTELLIGENCE

The Fetch ledger has two key requirements — to provide the agents that operate and interact within the Fetch network with support and guidance and to be secure. For this support, the Fetch ledger uses one piece of key information — the past actions undertaken by the agents. Each action is stored alongside the value transacted and the public identities of the transacting parties. The action information can be used in novel and informative ways to support the trust in an agent’s actions, to search for agents that can serve their needs, to supervise and understand their behaviour, and to dynamically form new domains and markets. To unlock these properties, it is essential to understand the public distributed ledger system as a stochastic process, where all transactions and actors are modelled with AI methods. Most notably, the actors are not only the agents using the Fetch network, but also the participants that form the ecosystem of nodes. By using a probabilistic framework, the nodes propagate belief about state and can find consensus by using their own version of the ledger history. The nodes can also establish belief in each other and, further, develop dynamic strategies that ensure fast confirmation cycles.

Machine learning targets three elements that enhance performance, enable access and provide trust: 1) understanding history, through ML models that capture behaviour; 2) understanding and planning the future, to understand how to distribute workload and increase convergence; and 3) understanding the present, to distribute current belief and information.

The power that is unlocked from recording agent actions is considerable. Real world cost and even proof of work (or stake) strategies can be adapted to suit the transaction and participants at hand. The consensus-building itself allows the agents to use information about history and reputation, and the users of the system can choose to engage on the basis not only of simple contracts, but also of trust. There are several new elements that are worthy of discussion. Here the Fetch team focuses on several applications of machine learning to the Fetch ecosystem.

BELIEF PROPAGATION FOR LEDGER INTEGRITY

Fetch’s novel ledger system enables large transaction volumes by enabling many transactions to be added to the ledger at the same time (in parallel). The main problem faced by this highly parallel system is double-spend transactions by malicious actors in the network. The detection of these fraudulent transactions consumes resources in checking the ledger, and it is mitigated in most systems by introducing a delay time before a transaction is accepted. The Fetch system will use machine learning to determine the probability of any transaction being replaced by an alternative double-spend, by propagating beliefs about the current state of the ledger between processing nodes. This delay can be reduced further by using other properties of the transaction, such as the sender and the number of past transactions between sender and receiver. The system will provide a disincentive for fraudulent actions by reducing the trust that the network associates with a particular economic actor.
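One simple way to aggregate such beliefs is a trust-weighted average, as in the toy sketch below; both the weighting scheme and the acceptance threshold are invented for illustration:

```python
def combined_belief(beliefs, trust):
    """beliefs[i]: node i's probability that the transaction is final
    (i.e. will not be displaced by a conflicting double-spend)."""
    total = sum(trust)
    return sum(b * w for b, w in zip(beliefs, trust)) / total

beliefs = [0.98, 0.95, 0.60]   # the third node disagrees...
trust   = [1.0, 1.0, 0.1]      # ...but carries little reputation
p = combined_belief(beliefs, trust)
print(round(p, 3), p > 0.9)    # accept early once confidence is high enough
```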

INTERACTIONS BETWEEN THE LEDGER AND EXTERNAL ECONOMIC AGENTS

Since the ML algorithms that provide automated control of the Fetch ecosystem are likely to consume just a small fraction of the available computational power in the network, autonomous economic agents will also have the opportunity to submit problems of a certain form (typically a standard ML problem, such as image or speech recognition), and then pay processing nodes in the network for their effort. This mechanism also enables rewards to be tuned to the computational power of the network, with a pricing mechanism used to prevent external processing tasks taking priority over the network’s maintenance tasks. The probabilistic nature of the transaction acceptance protocol also enables economic agents to, for example, immediately accept interactions with “trusted” economic agents, and to accept the possibility of fraudulent behaviour if the transaction sums are small or particularly time-critical.

ESTABLISHING TRUST

The Fetch ledger automatically discovers the context of agents’ actions and valuations and their relations with other agents. It does so by building up sequence models, i.e., models of temporal data that track agents’ history and compress this information into fixed length representations that preserve the similarities that were present in the original data. By using Deep Learning methods, such as recurrent neural networks (RNNs, LSTMs), this compact and low dimensional representation (also referred to as an embedding) allows an efficient comparison of different agents. With this information, one can establish a notion of an agent’s reputation and of the consistency of a particular action with its previous actions, which will serve to predict its validity.
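The shape of this approach can be sketched with an untrained LSTM in PyTorch; the dimensions and features are made up, and a production system would train the model on real transaction histories:

```python
import torch
import torch.nn.functional as F

FEATURES, EMBED = 8, 16  # assumed per-transaction features and embedding size
lstm = torch.nn.LSTM(input_size=FEATURES, hidden_size=EMBED, batch_first=True)

def embed_history(history):
    """history: (seq_len, FEATURES) tensor -> (EMBED,) fixed-length summary."""
    _, (h_n, _) = lstm(history.unsqueeze(0))   # final hidden state as the summary
    return h_n.squeeze()

agent_a = embed_history(torch.randn(12, FEATURES))  # 12 past transactions
agent_b = embed_history(torch.randn(30, FEATURES))  # histories may differ in length
print(F.cosine_similarity(agent_a, agent_b, dim=0)) # efficient agent comparison
```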

Fixed size representations can also be used to map the free form description of the actions into a semantic space where similar expressions would be grouped together based on their meaning. For example, transactions related to different types of weather data would form a semantic cluster that could enable users to search for novel weather-related data sources.

SUPPORTING AGENTS

A computational agent is considered a full learning machine that could embody anything from a human interacting with the system via a user interface to a fully autonomous reinforcement learning system. From the outset, the Fetch ledger was purpose-designed to support agents in all of their activities: finding each other, recording information, establishing a clear transaction decision record and enabling them to engage with each other. An agent can take multiple actions in a particular sequence and dynamically decide on the best strategy for success. Not only is this important for effective operation; interactions between agents also have an all-important negotiation phase where the traded action is established. Since agent interactions can be complex, the proposed action may manifest itself in an API-style program which is to be proposed by an agent. The Fetch ledger will store such information and provide means for checking the suitability of specific programs on the basis of historical use. This again increases trust and unlocks interaction of agents from different markets and domains.

INTELLIGENT INTERFACES

For economic agents to be truly autonomous, they need an understanding of how to talk to each other that goes far beyond merely knowing the appropriate communication protocols. They need to have an understanding of the purpose that a given message exchange serves. Given a description of a particular purpose, they need to be able to learn what message exchange they need to engage in in order to achieve this purpose.

Agents are able to draw on the records of previous interactions between agents to learn how the processes in question work. Any process logs summarising exchanges between agents will be stored alongside a natural-language description of the exchange’s purpose. This could, for example, be an instruction that was given to one of the agents by a human user. Agents may also be programmed to provide these natural-language descriptions themselves in order to facilitate human-machine interactions and to help agents develop their understanding of the purposes of agent exchange sequences. This enables agents to develop genuine autonomy in the pursuit of goals that can be defined with the same flexibility provided by a natural human language.

The result is a novel integration of approaches drawn from two fields that have not previously been known to cross paths: Natural Language Processing and Process Mining. Machine learning techniques have only recently been brought to process mining, and where they have been tried, they have proven very successful.

A breakthrough in Natural Language Processing is known as word embeddings. These are fixed-length vector representations of individual words. The process through which individual words are assigned to individual vectors is described as embedding the vocabulary in a linear (vector) space. The embeddings uncover relationships between the words’ meanings.
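A minimal demonstration using the gensim library follows; the toy corpus means the learned geometry is necessarily crude, whereas real transaction descriptions would give related terms genuinely closer embeddings:

```python
from gensim.models import Word2Vec

corpus = [
    ["rain", "forecast", "weather", "data"],
    ["snow", "forecast", "weather", "data"],
    ["ticket", "booking", "travel", "agent"],
]
model = Word2Vec(corpus, vector_size=16, window=3, min_count=1, epochs=50)

print(model.wv.similarity("rain", "snow"))    # words sharing weather contexts
print(model.wv.similarity("rain", "ticket"))  # words from different contexts
```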

SECURITY AND ATTACK RESISTANCE

Fetch will be releasing a separate paper outlining details of the network’s security and attack resistance. This paper includes the results of Fetch’s detailed modelling, its internal simulations and the behaviour of the first versions of the network under various attack scenarios. Fetch believes this to be a key topic and has designed the system with security in mind: from the useful proof-of-work through the trolley token mechanisms to other feedback mechanisms that incentivise good behaviour whilst discouraging (economically and otherwise) bad behaviour. It should always be more profitable to behave well on the network for any of its users. It should also be noted that Fetch has done detailed economic modelling of the ecosystem with its economic advisors from Cambridge and other institutions in order to provide further support for the network’s performance and security.

Source.

Team

Leadership Team

Source: Fetch.AI website.

Humayun Sheikh — CEO, Co-founder.

An innovation entrepreneur and founding investor in DeepMind, with a record of revolutionising trading in the steel sector, now changing the way we transact and travel.

Toby Simpson — CTO, Co-founder.

Producer of the successful a-life Creatures series of games and an early developer at DeepMind. His thirty years’ experience in software, ten as a CTO, are now focussed on crypto-economics.

Thomas Hain — CSO, Co-founder.
Professor at Sheffield and an established scientist in advanced machine learning and AI who bridges the real world and academia, inspired by the opportunities AI brings to modern society.

Jonathan Ward — Head of Research.
A researcher in machine learning, complex systems and blockchain technology. Excited by the challenge of deploying decentralized multi-agent systems in smart cities, supply chain and healthcare. PhD in Machine Learning from UCL.
Troels F. Rønnow — Head of Software Engineering.
A scientist and innovator who benchmarked the D-Wave Two, co-authored 35 patent applications and has spent more than two years working full time on building distributed ledgers.

Maria Minaricova — Head of Business Development.
Experienced in managing strategic program delivery, business development and the application of state-of-the-art ICTs. Worked at Oracle and at Europe’s e-infrastructure GÉANT, where she collaborated with flagship pan-European research groups and e-infrastructures, leading international teams across several countries while delivering complex projects in challenging environments.

Arthur Meadows — Head of Investor Relations.

International experience in software start-ups and bringing disruptive, high-growth tech products to commercial success. MBA from Judge Business School, University of Cambridge.

Developers

Source: Fetch.AI website.

Attila Bagoly — Software Engineer.
An enthusiastic software engineer and trained scientist with an MSc in statistical and particle physics. He previously worked on numerical hydrodynamics, particle physics data analysis and various machine learning projects.

Khan Baykaner — Lead Software Engineer.
Research engineer in machine learning with experience developing deep reinforcement and generative models, solving problems in digital health, audio/video media and NLP. PhD from the University of Surrey, where he developed computational auditory models.

Peter Bukva — Principal Software Engineer.
Experienced software engineer and scientist. Previously worked at Bloomberg and Siemens Corporate Research. MSc in Solid State Physics and PhD in High Temperature Superconductors.

Joshua Croft — Lead Software Engineer.
NLP and data mining specialist in geolocation, natural language and IoT. Ex-PlayStation developer with an MSc in Advanced Computer Science from the University of East Anglia.

Robert Dickson — Senior Software Engineer.
A seasoned software engineer and architect with experience in dynamics simulations, low-level protocols, 3D graphics and virtual machines. He has a strong artificial life and mathematics background.

Ed Fitzgerald — Lead Software Engineer.
An engineer, researcher and technology enthusiast with experience of distributed ledgers, video codecs, networking and low-level software optimisation. With a strong engineering and mathematical background, he obtained his degree in Electronic Engineering from the University of Surrey.

Nathan Hutton — Senior Software Engineer.
Experienced software engineer and FPGA specialist, previously worked at BAE in high speed communications systems. MEng from The University Of Edinburgh.

Katie Lucas — Lead Software Engineer.
Ex-Google SRE and Privacy engineer, she has also worked with Citrix, Hitachi and Grapeshot. Highly experienced in many fields including security systems, military simulations and real-time data-processing pipelines.

Aristoteles Triantafyllidis — Software Engineer.
A software developer specialising in games production, passionate about programming and creating innovative projects by leveraging cutting-edge technologies and engines. He has a BA in Informatics and Telecommunications and a Diploma with honours from the SAE Institute in Game Production.

Pierre Wilmot — Software Engineer.
A software engineer with a background in video games and deep learning for image processing. He is motivated by the desire to find clean and elegant solutions to complex problems.

Researchers

Source: Fetch.AI website.

Marcin Abram — Lead Research Scientist.
Theoretical physicist and machine learning scientist. Marcin’s doctoral research explored topics on coherence and emergent behaviour in quantum systems. His current work focuses on machine learning applications for modelling autonomous economic agents in distributed ledgers.

Diarmid Campbell — Senior Software Engineer.
Senior engineer and technical manager with 20 years of industry experience. In 2000, he led the game programming team on the world-wide best selling title, The Thing. He then moved to Sony where he built up a research group developing computer vision algorithms, which formed the basis of a number of internationally successful augmented reality PlayStation games.

Marco Favorito — Machine Learning Engineer.
M.Sc.Eng. in Computer Science from Sapienza University of Rome, where he worked on a thesis in Reinforcement Learning. Software Engineer with solid programming skills, he’s working on the OEF.

David Galindo — Head of Cryptography.
Associate Professor in Computer Security at the University of Birmingham with 15 years of experience in applied cryptography research, both in academia and industry. His work has been published in top academic venues in computer security and has been deployed by governments around the globe.

Daniel Honerkamp — Machine Learning Engineer.
Combining expertise in both machine learning and economics, with an MSc in Computational Statistics and Machine Learning from UCL and previous experience at the Swiss National Bank.

Ali Hosseini — Artificial Intelligence Researcher.
Expert on Multi-Agent Systems and logical modelling with a background in software engineering. PhD and MSc in Artificial Intelligence from King’s College London with excellent academic track record and publications in AI conferences and journals.

Jerome Maloberti — Lead Software Engineer.
Implements large-scale multi-agent systems with ML algorithms, drawing on nearly 20 years’ experience following a first degree in Software Engineering and a PhD in Artificial Intelligence.

Fred Moisan — Lead Economist.
Researcher in behavioural, experimental and computational economics, applying game theory to the economics of networks and the design of decentralized marketplaces. PhD in AI from the University of Toulouse.

Patrick Motylinski — Senior Research Scientist.
Scientific researcher with many years of experience in areas ranging from physics and mathematics, to blockchain technologies and cryptocurrencies. Holds a PhD in theoretical high energy physics from University of Amsterdam, and has several highly cited, peer-reviewed scientific publications, as well as a considerable number of patent applications.

Soren Riis — Research Scientist.
Research scientist at Queen Mary University of London, specialising in multi-user information theory, network coding, algorithmics, complexity theory and cryptography. DPhil (PhD) in mathematical logic from the University of Oxford.
Jin-Mann Wong — Research Scientist.
Worked previously as a researcher in theoretical physics and mathematics with an interest in technical aspects of blockchains. She holds a PhD in string theory from King’s College London.

Admin and Marketing

Source: Fetch.AI website.

Chris Atkin — Digital Marketing Coordinator.
A qualified journalist with a passion for social media. He has five years of experience at Sky News and BT Sport and has managed the social media accounts of Panama’s largest language school.

Lisa Condon — HR Generalist.
An HR generalist with extensive experience in the retail and tech sectors after working for River Island and Nokia Technologies. A highly motivated problem solver and a passionate people person.

Catherine Moriarty — Chief Amazement Officer.
Dynamic people person with a wealth of commercial experience and a wonderful understanding of maximising peoples’ potential. Driven and focused, with humour and respect for everyone she meets. Principled and determined to make the world better. Now bringing a sprinkle of amazing to the Fetch team.

David Wood — Recruitment.
A recruiter with over 15 years’ experience in technology recruiting. Passionate about technology, he enjoys working closely with engineers to identify high-calibre talent. MBA from The University of Miami.

Gary Wood — IT Manager.
A multi-talented IT manager with over 15 years’ experience. Well-versed in supporting fast-moving, dynamic development teams. Previously designed, implemented and managed an onsite data centre running a massively multiplayer online game for the BBC.

Advisors

Source: Fetch.AI website.

Melvyn Weeks
Assistant Professor in Economics at University of Cambridge, researching the application of Machine Learning to market pricing. Senior Economic Advisor to Ofgem, UK’s Energy regulator.

Monique Gangloff
Principal investigator/senior scientist at the University of Cambridge, Department of Biochemistry. Her research endeavours have resulted in more than 35 international peer-reviewed publications and 1 patent application.

Dr. Niall Armes
Dr. Armes is a world-leading biochemist, molecular biologist and entrepreneur. He received his PhD from the Imperial Cancer Research Fund in London for work on comparative genome structure. He subsequently founded TwistDx, serving as CSO and CEO prior to the company’s acquisition.

Jamie Burke
Founder and CEO of Outlier Ventures, Jamie has cultivated a powerful ecosystem of corporate partners, investors and government agencies to help companies scale. He also advocates for the professionalisation of the industry through international media.

Steve Grand
An inventor of complex autonomous agents for nearly 40 years, creator of the Creatures artificial life games and proud father to a small robot now in the Science Museum, Steve has held research fellowships in artificial life, psychology, biomimetics and creative technologies. He received a D.Univ from the OU and was made an OBE in 2000 for Services to Computing.
Source.

Partnerships

Academic partnerships

AI & ML. Fetch.AI is partnered with several UK universities, including the University of Cambridge and the University of Warwick’s AIIN group, and continues to build new relationships to advance its core AI and ML development.

DLT & Blockchain technology. Fetch.AI has already established relationships with University College London (UCL), Warwick Business School and Imperial College London.

Computational Economics. Given that Fetch.AI represents a dynamic marketplace, it is essential to apply economic market design, game theory and marketplace modelling to incentivise positive network dynamics and effectively eliminate bad actors. To date, Fetch.AI has sponsored one post-doctoral researcher at the University of Cambridge.

Biochemistry and biology. Fetch.AI’s computational platform has applications in drug discovery, genetics and other aspects of systems biochemistry. Fetch.AI is working with a number of academic partners to develop these opportunities.

Multi-agent systems. Two recognised experts in the field of multi-agent simulation and modelling are acting as advisors to Fetch.AI: Steve Grand, an inventor of complex autonomous agents, and Dr Niall Armes, a world-leading biochemist, molecular biologist and entrepreneur.

Network Development Program

Fetch.AI has developed a number of corporate partnerships: it is a member of the MOBI consortium, a collaboration of car manufacturers and OEMs working on the implementation of blockchain in the transportation and mobility industry. More information can be found here.

Fetch.AI is one of the founder members of Artificial Intelligence Innovation Network (AIIN). More information can be found here.

In December 2018, Fetch.AI co-founded Blockchain for Europe, an association representing blockchain-originating organisations in Europe, alongside other thought leaders such as EMURGO/Cardano, NEM and Ripple. More information can be found here.

Source.

Fetch.AI is collaborating with Clustermarket, an online sharing platform for scientific equipment and services, using hundreds of autonomous agents to maximise asset utilisation and provide personalised predictions for users and providers. GE Healthcare Life Sciences is also working with them on this platform.
Source.

Use case

Applications

The applications of such technology are many. By bringing data to life, Fetch.AI addresses one of the greatest problems in the data industry today: data can’t sell itself. With Fetch.AI, it can. Data is able to actively take advantage of any opportunity to exploit itself in any marketplace, in an environment that’s constantly reorganising to make that task as easy as possible. Internet-of-things (IoT) devices inhabited by Fetch.AI agents can increase utilisation by capitalising on short-lived opportunities to sell the information that they possess in existing, as well as novel, information services markets: an agent in a vehicle can provide weather and road conditions simply by relaying its windscreen wiper and washer activity, as in the sketch below.
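To make the idea concrete, here is a minimal, purely illustrative Python sketch of such a device agent. This is not the Fetch.AI SDK; the class, fields and pricing rule are all invented to show how a sensor signal could become a saleable data offer:

```python
# Illustrative sketch only: NOT the real Fetch.AI SDK. Everything here
# (class names, fields, pricing) is a hypothetical stand-in.

import time
from dataclasses import dataclass


@dataclass
class DataOffer:
    description: str   # what the data describes
    price_fet: float   # asking price in (hypothetical) FET units
    payload: dict      # the observation itself


class VehicleAgent:
    """Toy autonomous agent that turns wiper activity into a weather report."""

    def __init__(self, agent_id: str):
        self.agent_id = agent_id

    def observe_wipers(self, wiper_speed: int) -> DataOffer:
        # Faster wipers imply heavier rain; the pricing rule is an assumption.
        if wiper_speed >= 2:
            condition = "heavy_rain"
        elif wiper_speed == 1:
            condition = "light_rain"
        else:
            condition = "dry"
        return DataOffer(
            description=f"road weather report from {self.agent_id}",
            price_fet=0.001 * (wiper_speed + 1),
            payload={"condition": condition, "timestamp": time.time()},
        )


agent = VehicleAgent("vehicle-42")
offer = agent.observe_wipers(wiper_speed=2)
print(offer)  # a real agent would advertise this offer on the OEF instead
```

In the real system, an agent like this would register on the OEF and negotiate with buyer agents rather than simply printing its offer.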

Fetch.AI’s decentralised digital world enables and facilitates the emergence of new marketplaces and allows this “unreal estate” to place relevant markets near each other for ease of exploration. The ability of agents to serve as representatives for data, hardware and services enables a better coordinated delivery of highly or even loosely connected services such as transport and insurance. Fetch.AI creates a huge population of digital data analysts and sales agents who can work together, alone, or with human or corporate masters to reduce the cost of delivering complex solutions in our daily lives.

New opportunities

Fetch.AI’s autonomous agents actively push their value out to those who need it, or who do not yet know they need it. The Open Economic Framework provides a digital world for them to inhabit that grows in value as it is used: over time, the collective intelligence that forms provides unparalleled guidance, allowing for high-speed, high-reliability transactions. The network’s expanding computational power provides all agents with the ability to gain new insights and understanding from their data.

With machine learning technology integrated throughout the system, from the ledger to the agents themselves, it is a network that enables, encourages and deploys intelligence, and that actively creates new knowledge. Fetch.AI provides the node structure, the OEF API, and agent Development Toolkits to make agents easy to deploy.

Entire new industries can be built from the deployment of Autonomous Economic Agents as opportunities exist to replace human intermediaries with trusted digital agents. Previously unprofitable datasets become valuable with Fetch.AI, as the cost and friction of applying them is dramatically reduced. Data and hardware can now get up on their own two feet, get out there and sell themselves entirely free of human intervention.

Source.

Social metrics

Github metrics
Social media activity

Markets and volume

TBA

TA

TBA

Competitors

The Fetch team does not believe there is currently a scalable smart ledger project that allows virtual worlds to be deployed and prediction models to be built.

An incomplete list of projects that operate in a similar space, and that could complement or compete with Fetch.AI is shown below:

  1. Hashgraph (lacks an inherent intelligence built into the ledger)
  2. IOTA (IoT data exchange, but no intelligence)
  3. Ocean Protocol (a marketplace for data and algorithms)
  4. SingularityNet (claims AGI deployment, though the mechanism is unclear)
  5. Satori (a platform on the Hashgraph ledger/protocol that does not inherently include any intelligence features)
  6. Neurochain (the Fetch team does not have enough information to draw a conclusion at this stage, but is watching it with interest)

Fetch.AI’s approach is to build its framework on top of its own ‘Smart Ledger’ from the bottom up, with intelligence built in from the start and with the tools to create and refine a collective intelligence and deliver it to all users. Fetch and the AEAs that live in its world are able to adapt dynamically to deliver or receive value. Finally, its economic framework allows dynamic marketplaces to be created by the AEAs, which is unique.

Source.

Fetch also has some big competitors among traditional ML and AI businesses, such as Google, Facebook and IBM, which have been conducting research on multi-agent systems and natural language processing for a long time.

Roadmap

Planned milestones

Source.

Token Mechanics

The ERC-20 token is required to participate in the public test network: for the development, deployment and use of Fetch.AI code and assets as part of the Fetch.AI network, protocol and platform.

Mining rewards consist of 15% of the native Fetch.AI tokens. These additional incentives are replaced over time by the value generated by delivering services to agents: search, discovery, predictions and trust information. Mining starts at the release of the main network towards the end of 2019.

Role of the Fetch.AI token

The Fetch.AI token is the key method of value exchange on the Fetch.AI network. It is required for all network exchanges, as a refundable method of registering with the network, for staking, and as a mechanism for delivering value back to those performing work on the network. The Fetch.AI token allows autonomous economic agents to get things done.

Fetch.AI’s token allows agents access to the digital world. It enables them to exist in this world, in multiple locations, and explore it looking for other agents to deliver value to or gain value from. This value can be in the form of services, data, infrastructure use or access to data processing such as AI and ML algorithms.

Fetch.AI tokens can be used for many purposes, the five most significant of which are:

  • Ability to connect agents and nodes to the network. This is an access deposit token that acts as a form of stake to demonstrate a desire to behave appropriately. It limits the ability of bad actors to flood the network with undesirable nodes or agents, owing to the escalating cost of doing so (see the sketch after this list).
  • Value exchange between agents. The Fetch.AI token is required to allow two agents, regardless of where they are, to perform a value exchange. The Fetch.AI token is infinitely divisible, thereby supporting transactions that have very low monetary value but in aggregate provide a new and profound level of insight and opportunity.
  • Access to the digital world. Fetch.AI tokens are needed to access, view and interact with the decentralised digital world. This is a space optimised for digital entities: an abstract, many-dimensional representation of the real world that machines can make sense of and work within. The Fetch.AI token is needed for agents to gain access to all aspects of this digital world.
  • Ability to access and develop ledger-based AI/ML algorithms. The Fetch.AI token enables development of, and access to, a broad range of machine learning and artificial intelligence tasks that are available on the ledger. These may be primary services, developed by Fetch.AI, such as trust and prediction models, or they may be large-scale independently developed services for network users. Fetch.AI refers to these collectively as Synergetic Computing.
  • For exchange into Fetch.AI’s operational fuel. Operation costs in Fetch.AI are decoupled from the Fetch.AI token in a similar way to “gas” on the Ethereum network, but with additional functionality designed to increase the stability of such a fuel and to address issues associated with high- and low-velocity economies. Fetch.AI’s operational fuel allows access to processor time for contract execution and services for agents.
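As an illustration of the escalating-cost idea in the first bullet, here is a minimal sketch. The doubling schedule and base deposit are assumptions made for illustration; the source does not specify a formula:

```python
# Hypothetical escalating access deposit. The doubling schedule and base
# deposit are invented; the source only says that flooding the network
# becomes increasingly expensive.

def registration_deposit(base_fet: float, entities_registered: int) -> float:
    """Deposit for the next node or agent, doubling every 100 registrations."""
    return base_fet * 2 ** (entities_registered // 100)


for n in (0, 100, 500):
    print(n, registration_deposit(base_fet=10.0, entities_registered=n))
# 0 -> 10.0, 100 -> 20.0, 500 -> 320.0
```

Any schedule with this shape makes a flooding attack exponentially more expensive than honest participation, which is the property the bullet describes.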

The ERC-20 Token

At Fetch.AI’s Token Generation Event, ERC-20 Fetch.AI tokens will be issued. These are required to access the public test network. Holders of the ERC-20 Fetch.AI token will be able to proportionately generate Fetch.AI test tokens on a regular basis for the purposes of development and testing. Fetch.AI test tokens can be used for many things, including, but not limited to:

  • Agent development. Holders can develop and test all manner of agents on the Fetch.AI network including those that represent data, services, hardware devices, people or facilitate connections to the existing economy or other decentralised networks.
  • Network participation. Mostly via the Fetch.AI network participation application (NPA), this involves downloading, installing and using a mobile application specifically designed to convert the device’s sensors and information into agents that exist on the Fetch.AI network. It also facilitates direct value exchange using the test tokens, as well as exploration of the Fetch.AI world.
  • Node development and operating. Holders can operate nodes on the public test network, provide services to agents and perform processing on behalf of themselves or other users on the network in the form of useful proof-of-work execution.
  • Economic analysis. Analysis of the network’s overall performance and economics, looking at how the utility value per token is delivered.
  • AI/ML development. Holders can develop machine learning and artificial intelligence applications and services and have them executed as part of useful proof-of-work. Between such developers and node operators, these applications and services can be delivered to those that want them and the value exchanged accordingly.

Essentially, no part of developing or participating on the Fetch.AI test network can occur without the ERC-20 token. This ERC-20 token acts as the key enabler for access to the test network’s existing utility value as well as the component that facilitates the ability to develop and access future utility value.
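To illustrate the “proportionate generation” of test tokens mentioned above, here is a minimal sketch under the assumption that each generation round is split pro rata by ERC-20 holdings; the function and the pool size per round are hypothetical, as the source does not give a formula:

```python
# Hypothetical pro-rata allotment of test tokens to ERC-20 FET holders;
# the generation pool size per round is invented for illustration.

def test_token_allotment(holder_balance: float,
                         total_supply: float,
                         generation_pool: float) -> float:
    """Return a holder's share of one test-token generation round."""
    return generation_pool * holder_balance / total_supply


TOTAL_SUPPLY = 1_152_997_575  # total FET tokens (see token metrics below)

# A holder with 1% of the ERC-20 supply receives 1% of each round's pool.
print(test_token_allotment(holder_balance=0.01 * TOTAL_SUPPLY,
                           total_supply=TOTAL_SUPPLY,
                           generation_pool=1_000_000))  # ~10000.0
```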

Source.

Token Metrics

Fetch.AI Foundation Pte Ltd (the issuer) will be issuing 1,152,997,575 tokens (FET tokens), initially as ERC-20 tokens on the Ethereum network as part of a Token Generation Event (TGE).

The hard cap for the public fundraising round on Binance Launchpad is $6 million, for 6% of the tokens.

Public sale price — 0.0867 USD

Private sale price — 0.05267 USD

As the initial circulating supply is 11%, the market cap based on circulating supply at the moment of the ICO is roughly $11 million.
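The arithmetic is easy to verify from the figures quoted above:

```python
# Sanity check of the circulating market cap quoted above.
total_supply = 1_152_997_575   # FET tokens issued at the TGE
public_price = 0.0867          # USD per token in the public sale
circulating_share = 0.11       # 11% initial circulating supply

market_cap = total_supply * circulating_share * public_price
print(f"{market_cap:,.0f} USD")  # ~10,996,138 USD, i.e. roughly $11 million

# Cross-check the hard cap: 6% of tokens at the public sale price.
print(f"{total_supply * 0.06 * public_price:,.0f} USD")  # ~5,997,893 USD, ~$6M
```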

Any tokens unsold in the token sale will remain with the issuer for allocation no sooner than 12 months after the TGE, and will be released periodically over a 24-month period.

Token allocation

Circulating tokens over time

Token vesting

Source.

Binance announced the Fetch token sale, and there is a very interesting line in the FET token sale and economics table: Initial Circulating Supply. It is shown there as 11%, which is 5% more than the public token sale, and there was no prior information about any other party having their tokens unlocked at the TGE.

A lot of people asked about this in the chat, and the funny thing is that the admins gave different answers (see the three screenshots below), which is strange to say the least. Moreover, even if they simply did not have full information and the last answer reflects the real situation, there are still problems with that answer:

  1. If there is a lock-up period on those tokens, then why are they counted as Initial Circulating Supply? Lock-up = not in circulation.
  2. If all or some of those tokens have no lock-up period, why was no such information given in the token vesting table? Why didn’t the team mention those 5% beforehand as a special case in that table?

Source.

Summary

Team: A+ team specialising in AI and ML

Idea: Quite a number of projects plan to integrate AI into the blockchain, but Fetch has some unique ideas about which roles those AIs should take in its system

Development stage: pre-testnet

Whitepaper: very good white paper, yellow paper and blue paper

Roadmap: the roadmap lacks detail and simply outlines major milestones

The Fetch.AI project is very ambitious: building a combination of AI, ML and blockchain to create a marketplace free from human error is challenging, to say the least. It is no wonder that Fetch has hired so many specialists from a variety of fields and founded or entered alliances and collaborations in science, IoT, blockchain and car manufacturing.

There are a lot of projects working in the IoT and AI sectors, so the competition will be fierce. Those projects may only partially intersect with Fetch, but they could affect the outcome nonetheless, since adoption and network effects carry great weight in crypto.

There are also some concerns: as mentioned before, giant corporations like Google and Facebook have been involved in multi-agent systems and natural language processing research for a long time, and it will be hard to compete with them given their head start.

Such an ambitious project will also take quite a few years to reach maturity. It is hard to imagine a quick shift to AEAs across different marketplaces: AI training takes a lot of time and data, and ML needs massive computing power, so changes should not be expected from the get-go.

From a publicity perspective, this project has had its fair share of hype, which intensified with the recent Binance Launchpad announcement. Such advertisement is the best there is: the ICO market has been quite dry lately, and the previous launch on Binance was spectacularly successful.

There is a lot of FUD generated by some of the private investors who chose the ETH peg, which is connected with ETH’s volatility in 2018. There is also the mysterious additional 5% in the Initial Circulating Supply, which the admins commented on poorly in the English Telegram chat.

To sum up, Fetch.AI is an ambitious and interesting project with a great team, high development activity (according to Binance), an interesting and useful idea, formidable competitors, last-minute FUD and lots of hype.

This is not financial advice.

Subscribe to detailed companies’ updates by Paradigm!

Medium. Twitter. Telegram. Reddit.
