Radix: Detailed review on the project
Radix is a high-throughput platform to build and distribute decentralized applications.
In research and development since 2011, Radix DLT is positioned as the first infinitely scalable distributed ledger protocol for trustless systems. It is an eventually consistent distributed database with absolute ordering of related events and n−1 fault detection. It is specifically designed for ease of use and to run on resource-restricted devices, helping drive both mass adoption and use in the Internet of Things (IoT).
The Radix platform will enable developers to create, distribute and manage highly scalable, efficient and secure distributed applications for both public and private networks.
The Radix Public Network is a modular, general-purpose, global computer for decentralised applications that enables inexpensive and scalable transactions at incredible speeds with near-instant finality.
Radix offers a novel distributed ledger architecture for decentralized applications that is sharded to scale in an efficient, unbounded linear fashion, combined with a secure consensus algorithm called ‘Tempo’.
The Tempo Ledger consists of three fundamental components:
- A networked cluster of nodes
- A global ledger database distributed across the nodes
- An algorithm for generating a cryptographically secure record of temporally ordered events.
An instance of Tempo is called a Universe and any event within a Universe, such as a message or transaction, is represented by an object called an Atom.
All Atoms contain at least one endpoint destination, represented by an endpoint address. Endpoint addresses are derived from an identity, such as a user’s public key, and are used to route events through the network.
Atoms generally take the form of either Payload Atoms or Transfer Atoms. An example of a Payload Atom is a communication, sent to one or more parties, like an email or an instant message. Transfer Atoms are used to transfer the ownership of an item, such as currency, to another party.
Atoms may also contain other Atoms, as well as various other data, depending on their purpose. This extra data might include conditional destinations, owners, participants, associations and application meta-data. Exotic Atom variants can be created for specific application purposes if required.
Clients may create and submit Atoms to the network via any node they are connected to. A submitted Atom is then processed by the network and, if valid, a Temporal Proof is constructed for, and associated with, that Atom from that point forward.
Tempo relies heavily on eventual consistency to achieve a total ordering of events.
The Tempo ledger is a distributed database which stores all Atoms that exist in a Universe. It is designed to be horizontally scalable, supports semi-structured data, and can update entries.
A local ledger instance operating on a node can be configured to store all, or part, of the global ledger. A subset of the global ledger is known as a shard. The total shard space is configurable per Universe, but is immutable once deployed. Nodes can reconfigure to support any subset of the shard space, helping to ensure that the Universe can handle large load requirements without requiring expensive hardware to operate a node. Critically, this enables performance constrained IoT devices to participate as first-class citizens in a Universe.
Sharding is a fundamental design feature of Radix, which implies a robust approach for guaranteeing that Atoms are in the correct shards, and an efficient method for determining which nodes will retain copies of which Atoms.
Considering that all Atoms must have at least one endpoint in their destinations, we can derive a shard ID using the destination, truncated to the shard space dimensions via a modulo operator. Some Atoms, such as Transfer Atoms, may have multiple destinations and therefore will be present in multiple shards.
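The shard derivation described above can be sketched as follows. This is an illustrative assumption of how the modulo truncation might work; `SHARD_SPACE` and the address encoding are made up for exposition, not the actual Radix implementation.

```python
import hashlib

# Configurable per Universe, immutable once deployed (illustrative value).
SHARD_SPACE = 2 ** 16

def shard_id(endpoint_address: bytes) -> int:
    """Truncate an endpoint address to the shard space via a modulo operator."""
    digest = hashlib.sha256(endpoint_address).digest()
    return int.from_bytes(digest, "big") % SHARD_SPACE

def shards_for_atom(destinations: list[bytes]) -> set[int]:
    """An Atom with multiple destinations is present in multiple shards."""
    return {shard_id(d) for d in destinations}
```

Because the mapping is deterministic, any node can independently compute which shards an Atom belongs to from its destinations alone.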
This is by design, as an Atom that is present in multiple shards increases the redundancy and availability of that Atom. A further benefit is that any Atom that performs an inter-shard transfer is present in both the previous owner’s and new owner’s shards. This, in part, eliminates the need for a global state and mitigates any expensive inter-shard state verification operations needed to prevent “double spends”.
While Payload Atoms are relatively simple, comprising some arbitrary data, one or more destinations and a signature, Transfer Atoms are more complex.
An owned item is represented by a Consumable. Ownership is defined as a sequence of Consumables, which provide an auditable history of owners over time. Consumables are a subclass of Atom.
To transfer ownership of an Item(α) contained in Atom(αn) to Bob, Alice creates a Consumer(αX), which references the Consumable(αn) that specifies her as the current owner and signs it with her identity. Consumers are also a subclass of Atom, and identify a Consumable that is to be “consumed”.
She also creates a new Consumable(αX), which contains the Item(α) being transferred, along with the identity of the new owner: Bob. The Consumer and Consumable are packaged into a new Atom(αX) and submitted to the network for verification.
Any node that receives Alice’s Atom(αX) can now trivially validate that Alice is indeed the current owner of Item(α). This is performed by validating the signature of the submitted Consumer(αX) against the owner information present in the last consumable for Item(α) held in the node’s local ledger. If the signature successfully validates, then Alice must be the current owner. The transfer will then execute and Bob becomes the new owner.
Some transfer operations may require that Item(α) is not transferred in its entirety, such as with currency. Consumables can be configured to allow partial transfers of an item, if the item specification allows it. In this instance Alice would create two Consumables: one to Bob for the principal, and another back to herself for the remainder. Similarly, multiple Consumers may be used to reference many Consumables owned by Alice and transfer them all to Bob in one execution, thus guaranteeing atomicity and reducing network load.
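The Consumable/Consumer model and the partial-transfer case above can be sketched like this. Class and field names are assumptions chosen for exposition, not the Radix wire format, and signing is stubbed out with a placeholder string.

```python
from dataclasses import dataclass

@dataclass
class Consumable:
    item: str        # the item being owned, e.g. a quantity of currency
    owner: str       # identity (public key) of the current owner
    quantity: float

@dataclass
class Consumer:
    consumed: Consumable  # the Consumable being "consumed"
    signature: str        # signed by the current owner (stubbed here)

def partial_transfer(current: Consumable, sender: str, recipient: str,
                     amount: float) -> tuple[Consumer, list[Consumable]]:
    """Spend `current`, emitting one Consumable to the recipient and,
    if needed, one back to the sender for the remainder."""
    assert current.owner == sender and amount <= current.quantity
    consumer = Consumer(consumed=current, signature=f"sig({sender})")
    outputs = [Consumable(current.item, recipient, amount)]
    remainder = current.quantity - amount
    if remainder > 0:
        outputs.append(Consumable(current.item, sender, remainder))
    return consumer, outputs
```

Packaging the Consumer and its output Consumables into a single Atom is what gives the transfer its atomicity: either every output is accepted or none is.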
To ensure swift delivery of events to all nodes in a shard, Tempo employs a Gossip protocol to communicate information around the network. Gossip protocols have proven to be an efficient and reliable means of achieving mass propagation of information in a peer-to-peer network.
Nodes broadcast information about their configuration, such as a set of shards they wish to receive events and state information for, and any network services they may offer (such as relay and discovery) allowing further optimization of information delivery. They may also broadcast metadata about the other peers they are connected to, further assisting in the routing of information and events.
Nodes within the network adopt a “best effort” approach to keeping their local ledgers up to date via the active synchronization and gossip protocols. When receiving an Atom via any of these means, a node will perform validation of the Atom against its local ledger. If a provable discrepancy is discovered, a node can communicate this information to its neighbouring nodes allowing them to act and resolve the discrepancy.
Though reliable, this approach will undoubtedly lead to occasions where events are missed and the state of an item may be incorrect in some local ledger instances. To resolve these inconsistencies, nodes rely on detectable causal history anomalies triggered by events. They can then query other nodes to obtain missing information and achieve eventual consistency with the rest of the network regarding an event and its subsequent state.
For Atoms to be validated correctly, they need to be routed to the nodes that contain the associated shards allowing the causal history of any Consumables, state and other information to be verified.
Endpoint destinations provide the required routing information to ensure that Atoms are received by appropriate nodes via the gossip communications layer.
Consider the example of Alice transferring Item(α) to Bob. Alice included her endpoint destination, which indicates she is transferring from Shard(1), and Bob’s endpoint destination, which indicates she is transferring to Shard(3). Nodes storing Shard(1∥3) need to be aware of Alice’s spend, of Bob’s receipt, and of the state of Item(α) in each shard. After the event, nodes storing Shard(1) no longer need to be aware of any future changes to the state of Item(α) (unless it is sent again to Shard(1)). The responsibility for Item(α)’s state has transferred to any nodes storing Shard(3). If Bob should then spend Item(α) to an owner in another shard, the responsibility for maintaining the state of Item(α) will once again change.
Processing only events that affect state within a node’s subset of the global ledger, and the shifting responsibility of state maintenance, greatly reduces total state processing overhead. This is key to the scaling performance of Tempo.
The foundation of Tempo consensus is based around Logical Clocks which are a simple means of providing a relative, partial ordering of events within a distributed system.
Within Tempo, all nodes have a local logical clock; an ever-increasing integer value representing the number of events witnessed by that node. Nodes increment their local logical clock when witnessing an event which has not been seen previously. Upon storing an event the node also stores its current logical clock value with it. This record can then be used to help validate the temporal order of past events if required.
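The logical clock behaviour described above can be captured in a few lines: the clock only advances on Atoms the node has not previously witnessed, and each stored event records the clock value at which it was first seen. This is a minimal sketch; the `Node` class and its fields are assumptions for illustration.

```python
class Node:
    """A node with a local logical clock over witnessed events."""

    def __init__(self) -> None:
        self.logical_clock = 0
        self.ledger: dict[str, int] = {}  # event hash -> clock value when witnessed

    def witness(self, event_hash: str) -> int:
        # Only the receipt of an Atom not previously witnessed counts
        # as an "event"; re-receipt does not advance the clock.
        if event_hash not in self.ledger:
            self.logical_clock += 1
            self.ledger[event_hash] = self.logical_clock
        return self.ledger[event_hash]
```

The stored (event, clock value) pairs are what later allow a node to attest to the relative, partial order in which it saw past events.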
Only the receipt of an Atom that has not been previously witnessed by that node may be classed as an “event” for any given node within Tempo.
Temporal proof provisioning
A Universe is split into Shards, where nodes are not required to store a complete copy of the global ledger or state. However, without a suitable consensus algorithm that allows nodes to verify state changes across the shards they maintain, “double spending” would be a trivial exercise, where a dishonest actor could spend the same item on two different shards.
Temporal Proofs provide a cheap, tamper-resistant solution to the above problem. Before an event can be presented to the entire network for global acceptance, an initial validation of the event is performed by a subset of nodes which, if successful, results in a Temporal Proof being constructed and associated with the Atom, and a network-wide broadcast of the Atom and its Temporal Proof.
Using Alice’s transfer of Item(α) to Bob as an example, the process starts with Alice selecting a node she is connected to, Node(N), and submitting Atom(αX) requesting that a Temporal Proof of a specific length be created.
Upon receiving the request, Node(N) will, if it is storing either Alice’s or Bob’s shard, perform a validation of Atom(αX). If it has a copy of Shard(1) for Alice, it will ensure that Item(α) hasn’t already been spent by Alice. If any provable discrepancy is found, such as Item(α) already having been spent by Alice, or the Atom being badly constructed, processing of the Atom will fail. Otherwise, Node(N) will determine a set of directly connected nodes which are storing either Shard(1∥3), select one at random, and forward it the submission request. If a suitable node is not found, Node(N) will search through its node graph and associated metadata to discover viable relays with connections to nodes maintaining Shard(1∥3). Once Node(N) discovers a suitable candidate, Node(P), it will append a space-time coordinate (l,e,o,n) and a signature of Hash(l,e,o,n) to the Temporal Proof (creating a new one if none is yet present), where l is Node(N)’s logical clock value for the event, e is the event Hash(Atom), o is the ID of the observer Node(N), and n is the ID of Node(P). Node(N) will then transmit Atom(αX) and the current Temporal Proof to Node(P).
Upon receiving the submission from Node(N), Node(P) will also validate Atom(αX), and if successful, will select a subsequent node to forward the submission to, append its (l,e,o,n) coordinate and signature to the Temporal Proof and transmit Atom(αX) and the Proof to the next node. The process repeats until the required number of nodes have participated in the Temporal Proof or a provable discrepancy is discovered by any node involved in the process.
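The coordinate-appending step in the two paragraphs above can be sketched as follows. This is a hedged illustration: real nodes would sign Hash(l,e,o,n) with their private key, so a plain hash stands in for the signature here, and the proof is just a list of coordinate dicts.

```python
import hashlib

def coordinate(l: int, e: str, o: str, n: str) -> dict:
    """Build a space-time coordinate (l, e, o, n) with a stand-in signature.
    l = observer's logical clock value, e = Hash(Atom),
    o = observer node ID, n = next node ID."""
    payload = f"{l}:{e}:{o}:{n}".encode()
    return {"l": l, "e": e, "o": o, "n": n,
            "sig": hashlib.sha256(payload).hexdigest()}  # placeholder for a real signature

def extend_proof(proof: list, l: int, e: str, o: str, n: str) -> list:
    """Each participating node appends its coordinate in turn."""
    return proof + [coordinate(l, e, o, n)]
```

Chaining coordinates this way means each node commits to both its own clock value and the identity of the node it forwards to, which is what makes the proof tamper-resistant once signed.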
The length of a Temporal Proof defines how many nodes take part in the provisioning process. A length that is too short reduces the efficiency of resolving conflicts between Atoms should they arise, and may result in an Atom not being correctly verified, requiring it to undergo temporal order determination at each node. Lengths that are very long unnecessarily increase the bandwidth load within the network, as well as the time taken for an Atom to become final.
Once the Temporal Proof length has been determined, if the Atom being transmitted has any dependencies or Consumables, the network can also optimise node selection to improve the future speed of verifying that transfer. This is because an auditable causal history can easily be created if a node that was involved in validating a previous transaction, upon which this transaction relies, is included in the new temporal proof.
In simple terms, if Alice sends Item(α) to Bob, and Bob then sends Item(α) to Carol, it is highly beneficial for network efficiency if one of the nodes that were involved in creating the Temporal Proof for the Alice → Bob transfer is also part of the Temporal Proof for the Bob → Carol transfer.
Achieving Temporal Proof causal history is relatively simple: if, when taking part in Temporal Provisioning, any candidate nodes available to Node(N) are also part of the Temporal Proof of any dependencies of Atom(αn), Node(N) will select at random one of those as a priority if not already part of the Temporal Provisioning for Atom(αX).
To increase the likelihood of creating a Temporal Proof with these properties, the length is again an important factor. For most purposes, log(n)∗3 or Max(3, sqrt(n)) should be sufficient, where n is an estimated count of the nodes present in the network at that time.
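The two length heuristics quoted above are straightforward to compute from an estimate of current network size; rounding to an integer node count is my assumption, since the text gives only the formulas.

```python
import math

def proof_length_log(n: int) -> int:
    """Temporal Proof length heuristic: log(n) * 3."""
    return max(1, round(math.log(n) * 3))

def proof_length_sqrt(n: int) -> int:
    """Temporal Proof length heuristic: Max(3, sqrt(n))."""
    return max(3, round(math.sqrt(n)))
```

Both grow slowly with network size, which matches the goal stated above: long enough to overlap with prior proofs, short enough to keep bandwidth and finality time down.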
To assist with total order determination of events, nodes declare to the network a periodic commitment of all events they have seen.
This commitment is produced either when a node takes part in Temporal Provisioning for an event, or at will over an arbitrary interval. A commitment is a Merkle Hash constructed from the events a node has witnessed since submitting a previous commitment, with the first leaf being the last commitment a node submitted, producing a linked sequence of commitments over time.
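The chained commitment scheme described above can be sketched as a Merkle root over the events witnessed since the last commitment, with the previous commitment as the first leaf. The odd-leaf handling (duplicating the last leaf) is a simplifying assumption, not specified by the source.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a list of leaves into a single Merkle root."""
    level = leaves
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last leaf if odd (assumption)
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def next_commitment(prev_commitment: bytes, event_hashes: list[bytes]) -> bytes:
    """New commitment: Merkle root with the previous commitment as first leaf."""
    return merkle_root([prev_commitment] + event_hashes)
```

Because each commitment embeds the previous one as a leaf, the sequence forms a tamper-evident chain: altering any historical commitment changes every later one.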
If the node is taking part in a Temporal Provisioning process, the commitment is included in a node’s Temporal Coordinate as c, resulting in the extended space-time coordinate (l,e,o,n,c). The commitment is tamperproof as the coordinates are signed by the producing nodes.
A node may be requested to provide information to enable verification of any commitments it has produced at any time. They should deliver all the relevant Atom hashes to the requesting node, allowing it to reconstruct the commitment hash and verify. Requesting nodes can then take appropriate action in the event of a fraudulent commitment being detected.
This uncertainty of when a commitment verification may be requested also prevents nodes from tampering with their logical clock values, as all commitments have a logical clock value associated with them and so tampering is easily detectable.
Radix has an educational video on this subject.
Anyone may run a Radix Node on the public network; these Nodes are responsible for validating events, relaying messages and executing scripts on the network.
Collectively these services are referred to as “Work” — the amount of Work that a Node can carry out for the network is directly proportional to the general computing resources of that Node.
For a public network to operate effectively, it is this Work that must be rewarded.
On Radix, all Work is packaged into objects called Atoms. Work is simply a matter of executing all Atoms submitted to the Universe, subject to the Atom being valid and having a sufficient fee to cover the execution cost.
A public Radix network (Universe) is segmented into a very large shard space (currently 18.4 quintillion shards). The start and end point for any Atom in the Radix Universe is an address, which is formed of a public key and a Universe checksum. The shard number of an address is deterministically calculated by taking a modulo of the public key over the total shard space to derive the shard index. This makes it trivial for anyone to correctly calculate the shard a public key lives on.
Due to the size of the shard space, the probability of two randomly generated addresses living on the same shard is very low. This means the majority of conventional transactions will be touching two (or more) shards.
At the start, all Nodes that join the Radix network will be able to maintain all shards simultaneously, as most will be empty and the resource cost of holding an empty shard is essentially zero. As the network grows, each Node will be unable to maintain all shards and will need to prune shards until the resource requirements match its own available resources.
Each Node must calculate which shards they wish to maintain and which they wish to drop. A good strategy for this is to select the shard set in which you have the highest aggregate probability of being selected to help create a Temporal Proof. This is because the number of Temporal Proofs that a Node helps to create directly affects the rewards — share of the fees and new supply emission — that a Node receives, meaning that every Node wants to be included in as many Temporal Proofs as possible.
For any given Temporal Proof, only those Nodes maintaining at least one of the shards that the Atom touches may be in the selection pool. For example, Bob on Shard(1) sends a token to Alice on Shard(2), only those Nodes maintaining Shard(1) and/or Shard(2) may be chosen to create the Temporal Proof.
Since the path length of a Temporal Proof is logarithmic in the number of available Nodes maintaining the required shards, Nodes will naturally select shards with the lowest ratio of active Nodes to activity on those shards.
This behaviour creates an overlapping mosaic of Nodes maintaining different configurations of shards, with each node incentivised to seek out active but poorly maintained shards to maximise reward for useful work done.
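The shard-selection incentive described above amounts to ranking shards by their nodes-to-activity ratio. A minimal sketch, assuming the per-shard metrics are already available (they are inputs here, not a Radix API):

```python
def best_shards(shard_stats: dict[int, tuple[int, int]], k: int) -> list[int]:
    """Pick the k most rewarding shards to maintain.

    shard_stats maps shard id -> (active_nodes, recent_activity).
    Shards with few nodes relative to their activity offer the highest
    probability of being selected for Temporal Proofs, so we sort by
    that ratio ascending.
    """
    ratio = lambda s: shard_stats[s][0] / max(1, shard_stats[s][1])
    return sorted(shard_stats, key=ratio)[:k]
```

As nodes independently chase under-served, active shards, coverage spreads out, which is the overlapping mosaic the text describes.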
Radix’s team consists of a group of serial entrepreneurs, digital nomads, and experienced software developers.
Dan Hughes — Chief Technology Officer.
Prior to discovering Bitcoin in 2011, Dan helped develop the software required to securely deploy NFC based payments in mobile phones. He has previously built, run and exited 3 successful software startups. Dan has spent the last 6 years building, testing and refining his own DLT protocols, creating Radix in the process.
Piers Ridyard — Chief Executive Officer.
A serial entrepreneur, Piers started in Blockchain by experimenting with creating insurance smart contracts that could operate without the need for a carrier in early 2015. Before taking the helm at Radix, he co-founded Surematics, a YCombinator S’17 company, helping to create the world’s first decentralised dataroom.
Robert Olsen — Chief Operating Officer.
A serial entrepreneur, true digital nomad and super networker, Rob has been a crypto investor and blockchain evangelist since 2012. Drawing on his considerable operations experience, Rob continues to hone the Radix operations, marketing, PR, community communications, exhibitions and social media presence.
Stephen Thornton — Chief Scientist.
An expert in physics, cryptography and software development, Steve set up the first transatlantic private encrypted internet, helped develop SSLeay, ported OpenSSL to run on mobile platforms using asynchronous sockets, and wrote the firmware for encrypted mesh routers for the MoD. He now develops and validates the security, logic and resilience of the Radix network and algorithms.
Shira Abel — Acting CMO.
A seasoned marketer who founded Hunter & Bard and has been an Acting CMO to several high growth startups, Shira uses her extensive experience and skillset to help build successful companies. As Acting CMO of Radix Shira reviews and consults on everything marketing: messaging, growth strategy, social media, PR, dev relations, events and more.
Zalán Blénessy — Developer Operations.
Previously, Zalan helped companies like ST Ericsson create efficient mobile operating systems before contracting for Apple, solving hyper-scale deployment problems. Zalan mined his first Ethers in 2015 and was immediately hooked. He now ensures the Radix developers are as effective as possible, allowing them to focus on taking Radix to the next level.
Marc Rubio — Developer.
An Android, iOS and Web developer with a masters in electronic engineering, Marc uses his diverse skillset to help inform the design and implementation of the first Radix powered mobile apps. Marc has been involved in the community and all things decentralised since 2013, getting first hooked by Bitcoin, and later discovering Radix.
Joshua Primero — Developer.
A software generalist and Java ninja, Josh has been tinkering across the software stack for the past 10 years. From writing GPU drivers for NVIDIA to developing full stack apps for numerous startups, he digs it all. He now brings his engineering and execution expertise to Radix, helping to move the codebase from test to production.
Edgars Nemse — Developer.
Edgars quit studying AI at the University of Edinburgh to co-found Edurio, an ed-tech startup, attracting over $2m in funding, helping to build better tools for teachers and pupils. After following the crypto scene since the early days of Bitcoin, he’s now at Radix looking to apply his wizardry across the full stack.
Mauricio Urraco — Developer.
A full stack developer who likes to take a pragmatic approach to engineering: prototyping, launching early and experimenting. With a wide range of technical experience, including working as a Research Engineer at INRIA, he decided to join Radix and dive-deep into the world of distributed ledger technologies.
Florian Cäsar — Developer.
A passionate software architect across the board, Florian has engineered diverse software projects ranging from indie-puzzle games to an award-winning machine learning framework. After completing his military service as an OSINT researcher, he now designs and implements the Scrypto platform and language while assisting with making other aspects of Radix production-ready.
Angad Mutha — Community Manager.
Prior to Radix, Angad helped scale enterprise startups in San Francisco. A web dev by profession and growth marketer by choice, he straddles both sides of the fence. He got obsessed with decentralized protocols after receiving his first bitcoin in 2011. At Radix, he is responsible for community management and digital marketing.
David Osuhon — Head Of Special Projects.
David was previously Chief of Staff for a fast growing UK startup and comes to Radix as Head of Special Projects. Having worked for companies such as Tangle Teezer and Bank of America, David now brings his Leadership and project management skills to Radix.
Radix does not have any advisors listed on their website.
People from GitHub:
- Angad Mutha (angadmutha) — Repositories — 2. Stars — 2.
- Marc Rubio (MarcRubio) — Repositories — 4. Stars — 3.
- Edgars Nemše (MuncleUscles) — Repositories — 13. Stars — 14.
- Mauricio Urraco (murraco) — Repositories — 13. Stars — 4.
- Joshua Primero (talekhinezh) — Repositories — 3. Stars — 4.
There is no information on partnerships since the platform is still in development. This was confirmed by the Telegram Chat admin.
All Radix use cases are centered around its scalability and its capability for integrating with existing merchant Point-of-Sale (POS) solutions:
Stable Value Tokens — to protect consumers and merchants from wild price swings
Decentralized debit cards with DLT payment rails that are compatible with existing merchant point-of-sale systems
Decentralised exchange for trustless trading of digital assets
Secure, peer-to-peer instant messaging and email communication clients
Goods and services marketplace
Appstore for decentralized applications built on the Radix Public Network
Markets and volume
There is no pre-sale. Radix tokens will be available to purchase on the Radix Decentralised Exchange when the network goes live in Q1 2019.
Other platforms for consumer Dapps: ETH, EOS, Cardano, Quantum, Lisk, RChain, GXChain, Nuls, Orbs, OST.
Radix will be launched as a public network in Q1 of 2019, at which point people will be able to either purchase or earn Radix tokens.
Radix is also working to enable:
- Mass market low volatility tokens
- DLT card payment rails
- P2P instant messaging
- Decentralised exchange mechanisms
2018 — Q4
Target: 1,000,000 Transactions Per Second
3rd Party Token Creation API Live on Alpha Test Net
Multi-sig and timelocked transactions live on Alpha Test Net
Scheduled: Decentralised Exchange White Paper
Scheduled: Economics White Paper
2019 — Q1
Launch: Radix Main Net — Radix Tokens Only
Launch: Radix Wallets and Messaging on Main Net
Radix Token Distribution Starts
3rd Party Tokens Enabled on Main Net
Restricted Scrypto Released on Beta Test Net
2019 — Q2
Radix Naming Service Enabled on Beta Test Net
Restricted Scrypto Released on Main Net
2019 — Q3
Turing Complete Scrypto Released on Alpha Test Net
Radix Naming Service Enabled on Main Net
Temporal Proofs double as a public record of Work done by each Node on the network, allowing a fast, auditable way of determining who has done what Work and in what proportion.
The end user cost of processing an Atom is proportional to the complexity of its execution. Similar to the Ethereum Gas price, a per byte execution cost is charged. Initially a minimum fee will be set by the Radix team, but subsequently this cost will be set according to network consensus.
Public network incentives are split into two main components: execution fees and new emissions.
An execution fee is earned by a Node when it participates in creating a valid Temporal Proof. In a Temporal Proof of path length n, the Node reward is calculated as: Atom Execution Fee / n.
That is, a Node in a Temporal Proof with a path length of 10 will get 10% of the total execution fee due for processing that Atom. This fee is available to spend almost immediately.
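The fee split above is a simple even division across the proof's path length:

```python
def node_fee_share(atom_execution_fee: float, path_length: int) -> float:
    """Each of the n nodes in a Temporal Proof of path length n earns
    an equal 1/n share of the Atom's execution fee."""
    return atom_execution_fee / path_length
```

So with a path length of 10, each participating node earns 10% of the total execution fee, as the text states.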
The number of Temporal Proofs a Node is included in also helps to determine the amount of new emissions that Node will receive. New emissions are set by the Radix token economics and can be either fixed or variable. This will be covered in detail in the upcoming Radix Economics White Paper.
Initially the main reward for running a Node will be emission of new supply. As the network grows, and the number of transactions/operations conducted on the network increase, fees will make up an increasingly large proportion of the rewards and incentives.
When a fee is paid on the public network the fee portion of the Atom gets withheld once the Temporal Proof has been created. It is not collected by anyone at that stage, it simply disappears.
The processing fee payment for an Atom is similar to the Bitcoin UTXO model: Alice’s wallet address contains 10 tokens and she wants to send 9.25 to Bob. Alice will specify 9.25 to Bob, and the remainder, less the fee, back to herself. This creates:
- 9.25 tokens for Bob
- 0.25 tokens back to Alice
- 0.5 tokens as a fee
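The UTXO-style construction above can be made concrete: the fee is never an explicit output, it is simply the remainder of inputs minus outputs. The function name and shape are illustrative assumptions.

```python
def build_transfer(balance: float, amount: float, fee: float) -> dict:
    """Spend `balance`, sending `amount` to the recipient and change back
    to the sender; the fee is whatever is left unclaimed."""
    change = balance - amount - fee
    assert change >= 0, "insufficient balance"
    outputs = {"recipient": amount, "change": change}
    # The fee is implicit: inputs minus the sum of explicit outputs.
    implicit_fee = balance - sum(outputs.values())
    return {"outputs": outputs, "fee": implicit_fee}
```

Running this with Alice's numbers (balance 10, amount 9.25, fee 0.5) reproduces the three-way split listed above.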
The fee is not “paid”; it simply forms part of the unclaimed transaction total. A Node may claim its portion of the fee by spending what is owed to it. It does this by creating a Transaction Atom that includes a Consumer which references the Atom containing the fees owed.
In Figure 8 above, Bob has also earned a 0.25 fee. This is because he was also a validator node for the Temporal Proof for the transaction from Alice to him.
Bob now pays 9 tokens to Carol, and also needs to pay a 0.5 token transaction fee. He pays 9.25 tokens from his own wallet and the other 0.25 from the fee he earned for validating the Temporal Proof of the Alice to Bob transaction.
It is also important to note that the spending of fees owed has a causal relationship to the Atom that created the fee. As a result the Atom spending the fee portion cannot reach finality before the original Atom. For example, if Alice’s spend to Bob ends up being invalid, Bob’s transaction to Carol will also fail.
Radix is capable of supporting various types of crypto economic token supply models. These include fixed, linear, pegged and inflationary. Radix Stable Value tokens follow a dynamic inflationary supply, so there is no hard cap.
Radix tokens will have an initial short pegged period ($1 = 1 RDX), after which the token will be allowed to float freely. As demand for the currency increases, the total circulation of the Radix low-volatility coin will also be increased.
If demand falls, the system also has mechanisms in place to burn tokens in circulation. If these mechanisms fail, the currency will decrease in value, in real terms, against other currencies (such as the dollar), until demand recovers.
- Team: compared with other crypto projects, the team doesn’t stand out in any particular aspect
- Idea: addresses one of the best-known problems of blockchain
- Development stage: testnet
- Roadmap: I would be sceptical about any kind of roadmap from a project with such a long development history
But this project has some problems:
- It has been in development for too long. This may be seen as proof of the developers trying to create something perfect, but that is only speculation. The fact is that the release date keeps being pushed back.
- There is a lot of FUD related to Dan Hughes and eMunie (Radix before its rebrand): a security problem, repeatedly missed deadlines, and bans from Black Hat World and PayPal. This may only be FUD, but there is so much of it, and it is so easy to find, that it reaches people who could be potential investors.
- No partnerships, no marketing. It is good that this project tries to achieve recognition through its product, but the network effect is very important; they need to attract users if they want to be successful.