Dispatch Architecture in a Nutshell

Two networks designed as one, built for the value of data

Zane Witherspoon
Dispatch
8 min read · Dec 6, 2018


Our Mainnet is live!

We went live with the Dispatch Mainnet at 4:06 pm PST on December 3rd, which raises the question: what does the Dispatch Protocol do?

We recently published our newest Technical Whitepaper with lots of nitty-gritty details about the Dispatch Protocol. But for those of us experiencing whitepaper-induced heartburn, this article breaks down the core components of our network.

Two networks tied together by the DVM

The 10,000 Foot View

Ultimately, the Dispatch Protocol is built with three main components:

  1. The Dispatch Ledger: Similar to the Bitcoin and Ethereum public ledgers you know and love, but with a fancy new consensus algorithm, Delegated Asynchronous Proof of Stake (DAPoS).
  2. The Dispatch Artifact Network (DAN): A distributed network of data farmers who can hold onto that big, bulky data that won’t fit in the ledger.
  3. The Dispatch Virtual Machine (DVM): The smart-contract engine that glues together these two distributed networks.

The OG Ledger

The Dispatch Ledger

The idea of a distributed ledger is fundamental to what makes blockchains so powerful. Much like a traditional paper ledger, the Dispatch Ledger keeps track of everyone's balances, but it also tracks things like smart contracts' state, Artifact custody (see the DAN below), and pretty much anything else you'd want written into the public record for the rest of time.

So what makes the Dispatch Ledger different from all of the other ledgers out there? Well, for starters, we're not technically a blockchain, because our DAPoS consensus algorithm runs on a block-less architecture.

The Consensus: Delegated Asynchronous Proof-of-Stake (DAPoS)

How does DAPoS work? It all starts with the Stakeholders. Stakeholders are accounts that hold Dispatch's native token, the Divvy. They can use their balance to elect Delegates in charge of validating and accepting transactions into the ledger. Delegates gossip amongst themselves about the validity of transactions on a per-transaction basis, instead of batching them into blocks. Delegates are elected from a pool of volunteer Bookkeepers. The Bookkeepers are responsible for executing the transactions accepted into the ledger, applying the results to the state of the network (referred to as the world state), and reporting the network state to end users.

Let’s look at an example transaction:

  1. An end user creates a transaction, cryptographically signs it with their private key, and sends it off to a Delegate to be validated
  2. A Delegate receives that transaction and gives it a simple validity check (Does the signature match the data? Is the transaction too old to be added to the ledger? Does the sender have the required balance?)
  3. Assuming the transaction looks good, the Delegate will sign the transaction as well and send it off to their peers
  4. Once a transaction has received two-thirds of the Delegate signatures, it is considered ‘accepted’ into the ledger
  5. Accepted transactions are then executed by all the Bookkeepers (including the Delegates), and the state of the network is updated (a rough sketch of this flow follows)
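
To make the acceptance rule concrete, here is a minimal Python sketch of the per-transaction flow described above. The class names, the balance-only validity check, and the two-thirds threshold helper are illustrative assumptions, not Dispatch's actual implementation.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Transaction:
    sender: str
    recipient: str
    amount: int
    signatures: set = field(default_factory=set)  # IDs of Delegates that have signed

class Delegate:
    def __init__(self, delegate_id, balances):
        self.id = delegate_id
        self.balances = balances  # this Delegate's view of the world state

    def validate(self, tx):
        # Simplified check: the real protocol also verifies the cryptographic
        # signature and rejects transactions that are too old.
        return self.balances.get(tx.sender, 0) >= tx.amount

    def sign_and_gossip(self, tx, peers):
        if self.validate(tx):
            tx.signatures.add(self.id)  # endorse the transaction
            for peer in peers:          # gossip it to the other Delegates
                if peer.id not in tx.signatures and peer.validate(tx):
                    tx.signatures.add(peer.id)

def is_accepted(tx, num_delegates):
    # Accepted once two-thirds of the Delegates have signed it.
    return len(tx.signatures) >= math.ceil(2 * num_delegates / 3)

balances = {"alice": 10}
delegates = [Delegate(i, balances) for i in range(9)]
tx = Transaction("alice", "bob", amount=5)
delegates[0].sign_and_gossip(tx, delegates[1:])
print(is_accepted(tx, len(delegates)))  # True once at least 6 of 9 have signed
```
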
Fees is just a 4 letter word

Eliminating Transaction Fees

One of the things people really fall in love with about Dispatch is that there are no transaction fees in our system. To figure out how we got rid of them, let's think for a second about why they exist in the first place.

Transaction fees serve two main purposes:

  1. To compensate validators for putting transactions in the ledger
  2. To prevent spam on the network

We solve the former by paying the Delegates a time-based salary minted right out of the protocol, instead of what's essentially a transaction-based commission (which we believe also keeps them more honest).
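
As a back-of-the-napkin sketch (the rate and names below are made up for illustration, not protocol constants), time-based Delegate pay looks something like this:

```python
SALARY_PER_HOUR = 10  # Divvy minted to each Delegate per hour of service (illustrative)

def mint_delegate_salaries(delegates, hours_elapsed, balances):
    # Delegates are paid for time served rather than per transaction processed,
    # so they gain nothing by favoring or stuffing transactions.
    for delegate in delegates:
        balances[delegate] = balances.get(delegate, 0) + SALARY_PER_HOUR * hours_elapsed
    return balances

print(mint_delegate_salaries(["delegate-a", "delegate-b"], hours_elapsed=24, balances={}))
```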

Preventing spam on the network is a little trickier. Our solution is stake-based rate limiting. Much like traditional tech business models, most web services either charge you up front or limit how much you can use the service for free. We treat your percentage ownership of the network's native token (the Divvy) as your ownership of the network's bandwidth (measured in hertz, our analog to Ethereum's gas). That means if you own 1 Divvy, you might be entitled to send around 1 transaction per day.

Since the goal is to flatten out network spikes, we want to disincentivize sending transactions when network traffic is already really high. Instead of cranking up the prices like BTC, ETH, and EOS, we can crank up the time until you get your tokens back. So you can know that your 1 Divvy is always going to be worth that 1 transaction, even if you might have to wait a little longer before you can send another.
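
Here's a rough sketch of how stake-based rate limiting with a load-dependent recovery time could work. All of the numbers and the recovery formula are illustrative assumptions, not the protocol's actual hertz accounting.

```python
from dataclasses import dataclass

TOTAL_SUPPLY = 1_000_000          # total Divvy in existence (illustrative)
NETWORK_TX_PER_DAY = 1_000_000    # network-wide daily bandwidth budget (illustrative)

@dataclass
class Account:
    divvy: int
    hertz_available: float = 0.0

    def refill(self):
        # Your share of the token supply equals your share of daily bandwidth.
        self.hertz_available = self.divvy / TOTAL_SUPPLY * NETWORK_TX_PER_DAY

def recovery_delay(base_delay_hours, network_load):
    # Instead of raising the price per transaction, raise the time it takes
    # for spent bandwidth to come back when the network is congested.
    # network_load is utilization in [0, 1).
    return base_delay_hours / (1.0 - network_load)

acct = Account(divvy=1)
acct.refill()
print(acct.hertz_available)                  # ~1 transaction per day for 1 Divvy
print(recovery_delay(24, network_load=0.5))  # 48 hours to recover at 50% load
```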

Because your Dapp is decentralized and its data should be too

The DAN (Dispatch Artifact Network)

The DAN is, at its heart, a collection of algorithms that support a network of decentralized data. The DAN is specifically designed to work with the Dispatch Ledger and the DVM, so integrating distributed data into your Dapp is as easy as writing a single smart contract.

Artifacts are the distributed data objects stored in the DAN. Artifacts can be either structured (like SQL tables) or BLOBs (Binary Large OBjects). A structured Artifact has columns and rows of data, while a BLOB is something more like a movie, a .pdf, a VR asset, a side-chain, or some other big, bulky file. Artifacts can be encrypted and sharded for security as the Uploader sees fit.
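
As a mental model, an Artifact's metadata might look something like the following. The field names here are assumptions for illustration, not the DAN's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class ArtifactKind(Enum):
    STRUCTURED = "structured"  # rows and columns, like a SQL table
    BLOB = "blob"              # a movie, a .pdf, a VR asset, a side-chain, ...

@dataclass
class Artifact:
    owner: str                 # ledger address of the Uploader
    merkle_root: str           # hash recorded on the Dispatch Ledger for custody
    size_bytes: int
    kind: ArtifactKind
    encrypted: bool = False    # the Uploader decides whether to encrypt
    shard_hashes: List[str] = field(default_factory=list)  # optional sharding

movie = Artifact("0xabc...", "f3b2...", 734_003_200, ArtifactKind.BLOB, encrypted=True)
```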

Most of the algorithms in the DAN provide some level of security or functionality to the Artifacts in the DAN. Some of these algorithms include:

  • Storage Orderbook — Used to match Farmers providing data storage with Uploaders in need of decentralized storage
  • Kademlia DHT — Used to locate where in the network an Artifact can be found (a minimal sketch follows this list)
  • Proof-of-Replication (PoRep) — Used to prevent one Farmer from pretending they're multiple Farmers and claiming the rewards multiple times
  • Proof-of-Retrievability (PoRet) — Used to ensure Artifacts are still online and available
  • Multi-Party Make-it-Happen (MiH) — Protocol used to transfer custody of an Artifact between actors in the DAN
  • Update Deltas (∆) — Defines the difference between an Artifact (A) and its updated version (A’)
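
To give a flavor of one of these pieces, here's a minimal sketch of the XOR distance metric Kademlia uses to decide which Farmers are "closest" to an Artifact's ID. This is the textbook Kademlia idea, not Dispatch-specific code.

```python
import hashlib

def node_id(name):
    # Derive a 160-bit ID for a Farmer or an Artifact (standard Kademlia sizing).
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def xor_distance(a, b):
    # Kademlia measures "closeness" as the XOR of two IDs.
    return a ^ b

def closest_farmers(artifact_key, farmers, k=3):
    # The k Farmers whose IDs are XOR-closest to the Artifact's ID are the
    # ones expected to hold it (or know who does).
    target = node_id(artifact_key)
    return sorted(farmers, key=lambda f: xor_distance(node_id(f), target))[:k]

farmers = [f"farmer-{i}" for i in range(20)]
print(closest_farmers("my-artifact.mp4", farmers))
```
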
For consumers and businesses alike

Zero-Knowledge Analytics

While we were building out this network of decentralized data, we started thinking about the value of data. Companies like Google, Amazon, and Facebook are demonstrating that most of the value of data isn't in the application layer, but in the analytics layer.

So, how can we allow people to access the analytic value of that data?

*drumroll*

Introducing Zero-Knowledge Analytics: queries on data stored in the DAN that return answers which are provably correct, without revealing the underlying data itself. This is an amazing stride in the consumer push for data sovereignty, but it's also an amazingly powerful tool for businesses that don't want the risk associated with holding regulated data. Thanks to policies like HIPAA, GDPR, and the new California data privacy law, data is becoming toxic; Facebook is facing a $1.6 billion fine from the EU over a breach of consumer data. In a world where we're all holding our own data, this tool has the potential to revolutionize data-centric business models.

(*Warning: technical jargon incoming*) ZKA works using a combination of Homomorphic Encryption, Secure Multi-Party Computation (SMPC), and zk-SNARKs. The data is homomorphically encrypted and given to the querier so they can calculate their own encrypted answer. In the case that the data is held by multiple parties, all the participants use SMPC to determine the unencrypted answer and give it to the querier. A SNARK is then formed to prove that there exists some decryption key (d) that can decrypt the querier's encrypted answer into the SMPC unencrypted answer.
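
To make that flow less abstract, here is a deliberately insecure toy in Python that mirrors its shape: a one-time-pad-style additively homomorphic cipher stands in for real homomorphic encryption, a plain sum stands in for SMPC, and an assert stands in for the zk-SNARK. It illustrates only the data flow, not Dispatch's actual cryptography.

```python
import secrets

N = 2**61 - 1  # modulus for the toy scheme (illustrative only)

def encrypt(value, key):
    # Toy additively homomorphic encryption: E(m) = m + k (mod N).
    # Not secure for real use; it only shows the shape of the protocol.
    return (value + key) % N

def decrypt(cipher, key):
    return (cipher - key) % N

# Three data holders each have a private value and a private key.
values = [42, 17, 99]
keys = [secrets.randbelow(N) for _ in values]

# 1. Each holder hands the querier only an encrypted value.
ciphertexts = [encrypt(v, k) for v, k in zip(values, keys)]

# 2. The querier computes its own *encrypted* answer (here, the sum).
encrypted_answer = sum(ciphertexts) % N

# 3. The holders jointly compute the plaintext answer; a real system would use
#    SMPC so that no holder learns any other holder's value.
plaintext_answer = sum(values)

# 4. The combined key plays the role of the decryption key "d"; a real system
#    would produce a zk-SNARK proving such a d exists without revealing it.
d = sum(keys) % N
assert decrypt(encrypted_answer, d) == plaintext_answer % N
print(plaintext_answer)  # 158, provably consistent with the encrypted answer
```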

The DVM: at least 100,000,000 times faster than the DMV

The DVM (Dispatch Virtual Machine)

The DVM is the Smart-Contract engine that ties together the Dispatch Ledger and the DAN. The DVM is backwards-compatible with the Ethereum Virtual Machine (EVM) by design, which means that 99% of all Solidity/EVM Smart-Contracts should be functional on Dispatch. Part of this design decision is selfish: I organized the SF Ethereum Meetup for pretty much all of 2017 and watched so many of my friends spend weeks learning Solidity that I wanted to enable them to do more with the skills they've learned. It also helps that the substantial majority of Dapps are developed to target the EVM.

We made most of our additions to the EVM by introducing a new set of 0xd0-range OpCodes, most of which expose DAN functionality. A few of these new extensions are listed below, followed by a sketch of how an interpreter might dispatch them:

  • 0xd0 (ARTIFACT) — Returns the Merkle hash of the account's Artifact
  • 0xd1 (ARTIFACTSIZE) — Returns the size in bytes of the account's Artifact
  • 0xd2 (ARTIFACTSTRUCTURE) — Returns 0 for BLOB (Binary Large Object) Artifacts and 1 for structured Artifacts
  • 0xd3 (ARTIFACTENCRYPT) — Defined at time of account initialization. Returns 0 for unencrypted Artifacts and 1 for encrypted Artifacts
  • 0xd4 (READARTIFACT) — Formally declares the address of a new Downloader on the ledger
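
The opcode values and names above come from this article; the tiny dispatcher below is only a sketch of how an EVM-style interpreter could route them to DAN lookups. The `AccountState` fields and handler signatures are assumptions.

```python
from dataclasses import dataclass

@dataclass
class AccountState:
    artifact_merkle_hash: str
    artifact_size: int
    structured: bool   # True for structured Artifacts, False for BLOBs
    encrypted: bool    # fixed at account initialization

# Hypothetical handlers for the 0xd0-range extensions described above
# (0xd4 READARTIFACT is omitted because it writes to the ledger rather
# than pushing a value onto the stack).
DAN_OPCODES = {
    0xD0: lambda acct, stack: stack.append(acct.artifact_merkle_hash),    # ARTIFACT
    0xD1: lambda acct, stack: stack.append(acct.artifact_size),           # ARTIFACTSIZE
    0xD2: lambda acct, stack: stack.append(1 if acct.structured else 0),  # ARTIFACTSTRUCTURE
    0xD3: lambda acct, stack: stack.append(1 if acct.encrypted else 0),   # ARTIFACTENCRYPT
}

def execute(opcode, acct, stack):
    # An EVM-compatible interpreter would fall through to the standard
    # opcode table for anything outside the 0xd0 range.
    handler = DAN_OPCODES.get(opcode)
    if handler is None:
        raise NotImplementedError(f"opcode {opcode:#x} is handled by the base EVM")
    handler(acct, stack)

stack = []
acct = AccountState("0xabc123...", 1_048_576, structured=False, encrypted=True)
execute(0xD1, acct, stack)
print(stack)  # [1048576]
```
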
Join the network!

Want to be a part of it?

I think what’s most exciting out of all of this is that we’re way beyond a whitepaper and an idea. We have it already implemented in code. We launched our Mainnet last Monday Dec. 3rd, and we’re seeing more and more Dapp developers hop on board the Dispatch train 😻

If you’re excited about our tech please give us some 👏 or a share. Have an opinion or feedback? We want to hear it. Join the conversation on Discord

Learn more about Dispatch Labs:
