Wavelet Beta

The fastest, developer-friendly, open-source public blockchain that sustains 31,240 transactions per second. Now without Avalanche.

Kenta Iwasaki
PERL.eco
18 min read · Jun 27, 2019

--

A clip of 240 nodes slowly building up to 31,240 TPS on DigitalOcean ❤️.

A tl;dr for those who need it.

We have constructed an entirely new consensus protocol for Wavelet: a public blockchain made for writing/deploying robust, decentralized apps.

This new protocol makes Wavelet incredibly:

  • Scalable (the fastest public blockchain processing over 31,240 payment transactions per second; finalizing transactions in a matter of 1–4 seconds across millions of nodes)
  • Practical (supports WebAssembly smart contracts, decentralized governance, and system/smart contract upgradeability)
  • Succinct (running a node requires only 512MB RAM and a healthy Internet connection; the lowest barrier to entry to running a full node for a blockchain)
  • Open (whether you’re a student or a billionaire, become a validator and reap node rewards with just your phone or laptop and the smallest amount of PERLs you have lying around as stake)
  • Secure (the only publicly available leaderless proof-of-stake blockchain on the market: no committees, no block producers, no VRFs; nil)

And of course, as we have promised: Wavelet is now open-source.

Not only that, but the release comes included with:

  1. A public testnet,
  2. A new whitepaper,
  3. A bunch of open-sourced research and code,
  4. A rigorous benchmarking setup, and
  5. Thorough documentation for smart contracts/hosting your own testnet/understanding how Wavelet works.

To learn more, here’s a pile of links:

  1. Wavelet’s source code may be accessed by clicking here.
  2. Wavelet documentation and instructions may be accessed by clicking here.
  3. Wavelet’s whitepaper may be accessed by clicking here.
  4. The testnet with a PERL faucet may be demoed by clicking here.
  5. The smart contract SDK in Rust may be accessed by clicking here.
  6. A tutorial for writing a Rust smart contract may be accessed by clicking here.
  7. A tutorial for hosting your own Wavelet network may be accessed here.
  8. The Kubernetes operator for benchmarking Wavelet may be accessed here.
  9. Wavelet’s proof-of-stake governance model may be accessed by clicking here.
  10. An in-progress implementation for efficient transaction syncing in Wavelet using Minisketch may be accessed by clicking here.

Now, on the downside, there are still a few issues/kinks that need to be worked through before Wavelet proceeds to mainnet. Fear not though, they are relatively trivial.

That pretty much sums up this tl;dr. For the inquisitive reader, let’s dig deeper into what exactly has been going on.

What took so long?

Flash back to 17th January 2019. I was in the office in a cold sweat, finalizing the WebAssembly smart contract SDK so that developers could build applications on top of our Avalanche-enabled ledger Wavelet.

I hesitated to finalize the SDK. I turned my chair and looked over at one of my best friends, who's also one of our core developers, Heyang, and quietly asked him:

If Avalanche can’t guarantee that nodes execute transactions in a consistent order, how the hell would smart contracts work?

In layman's terms: say we had a virtual bank developed as a smart contract application on top of Avalanche, and a girl Alice. Alice has $1,000, and wants to top up $500 so that she can then transfer $1,500 to some guy Bob.

Some nodes would perceive Alice erroneously first attempting to transfer $1,500 to Bob, failing, and then topping up $500. Other nodes would perceive Alice performing what she wanted in the order she desired.

This question sparked one of our now legendary in-office debates, with the other guys at work leaving as it had just hit 6PM. We were scribbling what-if scenarios, thinking of a few sensibly crazy ideas like using Snowball to order smart contract transactions across the network and some other crap like that.

You can see me and Heyang with the team winning US$15,000. Imagine how ugly the debate got 😆.

Although the debate grew tense, it unfortunately ended in disappointment: as we passed the pencil and paper back and forth, we slowly came to realize that every single attempt at a remedy we conceived made absolutely no sense.

It was becoming increasingly obvious that what we were doing was recklessly attempting to work around the very fundamental reason why Avalanche is even able to scale:

Avalanche achieves scalability by not guaranteeing a consistent ordering of transactions across a network.

And so, the debate slowly died down.

I started to pack my things up, only to then hear Heyang propose: “What if we used Snowball, the basis of why Avalanche works, and build an entirely new consensus protocol around it?”

I thought about it for a moment.

“What if we used Snowball to have nodes agree on how a batch of transactions in a DAG are ordered?”

This sparked a new flurry of back-and-forth questions and arguments that began to make just a little more sense, and it continued for the next two weeks.

Our ongoing heated exchanges also left me puzzled as to how Heyang didn't get sick of me discussing consensus protocols every chance I got.

A new family of consensus protocols

Fast-forward ahead to 4th February 2019. I opened up a private chat with Heyang, and sent him a file titled “WaveletWhitepaper.pdf”.

Those two weeks of intense debate were not in vain. What came to fruition out of the debates and discussions Heyang and I iterated on over those two weeks was a new whitepaper.

A whitepaper that describes a new family of consensus protocols that is so simple, yet so incredibly versatile.

The whitepaper in its full glory.

In spite of its simplicity, this new family of consensus protocols supported:

  1. Smart contracts,
  2. A total ordering of transactions,
  3. A transaction pruning mechanism that makes running a full node require only 512MB of RAM,
  4. A leaderless proof of stake mechanism,
  5. Decentralized governance,
  6. Decentralized validator rewards, and most importantly
  7. Scalability.

What then followed was obvious: “Let’s ditch Avalanche.”

I hovered my mouse over the Wavelet folder on my Desktop, selected it, and pressed the DEL key. Probably shouldn’t have done that.

What then proceeded was four complete rewrites of the entirety of Wavelet over the next 3 months.

The first four rewrites that were chucked out the window.

Before anyone asks: yes, I'm sick of rewriting code. Yes, it's not the best use of my time to rewrite code. And yes, I promise we are never doing another rewrite again.

It was definitely worth it though.

What came out of a total of 10 (yes 10) rewrites of the source code of Wavelet is finally a blockchain that the team and I know you can absolutely rely on.

A blockchain that no longer splits hairs and that’s backed by logical reasoning as to how it is able to achieve both safety and scalability under the constant threat of adversaries.

A Family of Wavelets

Let's get into a bit of the finer details of how the consensus protocol works, though without the mathematical formulation and statistical assumptions that make little sense when describing a consensus protocol deployed in a real network setting.

A single member of the Wavelet family has three components: (1) an overlay network protocol, (2) a gossip protocol, and (3) a query protocol.

  1. Overlay. The overlay network helps bring new nodes into, and evict stale nodes out of, the network. There’s a wide variety of overlay networks in public use, with a few that do their best to prevent adversaries from mingling with honest nodes. We stuck with S/Kademlia, so that each peer is typically connected to at most about 16 peers. Pretty straightforward.
  2. Gossip. The gossip protocol helps reliably deliver transactions created by any node in the network to all other nodes. We just built a simple flooding protocol on top of our overlay network protocol. It’s easily provable that flooding is resilient against adversaries. Every transaction a node receives gets forwarded to all of the node’s neighbors that have yet to receive it. The large bandwidth consumption definitely has room for improvement, but again, pretty straightforward.
  3. Query. Now, this is the interesting part. The query protocol is any arbitrary Byzantine fault-tolerant binary consensus protocol. In simpler language, a protocol which, if honest nodes follow it, has all honest nodes consistently learn of a single network-wide binary value: 1 or 0. We stuck with Snowball, though we acknowledge that there have been more resilient protocols, such as the one proposed in IOTA’s Coordicide where the alpha parameter of Snowball is dynamically attuned. We also have a cool way to establish proof-of-stake on top of Snowball, which made Snowball a viable choice for Wavelet.

We now have all three components; don’t like any one of them? Replace them.
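As a toy sketch of the flooding gossip in component 2 (all names and structure here are mine for illustration, not Wavelet's actual code): each node forwards a transaction to every neighbor that hasn't seen it yet, and forwards it at most once, so flooding terminates and eventually reaches every connected node.

```go
package main

import "fmt"

// Node is a minimal, hypothetical model of a gossiping peer.
type Node struct {
	ID        int
	Neighbors []*Node
	Seen      map[string]bool
}

// Gossip floods txID through the network. Each node forwards the
// transaction at most once, so the protocol terminates and every
// reachable node eventually receives it.
func (n *Node) Gossip(txID string) {
	if n.Seen[txID] {
		return
	}
	n.Seen[txID] = true
	for _, peer := range n.Neighbors {
		peer.Gossip(txID)
	}
}

func main() {
	// Build a tiny line topology: 0 - 1 - 2.
	nodes := make([]*Node, 3)
	for i := range nodes {
		nodes[i] = &Node{ID: i, Seen: map[string]bool{}}
	}
	nodes[0].Neighbors = []*Node{nodes[1]}
	nodes[1].Neighbors = []*Node{nodes[0], nodes[2]}
	nodes[2].Neighbors = []*Node{nodes[1]}

	nodes[0].Gossip("tx-abc")
	for _, n := range nodes {
		fmt.Printf("node %d saw tx: %v\n", n.ID, n.Seen["tx-abc"])
	}
}
```

A real implementation would of course deduplicate by transaction hash and batch forwards to save bandwidth; the `Seen` set is what makes the recursion terminate.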

So how does Wavelet work?

Step 1

Let’s say I wanted to create a transaction. Transactions from the perspective of Wavelet are vertices of a DAG. All nodes maintain a DAG of transactions. So, I would pick parent transactions after specifying what changes I want my transaction to make to the current ledger state.

We specify an algorithm for honest nodes to follow: grab all transactions that are at most about 5 depths away from the frontier of the graph and have no children. Those are our transaction's parents.

I would then use the gossip protocol so that all other nodes would learn about my transaction. Upon receipt of a transaction from other nodes, I would also use the gossip protocol so that my neighbors would learn of transactions that I receive from other peers.

Abstracting this a little, it's easy to see that the DAG each node maintains is eventually fully replicated across all other nodes using the gossip protocol.
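The parent-selection rule above can be sketched as follows (the types, field names, and the `maxLag` parameter are my own illustrative choices, not Wavelet's real data structures):

```go
package main

import "fmt"

// Tx is a simplified transaction vertex in the DAG.
type Tx struct {
	ID       string
	Depth    uint64
	Children int // number of transactions referencing this one as a parent
}

// selectParents sketches the rule described above: take every
// childless ("tip") transaction whose depth is within maxLag of the
// frontier, i.e. the greatest depth seen so far.
func selectParents(txs []Tx, maxLag uint64) []string {
	var frontier uint64
	for _, tx := range txs {
		if tx.Depth > frontier {
			frontier = tx.Depth
		}
	}
	var parents []string
	for _, tx := range txs {
		if tx.Children == 0 && tx.Depth+maxLag >= frontier {
			parents = append(parents, tx.ID)
		}
	}
	return parents
}

func main() {
	txs := []Tx{
		{ID: "a", Depth: 10, Children: 2}, // already has children: skip
		{ID: "b", Depth: 12, Children: 0}, // recent tip: eligible
		{ID: "c", Depth: 3, Children: 0},  // stale tip: too far behind
	}
	fmt.Println(selectParents(txs, 5)) // prints [b]
}
```

The depth cutoff is what keeps honest nodes from attaching new transactions to stale corners of the graph, which matters later when rounds are finalized.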

A visual depicting how Wavelet has nodes gossip to create a graph of transactions.

Just as a little thought experiment: If we stopped nodes from gossiping transactions around for a few moments, no matter the network size (10,000 nodes, 100,000 nodes, whatever), it makes perfect sense that this DAG will eventually be replicated across all nodes, right? Good.

Step 2

Cool, so nodes are busy working away creating new transactions; gossiping to eventually fully replicate a single network-wide DAG. What do we do with this mess of a DAG?

This is where the query protocol comes in beautifully.

To set up a bit of terminology: the root of the DAG is at depth 0; transactions building on top of the root are considered to be at higher depths away from depth 0. Now, from the genesis of the network, nodes start with a single transaction, which you can refer to as the DAG's root. Stick with me for a bit: let's call this root the start of round 0.

For every transaction that is built on top of the start of round 0, there is a chance that the transaction may be marked as critical. The conditions for a transaction to be marked as critical are determined by a verifiable random mechanism: for Wavelet's purposes, a variant of the difficulty puzzle in Bitcoin without the complications of mining, which is further detailed in our whitepaper.
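To give a flavor of what a Bitcoin-style difficulty test without mining could look like, here is a sketch: a transaction counts as critical if its hash has enough leading zero bits. Whether it passes is fixed by the transaction's contents, so there is nothing to grind. Note that both the leading-zero-bits rule and the use of SHA-256 here are my illustrative assumptions; Wavelet's actual rule is detailed in the whitepaper.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"math/bits"
)

// isCritical sketches a difficulty test: a transaction is "critical"
// if its hash has at least `difficulty` leading zero bits. Roughly
// 1 in 2^difficulty transactions passes, giving the network a knob
// for how often round-ending candidates appear.
func isCritical(txID []byte, difficulty int) bool {
	sum := sha256.Sum256(txID)
	zeros := 0
	for _, b := range sum {
		if b == 0 {
			zeros += 8
			continue
		}
		zeros += bits.LeadingZeros8(b)
		break
	}
	return zeros >= difficulty
}

func main() {
	// With difficulty 4, roughly 1 in 16 transactions is critical.
	count := 0
	for i := 0; i < 1000; i++ {
		if isCritical([]byte(fmt.Sprintf("tx-%d", i)), 4) {
			count++
		}
	}
	fmt.Printf("%d of 1000 sample transactions were critical\n", count)
}
```

Because the check is a pure function of the transaction, any node can verify another node's claim that a transaction is critical without any extra communication.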

The very moment I either create or receive a critical transaction, I report that critical transaction out to the network using the query protocol. From a more global view, there might be two, perhaps three critical transactions at most proposed across the entirety of honest nodes in the network. Typically, a great majority of them will report the exact same critical transaction.

What the query protocol then does is help guide honest nodes to pick only one out of the few proposed critical transactions. Let’s mark this single critical transaction honest nodes have decided on to be the end of round 0.

We then get all transactions in all paths connecting both the start and end of round 0, order them deterministically (something like lexicographically-sorted breadth-first traversal works here), and apply them sequentially to our ledger state.

Any transactions at depths between the start and end of round 0 that were not applied get discarded.

We set the end of round 0 to then be the start of round 1. Rinse and repeat.
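The finalization step above can be sketched as follows, assuming simplified types of my own invention (Wavelet's real traversal and tie-breaking live in the whitepaper and source): walk backwards from the round-ending critical transaction, collect everything reachable down to the round start, then order the set deterministically by depth and then ID. Every honest node runs the same walk over the same DAG, so all arrive at one order.

```go
package main

import (
	"fmt"
	"sort"
)

// Tx is a simplified DAG vertex; Parents point toward the round start.
type Tx struct {
	ID      string
	Depth   int
	Parents []string
}

// finalizeRound collects every transaction on a path between the
// round start and the round-ending critical transaction, then sorts
// deterministically (depth first, then ID, i.e. a lexicographically
// sorted breadth-first order).
func finalizeRound(dag map[string]Tx, end, start string) []string {
	visited := map[string]bool{}
	var collect func(id string)
	collect = func(id string) {
		if visited[id] || id == start {
			return
		}
		visited[id] = true
		for _, p := range dag[id].Parents {
			collect(p)
		}
	}
	collect(end)

	ordered := make([]string, 0, len(visited))
	for id := range visited {
		ordered = append(ordered, id)
	}
	sort.Slice(ordered, func(i, j int) bool {
		a, b := dag[ordered[i]], dag[ordered[j]]
		if a.Depth != b.Depth {
			return a.Depth < b.Depth
		}
		return a.ID < b.ID
	})
	return ordered
}

func main() {
	dag := map[string]Tx{
		"root": {ID: "root", Depth: 0},
		"a":    {ID: "a", Depth: 1, Parents: []string{"root"}},
		"b":    {ID: "b", Depth: 1, Parents: []string{"root"}},
		"crit": {ID: "crit", Depth: 2, Parents: []string{"a", "b"}},
	}
	fmt.Println(finalizeRound(dag, "crit", "root")) // prints [a b crit]
}
```

Applying the ordered list sequentially to the ledger state is what turns the unordered DAG into the totally ordered history that smart contracts need.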

There you have it, Wavelet.

I’m not kidding.

A visual depicting how Wavelet partitions a DAG of transactions into finalized “consensus rounds”.

Step 3

There is no step 3.

Time for some hard questions: How can we be sure nodes don’t fork? How do we prevent mining of critical transactions? What if an adversary does not use the same parent selection algorithm?

I have outlined and provided a series of proofs to show that Wavelet is resilient against the scenarios outlined by these questions so long as the query protocol may safely achieve binary consensus under the presence of adversaries.

So, as you might guess, the choice of query protocol is what is most important in ensuring Wavelet guarantees safety and liveness.

Please read the proofs if you're still skeptical; they're in the Discussions section of the whitepaper. I promise they're pretty trivial.

Now, for another question: Why Snowball?

A couple of reasons are outlined in the paper, though fundamentally, it is because the safety and liveness of Wavelet are ultimately derived from the safety and liveness properties its accompanying query protocol can guarantee.

Of course, justifying Snowball required a bit of investigation. The birth-death rates and probabilistic bound assumptions the Avalanche paper gave were not convincing in any way, so I gave a combinatorial proof instead.

Liveness proof for Snowball. A bit below the above Lemma is a safety proof.

Now, one last question, which required three whole sections and a novel mechanism to answer in the paper:

How do we explicitly set how often critical transactions randomly appear?

The same question in the Bitcoin world is: how do we finely adjust the mining difficulty as time goes by?

One of the fundamental reasons Bitcoin does not scale is that a lot of forks typically coexist at a single point in time. Bitcoin introduces magic numbers like 10-minute expected block time intervals, and relies on an inaccurate timestamping solution. This has contributed significantly to Bitcoin's lack of scalability. More on this is detailed in the introduction of the whitepaper.

And so, we came up with a very unique solution for dynamically and finely tuning system parameters based on strong, safe estimates over how much congestion/load there is in the network.

This is one huge factor that allows Wavelet to robustly scale to hundreds of thousands of nodes. I would certainly encourage you to read about it in the whitepaper, which also details how parameters are sensibly derived.

Reaping the benefits of Wavelet

So, when rounds are finalized, the set of transactions accepted in each round is finalized, and the order in which those transactions are accepted is finalized. Therefore:

1. A round is irreversible once finalized. This allows us to make running a node pretty cheap. If a round is finalized, we could always just delete the transactions of the round because we wouldn’t need them anymore.

To give a bit of perspective, a single transaction is about 200 bytes, so we can easily store a DAG of 100,000 transactions entirely in-memory. Only about 512MB of RAM is therefore needed to run a full node. A smartphone could be a Wavelet full node. Hello IoT?
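As a quick sanity check of those figures (the 200-byte and 100,000-transaction numbers come straight from the paragraph above):

```go
package main

import "fmt"

func main() {
	// A DAG of 100,000 transactions at ~200 bytes each fits
	// comfortably inside a 512MB full-node budget.
	const txSize = 200      // bytes per transaction (approximate)
	const dagSize = 100_000 // transactions kept in-memory
	totalMB := float64(txSize*dagSize) / (1 << 20)
	fmt.Printf("in-memory DAG: ~%.0f MB of a 512 MB budget\n", totalMB)
}
```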

This ensures Wavelet doesn’t suffer from a high barrier to entry, which has impeded both Bitcoin and Ethereum from taking over the world.

A Bitcoin/Ethereum full node is a nightmare to set up, and requires paying for some pretty powerful hardware.

2. Accepted transactions are guaranteed to be totally ordered. With this property, we’ve been able to incorporate smart contracts safely into Wavelet.

This makes us, to my knowledge, the very first public ledger employing a DAG-based consensus protocol that fully supports WebAssembly smart contracts with a working open-sourced implementation.

Being able to guarantee a total ordering allows us to implement decentralized governance systems, voting systems, or anything else that is sensitive to an ordering of events that are typically cumbersome to implement in other blockchain systems.

Now, there are also other innovative aspects to Wavelet discussed in more detail in the whitepaper such as a novel, fair proof-of-stake governance system.

However, I don’t want this post to get too lengthy — going through all the features and possibilities of Wavelet would just take too long. More blog posts are to come to go through each and every little aspect of the whitepaper.

Public testnet

Now, as we have promised since the very dawn of time…

Wavelet is now fully open-sourced.

And what better way to accompany the open-source release than with a public Wavelet testnet and a sexy redesigned Lens UI?

Lens: An open web interface to Wavelet. Presently connected to Wavelet’s testnet.

Maintain your own Perlin wallet, transfer PERLs from one account to another, stake/withdraw PERLs for nodes you are connected to, write/spawn/run WebAssembly smart contracts — do whatever you want.

All of this is our way of presenting to you the culmination of all of our efforts over a year in developing Wavelet.

To get started playing around with the testnet, click here.

Login screen for testnet.

Clicking the link will guide you to a login screen, where you can generate a new private key for your wallet or enter an existing one. Press Login and you are good to go.

Should you create an account, its initial balance will be 0 PERLs. We created a faucet bot that dispenses PERLs to whatever account you desire, which you may access on our Discord through the #wavelet-faucet channel.

#wavelet-faucet channel where the bot that dispenses PERLs awaits your command.

Using the `$claim` command, you can attain 300,000 testnet PERLs from the faucet every 10 seconds. Ping us on Discord if you need help!

To accompany the testnet and open-sourcing as well, we have also created a docs site that explains:

  1. how to host your own testnet,
  2. how to write/deploy smart contracts, and also
  3. how Wavelet works behind the scenes.

Check out Wavelet’s documentation by clicking here.

To note some limitations: gas fees on the testnet are very high for the time being, so simply claim PERLs from the faucet a large number of times and specify a large gas limit before working with your smart contracts.

Additionally, this testnet release does not allow you to plug-and-play your own nodes just yet. We understand this makes it a little difficult to test out what staking is like on Wavelet.

If you are really keen on testing out Wavelet’s proof-of-stake system:

  1. follow the instructions here to set up your own testnet locally on your computer, and
  2. check out all the documentation and instructions regarding staking and reward withdrawing over here.

Smart contracts

We finally finished the smart contract refactor we started all the way back on 17th January 2019.

We completely refactored the smart contract SDK to make it more imperative and more Rust-like, so that developers can develop smart contracts on Wavelet with the exact same tools and workflow as any ordinary Rust application.

Love using JetBrains IDEs? Desperate for good debugging tools? Trying to find a way to write smart contract unit tests without jumping back and forth between two different programming languages (looking at you, Remix/Truffle!)?

A Wavelet smart contract clone of Ethereum’s ERC20 standard.

All of that is answered with the new Rust smart contract SDK. You can now write smart contracts in Rust, write unit tests for your smart contracts in Rust, and run all of your benchmarks for your smart contracts in Rust.

We equipped the SDK to let you:

  1. Register hooks that run custom logic when a user sends your contract money,
  2. Debug and print logs to the console using a debug!() macro extremely similar to dbg!(),
  3. Verify Ed25519 cryptographic signatures and hash arbitrary amounts of content using BLAKE2b-256/512 and SHA-256/512,
  4. Gain access to consensus-related contextual information to produce verifiable RNGs, and
  5. Easily pass data back and forth from a wide variety of data sources into smart contracts using a simple, length-prefixed little-endian binary encoding.

We’re incredibly thrilled and can’t wait to see what you can do with Wavelet and WebAssembly Rust smart contracts. And, of course, we are well on our way to supporting other programming languages that can compile down to WebAssembly.

You can expect a few blog posts and workshops from us in the coming future on how to setup your environment if you’re new to Rust, and on how to build your first Wavelet smart contract 😄.

In the meantime, check out the tutorial and our documentation on Wavelet’s smart contracts by clicking here.

Benchmarks

Let’s talk about how Wavelet’s benchmarks were made, ran, and analyzed.

An inquisitive beholder may have noticed that the GIF at the top of the post only reports about 28,000 TPS. Here's a screenshot that marks numbers very close to 31,240 TPS. The maximum TPS, unfortunately without a screenshot, is actually about ~33,000. Benchmark experiments are expensive.

For Wavelet, we prioritized low node requirements so that anyone can participate in the Wavelet network. So, we set up the network using Kubernetes to spawn 240 nodes, all hosted within the Singapore region of DigitalOcean.

We simulated worse-than-practical network conditions using the Linux tool tc(1): 220ms average communication latency between each node, with an average packet loss of 2%. Nodes were also limited to emitting outbound traffic at a rate of at most 890KB/s.

We rigged each node to have only 2 vCPUs and 4GB of RAM, forced each node to run one single Wavelet instance, and had nodes store ledger state in a local LevelDB instance with the entire graph of transactions kept in-memory. Nodes ran a graph-pruning mechanism configured to prune transactions every 30 rounds.

The benchmark task was to have 10% of the nodes in the network create and gossip, as fast as possible, batch transactions each carrying 40 individual stake-placement transactions. This demonstrates real-world conditions, where each and every transaction modifies the ledger state of each and every node in the network.

We emphasized making these benchmarks realistic. Full cryptographic signature verification of each and every transaction was performed by each node, using an assembly-optimized version of the Ed25519 signature scheme based on its reference SUPERCOP implementation.

After a warm-up period of 25 minutes while the benchmark task was running, we collected statistics. Numbers were collected over a 5-second moving-average window, reporting 31,240 transactions being finalized per second.

The average latency for finalizing a single transaction is sub-second with only honest nodes. Introducing adversarial nodes attempting to thwart Snowball yielded average latencies of 1 to 8 seconds at most.

Our networking stack of choice is our own Noise, which was rewritten to support gRPC to take advantage of HTTP/2’s framing protocol. More on that will be written in another blog post, though we experimented with and benchmarked 5 other networking stacks, only to be disappointed by their performance over a high-latency network.

If you’re curious about the protocol parameters set, they’re described in the Results section in the whitepaper.

We’ve also tested Wavelet without the simulated strict network conditions to see how it would perform as a private blockchain. Wavelet sustained over 180,000 transactions being finalized per second with the exact same benchmarking task in this case.

Benchmarking code, of course, is open-sourced as a Kubernetes application built using CoreOS’ Kubernetes operator-sdk. Benchmarks lasted 5 days.

Security audits

We’ve been undergoing both code and academic audits around Noise, Life, and Wavelet behind the scenes for quite some time now, with Wavelet’s security audit just about to finish up.

The academic audit has been led by Dimitris Papadopolous of HKUST, Foteini Baldimtsi of George Mason University, and Joshua Lind of Imperial College, focusing rigorously on the safety and liveness properties of Wavelet.

We would’ve withheld the release for Wavelet if it weren’t for the substantial reviews and amounts of feedback given by Dimitris, Foteini, and Joshua throughout the last few months.

More details on the code audits, the teams behind the code audits, and a release of reports over Noise, Life, and Wavelet will be coming up in another blog post soon.

What’s left before mainnet?

In spite of how incredibly functional Wavelet’s current implementation and protocol are, there are nonetheless a few things left to tackle in its source code before officially launching a mainnet.

Gossiping/syncing protocol improvements

The way transactions are currently broadcast and pulled from other nodes is still not optimised in terms of bandwidth and time complexity.

We plan to replace Wavelet’s gossiping protocol with a recently proposed bandwidth-efficient transaction relay protocol for Bitcoin, exploiting some intricacies of erasure codes (BCH codes, to be exact).

We are continuing to work on it internally and are always open to your suggestions and code contributions.

An in-progress Minisketch implementation in pure Go for implementing “Bandwidth-efficient Transaction Relay for Bitcoin”. A big hand to Pieter Wuille for answering a few questions we had while we worked on our Go port.

As a result, however, the testnet currently prevents new nodes from joining the network.

Until improvements to pulling transactions from new nodes have been established, node syncing is temporarily non-functional.

Parent selection liveness improvements

Unfortunately, there are still cases where a small percentage of transactions created by honest nodes get mistakenly dropped as a result of selecting inappropriate parents.

This obviously has implications for the user experience Wavelet may deliver, as any dropped transactions would require nodes to assign new parents to them, re-sign them, and re-broadcast them.

There are a few in-house ideas on how this could be solved, such as using Proof-of-Work to confirm the validity of certain dropped transactions, and mixing in monetary incentives into the parent selection procedure.

Closing remarks

Closing remarks snipped off of Wavelet’s whitepaper.

I cannot express how thankful I am to everyone who has contributed to the effort that brought Wavelet to where it is today.

My sincere gratitude goes to Heyang Zhou, Andrii Ursulenko, Claudiu Filip, Ahmad Muzakkir, and Douglas Szeto for bearing with me through each and every rewrite of Wavelet.

I would like to leave you with one final note: Perlin’s original mission has not changed one bit.

Our mission was, and always has been, to unlock the world’s untapped treasure trove of computing resources. Not only from smartphones, PCs, and data centers, but also from other blockchains.

And the only way we can unlock these resources is by bringing new, globally well-accepted standards to startups, developers, and enterprises.

To us, Wavelet is the key piece of the puzzle that will enable us to make the Internet evermore interoperable.

’Til next time,

Kenta Iwasaki
