# Wavelet: Gossiping and Querying

## Part 3 of the Wavelet Series: The workings of the gossiping and querying processes in Wavelet

*Whew! It’s been a while.* But now we’re back with another part of the **Wavelet Series**!

In the previous article, **Wavelet: Staking and Decentralization**, we discussed one of the most important aspects of Wavelet, the **leaderless proof-of-stake mechanism**.

In this article, we’re going to show you the workings of two very important protocols in Wavelet, **gossiping** and **querying**. They play an important role in making sure that Wavelet is **secure** and **scalable** and all information is **consistent**.

# Gossiping

The gossiping protocol is very straightforward.

Imagine in school, a random guy called Chad got his *interesting* internet history exposed to one of his classmates. The classmate then *casually* tells his friends about it, and they tell theirs. Eventually, everyone in school knows about Chad’s *interesting* internet history (*RIP Chad’s school life. Maybe that’s why Chad decided to try Wavelet in the previous article*).

The way the information about Chad got spread around is called gossiping. The information gets shared until everyone knows it, which is very similar to what happens in Wavelet.

Let the school be Wavelet, the students be the nodes, and the information be related to transactions, which are put on the graphs of respective nodes.

Putting it all together, you get the gossiping process, where all nodes communicate with each other and update the transactions on their graphs until they all have the same graphs. The graphs only fully match once no new transactions have been made for a certain period of time.
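
The gossiping process above can be sketched as a small simulation. This is a minimal illustration only, not Wavelet's actual implementation: each node holds a set of transactions and repeatedly pushes what it knows to a few randomly chosen peers until every node's set matches.

```python
import random

def gossip(nodes, fanout=2, seed=0):
    """Naive push gossip: each round, every node sends its known
    transactions to `fanout` random peers, until all graphs match."""
    rng = random.Random(seed)
    rounds = 0
    # Keep going while at least two nodes hold different sets.
    while len({frozenset(txs) for txs in nodes.values()}) > 1:
        for name, txs in list(nodes.items()):
            peers = rng.sample([n for n in nodes if n != name], fanout)
            for p in peers:
                nodes[p] |= txs  # the peer merges what it was told
        rounds += 1
    return rounds

nodes = {"A": {"tx1"}, "B": {"tx2"}, "C": set(), "D": {"tx3"}}
gossip(nodes)
# afterwards, every node knows tx1, tx2, and tx3
```

Note that this toy version converges because sets only ever grow; real gossip protocols also have to deal with message loss and peers joining or leaving.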

However, the transactions aren’t finalized through this process. That’s where the querying process comes in.

# Querying

Unlike gossiping, querying is a lot more complicated.

As mentioned previously, querying is the process of going through a node’s graph bit by bit, checking the transactions the node holds for consistency with other nodes. It’s a lot like proofreading the nodes’ graphs.

*But wait! There’s more!*

Actually, there’s a LOT more…

We should start from the beginning. The querying process starts when a **critical transaction** happens. A critical transaction is a transaction whose number of prefixed zero bits is larger than the **difficulty parameter**, which is set beforehand. Critical transactions occur at random, and multiple critical transactions can occur.
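
To make the "prefixed zero bits" idea concrete, here is a hedged sketch of the check. The hash function and the exact comparison are assumptions for illustration; Wavelet's real criticality check differs in its details.

```python
import hashlib

DIFFICULTY = 8  # example difficulty parameter: required leading zero bits

def leading_zero_bits(data: bytes) -> int:
    """Count the number of zero bits prefixing a byte string."""
    bits = 0
    for byte in data:
        if byte == 0:
            bits += 8
            continue
        # count the leading zeros inside the first nonzero byte
        bits += 8 - byte.bit_length()
        break
    return bits

def is_critical(tx_id: bytes, difficulty: int = DIFFICULTY) -> bool:
    """Illustrative check: a transaction counts as critical when its
    hash's leading zero bits meet the difficulty parameter."""
    digest = hashlib.sha256(tx_id).digest()
    return leading_zero_bits(digest) >= difficulty
```

Since hashes are effectively random, a higher difficulty makes critical transactions rarer, which is exactly why their occurrence looks random from the network's point of view.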

However, for scalability purposes, only one critical transaction can be chosen. How do you pick just one out of several? This is where **snowball sampling** comes in.

## The Snowball Sampling Technique

The **snowball sampling protocol** involves nodes exchanging information, namely the critical transaction, with neighboring nodes over multiple rounds. This keeps repeating until the same critical transaction has been received consistently a preset number of times. Now, time to get more specific.

The logic, as written in the **paper**, is pretty difficult to understand at first glance. *It took me a long while to understand it myself,* but it’s pretty *logical* once you get it.

Let’s set up a scenario where two critical transactions occur (let’s call them Red and Blue).

First, you start from a node (let’s call it Larry). Larry randomly selects a bunch of neighboring nodes and does a little survey, asking each one which critical transaction it is aware of. Some nodes might be aware of Red while others are aware of Blue. Whichever critical transaction the majority of the surveyed nodes are aware of, Larry notes. Let’s say this survey returned Blue, which Larry is now aware of.

Now Larry does the same process again, except with a new set of randomly selected neighboring nodes (a previously surveyed node could be selected again, since the selection is random). This survey returned Blue as well. Since it matches the previous result, a “counter” starts to keep track of how many times in a row the same critical transaction has won the majority. The counter is now at 1.

Again, the same process with a new set of randomly selected neighboring nodes, except this time the survey returns Red. This resets the counter, and now Larry is aware of Red.

This process continues to repeat until the counter reaches a large number that is set beforehand. This allows all nodes to eventually be aware of only **ONE** critical transaction.
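
The survey-and-counter loop described above can be sketched in a few lines. This is a simplified model under stated assumptions (the `sample_majority` callback standing in for a real peer survey is hypothetical), not Wavelet's actual code:

```python
def snowball(sample_majority, beta=5):
    """Keep surveying until the same critical transaction wins
    `beta` times in a row (beta is the preset 'large number')."""
    preference, counter = None, 0
    while counter < beta:
        winner = sample_majority()
        if winner == preference:
            counter += 1                     # same result again: count it
        else:
            preference, counter = winner, 0  # new result: reset the counter
    return preference

# Deterministic walk-through of Larry's story: Blue wins twice,
# then Red takes over and wins five times in a row.
responses = iter(["Blue", "Blue", "Red", "Red", "Red", "Red", "Red", "Red"])
snowball(lambda: next(responses), beta=5)  # → "Red"
```

The repeated resampling is what makes the protocol converge: once most of the network leans toward one critical transaction, surveys keep returning it, the counter climbs, and every node eventually settles on the same **one**.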

*So there you go!* That’s the snowball sampling technique where you end up with only one critical transaction.

*Now what?*

After finally selecting a critical transaction, you now have a **consensus round**. The selected critical transaction is the **end** of the **current** consensus round and will be the **start** of the **next** consensus round.

## Consensus Round

A consensus round is a non-overlapping depth interval whose starting and ending points are critical transactions. The process of finalizing a consensus round involves accepting, rejecting, and ordering transactions.

We now have our end critical transaction (from the snowball sampling), which gives us the interval for the round, but we haven’t met the non-overlapping depth criterion yet. *What does this “non-overlapping depth” even mean?*

If you read my previous articles, you’d know that the graph is simply a long history of transactions referenced by nodes (for more information, read **part 2** of this series). However, with nodes being able to reference multiple transactions and there being no limitation on which non-finalized transaction(s) a node can reference, you end up with a very complex tree of transaction histories, with multiple transactions at the same depth (overlapping depths) throughout the graph.

So how do you end up with non-overlapping depths?

Try looking at the consensus round as a maze. The starting point is the start critical transaction and the ending point is the end critical transaction.

You have to get from the start to the end using the referencing of the transactions as a path. If you can reach the ending point, then all the transactions along the path are finalized. If you meet a dead end (the transaction has not been referenced), then that transaction is rejected.

This repeats until all the transactions in the consensus round are either finalized or rejected.

Afterward, all the finalized transactions are ordered using a **Breadth-First traversal** (for more information, read **part 2** of this series), starting from the end critical transaction.

This results in a nice non-overlapping depth interval between two critical transactions, thus finalizing the consensus round. The next round starts building right after, starting from the end critical transaction of the current round.
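
Putting the maze analogy and the Breadth-First ordering together, here is a minimal sketch. The graph representation (`parents` mapping each transaction to the transactions it references) and the function name are assumptions for illustration; Wavelet's actual finalization logic is more involved:

```python
from collections import deque

def finalize_round(parents, start, end, round_txs):
    """Illustrative round finalization: walk backward from the end
    critical transaction along references. Transactions on a path
    from `end` back to `start` are accepted; anything else in the
    round is a dead end and gets rejected. The walk itself doubles
    as the Breadth-First ordering from the end critical transaction."""
    accepted, queue = {end}, deque([end])
    order = []  # breadth-first order, starting from the end critical tx
    while queue:
        tx = queue.popleft()
        order.append(tx)
        if tx == start:
            continue  # don't walk past the start of the round
        for parent in parents.get(tx, []):
            if parent not in accepted:
                accepted.add(parent)
                queue.append(parent)
    rejected = set(round_txs) - accepted
    return order, rejected

# Toy round: start -> a -> end, plus a dead end start -> b
# ("b" is never referenced by anything on the way to "end").
parents = {"end": ["a"], "a": ["start"], "b": ["start"]}
order, rejected = finalize_round(parents, "start", "end",
                                 ["start", "a", "b", "end"])
# order == ["end", "a", "start"]; rejected == {"b"}
```

In this toy example, `b` is the dead end from the maze analogy: no path of references connects it to the end critical transaction, so it falls out of the round.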

**And that’s the entire querying process!**

All of this repeats every time critical transactions occur, finalizing transactions and keeping the graph free of overlapping depths.

# However…

Building graphs is faster than proofreading them. In other words, gossiping is a faster process than querying. As a result, transactions aren’t finalized fast enough to keep up with the growing graph. This is the issue called **Partial Synchrony**.

But our developers came up with a method to tackle this issue: the **load factor**. It lets a node accurately approximate the network load, which in turn allows it to adjust the number of transactions the querying protocol processes and, hence, solve the issue with partial synchrony.

But what really is the load factor? How does it work? How does it solve the issue with partial synchrony?

Find out next time, on Dragon Ba…. *wait… I mean….*

We’ll go more in-depth about partial synchrony and the load factor in part 4 of the **Wavelet Series**:

**Wavelet: Partial Synchrony and the Load Factor**

’Til next time,

Maikro~

Check out the other parts of the **Wavelet Series**:

Part 1:

**Wavelet: An Introduction**

Part 2:

**Wavelet: Staking and Decentralization**

# Try out the Wavelet beta now:

**Wavelet Documentation**

**Build a Decentralized Chat on Wavelet Tutorial**