Fully decentralized p2p orderbook and more

jl777
4 min read · Jan 2, 2020

I had thought that there would already be an easy-to-use module to spawn a p2p network that could synchronize any sort of data while being fast, efficient and mostly in sync.

However, there didn't seem to be any such thing, as the concept of "mostly in sync" seems to be at odds with what most decentralized projects are doing. The reason it doesn't need to be 100% in sync is that for a DEX orderbook, being 99% in sync seems sufficient, and usually when you are after that last 1%, you give up a lot in other areas.

Anyway, we need this for atomicDEX to be able to scale to massive numbers of users, so I wrote one.

The original goal was to have no more than 20% wasted bandwidth, sync 99%+ of packets, and keep latency reasonably low (a few seconds) in as fast a p2p network as possible.

After thinking about this for a while, it seemed that a hybrid push/poll/pull model would achieve a good balance of speed and bandwidth efficiency, but it was hard to tell just how fast or how efficient it would be. I wrote a simple simulator and the results were pretty promising, so the next step was to make an actual p2p network and send messages back and forth.
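
To give a sense of the kind of thing such a simulator does, here is a toy, push-phase-only sketch in C. All the parameters and structure here are illustrative assumptions, not the original simulator: it builds a random peer graph and measures how many nodes a depth-limited push reaches and how many duplicate deliveries that costs.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define N 1000   // nodes in the simulated network (assumed)
#define AP 8     // average peers per node (assumed)
#define D 3      // relay depth of the push phase (assumed)

static int32_t peers[N][AP], hits[N];

// deliver a packet to a node with `depth` relay hops remaining
static void push(int32_t node, int32_t depth, int64_t *dups)
{
    int32_t i;
    if ( hits[node]++ > 0 )  // node already had it -> wasted bandwidth
    {
        (*dups)++;
        return;
    }
    if ( depth <= 0 )
        return;
    for (i=0; i<AP; i++)
        push(peers[node][i], depth-1, dups);
}

int main(void)
{
    int32_t i, j, reached = 0;
    int64_t dups = 0;
    srand(777);
    for (i=0; i<N; i++)
        for (j=0; j<AP; j++)
            peers[i][j] = rand() % N;  // random graph, redundancy allowed
    push(0, D, &dups);
    for (i=0; i<N; i++)
        reached += (hits[i] > 0);
    printf("push reached %d of %d nodes, %lld duplicate deliveries\n",
           reached, N, (long long)dups);
    return 0;
}
```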

I started with the baseline komodod and added hooks for a new p2p message type, putting all the functions needed for it in a single file. We get called during the send-message-to-a-peer loop and when a message comes in, and there are convenient functions to send messages. With that in place, I coded up a linear-search-based version with a simple control flow, just to get some actual results.
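
The two hooks boil down to something like the following skeleton. The names and signatures here are illustrative, not the actual komodod identifiers:

```c
#include <stdint.h>

struct dex_peer { int32_t id; /* per-peer sync state lives here */ };

// called from the send-message-to-peer loop, once per peer per pass
void dex_poll(struct dex_peer *peer)
{
    // push queued packets and a summary of recent packets to this peer
}

// called when a packet of the new p2p message type arrives
void dex_packet_arrived(struct dex_peer *peer, uint8_t *msg, int32_t len)
{
    // validate, dedup, store, then consider relaying to other peers
}
```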

At first there was a fair amount of inefficiency, as the nodes were not very smart. I thought of a way for each node to know the entire network topology so it could predict which nodes would get a broadcast packet after a few hops, but that just seemed like a very complex way to do things.

What I came up with was a much simpler solution, where a node broadcasts a packet with a depth count. The receiving node decrements this and, if it is still ≥ 0, sends it on to its peers. This is the push phase. Assuming no redundant destinations, a relay depth of D will get the packet to AP^D nodes, where AP is the average number of peers. Clearly, setting the right value of D for the size of the network will be important.
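
The relay rule is tiny. A sketch in C, with stand-in names for the real packet struct and send function:

```c
#include <stdio.h>
#include <stdint.h>

struct dex_packet { int8_t relaydepth; uint8_t payload[188]; };

// stand-in for the real per-peer send function
static void dex_send_packet(int32_t peerid, struct dex_packet *pkt)
{
    printf("send to peer.%d with depth.%d\n", peerid, pkt->relaydepth);
}

// decrement the depth count; if it is still >= 0, push to all peers
static void dex_relay(struct dex_packet *pkt, int32_t *peerids, int32_t numpeers)
{
    int32_t i;
    pkt->relaydepth--;
    if ( pkt->relaydepth >= 0 )
        for (i=0; i<numpeers; i++)
            dex_send_packet(peerids[i], pkt);
}

int main(void)
{
    struct dex_packet pkt = { 2, { 0 } };   // two more hops allowed
    int32_t peerids[3] = { 1, 2, 3 };
    dex_relay(&pkt, peerids, 3);
    return 0;
}
```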

Let's assume about half the nodes get the packet during the push phase. How will the rest of the nodes even know that there is a packet to get? This is where the polling comes in. Each node sends a summary of all recent packets to all its peers. The bandwidth usage for this isn't that big, assuming AP is not too big. This allows the receiving node to see a list of available packets and pull the ones that it doesn't have.
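
A sketch of the receiving side of the poll/pull phase, with stand-ins for the real lookup and request functions (one plausible summary format is a list of short hashes of recent packets, which is what is assumed here):

```c
#include <stdio.h>
#include <stdint.h>

// stand-ins for the real lookup and request functions
static int32_t dex_havepacket(uint64_t shorthash) { return 0; } // 0 -> missing
static void dex_request_packet(int32_t peerid, uint64_t shorthash)
{
    printf("pull %016llx from peer.%d\n", (unsigned long long)shorthash, peerid);
}

// called when a peer's summary of recent packet hashes arrives
void dex_got_summary(int32_t peerid, uint64_t *hashes, int32_t n)
{
    int32_t i;
    for (i=0; i<n; i++)
        if ( dex_havepacket(hashes[i]) == 0 )  // pull only what we lack
            dex_request_packet(peerid, hashes[i]);
}
```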

It all sounds so simple, but the reality was a lot of back and forth to get the packets propagating without flooding the network with redundant copies. I had to use a lot of memory tracking which packets were sent to which nodes and which nodes sent which packets. Eventually, things got to where only highly connected nodes had significant inefficiency, with typical duplicate-packet rates of 2% to 25%. A decent result, and mostly within the original goals.

Then I added a cap on the number of peers a node would push a packet to. That solved the efficiency issue, when it was combined with precise tracking of which nodes knew about which packets.
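
Combining the cap with the tracking gives a loop like this (names and the cap value are illustrative): relay to at most MAXPUSH peers, skipping any peer already known to have the packet.

```c
#include <stdio.h>
#include <stdint.h>

#define MAXPUSH 8  // assumed cap on pushes per packet

// stand-ins for the real tracking table and send function
static int32_t dex_peer_has(int32_t peerid, uint64_t shorthash) { return 0; }
static void dex_send_packet(int32_t peerid, uint64_t shorthash)
{
    printf("push %016llx to peer.%d\n", (unsigned long long)shorthash, peerid);
}

void dex_push_capped(uint64_t shorthash, int32_t *peerids, int32_t numpeers)
{
    int32_t i, sent = 0;
    for (i=0; i<numpeers && sent < MAXPUSH; i++)
        if ( dex_peer_has(peerids[i], shorthash) == 0 )  // skip peers that have it
        {
            dex_send_packet(peerids[i], shorthash);
            sent++;
        }
}
```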

Now the code was on the slow side, as internally things were mostly doing large linear searches, so everything was changed to constant-time hashtable lookups. A significant speedup was gained because 100% accuracy wasn't required; the system could behave like UDP instead of TCP, where occasional drops are not fatal. Using this concept internally, I chose algorithms that work most of the time, but not always, and are very fast.
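
One way such a lossy constant-time lookup can work (this is an illustrative sketch, not the actual data structure, and it would give a concrete form to the dex_havepacket stub above): a direct-mapped table where a collision simply evicts the older entry. Occasionally a packet is "forgotten" and re-fetched, but every operation stays O(1).

```c
#include <stdint.h>

#define TABLESIZE 65536  // power of two, so masking replaces modulo

static uint64_t table[TABLESIZE];

void dex_mark_seen(uint64_t shorthash)
{
    table[shorthash & (TABLESIZE-1)] = shorthash;  // evict on collision
}

int32_t dex_havepacket(uint64_t shorthash)
{
    // a real table would also handle the all-zero hash specially
    return table[shorthash & (TABLESIZE-1)] == shorthash;
}
```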

After this improvement, performance reached 1000+ tx/sec on the testnet, with 99.99% sync and about 1% duplicate packets. But the CPU usage was on the high side, especially with the 8192 tx/sec configuration, so I made a few more modifications, and now there is a decentralized datablob-synchronizing system that is able to sustain 4096 tx/sec indefinitely with very good overall characteristics.

I still need to add an API layer so the dataset can be queried, but that is just connecting things to the existing core. I imagine there are many use cases for a fully decentralized p2p network that is bandwidth efficient, fast and mostly in sync. It could even be used as the rampool portion of a blockchain, but it is probably much more useful in decentralized video streaming and other such applications.

I also added a txPoW mechanism so that each packet submitted needs to use some CPU power. This makes for very good spam protection without needing stake weight, special nodes, etc. It is very decentralized, and testing with txPoW disabled runs literally thousands of times faster than it would with txPoW active. This way, a few dozen nodes can simulate tens of thousands of nodes.
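
The general shape of a txPoW scheme is grind-then-verify: the sender grinds a nonce until the packet hash clears a difficulty target, and relaying nodes verify with a single hash, so spam costs the spammer CPU but costs the network almost nothing. A sketch under assumed parameters; the hash function and difficulty here are stand-ins, not the production ones:

```c
#include <stdio.h>
#include <stdint.h>

#define TXPOW_BITS 20  // leading zero bits required (assumed difficulty)

// stand-in 64-bit mix (FNV-1a style), not the production hash
static uint64_t dex_hash(const uint8_t *buf, int32_t len, uint64_t nonce)
{
    uint64_t h = 0xcbf29ce484222325ULL ^ nonce;
    int32_t i;
    for (i=0; i<len; i++)
        h = (h ^ buf[i]) * 0x100000001b3ULL;
    h ^= h >> 33; h *= 0xff51afd7ed558ccdULL; h ^= h >> 33;
    return h;
}

// cheap check done by every relaying node
static int32_t dex_txpow_ok(const uint8_t *buf, int32_t len, uint64_t nonce)
{
    return (dex_hash(buf,len,nonce) >> (64 - TXPOW_BITS)) == 0;
}

// expensive grind done once by the submitter
static uint64_t dex_txpow_mine(const uint8_t *buf, int32_t len)
{
    uint64_t nonce = 0;
    while ( dex_txpow_ok(buf,len,nonce) == 0 )
        nonce++;
    return nonce;
}

int main(void)
{
    uint8_t buf[32] = { 7 };
    uint64_t nonce = dex_txpow_mine(buf, sizeof(buf));
    printf("nonce %llu clears %d bits\n", (unsigned long long)nonce, TXPOW_BITS);
    return 0;
}
```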

All in all, it seems we are one step closer to a scalable atomicDEX, in addition to whatever else this tech opens up.
