Ankush Sharma
Sep 5, 2018 · 1 min read

Thanks, Farhan, for taking the time to read.

The general idea was to take advantage of all available cores, run instructions in parallel, and remove the bottleneck imposed by large data structures. Messages/sec certainly depends on the limits imposed by the network, since it involves sending data over the wire; 8M messages per second would be the ideal case with specialized hardware. Could you please elaborate more on the RPS point?
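To make the "all cores in parallel" idea concrete, here is a minimal sketch in Python. The `process_message` body is a hypothetical stand-in for the real per-message work; the original system was not necessarily written in Python.

```python
import multiprocessing as mp

def process_message(msg):
    # Hypothetical stand-in for per-message work (parsing, routing, etc.)
    return msg * 2

def process_in_parallel(messages):
    # Fan the work out across every available core instead of one thread
    with mp.Pool(processes=mp.cpu_count()) as pool:
        return pool.map(process_message, messages)

if __name__ == "__main__":
    print(process_in_parallel(range(8)))
```

Actual throughput still caps out at whatever the network can carry, which is the point about the 8M/sec figure being an ideal-case number.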

The number of delete operations was of the same order as the number of add operations, and the number of items stored in the BST is large at any point in time (~1000–10000). Averaging out, the BST was faster overall for our operations. You are right that cache locality helps with fast CPU operations, but I am not sure how the comparison between cache locality and fast operations stands. Therefore we went with the BST.
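For illustration, here is a minimal (unbalanced) BST sketch in Python with the insert/delete mix described above; integer keys and the item counts are assumptions for the demo, and the production structure would presumably be a balanced tree.

```python
import random

class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def bst_insert(root, key):
    # O(log n) on average for random keys
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    elif key > root.key:
        root.right = bst_insert(root.right, key)
    return root

def bst_delete(root, key):
    # Also O(log n) on average -- no O(n) shifting as in a sorted array
    if root is None:
        return None
    if key < root.key:
        root.left = bst_delete(root.left, key)
    elif key > root.key:
        root.right = bst_delete(root.right, key)
    else:
        if root.left is None:
            return root.right
        if root.right is None:
            return root.left
        # Two children: replace with the in-order successor
        succ = root.right
        while succ.left is not None:
            succ = succ.left
        root.key = succ.key
        root.right = bst_delete(root.right, succ.key)
    return root

def inorder(root, out):
    if root is not None:
        inorder(root.left, out)
        out.append(root.key)
        inorder(root.right, out)

if __name__ == "__main__":
    keys = random.sample(range(100_000), 5000)  # item count in the post's range
    root = None
    for k in keys:
        root = bst_insert(root, k)
    for k in keys[:2500]:          # deletes on the same order as adds
        root = bst_delete(root, k)
    out = []
    inorder(root, out)
    assert out == sorted(keys[2500:])
```

With deletes arriving as often as adds, each BST operation stays logarithmic, whereas deleting from the middle of a contiguous array costs a linear shift despite its better cache locality; that is the trade-off the reply is weighing.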
