PARSIQ Exposed | Introduction

Part 1, with CTO Daniil Romazanov

PARSIQ
4 min read · Jul 24, 2023


The PARSIQ Network was launched just over one year ago, on July 18, 2022.

As the PARSIQ Network's first year draws to a close, I want to dedicate some time to a more in-depth conversation about the tech we have built.

Our PARSIQ | Exposed series will consist of three articles:

  1. Introduction
  2. Tsunami API
  3. Data Lakes

In Part 1, I would like to shed some light on the ‘why’ and ‘how’ of our transition as it happened from my point of view.

Before we get into that, let me dedicate a couple of paragraphs to PARSIQ Core, aka ‘The Old Platform’, and the decision-making process behind the transition.

The decision to step away from the tech that we had been maturing for four years was not taken lightly. I wanted to say 'it didn't happen overnight', yet that is exactly how it happened.

On a cold winter day, Alan Durnev, PARSIQ's CTO at the time, and I sat down in our Tallinn office to talk about the PARSIQ platform. As the conversation progressed, we arrived at the conclusion that the current tech did not satisfy our ambitions and did not fully reveal the true potential of the foundation we had been building.

We started to discuss new possibilities, fantasizing and reflecting on what the best solution would be for the problems we saw in the data space. This is how the basics of what we have today were born: the Tsunami API (Core API at that time) and Data Lakes.

When Alan and I first approached Tom Tirman, PARSIQ's CEO at the time, with glowing eyes, a bunch of ideas, and a suggestion to abandon everything we had been thoroughly building for all those years, I am pretty sure he thought we were joking.

But as the discussion went on, we collectively came to the conclusion that it was the only way forward for the company.

There were two reasons why we couldn’t keep pushing what we had:

The first was Unconventional Tooling. At the time, we had our own Domain Specific Language called ParsiQL, which allowed users to write Smart Triggers to monitor events and transactions on different blockchain platforms. While it did sound neat, it drastically raised the barrier to entry — and in this space, ‘ain’t nobody got time for that’. 🙂
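To make the idea concrete: conceptually, a Smart Trigger boils down to a predicate plus a callback evaluated against a stream of on-chain activity. The sketch below expresses that pattern in plain Python; ParsiQL used its own DSL syntax, and every name here (`make_trigger`, the transaction fields) is illustrative, not the real PARSIQ interface.

```python
# Hypothetical sketch of the Smart Trigger pattern: a predicate plus a
# callback, fired for every matching transaction in a stream. Names and
# fields are illustrative only, not actual ParsiQL or PARSIQ APIs.
from typing import Callable

Tx = dict  # a transaction record, e.g. {"to": "0x...", "value": 10}

def make_trigger(predicate: Callable[[Tx], bool],
                 action: Callable[[Tx], None]) -> Callable[[Tx], None]:
    """Build a handler that runs `action` for each tx matching `predicate`."""
    def on_tx(tx: Tx) -> None:
        if predicate(tx):
            action(tx)
    return on_tx

# Example: watch for large transfers ("whale alerts").
fired: list[Tx] = []
watch_whale = make_trigger(lambda tx: tx["value"] >= 1_000, fired.append)

for tx in [{"to": "0xAAA", "value": 5}, {"to": "0xBBB", "value": 5_000}]:
    watch_whale(tx)
# Only the 5,000-value transaction fires the trigger.
```

Writing this in a general-purpose language is trivial; the point of the barrier-to-entry argument is that asking users to learn a bespoke DSL to express even this much was too costly.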

The second reason was that the crypto space itself is an evolving market. We could see use cases coming at us from diametrically opposite directions, meaning there couldn’t be a single universal solution for all of them. So why spend time and effort on something that was not really achievable?

From that moment on, we couldn’t keep developing in only one direction. The company needed to expand its offering to suit the variety of custom use cases in this rapidly evolving field. To get there, we needed to:

  1. Provide efficient access to historical data
  2. Offer tooling that makes it simple to build custom use cases
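The first goal above can be sketched as a range query over an index of historical chain data. The snippet below is a minimal, self-contained illustration of that kind of query — a tiny in-memory "index" standing in for years of chain history. The `Event` shape and `query_events` function are assumptions for the sketch, not the actual Tsunami API.

```python
# Hypothetical sketch of a historical-data query: filter indexed events
# by contract and block range. All names here are illustrative, not the
# real PARSIQ / Tsunami API interface.
from dataclasses import dataclass
from typing import Iterator

@dataclass(frozen=True)
class Event:
    block: int        # block number the event was emitted in
    contract: str     # address of the emitting contract
    topic: str        # event signature topic

def query_events(index: list[Event], contract: str,
                 from_block: int, to_block: int) -> Iterator[Event]:
    """Yield events from `contract` within [from_block, to_block]."""
    for ev in index:
        if ev.contract == contract and from_block <= ev.block <= to_block:
            yield ev

# A tiny stand-in index; a real one spans every block since genesis.
index = [
    Event(100, "0xAAA", "Transfer"),
    Event(101, "0xBBB", "Approval"),
    Event(102, "0xAAA", "Transfer"),
]
hits = list(query_events(index, "0xAAA", from_block=100, to_block=101))
```

"Efficient" in the goal above is the hard part, of course: at chain scale this linear scan becomes an indexed lookup over a purpose-built store rather than a list comprehension.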

And thus began the building of the Tsunami API and Data Lakes. It really amazes me how much can be done in a span of five months, because that is how long it took us to deliver the production-ready release of the PARSIQ Network. It might seem like a very short time span, and one may wonder whether an actual product can be built in such a short period.

However, our biggest takeaway from all those years of building data solutions was expertise, which allowed us to deliver a new set of products in less than half a year.

Looking back at PARSIQ Core today, I’m very proud of the company for building something so far ahead of its time. We can see now that the Web3 data space is only starting to realize how important real-time features are, while we’ve been supporting them for years.

Today, the PARSIQ Network supports six blockchain platforms: Ethereum, Polygon, BNB Smart Chain, Arbitrum, Avalanche, and Metis. For each of them, we have indexed every single block, transaction, event, and function call. All of these platforms support Data Lakes, helping solve the most sophisticated data use cases.

For those who, like me, find satisfaction in numbers: the PARSIQ Network is backed by serious computational power. In total, we maintain approximately:

  • 2,500 CPU threads
  • 10,000 GB (10 TB) of RAM
  • 600,000 GB (600 TB) of NVMe storage
  • 100 Gbps of network bandwidth

With this much computational power spread across the globe, I am confident the PARSIQ Network has plenty of room to grow technically and to keep accommodating new customers. There is still plenty of work ahead, and there always will be. In the meantime, I look forward to sharing more about what is happening behind the scenes with our tech in the next two parts of the PARSIQ | Exposed series, covering the Tsunami API and Data Lakes. Stay tuned!

Dedications

Before the end of Part 1, I would like to say a big thank you to everyone on the team who made the release of the PARSIQ Network possible a year ago.

Development

  1. Alexey Rehov
  2. Artjom Aminov
  3. Pavel Lepin
  4. Timur Rassolov
  5. Eugene Shumilo
  6. Denis Kozicki
  7. Martins Paberzs

Product

  1. Alan Durnev
  2. Nikolay Roll
  3. Ivan Ivanitskiy

Business Development

  1. Emilijus Pranckus
  2. David Siddock
  3. Simon Harmgardt

Community & Content

  1. Casey Nash
  2. Dave Mcleod
  3. Scott Cowan
  4. Ryan Cheng

Marketing & Communications

  1. Anastasia Nesterova
  2. Darya Terekhova
  3. Francis Foster

Operations

  1. Rong Kai Wong
  2. Serafima Osipenko

Design

  1. The John

The Top Guys

  1. Tom Tirman
  2. Igor Bakardžijev
  3. Andre Kalinowski
  4. Martin Best
  5. Anatoly Ressin

Thank you! We wouldn’t have made it this far without all of you ❤️‍🔥
