PARSIQ Real-time Blockchain Data Delivery

Daniil Romazanov at ETHDenver

PARSIQ
Mar 1, 2022


The PARSIQ team recently had the opportunity to attend Ethereum Denver. In fact, we were proud to be an official sponsor of the largest and longest-running ETH event in the world!

Our entire BizDev team was there, not only to network and meet as many teams as possible, but also to gain knowledge and insights from the top minds in crypto!

Importantly, on February 17, 2022, Daniil Romazanov (PARSIQ product owner) hit the Art Stage to give a presentation: “PARSIQ Real-time Blockchain Data Delivery”. Due to technical difficulties on the part of ETHDenver, the full presentation was not recorded. Luckily, our team was able to find another video recording and piece together the majority of the presentation. Although the quality of the video file may not be perfect, the content of the presentation certainly does not disappoint! You can watch the video here, or read the transcription of it below.

To find out more about PARSIQ’s time at ETHDenver, have a look at the recap of our experience here.

ETHDenver2022 | PARSIQ Real-time Blockchain Data Delivery

Presented by: Daniil Romazanov

Why Data Matters

Without transparency, blockchain technology doesn’t make sense. (Of course, there are some exceptions, like private blockchains, and so on; but those are not really general purpose chains, and they are exceptions that prove the rule.) So the concept is clear. But then there is the practical part.

In reality, why does data matter for projects? It matters because of numbers. If you don’t have numbers, then you don’t have users. So, for your protocols, for your dApps — people will come to your project, and they will want to see the numbers (your TVLs, your APYs, the return on investment, and all the information like that). In short, they want to see the numbers, and many of them will want to know how much can be made there.

But it is not always so simple. When we talk about data, and about all the values at play, “data” is not a static variable. You cannot just grab data at any random point. Instead, you have to continuously aggregate it, and you have to be precise. Then there is everything that has to do with off-chain UX. In many cases, your on-chain activity causes changes and after-effects in off-chain applications. Let’s say you are playing an NFT-based blockchain game, and you want to obtain some NFT: you want to unlock a new skin, a new area, or a new item. What is most important is that it has to be done quickly. Users cannot wait 20 minutes before they get the information from the blockchain. But it is here that we come to some problems and obstacles with blockchain data. There are three points I want to talk about with regards to this…

They have to do with (1) infrastructure and reliability, (2) scalability, and (3) data interpretation. We’ll start with the first point.

Let’s say you have a project running on Ethereum. You have to know when your transactions happen, and when events happen on your smart contracts. For a smooth experience, you have to have this information quickly and you have to know that it is reliable. This means that you cannot simply set up a node. Because nodes are a pain in the ass: they tend to fall behind, they lose pace all the time, they randomly crash at important blocks. If it can break, it will break. All of these things result in data loss. And a loss of data results in poor user experience, and in a lot of additional manual work.

Next, there is the issue of scalability. Here, you take all the problems you have with infrastructure and then multiply that number by the number of blockchains you want to deploy your project on. (Of course, the complexity of the issue is quite different for every blockchain, because even if you are talking about EVM-compatible blockchains they are all different. So, if it’s easy to run an Ethereum node, it isn’t necessarily as easy to run a BNB node, and then there is Polygon doing hard forks at 9am on a Sunday morning!)

We have overcome this problem with PARSIQ.

I will just give a short technical overview. For every blockchain that we ran, we had from one to five nodes (it depends on how unstable the blockchain was) and they were all allocated in different clusters. We also built some software around them to make the whole system 100% reliable. So, if something happens to one node, then we automatically switch to the data that comes from another node. And even in the case that everything becomes broken — if all the nodes break down — the software that we built around it knows exactly where it stopped. Meaning that if any of these nodes come back online, the software will know at which block — at which transaction — the whole thing stopped importing and it will be able to recover itself from that point.
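The failover-and-recovery pattern described here can be sketched in a few lines. This is an illustrative toy, not PARSIQ's actual software: the class names and the idea of checkpointing after every block are assumptions made for the example.

```python
# Minimal sketch (not PARSIQ's actual code) of the pattern above:
# redundant nodes, automatic switching on failure, and a persisted
# checkpoint so the importer can resume exactly where it stopped.

class FlakyNode:
    """Toy node endpoint that can be toggled up or down."""
    def __init__(self, fail=False):
        self.fail = fail

    def fetch_block(self, number):
        if self.fail:
            raise ConnectionError("node unreachable")
        return {"number": number}


class BlockImporter:
    def __init__(self, nodes):
        self.nodes = list(nodes)   # one to five redundant node endpoints
        self.checkpoint = None     # last successfully imported block

    def import_range(self, start_block, end_block):
        # Resume from the checkpoint if we have one, else from the start.
        block = self.checkpoint + 1 if self.checkpoint is not None else start_block
        while block <= end_block:
            for node in self.nodes:           # try each node in turn
                try:
                    node.fetch_block(block)
                    self.checkpoint = block   # record progress after each block
                    break
                except ConnectionError:
                    continue                  # node is down: fail over to the next
            else:
                return False                  # every node is down; retry later
            block += 1
        return True
```

When all nodes go down, `import_range` returns `False` with the checkpoint intact, so a later call picks up from the exact block where importing stopped.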

Scalability is really just about having a team who can monitor and manage it all. For projects building on the blockchain, though, I don’t think it is really viable to keep a separate team in charge of infrastructure, especially if you are just starting out and cannot afford a dedicated team for that on its own.

Then, third, there is data interpretation, which may not be the biggest problem, but is a problem nonetheless. It’s a problem because raw data looks like this *points to a bunch of numbers that don’t seem to mean anything*. Okay, maybe some of you know these are function calls, alright. But nobody wants to waste time on this.

For us at PARSIQ, we already have all the ERC standards, like 20, 721, 1155, and other standards, so all of this information can be retrieved in human-readable form without any manual stubbing. This allows you to have, at least, the event names, the variable names, the parameters, all the values, and everything like that decoded easily. If you want to monitor your own project or protocol, you can just supply us with your ABI, and from there you will get all the events decoded and delivered to you and your system.
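To make the decoding idea concrete, here is a minimal sketch of how a raw ERC-20 Transfer log becomes human-readable. The topic hash is the real keccak-256 of the standard `Transfer(address,address,uint256)` signature; the function name and log layout are illustrative, not PARSIQ's actual decoder.

```python
# Sketch of ABI-aware decoding for one well-known event type.
# keccak-256("Transfer(address,address,uint256)") per the ERC-20 standard:
TRANSFER_TOPIC = "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"

def decode_transfer(log):
    """Decode a raw log dict with hex 'topics' and 'data' fields."""
    if log["topics"][0] != TRANSFER_TOPIC:
        raise ValueError("not an ERC-20 Transfer event")
    # Indexed parameters live in topics; an address is the last 20 bytes
    # (40 hex characters) of the 32-byte topic word.
    sender   = "0x" + log["topics"][1][-40:]
    receiver = "0x" + log["topics"][2][-40:]
    # The non-indexed uint256 amount sits in the data field.
    value = int(log["data"], 16)
    return {"event": "Transfer", "from": sender, "to": receiver, "value": value}
```

Instead of three opaque 32-byte words, the caller gets named fields: the event name, sender, receiver, and amount.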

There are some lessons that we have learned along the way of building this technology.

Lessons Learned

The idea behind the PARSIQ platform has been to facilitate every possible use case that is present on the blockchain. It was a good goal two years ago, but the market has expanded and changed so quickly. And in that change, we have learned some lessons.

The tech we have built works great. We have our own domain-specific language, which does its job. But people — all the hundreds of projects — don’t have time to learn it. So, it works great for us, but other projects don’t always want to spend time on it (because it means time overhead). That is one lesson we have learned.

A second point is that people just want to work from the comfort of their own infrastructure, their own codebase, their own programming language; they don’t want to learn something new. This raises some problems when you upgrade your project or deploy new versions: on our side we must make all the relevant changes. It can get tricky, especially if you are in the cloud, and you are scaling your project. There are a lot of difficulties with that.

So, the creation of an all-purpose solution in blockchain right now is not really possible, because of how young the market is — because of how fast it is capable of changing.

So, with that in mind, I want to talk a little bit about what we at PARSIQ have up our sleeve.

What’s Next?

It is our firm belief that blockchain data should be easily accessible. Unfortunately, the “transparency and openness” of blockchain data does not mean “ease of access” to that data. With that in mind, we want to introduce a core API system (name pending). This system will provide access to both historical and real-time data at the same time, which will allow a protocol to collect the data of everything that has happened on that protocol, starting from, say, today minus 1 million blocks. And after you catch up on all the information from the past, you will continue, in the same manner, getting real-time data on all current events. You won’t have to change anything here: it just works out of the box.
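The "historical first, then real-time, out of the box" delivery model described above boils down to one cursor that backfills history and then follows the chain head. This is a toy sketch under assumed interfaces (`head`, `get_block`, `wait_for_new_block`), not PARSIQ's actual API.

```python
# Sketch of a single stream that serves history, then seamlessly
# switches to real-time delivery once the cursor reaches the head.

def block_stream(chain, start_block):
    """Yield every block from start_block onward, then follow the head forever."""
    cursor = start_block
    while True:
        head = chain.head()
        while cursor <= head:             # backfill: catch up on past blocks
            yield chain.get_block(cursor)
            cursor += 1
        chain.wait_for_new_block()        # caught up: block until new data arrives


class FakeChain:
    """Toy chain used only to demonstrate the stream."""
    def __init__(self, head=5):
        self._head = head

    def head(self):
        return self._head

    def get_block(self, number):
        return number

    def wait_for_new_block(self):
        self._head += 1                   # pretend a new block was mined
```

The consumer never changes anything: the same generator that replays the last million blocks keeps yielding new blocks as they are mined.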

I should also mention that it is really fast. We have gotten it to the point where 1 million blocks can be processed in just one minute. That is very fast compared to what is currently on the market.

This API provides you with raw data, and it gives you basic filtration mechanisms. So, you will be able to query the whole of the blockchain’s history by sender, by receiver, by contract interacted with, by topic, and so on.
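The basic filtration being described is simply conjunctive matching on a few fields. Here is a toy illustration; the field names and function are assumptions for the example, not the API's actual query syntax.

```python
# Toy illustration of basic filtration over raw records: every filter
# that is supplied must match for a record to be returned.

def query(records, sender=None, receiver=None, topic=None):
    """Return the records matching all of the given filters."""
    out = []
    for r in records:
        if sender and r["from"] != sender:
            continue
        if receiver and r["to"] != receiver:
            continue
        if topic and topic not in r["topics"]:
            continue
        out.append(r)
    return out
```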

As I previously mentioned, we think data should be accessible. So, these APIs will be free. And more than that, they will be free without any limitations on result sets. Right now, such limitations are the norm. For example, the usual limitation (with Etherscan) is 5k entries, and you cannot get more than that today. With what we are doing, you can come to us instead and fetch everything related to your protocol easily.

There are two main reasons we see a need for this:

First, protocols want to build out their data, statistics, aggregations, and everything. And, of course, there are the developers who do their work on these protocols; and there are the data scientists and data analysts, or people who just look at data for fun. These people don’t want to spend their time working on the infrastructure and dealing with all the pitfalls along the way. They just want to focus on the data that they are interested in, and they need it to be quickly and easily accessible to them.

Second, as I mentioned, some of the tools that we have created are more convenient for us, but other projects don’t always have the same specific needs for everything we offer. So, we want to focus ourselves more on the protocols themselves, on their needs, and we want to help them build the data infrastructure that matches their protocol’s needs. We want to help them with their data (with all the aggregation, statistics, etc.) so that they can remain focused and busy with their projects.

There are as many use-cases as there are projects in crypto, so you cannot expect to make a single, unified solution for them all. For us, we have come to the point where we just want to take particular projects and help them. While we are doing so, we will be gaining new expertise — and we already have a lot of expertise in data, infrastructure, and everything like that. But we want to also get into all the new stuff that is happening as well. So, we are going to build a toolset, a framework, that will make it possible for projects to work directly with their data. We will make it possible for them to store everything (like all the statistics, aggregation, historical data, or any other data the protocol needs) inside our platform. The data will also be accessible via APIs if the protocol wants to expose it to their front ends, developers, or to anyone else.

When we are ready, when we have enough projects under our belt, we also want to share this framework. We want to ship it out, open source, to the developer community so that developers can build upon our platform without having to come to us for help. It is likely that in the future we will have something like a grant program to attract more developers to utilize our platform.

Finally, I want to emphasize that everything I have just talked about is possible only because of all the tech we have been developing for the last couple of years. But what I am talking about adds a little bit of a twist. We just love what we have done, and we want to build more things upon it on our own!

Thank you for your attention!

About PARSIQ

PARSIQ is a blockchain monitoring and workflow automation platform connecting on-chain and off-chain applications in real-time, providing transaction notifications for end-users. With PARSIQ you can connect blockchain activity to off-chain apps and devices, monitor and secure DeFi applications, and build custom event triggers and power real-time automations.

Website | Blog | Twitter | Telegram | Discord | Reddit | YouTube
