PARSIQ Network: Solving Web3 Data Needs

Efficient and Easy Access to All Web3 Data

PARSIQ · Jul 11, 2022


This is the first of a three-part series of blog posts on how PARSIQ Network helps solve market and industry challenges in the world of Web3 and enables businesses to easily build on the blockchain. The second post can be found here, and the third can be found here.

Over the last few months, we’ve introduced you to the new suite of flagship products that the PARSIQ Network is preparing to launch.

We are very excited about our progress, and the future that these developments represent.

Instant Web3 Data!

…That’s what we’ve made possible with the first of these products, the Tsunami API.

Providing both historical and real-time data, the Tsunami API is currently in the final stages of its testing phase and will be launched this month, July 2022! We will then expand our offerings to include Data Lakes and Data Hubs.

Last month, we described how these products will set a new standard for Web3 data. And we introduced some technical aspects of these products, like their ease of use, query performance, the blockchains they support, and the types of data they make available.

All of these details are (of course) important, but they may seem a bit abstract to general readers. In this post — and in the others in this series — we’ll get into some of the more tangible details.

Two of the biggest problems that the Tsunami API solves center on issues of data accessibility. Let’s call these two problems the ease of access problem and the access efficiency problem.

While separate problems in their own right, the two overlap significantly: both are infrastructure problems that every project must deal with.

TL;DR

  • Solving the ease of access problem means PARSIQ provides you with any and all blockchain data, ready “out of the box”, requiring minimal manual work and no node maintenance whatsoever.
  • Solving the access efficiency problem means that data from multiple blockchains is available instantly. For every blockchain we support, we run a number of nodes and have built a software layer that makes the system resilient to all sorts of node failures.

But there are many more details involved than this. So, what exactly are these problems? And how does PARSIQ solve them?

Let’s take them one at a time.

The ease of access problem…

Ease of access to data — and not just any data, but relevant data — is something every dApp and protocol requires to run efficiently and to accomplish the purpose it was designed for.

Let’s say you run an investment management platform on the Ethereum blockchain. Your main purpose is to provide your users with the analytics and tools that will make it simple to build a crypto portfolio and to manage it with ease.

In order for your platform to succeed, you'll need many information streams running at the same time. You will need to know, in real time, when TXs occur and when events fire on your smart contracts; and to give your users what they need for, say, automated risk analysis or predicting the highest ROI, you will also need easy access to historical data.
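To make those two streams concrete, here is a minimal TypeScript sketch of the pattern. To be clear, the base URL, endpoint path, and response shape below are hypothetical placeholders for illustration, not the actual Tsunami API surface:

```typescript
// Minimal sketch of the two data streams such a platform needs.
// NOTE: the base URL, endpoint path, and response shape below are
// hypothetical placeholders, not the actual Tsunami API surface.

const API_BASE = "https://api.example.com/v1"; // placeholder
const API_KEY = process.env.API_KEY ?? "";

interface EventRecord {
  block_number: number;
  tx_hash: string;
  contract: string;
}

// Historical stream: fetch past events emitted by your contract.
async function fetchEvents(contract: string, fromBlock: number): Promise<EventRecord[]> {
  const url = `${API_BASE}/events?contract=${contract}&block_gte=${fromBlock}`;
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${API_KEY}` },
  });
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  return (await res.json()) as EventRecord[];
}

// Real-time stream: repeatedly ask for anything newer than the last
// block we have seen, so risk analysis always works on fresh data.
async function watchEvents(contract: string, onEvent: (e: EventRecord) => void) {
  let lastSeen = 0;
  for (;;) {
    for (const e of await fetchEvents(contract, lastSeen + 1)) {
      onEvent(e);
      lastSeen = Math.max(lastSeen, e.block_number);
    }
    await new Promise((r) => setTimeout(r, 5_000)); // poll every 5s
  }
}
```

The point is the shape of the problem: one query for history, one loop for the live edge, and the two have to agree with each other.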

To instill confidence in your users, you need all of this information available quickly, you need to know it is reliable, and the experience must be smooth (i.e., your users should never feel any friction).

These details take us to the heart of the ease of access problem.

All of the data available on the blockchain can be used for something: improving user experience, building indexes, gating access to various fields, visualizing a protocol's information, and so on. Whether the TXs are from today, a year ago, or tomorrow, all of it derives from the same fundamental data, and platforms need seamless integration with all of it.

But how can you get all of this without creating other hassles for yourself?

The Tsunami API solves the ease of access problem.

As any developer knows, nodes are a constant source of frustration — they lag, don’t add new blocks, and randomly crash. It’s as if they’re the biggest fans of Murphy’s Law: Anything that can go wrong, will go wrong.

All of this results in data loss. And data loss means poor user experience. And poor user experience means more manual work!

And, more than that, if you want information that is days (or weeks!) old, you'd have to maintain an archive node as well.

This means you'd have to actively store and maintain upwards of 10TB of data. But not only that! If your user experience depends on the nodes (and this is true for a lot of platforms), then you also can't rely on just one node.

If you build your own infrastructure, you'd need at least 2 or 3 nodes to correct for these issues. On top of that, you'd have to build a system around all of those nodes so you can switch between them automatically whenever one glitches or fails, keeping the stream of accurate data seamless. The alternative is to rely on a third-party node service to run these processes for you.
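To give a sense of what that switching layer involves, here is a minimal sketch, assuming plain Ethereum JSON-RPC nodes (the node URLs are placeholders):

```typescript
// A minimal sketch of the failover layer you would otherwise have to
// build yourself: try each of your nodes in turn and use the first
// healthy answer. The node URLs below are placeholders.

const NODES = [
  "https://node-a.example.com",
  "https://node-b.example.com",
  "https://node-c.example.com",
];

async function rpcCall(url: string, method: string, params: unknown[] = []): Promise<unknown> {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status} from ${url}`);
  const body = await res.json();
  if (body.error) throw new Error(`RPC error: ${body.error.message}`);
  return body.result;
}

// Ask for the latest block, failing over to the next node whenever
// one times out, errors, or returns garbage.
async function latestBlockWithFailover(): Promise<number> {
  let lastError: unknown;
  for (const url of NODES) {
    try {
      const hex = (await rpcCall(url, "eth_blockNumber")) as string;
      return parseInt(hex, 16);
    } catch (err) {
      lastError = err; // this node glitched; try the next one
    }
  }
  throw new Error(`All nodes failed: ${lastError}`);
}
```

And this is only the happy path; a production version would also need timeouts, health scoring, and lag detection across nodes.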

We have solved this infrastructure problem.

With the Tsunami API, we have indexed and stored any and all of the blockchain data on the blockchains we support. Upon full launch, this will include Ethereum, Avalanche, BNB Smart Chain, and Polygon (with more to come). All of this data will be instantly available to the platforms that request it.

When dApps and protocols utilize the Tsunami API, they get a massive amount of data, delivered to them instantly, accurately, and efficiently.

How much data are we talking about?

As of July 1st, 2022, this includes…

…Nearly 15 million blocks on Ethereum, involving storage of nearly 8.5TB, with more than 1.5 billion TXs, 3.5 billion calls, and 2 billion events.

…Nearly 16 million blocks on Avalanche, involving storage of nearly 2TB, with more than 175 million TXs, over 637 million calls, and over 658 million events.

…Nearly 20 million blocks on BNB Smart Chain, involving storage of nearly 25TB, with more than 3 billion TXs, nearly 10 billion calls, and 10 billion events.

…Over 30 million blocks on Polygon, involving storage of nearly 9TB, with nearly 750 million TXs, over 2.25 billion calls, and 2 billion events.

So long to the days when platforms need to face the ease of access problem. With the Tsunami API, all of this is ready “out of the box”, requiring minimal manual work and no node maintenance whatsoever.

The access efficiency problem…

The access efficiency problem is not entirely distinct from the ease of access problem. Yet, it still stands as a problem of its own, requiring a unique solution.

When it comes to the ease of access problem, the issue is collecting data: having to run several nodes (archive nodes included) and storing a vast amount of ever-updating data.

If you didn’t lay the proper groundwork, going back and retrieving all of that data is a chore in itself. Worse, if you can’t process it faster than new blocks are generated, you will never catch up to the most recent block!
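The arithmetic behind that trap is worth making explicit. A toy calculation (all numbers illustrative):

```typescript
// You only ever reach the chain tip if you process blocks faster than
// the chain produces them. All numbers here are illustrative.
function catchUpTimeSeconds(
  backlogBlocks: number,      // blocks you still have to ingest
  processSecPerBlock: number, // your ingestion speed
  chainSecPerBlock: number    // the chain's block time
): number {
  // Your backlog shrinks by (1/process - 1/chain) blocks per second.
  const drainRate = 1 / processSecPerBlock - 1 / chainSecPerBlock;
  if (drainRate <= 0) return Infinity; // you will never catch up
  return backlogBlocks / drainRate;
}

// A 1M-block backlog at 0.5s per block against a ~13s block time:
console.log(catchUpTimeSeconds(1_000_000, 0.5, 13)); // ~520,000s, about 6 days
```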

This requires a lot of infrastructure and planning in its own right.

The access efficiency problem expands on the above, multiplying the troubles that are encountered in the ease of access problem. This is because, if you want to build on more than one blockchain — and presumably you will if you are trying to build, for example, a successful investment management platform — then the amount of data you are dealing with expands rapidly.

This is an issue of infrastructure scalability: take all the problems you have with infrastructure on one blockchain, then multiply them by the number of blockchains you want to deploy your project on.

And, don’t forget: blockchains differ from one another. A solution that works on one rarely carries over to another as a simple copy-paste.
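One common way to tame those differences is to put an adapter for each chain behind a shared interface. The sketch below assumes EVM-style JSON-RPC on every chain; the names, URLs, and tuning numbers are illustrative, not a description of PARSIQ's internals:

```typescript
// Compact JSON-RPC helper (same pattern as the failover sketch above).
async function rpcCall(url: string, method: string, params: unknown[] = []) {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  return (await res.json()).result;
}

// Each chain gets its own adapter behind a common interface, because
// block times, finality depths, and RPC quirks all differ per chain.
interface ChainAdapter {
  name: string;
  blockTimeSec: number;  // drives polling cadence
  confirmations: number; // how deep a block must be before you trust it
  latestBlock(): Promise<number>;
}

function makeAdapter(
  name: string, rpcUrl: string, blockTimeSec: number, confirmations: number
): ChainAdapter {
  return {
    name,
    blockTimeSec,
    confirmations,
    latestBlock: async () =>
      parseInt((await rpcCall(rpcUrl, "eth_blockNumber")) as string, 16),
  };
}

// Same interface, chain-specific tuning (URLs and numbers illustrative):
const chains: ChainAdapter[] = [
  makeAdapter("ethereum", "https://eth.example.com", 13, 12),
  makeAdapter("polygon", "https://polygon.example.com", 2, 128),
  makeAdapter("bnb", "https://bnb.example.com", 3, 15),
];
```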

Once again, we have overcome this problem with the Tsunami API.

For all of the blockchains that we support, we have indexed all of the data, run a number of nodes as support, and built a software layer that makes the system resistant to all sorts of node failures.

For instance, if some nodes lag, fail to update, or lose blocks, you are automatically provided with accurate data from other nodes in the system. And, because we have all of the historical data stored, we can confidently call the Tsunami API “re-org” aware: even if all of the nodes were to break down (which, of course, is a highly unlikely scenario), the software that we built is capable of recovering itself from the point of loss!
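What "re-org aware" means in practice can be sketched roughly as follows: remember the hash of every block you ingest, and when a new block's parent hash does not match what you stored, rewind to the fork point and re-ingest. This is an illustration of the general technique, not PARSIQ's internal implementation:

```typescript
// Compact JSON-RPC helper (same pattern as the failover sketch above).
async function rpcCall(url: string, method: string, params: unknown[] = []) {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  return (await res.json()).result;
}

interface BlockHeader { number: string; hash: string; parentHash: string }

const seenHashes = new Map<number, string>(); // height -> ingested hash

async function getBlock(url: string, height: number): Promise<BlockHeader> {
  return (await rpcCall(
    url, "eth_getBlockByNumber", ["0x" + height.toString(16), false]
  )) as BlockHeader;
}

// Decide where ingestion should continue from after seeing a new block.
async function handleNewBlock(url: string, block: BlockHeader): Promise<number> {
  const height = parseInt(block.number, 16);
  const knownParent = seenHashes.get(height - 1);
  if (knownParent && knownParent !== block.parentHash) {
    // Re-org: walk back until our stored hash matches the canonical chain.
    let h = height - 1;
    while (h > 0 && seenHashes.get(h) !== (await getBlock(url, h)).hash) {
      seenHashes.delete(h); // forget the orphaned block
      h--;
    }
    return h + 1; // re-ingest everything above the fork point
  }
  seenHashes.set(height, block.hash);
  return height + 1;
}
```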

Your data is not only instant; it is also safe and tolerant of any faults it may face.

For platforms seeking a world of accurate data, seamless user experience, and reliable infrastructure (and what platform isn’t seeking all of these things?), the Tsunami API is sure to deliver confidence along with every block!

Summing it all up

The two main problems described above — the ease of access problem and the access efficiency problem — are neatly solved by our Tsunami API.

Sound infrastructure is needed by every platform seeking a future in Web3. Having reliable data, and trouble-free access to that data, is key to having sound infrastructure. And that is exactly what PARSIQ provides with the Tsunami API.

Importantly, the Tsunami API is only the foundation of what we are building! For many projects, such an API would stand as the crowning achievement of their offering to the blockchain community. Yet, with the creation of our Data Lakes and Data Hubs, an even bigger set of solutions becomes not only possible, but simple.

The next part in this blog series will provide an example of how the Tsunami API enables businesses to build on the blockchain. Be sure to have a look!

About PARSIQ

PARSIQ is a full-suite data network for building the backend of all Web3 dApps & protocols. The Tsunami API, which will ship in July 2022, will provide blockchain protocols and their clients (e.g. protocol-oriented dApps) with real-time data and historical data querying abilities.

Website | Blog | Twitter | Telegram | Discord | Reddit | YouTube
