
PARSIQ Network: A New Standard for Web3 Data

Introducing the Tsunami API, Data Lakes, and Data Hubs

Currently in the process of a major evolutionary development, the PARSIQ Network is set to reinvent the way dApps and protocols access Web3 data. We are setting a new standard when it comes to speed, customization, and accessibility.

No longer will it take hours or days to access and collect data, and no longer will it take weeks to restructure that data. With PARSIQ, the process is nearly instantaneous. We’re capable of providing data for 6,288,000,000 transactions in under 1 second!

Our aim is to become the go-to, full-suite data network for building the backend of dApps & protocols across all of Web3. And we’re already paving the way to realize our goal. No one currently offers the same services — with the same level of efficiency, comprehensiveness, and speed — as PARSIQ will with the new products we are creating.

Our stack of products is a complete package, ready “out of the box”, allowing users to access and manipulate data however they like and in whatever language they prefer. Whether it be historical data and indexing, real-time data and monitoring, or even data storage, the ease of accessing and utilizing Web3 data from the PARSIQ Network is unmatched.

Our flagship product — the Tsunami API — is slated for launch in July of 2022. So, all of this is less than a couple of months away!

The purpose of this article is to better introduce you to the PARSIQ Network and its products. We will focus specifically on the first three products (of several) that we are preparing for launch: the Tsunami API, Data Lakes, and Data Hubs.

Before getting into the details, it’s worth giving you an idea of how these three products fit into our overall plan and current roadmap.

For a broad outline of the entire roadmap, have a look at Tom Tirman’s (CEO) letter on how PARSIQ Network will redefine Web3 backend. Of course, specific details will be provided about all of these other products in the future. But for now, let’s get into the details of our flagship products!

PARSIQ Network Products

In the coming months, PARSIQ Network is planning to launch our Tsunami API, Data Lakes, and Data Hubs. Already in a successful alpha testing phase, the Tsunami API is scheduled for release in July of this year.

While each of these products is unique and serves its own role, when taken together, they form the basis of our platform’s ability to provide Web3 dApps and protocols with solutions for all their data needs. This includes real-time data, historical data, data storage, and more.

The Tsunami API: Instant Web3 Data

The Tsunami API is the core API created by the PARSIQ Network. Tsunami provides Web3 dApps and protocols access to the full spectrum of data on a given blockchain. Providing raw data — both real-time and historical — will enable our clients to track any kind of events, metrics, or statistics they desire, such as TVL, liquidity, current and historical token prices, user balances and more.

The Tsunami API covers historical data, real-time data, and raw data. What does all this mean?

  • Historical data means that users can query data from past events on a blockchain. For example, they might say, “provide me the data from the time between block X and block Y, or from date X to date Y.” All of this data is quickly pulled off the blockchain. In short, this means that users can select a specific block (or event), or set of blocks (or events), from the past that they are interested in and access all of the relevant data included within the scope of the query.
  • Real-time data means that users can receive blockchain data as it occurs in real time. Essentially, this provides users with a live stream of the events or blocks they are interested in. Whereas historical data can be thought of as “pull-based,” requiring users to define the data they are interested in and pull it off of a chain, real-time data is “push-based,” pushed off the chain immediately as the blocks are being built.
  • Raw data means that the data is presented as is, with no manipulation or mediation having been done to it in the process of delivery. Raw data is extremely complex (think, for instance, of the type of data you see on Etherscan) and can be made more easily readable, indexable, or usable in a number of ways, depending on the needs and tools of the customer receiving it. Both the historical data and the real-time data provided by the Tsunami API are raw data.
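The pull vs. push distinction above can be sketched in a few lines of Python. This toy in-memory feed is purely illustrative (none of these class or method names come from PARSIQ); it only shows how a historical range query differs from a real-time subscription:

```python
from typing import Callable, Dict, List

class ToyChainFeed:
    """Toy in-memory 'chain' illustrating pull- vs push-based access."""

    def __init__(self) -> None:
        self.blocks: List[Dict] = []          # append-only block store
        self.subscribers: List[Callable[[Dict], None]] = []

    def append_block(self, block: Dict) -> None:
        """A new block arrives: store it and push it to subscribers."""
        self.blocks.append(block)
        for callback in self.subscribers:
            callback(block)                    # push-based, real-time

    def query_range(self, start: int, end: int) -> List[Dict]:
        """Pull-based, historical: return blocks numbered in [start, end]."""
        return [b for b in self.blocks if start <= b["number"] <= end]

    def subscribe(self, callback: Callable[[Dict], None]) -> None:
        """Register a callback that receives each new block as it is built."""
        self.subscribers.append(callback)

# Usage: pull a historical range, then stream new blocks as they arrive.
feed = ToyChainFeed()
for n in range(5):
    feed.append_block({"number": n, "txs": []})

history = feed.query_range(1, 3)               # pull: blocks 1..3
print([b["number"] for b in history])          # [1, 2, 3]

live: List[int] = []
feed.subscribe(lambda block: live.append(block["number"]))
feed.append_block({"number": 5, "txs": []})    # pushed to the subscriber
print(live)                                    # [5]
```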

Upon its launch, the Tsunami API will support the Ethereum, BSC, Polygon, and Avalanche blockchains, with more chains planned for support in the future.

Users are given access to every event on these blockchains. We provide it as raw data because raw data is the most fundamental kind and contains any and all information a developer might need. Providing raw data enables projects, businesses, and developers to manipulate the data in whatever way they choose.

No matter the block size, the transaction, or the contract information involved, every small, fine-grained thing that is on the blockchain is made available through the Tsunami API. We have made the data easy to access through our API or portal, requiring only simple HTTP requests.
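As a minimal sketch of what such a request could look like, the snippet below builds a filtered historical query URL. The base URL, endpoint path, and parameter names here are hypothetical placeholders for illustration, not the real Tsunami API interface:

```python
# Hypothetical sketch of a historical-data query over HTTP.
from typing import Optional
from urllib.parse import urlencode

BASE_URL = "https://api.example-tsunami.io/v1"  # placeholder base URL

def build_events_query(chain: str, from_block: int, to_block: int,
                       sender: Optional[str] = None,
                       topic: Optional[str] = None) -> str:
    """Build a query URL for raw events in a block range, with optional filters."""
    params = {"from_block": from_block, "to_block": to_block}
    if sender:
        params["sender"] = sender
    if topic:
        params["topic"] = topic
    return f"{BASE_URL}/{chain}/events?{urlencode(params)}"

# Usage: query a 100-block window, filtered by a placeholder sender address.
url = build_events_query("eth", 14_000_000, 14_000_100,
                         sender="0x0000000000000000000000000000000000000000")
print(url)
# Issuing the request is then a single HTTP GET, e.g. with the 'requests'
# library: requests.get(url, headers={"Authorization": "Bearer <api-key>"})
```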

All the events on a blockchain?! How far back does PARSIQ’s data go?

Great question! And one that comes with a great answer. Anatoly Ressin, PARSIQ’s Chief Blockchain Architect, put it directly:

What we have created allows us to even scan all of Ethereum, from the genesis block up to the current moment.

In other words, the Tsunami API provides users with raw data from the entire history of a chain. It also offers basic filtration mechanisms, allowing the whole of the blockchain history to be queried, for example, by sender, receiver, contract interacted with, topic, and so on.

That sounds impressive. But how long will it actually take to process and receive the desired data sets?

Another great question! Speed has always been a challenge for projects needing to make calls on historical data, revise and update their metrics, analyze the relations between past events, etc. As developers will know, this data can often take a long time to process — from several hours to several days!

The Tsunami API is truly an exception in this regard. As Daniil Romazanov, PARSIQ’s CTO, put it:

We have gotten it to the point where over 6 billion transactions can be processed in under 1 second! If it’s not clear, this is very fast compared to what is currently on the market.

It is safe to say that the Tsunami API allows users to immediately get results from very, very big data sets. We are currently in alpha testing of the Tsunami API, and our testers are already very impressed with the speed and accessibility of this API. We’re also excited to say that in the future we will provide a higher level of functionality, delivering data in aggregated form and not just as raw data.

Before moving on, let’s sum up the details about the Tsunami API:

  1. Provides both historical (pull-based) and real-time (push-based) data.
  2. Provides raw data that can be refined with various filters.
  3. Grants access to the entire history of data on a blockchain.
  4. Delivers results with impressive speed and efficiency.
  5. Supports Ethereum, BSC, Polygon, and Avalanche at launch (more chains to come).
  6. Is easy to access via API or portal, requiring only simple HTTP requests.

Data Lakes: Custom-Tailored Data Made Accessible and Easy

Following in the wake of the Tsunami API are our Data Lakes.

In the simplest terms, a data lake is a central repository where data can be delivered, stored, processed, and analyzed in its native, raw format.

The Data Lakes created by the PARSIQ Network will be specialized and custom-tailored lakes, dedicated to each individual decentralized app or DeFi protocol. They will be dedicated sites for any and all of the data generated by a specific project, platform, or protocol.

This sounds like a welcome solution in the world of Web3 data, but what makes Data Lakes different from the Tsunami API?

Our Data Lakes are powered by the same technology as our Tsunami API. In fact, you can think of them as forks of that API. What is so important about them is that they will, in essence, serve to both extend and refine the scope of the Tsunami API.

Data Lakes will extend the Tsunami API by providing more options for querying data. For example, a developer may need specific data related to, say, Uniswap or AAVE (completely hypothetical examples, chosen because they are so widely known). Supported by PARSIQ Data Lakes, all of the historical and real-time data related to these protocols will be even more easily accessible than anything on the Tsunami API.

Data Lakes will refine the Tsunami API by providing custom-tailored data for each of the dApps or protocols supported by a lake. In order to render the data open and readily available, we have to conduct a deep dive into the custom logic of the dApp or protocol. This allows the data to be made even more easily usable than anything on the Tsunami API.

For both of these reasons, our Data Lakes stand as an especially important milestone, because no one on the scene is currently offering this kind of custom, concrete data support for Web3 platforms. Reflecting on the importance of Data Lakes, Tom Tirman (CEO) stated:

A lot of projects and potential clients will be interested to hear more about Data Lakes, how they work, and how to get one set up. Not only will they be beneficial for the protocols themselves, but they will also open worlds of data for third-party users.

Importantly, as part of the customization process, Web3 platforms will be able to define the conditions and types of data or statistics they require. Once that is done, the data can be provided. And if aggregated data is desired (for instance, TVL, liquidity, or the pool size of various token pairs), that can easily be provided as well.

In fact, it is even possible that, in the future, public Data Lakes could be built and be geared toward specific types of data that would be of interest specifically to individuals or retail parties.

What happens if something goes wrong with a protocol supported by a Data Lake, for example, all its data somehow goes missing? Would their Data Lake become useless?

Surprisingly, no!

We’ve built it that way on purpose.

The PARSIQ Network’s Data Lakes are fault tolerant. This means that if a Data Lake were ever to cease functioning (however unlikely), it can be brought back online, resuming its state at any point and restoring any missing blocks or other data.

Everything about this process is automated, so clients won’t need to worry about the state of their Data Lake. Even more than this, if there are ever glitches or technical errors — say, if 10 blocks dropped off the chain — then we could easily make the necessary calculations, providing reliable data in its place with little to no effort.
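The kind of self-healing described above can be sketched as a simple reconciliation loop: detect which blocks are missing from local storage, then re-fetch them from the source chain. This is an illustrative outline under invented names, not PARSIQ’s actual implementation:

```python
from typing import Dict, List

def find_missing_blocks(stored: List[int], chain_head: int) -> List[int]:
    """Return block numbers in [0, chain_head] absent from local storage."""
    have = set(stored)
    return [n for n in range(chain_head + 1) if n not in have]

def backfill(lake: Dict[int, dict], fetch_block) -> None:
    """Restore any missing blocks by re-fetching them from the source chain."""
    head = max(lake) if lake else -1
    for n in find_missing_blocks(sorted(lake), head):
        lake[n] = fetch_block(n)   # re-fetch and restore the dropped block

# Usage: a toy lake that "lost" blocks 3 and 4 is repaired automatically.
lake = {0: {}, 1: {}, 2: {}, 5: {}, 6: {}}
backfill(lake, fetch_block=lambda n: {"number": n, "restored": True})
print(sorted(lake))   # [0, 1, 2, 3, 4, 5, 6]
```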

Again, let’s recap what Data Lakes are all about:

  1. Extend the Tsunami API by providing more options for user queries.
  2. Refine the Tsunami API by providing the custom data of Web3 dApps or protocols.
  3. Will be valuable to the protocols themselves and to third-party users.
  4. Allow platforms to define the conditions and types of data they require, aggregated or not.
  5. Are fault tolerant: they can be automatically brought back online in rare cases of loss of function.
  6. Allow faulty data to be restored, retaining its reliability.

Data Hubs: Rendering Web3 Data Secure, User-Friendly, and Reliable

Data Hubs are a bit easier to explain than everything covered in the Tsunami API and Data Lakes. This is because, just as Data Lakes are an extension of the Tsunami API, Data Hubs are an extension of Data Lakes.

Specifically, Data Hubs are a data storage solution for custom-tailored data of the dApps and protocols supported by Data Lakes. Every Data Lake has its own unique Data Hub. However, Data Hubs are not as open and expansive as Data Lakes. They contain, instead, only the data specific to the particular dApp or protocol supported by the Data Lake.

For example, take (again, completely hypothetically!) a Data Lake for Uniswap or AAVE. Each of these Data Lakes would have its own Data Hub, which contains only the data produced by the protocol — such as events, blocks, function calls, etc. Data Hubs also contain all of the protocol’s aggregated data, like volume, TVL, etc.
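To illustrate how aggregated metrics like volume or a TVL figure can be rolled up from raw protocol events, here is a toy sketch; the event schema, field names, and pool labels are all invented for the example:

```python
from collections import defaultdict
from typing import Dict, List

# Hypothetical raw event schema: each swap event carries a pool, a traded
# amount, and a net change in pooled liquidity.
events: List[Dict] = [
    {"pool": "TOKENA/TOKENB", "amount_in": 100.0, "liquidity_delta": 40.0},
    {"pool": "TOKENA/TOKENB", "amount_in": 50.0,  "liquidity_delta": -10.0},
    {"pool": "TOKENA/TOKENC", "amount_in": 75.0,  "liquidity_delta": 75.0},
]

def aggregate(events: List[Dict]) -> Dict[str, Dict[str, float]]:
    """Roll raw events up into per-pool volume and net liquidity (a TVL proxy)."""
    stats: Dict[str, Dict[str, float]] = defaultdict(
        lambda: {"volume": 0.0, "tvl": 0.0})
    for e in events:
        stats[e["pool"]]["volume"] += e["amount_in"]
        stats[e["pool"]]["tvl"] += e["liquidity_delta"]
    return dict(stats)

stats = aggregate(events)
print(stats["TOKENA/TOKENB"])   # {'volume': 150.0, 'tvl': 30.0}
```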

As Web3 develops, becoming even more complex and interconnected, the need for data is continually increasing; but so is the amount of data out there! Data Hubs are a necessary part of the solution for maintaining control over all of a project’s information. As Simon Harmgart (Senior Account Executive, Business Development) noted,

Now, more than ever, projects are starting to take data very seriously. This is super exciting to see! With the PARSIQ Network delivering not just instant data, but also the ability to store and aggregate it at the same time… this really will be a game changer for a lot of projects!

Particularly useful for dApps and protocols using the PARSIQ Network will be the SDKs planned for launch after our Data Hubs. Basically, an SDK will serve as a library that clients can use in their app code to get data from Data Hubs and interact with the Tsunami API. The benefit is that the SDK offers an even more user-friendly interface for working with the data we provide.
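As a purely hypothetical illustration of the ergonomics such an SDK could offer, the sketch below wraps hub queries in a small client class. Every name here is invented, not the actual SDK interface, and the request method is stubbed out so the example stays self-contained:

```python
from typing import Dict

class DataHubClient:
    """Thin client that hides raw HTTP details behind friendly methods."""

    def __init__(self, project: str, api_key: str) -> None:
        self.project = project
        self.api_key = api_key

    def _request(self, path: str, params: Dict) -> Dict:
        # A real SDK would issue an authenticated HTTP request here;
        # this stub just echoes the call so the sketch is runnable offline.
        return {"path": path, "params": params}

    def tvl(self, at_block: int) -> Dict:
        """Aggregated metric: TVL at a given block."""
        return self._request(f"/{self.project}/tvl", {"block": at_block})

    def events(self, from_block: int, to_block: int) -> Dict:
        """Raw events for a block range."""
        return self._request(f"/{self.project}/events",
                             {"from": from_block, "to": to_block})

# Usage: one line to ask for an aggregated metric instead of a raw HTTP call.
client = DataHubClient(project="my-protocol", api_key="demo")
call = client.tvl(at_block=15_000_000)
print(call["path"])   # /my-protocol/tvl
```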

Akin to Google’s Firebase in Web2, our Data Hubs will provide the final, foundational piece (alongside the Tsunami API and Data Lakes) in our suite of products, positioning the PARSIQ Network as the go-to backend for all Web3 data needs, solutions, and access.

PARSIQ Network in Action

Up to now, we have gotten an in-depth look at the flagship products that the PARSIQ Network has in store.

With our new suite of products, the PARSIQ Network will set a new standard for Web3 data and become the go-to backend solution for all Web3 data needs.

Not only does PARSIQ provide any and all raw data (historical or real-time), but we also custom-tailor the data to the needs or logic of a dApp or protocol and provide the means for storing and securing that data. And, with the launch of our SDKs, it will be easier than ever to automate one’s workflow when using that data.

To sum it all up, it works like this:

  • The Tsunami API provides easy access to all of the data — both historical and real-time — on a supported blockchain.
  • The Data Lakes extend and refine the reach of the Tsunami API, making the custom-tailored data of particular dApps and DeFi protocols open and available to the protocols themselves and to third-party platforms.
  • Data Hubs store a dApp’s or protocol’s custom data, always keeping it ready to hand, secure, and (in conjunction with the SDK) easily navigable.

But how does this all work in the real world? What’s an example of how these tools will be used?

An important question!

A great example is the way IQ Protocol will use PARSIQ Network’s technology. IQ Protocol will be one of the first DeFi protocols to have its own Data Lake.

This means that the logic of the IQ Protocol will be put into a database where the IQ Protocol team and developers can use it to power their front end, and connect their platform with all of the data that PARSIQ provides.

Essentially, this means that all of the users of IQ Protocol — NFT renters, token stakers, and those who list assets — will be using the front end of a platform whose backend is powered completely by PARSIQ and our Data Lakes.

But we can also provide a hypothetical example of how another DeFi protocol — say Uniswap — would benefit from becoming a client of the PARSIQ Network (once again, this is purely hypothetical! Uniswap has been used as an example throughout simply because of how well known they are — meaning, the example can be understood by a broad audience).

Uniswap would benefit from a Data Lake because it would meet all of their fundamental data needs from a single source: PARSIQ. For example, we would provide current and historical TVL numbers, immediate price updates for all tokens, the open and close prices of tokens (again, both current and historical), comprehensive user balances (current and historical), and so on.

A problem most protocols and projects face is needing to aggregate — and refine — data from a number of sources. A major benefit of our offering is that not only would our clients be receiving all of their data from one place, but they would also not need to do any of the aggregation work on their own. Collection of data, pre-processing of data, aggregation, storing: PARSIQ Network does all of this for our clients!

This already sounds like an impressive development, but is this all that the PARSIQ Network has to offer clients?

To this, we must offer a resounding, “no!”

The above example is simply a basic sketch of how some of the largest DeFi protocols could streamline their systems, becoming more efficient, by utilizing our products.

But the imagination really is the limit to how platforms could utilize the data we provide for them.

For instance, our data could be used to open possibilities for types of DeFi automation. Automation in the DeFi world has been a very tricky obstacle to overcome. With the all-encompassing nature of the data we provide, our network makes automation more feasible. Our solutions are just as scalable as the best, but unmatched when it comes to speed and comprehensiveness.

For instance, a protocol could build any sort of automation into their backend: retrieving data, setting up new filters, monitoring things like contracts, pools, and currency pairs — or really, whatever they desire. By using the data we provide (and which they can request with the API and SDK), they can create a kind of feedback loop between their protocol and whatever source of data they are interested in. One result is that they would be able to create extra, custom logic on the side of their protocol to facilitate automated actions and events.
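One way such a feedback loop could be wired up is sketched below: filtered events come in, and custom protocol-side logic fires when a condition matches. The engine, rule format, and event fields are all invented for illustration:

```python
from typing import Callable, Dict, List, Tuple

class AutomationEngine:
    """Toy rule engine: run custom actions on events matching a filter."""

    def __init__(self) -> None:
        # Each rule pairs a filter predicate with an action to trigger.
        self.rules: List[Tuple[Callable[[Dict], bool],
                               Callable[[Dict], None]]] = []

    def on(self, predicate: Callable[[Dict], bool],
           action: Callable[[Dict], None]) -> None:
        """Register custom logic to run whenever an event matches the filter."""
        self.rules.append((predicate, action))

    def handle(self, event: Dict) -> None:
        """Called for each pushed event; triggers any matching actions."""
        for predicate, action in self.rules:
            if predicate(event):
                action(event)

# Usage: "rebalance" whenever a monitored pool's price crosses a threshold.
engine = AutomationEngine()
alerts: List[str] = []
engine.on(lambda e: e.get("pool") == "A/B" and e.get("price", 0) > 1.05,
          lambda e: alerts.append(f"rebalance {e['pool']} @ {e['price']}"))

for event in [{"pool": "A/B", "price": 1.01}, {"pool": "A/B", "price": 1.10}]:
    engine.handle(event)

print(alerts)   # ['rebalance A/B @ 1.1']
```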

Again, other examples of the ways dApps and protocols could benefit from our network could be given (and will soon be created in practice!). The key point, though, is that what we offer our clients is limited only by their needs and abilities; beyond that, innovation abounds!

The Continued Journey Ahead

With all of what has been described above, the PARSIQ Network is redefining what it means to quickly and easily access Web3 data. The whole world of Web3 and blockchain technology, becoming more and more entrenched, is at an important point of transformation and growth.

In order to usher in the next generation of functionality, innovation, and adoption, platforms will need more than simple access to data. What is needed, instead, is highly specified, customizable, fast, and flexible access to that data. With the launch of the Tsunami API on the horizon — coming in July 2022 — that is exactly what the PARSIQ Network offers.

While our products are still in the testing phase, all the projects we have spoken to about our developments have understood our evolution to represent a great advance. This is saying a lot, as we have visited with quite a few projects this year alone, having attended a host of conferences such as ETHDenver, Avalanche Summit, NFTLA, Binance Blockchain Week, ETHDubai, DevConnect, ETHAmsterdam, Permissionless, and the Gumball3000 rally, with still more to come, such as NFT Connect, Consensus, and NFT.NYC.

In closing, it is worth pointing out other helpful resources we have available for learning more about the PARSIQ Network, our evolution, and our goals.

  • If you’re new to the PARSIQ Network, or are just interested in learning a bit more about our platform in general, be sure to check out our PARSIQ 101 article.
  • If you would like to read a letter in which Tom Tirman (CEO) describes how we will redefine Web3 backend, and presents the outline of our roadmap, click here.
  • And for an insightful reflection on the motivation for our platform’s evolution from Daniil Romazanov (CTO), click here.

We are excited by the opportunities and challenges that await us, and look forward to our accomplishments to come!


PARSIQ is a full-suite data network for building the backend of all Web3 dApps & protocols. The Tsunami API, which will ship in July 2022, will provide blockchain protocols and their clients (e.g. protocol-oriented dApps) with real-time data and historical data querying abilities.

Website | Blog | Twitter | Telegram | Discord | Reddit | YouTube


