PARSIQ Network: Solving Web3 Data Needs

Custom Logic on Your Own Terms

This is the third of a three-part series of blog posts on how PARSIQ Network helps solve market and industry challenges in the world of Web3 and enables businesses to easily build on the blockchain. The first and second posts can be found on the PARSIQ blog.

PARSIQ has been hard at work on a new suite of products that will raise the bar for Web3 data accessibility and backend efficiency for developers and their dApps or protocols.

The Tsunami API will be the first of our new flagship products to launch, coming in July 2022!

Providing real-time and historical data, and performing with impressive speed and flexibility, the Tsunami API stands as the foundation for the exciting suite of products we are building.

But note: this point is crucial for understanding the broader vision that the Tsunami API serves:

The Tsunami API is only the foundation of what we are building!

This statement may sound a bit surprising. You might be wondering: What do you mean that the Tsunami API is only the foundation? Isn’t this your new flagship product? Hasn’t releasing this API been your main goal this year?

Yes! The Tsunami API is our new flagship product. But at the same time, it is just a step — to be sure, a vitally important one! — to where we are heading.

The journey towards becoming the go-to data source for Web3 backend is not something that can be completed in a single stride. Along the way, there will be many exciting developments, none of which would be possible without the Tsunami API.

The point of this post is to explain, on the one hand, a bit about what we mean by this, and on the other hand, why it is so important.

Let’s dig in!

Many projects would consider the Tsunami API to be their crowning achievement — their endgame product, so to speak. That is, it wouldn’t take a stretch of the imagination to consider the Tsunami API as one’s primary and sole offering to the blockchain community of developers, dApps, and protocols.

For PARSIQ, however, Tsunami represents only the beginning of the large-scale vision that we are currently bringing to life.

This isn’t, of course, to downplay the significance of the Tsunami API. Not by any means! We’re extremely proud of this API, and look forward to the ways it will solve a number of problems facing developers seeking easy access to Web3 data.

In fact, the first two posts in this series were dedicated to spelling out the details of these problems!

  • In the first post, we discussed two very common problems facing the blockchain industry when it comes to data accessibility, as well as fundamental infrastructure problems relating to the efficiency of blockchain data. We also described how the Tsunami API provides solutions to these problems.
  • In the second post, we talked about the two kinds of companies in need of easy access to Web3 data: Web3 native projects and traditional Web2 businesses. Both types of companies face their own challenges when extracting optimum value from blockchain data. But, as we saw, the Tsunami API provides elegant solutions for both, bringing blockchain data to the real world!

So, it goes without saying that we take seriously our achievements in building the Tsunami API! And how could we not? After all, it’s our new flagship product.

However, in contrast to the previous two posts in this series, what we want to stress now is the very simple fact that the Tsunami API is really only the beginning of everything we have in store.

Yes, the problems that Tsunami helps solve are significant. But with the creation of our Data Lakes and Data Hubs, an even bigger set of solutions is not only possible; it will be made simple.

This is because the Tsunami API opens the door to truly custom-tailored DeFi data.

Custom-tailored data sounds important! But what do you mean by that?

  • Most simply, custom-tailored data is a sort of “catch all” term we’re using to describe the way the Tsunami API serves as the basis for providing custom logic to developers on their own terms.

Admittedly, this answer remains a bit abstract for the average reader. What’s this supposed to mean?

There are two important aspects that need to be addressed in order to properly answer this question.

One has to do with the simplicity of the network of data sources, and the other has to do with the parameters of the data being delivered.

Let’s have a look at each individually…

Simplicity in Data Sources

When it comes to blockchain data, there are many sources and ways of gaining access. It is not as if — at the level of raw data, or the entire history of events on various blockchains — the Tsunami API is providing something exclusive, something that could not be accessed by other means.

Data is data.

Yet, at the same time, it is not always as simple as that. When we talk about “data,” it is important to remember that it’s not something merely abstract: there are always a number of interests at play. For example, there are particular values represented, specific conditions involved, information at stake, needs to be met, and so on.

These interests are what make data valuable. You do not — at any random point — just “grab a bunch of data.” That would be like digging a bunch of holes, hoping to find a treasure. You might get lucky and find something interesting or useful. But it’s an incredibly inefficient way of going about the job.

No, you don’t just want “data.” You want the specific data you need to accomplish whatever task you are facing right now. You also don’t want to struggle with the process of retrieving that data. The goal is for the data to serve your needs, and not the other way around.

So how can I make this process easier?

This is a question that developers are always asking themselves.

The Tsunami API is an impressive API, but that doesn’t mean there are no alternatives or competition.

These alternatives and competitors, however, are marked by limitations you won’t necessarily find with the Tsunami API — especially when paired with our Data Lakes and Data Hubs.

What are some of the alternatives?

The most obvious alternative to the Tsunami API is Covalent. Covalent offers a great service. And, when compared to the Tsunami API, their offerings are very similar to our API. But, as we’ve seen in the previous post in this series, solutions to problems relating to Web3 data accessibility aren’t just about the availability of data, but also the right amount of flexibility when querying that data.

Unlike many of our best competitors, the Tsunami API offers a great degree of flexibility. It isn’t limited by the number of blocks, for instance, and can be queried by any basic parameter: you don’t always need to specify a range of events or transactions, or details such as who initiated the tx, in order to find the data you’re looking for. Instead, you can be as broad or as granular as you like and quickly locate the relevant information.
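To make the idea of "broad or granular" querying concrete, here is a minimal sketch of parameter-based event filtering. The field names, the sample events, and the `filter_events` helper are purely illustrative assumptions for this post, not the actual Tsunami API:

```python
# Illustrative sketch: filter events by any combination of parameters.
# Field names and data are hypothetical, not the real Tsunami API schema.

def filter_events(events, **criteria):
    """Return events matching every supplied criterion.

    Any field can act as a filter, so a query can be as broad
    (no criteria at all) or as granular (many criteria) as needed.
    """
    return [e for e in events
            if all(e.get(k) == v for k, v in criteria.items())]

# A tiny in-memory sample standing in for blockchain event data.
events = [
    {"contract": "0xPool", "topic": "Swap", "origin": "0xAlice"},
    {"contract": "0xPool", "topic": "Mint", "origin": "0xBob"},
    {"contract": "0xToken", "topic": "Transfer", "origin": "0xAlice"},
]

# Broad: everything emitted by one contract.
pool_events = filter_events(events, contract="0xPool")

# Granular: one topic from one originator.
alice_swaps = filter_events(events, topic="Swap", origin="0xAlice")
```

The point of the sketch is simply that no single parameter (such as a block range or a transaction initiator) is mandatory; any combination narrows the result set.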

Another alternative is for platforms to work with external node providers or to host nodes themselves. Again, this can get the job done. But it often means taking more than one step. You must, for example, talk to the node providers, set up your own nodes, archive the data, and so on. We explained in the earlier posts of this series why this option often creates as much trouble as it solves.

Where other APIs may not offer as much flexibility or speed, node providers only provide access to nodes (and nothing more). Either you’re working with multiple sources to gain access to the data you need, or you’re working with APIs that require extra steps to define the data you want to access.

In both cases, the simplicity of the process is interrupted, requiring developers to do something more in their queries. You must, for example, search all the logs for the information on a node, process the information, or what have you.

The Tsunami API brings developers great simplicity.

The simplicity of the Tsunami API allows developers to gain access to data on their own terms. And that is exactly what developers are looking for when focusing on building their own platforms.

We built it this way on purpose.

Custom Logic on Your Terms

In saying everything we’ve said above, something very important to recall is that the Tsunami API is not really where PARSIQ is attempting to stand out from the crowd.

This last sentence probably sounds strange.

Yes, the Tsunami API is fast; yes, it is efficient; yes, it is flexible. Yes to all of these things, and more. But focusing on these details alone would be to overlook the bigger picture.

Ultimately, our concerns lie with this bigger picture: we have been creating this API as the foundation for the bigger and better systems that we’re building. While node providers and other APIs have very specific use cases, the ‘endgame’ of our competitors sometimes may just be the starting point of the Tsunami API.

Our aim is to provide truly custom-tailored DeFi data. One way we do that is through what we just described above. But that is by no means the limit of this vision. The suite of products that we are building atop the Tsunami API, such as the Data Lakes and Data Hubs, brings the level of customization even closer to home.

How so? What are Data Lakes?

Essentially, Data Lakes are portions of data that have been cordoned off according to the specific needs of each individual decentralized app or DeFi protocol using our services. The flexibility involved is even greater than with the Tsunami API: a Data Lake includes only the external data that is of interest to a given platform, along with the data that the platform itself has generated.

Put in the simplest terms: Data Lakes are smaller, localized reservoirs of data that ‘make sense’ to the projects to which they belong.
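The "localized reservoir" idea can be sketched in a few lines. The `DataLake` class below, its fields, and the sample events are hypothetical illustrations of the concept described here, not PARSIQ's implementation:

```python
# Illustrative sketch of the Data Lake concept: a project-scoped reservoir
# that retains only the slice of chain data one protocol cares about.
# The class and its fields are hypothetical, not PARSIQ's actual design.

class DataLake:
    def __init__(self, contracts):
        # The contract addresses this project is interested in.
        self.contracts = set(contracts)
        self.reservoir = []

    def ingest(self, event):
        # Cordon off only the data relevant to this project;
        # everything else flows past untouched.
        if event["contract"] in self.contracts:
            self.reservoir.append(event)

lake = DataLake(contracts={"0xMyProtocol"})
for ev in [
    {"contract": "0xMyProtocol", "topic": "Deposit"},
    {"contract": "0xUnrelated", "topic": "Transfer"},
    {"contract": "0xMyProtocol", "topic": "Withdraw"},
]:
    lake.ingest(ev)

# The lake now holds only the two events belonging to the protocol.
```

The design choice this illustrates is that filtering happens at ingestion time, so the project only ever stores and queries data that already ‘makes sense’ to it.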

Ok, so Data Lakes provide protocol-specific data, but what are Data Hubs?

Alongside our Data Lakes are our Data Hubs, which accompany a Data Lake by storing any of the data that is too large or too expensive to be dealt with using purely on-chain resources.

This is especially important in light of the fact that both kinds of companies interested in Web3 data (Web3 native platforms and Web2 companies interested in blockchain technology) are always looking for data solutions that will allow them to create more sophisticated platforms with the best possible user experiences. More on this topic can be found in the second post of this series.

Here, the closest alternative to PARSIQ is The Graph. Similar to our Data Lakes, The Graph provides platforms with subgraphs that host only the elements of data pertaining to the platform. Yet, in order to do this, their solution involves coordinating multiple types and sources of data with one another, requiring constant points of connection between, for example, indexers and nodes.

With this type of solution, developers must frequently resync their nodes and reindex their data, in order to ensure that everything is accurate and up to date. All of this takes time and resources.

Thanks to the Tsunami API — which already contains an active index of a blockchain’s entire history — and our Data Hubs which store any and all relevant information, our Data Lakes do not need this kind of routine, time-intensive, and laborious maintenance.

This sounds ideal, but how can these be set up?

PARSIQ is dedicated not only to making all of Web3 data available, but also easily accessible. We happily help projects by cooperating with them, allowing them to determine the limits and constraints on the data they need, so that they can make sense of the information that is on-chain and so create uses for it in the real world.

Summing it Up

With all of what has been described in this post (and in this blog series), the PARSIQ Network is redefining what it means to quickly and easily access Web3 data. Beginning with our Tsunami API, PARSIQ will provide highly specified, customizable, fast, and flexible access to that data.

But Tsunami is really only the beginning!

With the launch of Data Lakes and Data Hubs, PARSIQ will become the ‘one-stop shop’ for all backend Web3 data. Not only will dApps and protocols be able to get all of their data from a single source — and not have to work with a number of APIs, node providers, and so on — but their developers will also have control over the exact types of data they want, customization and all.

So, when we say the Tsunami API provides the basis for custom-tailored data, we do not simply mean that the data can be queried easily. We also mean that it can be made highly refined in order to:

  • Serve as a foundation for developers to have streamlined access to the exact points of data they need
  • Allow developers to utilize that data in ways that make sense, based on the internal needs and purposes of their dApps or protocols
  • Provide the basis for a frictionless relationship between Web3 data, one’s business logic, and effects in the real world

In short, offering custom-tailored data means providing direct access to very specific types of data.

This means custom logic for developers, completely on their own terms!

This concludes the three-part series on how the Tsunami API will solve your Web3 data needs.

Thank you for reading, and we look forward to the release of our new products!

🚀 🖖 🌯


PARSIQ is a full-suite data network for building the backend of all Web3 dApps & protocols. The Tsunami API, which will ship in July 2022, will provide blockchain protocols and their clients (e.g. protocol-oriented dApps) with real-time data and historical data querying abilities.



