PARSIQ Network: Simplifying dApp Analytics

PARSIQ is all about data.

We’re all about making Web3 data easy, accessible, and usable for anyone and everyone utilizing blockchain technology.

That’s why — with the launch of our Tsunami API in July 2022 — we’ve laid the groundwork for PARSIQ to become the go-to source for dApps, protocols, and developers creating the world of Web3.

As a fundamental piece of Web3 infrastructure, the Tsunami API not only provides instant access to both historical and real-time blockchain data, but (alongside our Data Lakes) it also lets users customize that data to fit their needs!

In this post, we’re going to look at the way the Tsunami API greatly simplifies a task vital for anyone building on the blockchain and wanting to understand the vast amount of complex (not to mention valuable!) data passing through their systems. Namely, dApp analytics.


  • Web3 analytics are becoming increasingly important for the creation of dApps that can rival the seamless user experience provided by their Web2 counterparts.
  • Yet analytics are a tricky business. There is an enormous amount of information on chain, and a constant challenge for developers is not only gaining easy access to all of the relevant data, but also maintaining it (e.g., keeping it accurate and up to date).
  • Unlike many of our competitors, the Tsunami API offers not only an impressive amount of indexed data and unmatched speed, but also a great deal of flexibility when it comes to querying the chains.
  • This means that PARSIQ’s offerings can greatly simplify the process for developers wanting to take full advantage of blockchain data when it comes to analytics.

But before getting to all these details, you might be wondering…

You say the Tsunami API provides fundamental infrastructure and delivers instant Web3 data… But how does all this work? Why will PARSIQ become the standard source of blockchain data? What kinds of solutions does PARSIQ offer?

Great questions!

In previous posts, we’ve taken the time to provide in-depth explanations of how our services provide valuable solutions to pervasive blockchain problems.

For example, in this post we explain how PARSIQ enables companies to build on the blockchain. Whether they be Web3 native projects or traditional Web2 businesses, we’re at the forefront of Web3 technology, pushing forward its mass adoption:

  • For Web3 platforms, our tech increases the speed of bringing their project to the market, relieves common pain points for developers, and can vastly improve their user experience.
  • For traditional businesses, we remove the difficulty of integrating blockchain data into their already-existing tech stacks.

But this is really only the tip of the iceberg. To learn more about how the Tsunami API provides efficient and easy access to blockchain data, be sure to read this post. And to see how seriously we take the customization of blockchain data, read this post on the way we provide custom logic on your own terms.

Of course, we’ve also got a lot more exciting plans in the works, such as expanding to include more blockchains, creating some very exciting Data Lakes, and making decoding blockchain data even easier — to mention just a few!

All of this will come in due time. For now, though, let’s continue on with the topic at hand: dApp analytics.

What are dApp analytics and why are they so important?

‘Analytics’ is a very broad term, covering a wide range of types of data. Put simply, analytics are the sets of data that contain relevant information about the use and functionality of a platform. In fact, most people are probably already familiar with the idea of analytics at some level — think, for example, of Google Analytics, which tracks (among other things) web traffic, or Twitter analytics, which lets users see basic stats about their account, such as Tweet impressions, profile visits, number of new followers, and so on.

When it comes to dApps, analytics convey similar things: the number and identity of unique visitors to a platform or users of a dApp, how they interact with the platform, the time users spend on one part of the platform as opposed to others, and the kinds and frequency of transactions being made. Analytics also provide insight into trading volume, minting events, and NFT listings, and they report on the performance of the platform or dApp itself.

But the importance of analytics doesn’t stop there! Another important function of analytics, especially for projects operating in the DeFi space, is fraud analytics. A well-known example in the insurance space is the detection of fraud rings: criminal organizations running large-scale, systemic insurance fraud operations that are impossible to detect on a single-case basis. It’s only logical to expect this kind of behavior to plague decentralized insurance providers as well, and without access to application-level analytical data it will be impossible to detect this kind of abuse.

In short, analytics provide a window into a dApp for analyzing and assessing all of the data passing through its system.

And when it comes to Web3 dApps and protocols, there is definitely no shortage of data — with the constant creation of every block, more and more data sits waiting to be processed and made useful. We’re talking upwards of 1000 terabytes of data generated yearly by the biggest networks 😵‍💫

That’s a lot of valuable information!

So, when it comes to answering the question of why dApp analytics are so valuable, an obvious answer presents itself: dApp analytics are a crucial part of the creation and maintenance of the world of Web3.

For developers, analytics are crucial because they provide insight into the inner workings of their platform, showing them how to build better dApps with improved user experiences. And for users, analytics are important for providing transparency, and even for providing the means to protect themselves from tracking.

Yet, as true as this answer may be (and it is true!), it doesn’t really give us an accurate picture of the state of Web3 development today when it comes to the use of analytics.

To illustrate what we mean by this, let’s consider a comparison of the maturity levels of Web2 and Web3 landscapes.

The current state of Web3 vs Web2

When compared to their Web3 counterparts, Web2 apps are leagues ahead in their ability to measure and maintain analytics. But there’s a good reason for this.

Even though it’s easy to forget, we’re only in the early days of Web3. The number of dApps being built — though increasing — is still very small when compared to traditional Web2 apps.

And, when projects do produce dApps, almost all of the attention goes into making sure the core functionality works. It’s rare to see dApps that offer users a genuinely sophisticated experience, or that boast nice-to-have functions, even purely auxiliary ones.

But to be clear… This is no fault of developers today! That is certainly not what we’re suggesting. Today’s blockchain developers are the ones boldly taking the necessary first steps to make Web3 the revolutionary technology it can become.

In fact, the steps taken are already paying off.

Large companies — which are, by nature, slower moving and typically more conservative — are beginning to experiment with various aspects of Web3. In many ways, these experiments mirror the kinds of experiments big business made in the late 90s with Web1, and in and around the 2010s with Web2 (for example, with the launch of the Apple App store in 2008).

In both phases, big companies feared they’d lose their competitive edge if they didn’t begin to take these new technologies seriously. And, what did we see? Companies and brands rushed to assert their presence, creating a website or getting an app in the store.

But these early experiments weren’t especially impressive. They were mostly a combination of marketing and FOMO. Simply having a website or an app was enough to stake your claim and generate some attention. Nothing special. And this is exactly what we’re seeing with many major brands participating in Web3, when they buy virtual land, create NFTs (or other digital assets), or mint social tokens.

But this phase hardly lasts long. Soon after the initial experiments with the new technologies, companies eventually want to see some real return on investment behind their sites or apps, or, in this case, their move into Web3. When that happens, it becomes less of a marketing fad, and more a central part of how businesses drive their development.

This is where data becomes all the more important. No matter what stage of development Web3 is at, data will always remain vital. And PARSIQ is all about data!

Importantly, it’s not just data that becomes all the more important — but analytics as well.

Again, when compared to Web2 counterparts, Web3 has a lot of catching up to do when it comes to analytics. Once more, think about the ease with which analytics can be obtained from Google or Twitter! With Web3 still so early in its maturity, though, this is natural.

Before we know it, the same need for analytics will begin to explode for Web3. After all, companies won’t keep investing blindly without hard metrics about the performance of their dApps. And when that happens, a whole new industry will form around application analytics on the blockchain.

As we see it, the Tsunami API is positioned to become a top contender for providing all of this data.

The Tsunami API and data analytics

How can PARSIQ help simplify the process of maintaining analytics for Web3 dApps?

To answer this question, let’s have a look at the basics of how EVM-based chains work.

As events occur on the blockchain, the Ethereum Virtual Machine (EVM) stores logs of these events. Every event emitted by a smart contract means data is logged somewhere on the chain (transactions, or ‘calls’, always occur, while events are only emitted when a contract chooses to). Typically, these logs are used for debugging, for keeping systems up to date, or for keeping a public record of something that has happened in a smart contract.

For example, a log can contain data about the transfers of tokens or the ownership of an NFT.

Important to note is that these logs are specifically designed to be consumed outside of the blockchain itself. The data they contain (being purely raw data) cannot be read by a contract. To translate a log, you need something that makes its data legible.

How does that work?

Without getting too complicated: an event log is made up of a combination of fields. Among the most important are the “topics” (up to four per event, the first of which identifies the type of event) and the “data” field; alongside these sit contextual details such as spent gas, block data, and log data.

Topics are like a unique fingerprint or signature of a specific event that occurred. And these topics — or event signatures — are also searchable on the blockchain. You can find the types of events you’re looking for, if you know what you’re looking for. The data field, on the other hand, can be any kind of large, complex information structure. Even though it contains all the information you might be looking for, there is far too much of it to sift through unaided.
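To make this concrete, here is a minimal Python sketch of decoding a raw ERC-20 Transfer log outside the chain. The log itself is an illustrative placeholder, though the signature topic is the real keccak256 hash of `Transfer(address,address,uint256)`:

```python
# keccak256("Transfer(address,address,uint256)") -- the event's signature topic
TRANSFER_TOPIC = "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"

# Illustrative raw log, shaped like what a node or indexing API returns.
raw_log = {
    "address": "0x" + "11" * 20,                      # emitting contract (placeholder)
    "topics": [
        TRANSFER_TOPIC,                               # topic0: event signature
        "0x" + "00" * 12 + "ab" * 20,                 # topic1: "from", left-padded to 32 bytes
        "0x" + "00" * 12 + "cd" * 20,                 # topic2: "to", left-padded to 32 bytes
    ],
    "data": "0x" + hex(1_500_000)[2:].rjust(64, "0"), # non-indexed data: the uint256 amount
}

def decode_transfer(log):
    """Turn a raw Transfer log into legible fields."""
    assert log["topics"][0] == TRANSFER_TOPIC, "not a Transfer event"
    frm = "0x" + log["topics"][1][-40:]   # last 20 bytes of the padded topic
    to = "0x" + log["topics"][2][-40:]
    amount = int(log["data"], 16)         # data field holds the raw uint256 value
    return {"from": frm, "to": to, "amount": amount}

print(decode_transfer(raw_log))
```

This is exactly the “translation” step described above: the raw hex only becomes useful once something off-chain knows the event’s shape and decodes it.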

Imagine, for instance, for some reason you want to know the exact date, time, and location of when and where Danny was born, and you also want to know when he lost his first tooth. And now imagine, that what you have is just piles and piles of information, a library’s worth of details about Danny’s entire genealogy — and not just the basic facts about the names of people in his family and the relations between them. Instead, you have every single detail (no matter how boring or exciting!) of everyone’s life included in the genealogy. With thousands upon thousands of pages sitting in front of you, you wouldn’t know where to begin your search.

Raw blockchain data is kind of like that!

The trouble, as developers will tell you, is that blockchain data is a mess. There is so much of it, and it’s hard to know what to do with it. Gathering all of the relevant information into one place, and then being able to search it according to your own specific needs, is no simple feat.

Just like it would be important to have the ability to search for references to “Danny” or “birthday” or “lost tooth” in your search through the library of his genealogy, blockchain data needs to be made searchable as well.
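In code terms, making logs searchable can be as simple as indexing them by their signature topic, the way you’d index a library by keywords like “Danny” or “lost tooth”. A minimal sketch (the topic strings and logs below are illustrative placeholders, not real hashes):

```python
from collections import defaultdict

TRANSFER_TOPIC = "0xtransfer"   # stand-in for keccak256("Transfer(...)")
APPROVAL_TOPIC = "0xapproval"   # stand-in for keccak256("Approval(...)")

raw_logs = [
    {"topics": [TRANSFER_TOPIC], "block": 100},
    {"topics": [APPROVAL_TOPIC], "block": 101},
    {"topics": [TRANSFER_TOPIC], "block": 102},
]

# Build a topic0 -> logs index once; after that, lookups are cheap.
index = defaultdict(list)
for log in raw_logs:
    index[log["topics"][0]].append(log)

transfers = index[TRANSFER_TOPIC]
print(len(transfers))  # 2
```

An indexing service does this (and much more) at chain scale, so you never have to scan the raw “library” yourself.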

This is where the Tsunami API can help!

Sure, there are plenty of platforms specifically designed to create visualizations of Web3 analytics. And those kinds of visualizations can be great! But those platforms don’t solve the basic problem developers face: gathering all of the relevant information into one place and being able to search it according to their own specific needs.

In order to create interesting visualizations of analytics, you’ll still have to have all of the information readily at hand.

So, we’re still stuck at square one.

The Tsunami API provides specific methods to easily work with and search for transactions and logs. For this very reason, we’ve designed it to be very flexible with its querying capabilities.

The Tsunami API, for instance, isn’t restricted by the number of blocks. We’ve already indexed the entirety of the chain for you, all the way back to the genesis block! Beyond that, it can be queried by any basic parameter, such as blocks (by block number or block hash), transactions (by tx hash, contract, or tx origin), or events. This means you don’t always need to specify a range or a filter such as “who initiated the tx” in order to find the data you’re looking for.
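As a rough illustration of this kind of flexible querying, here is a sketch of how a client might compose a log query in which every parameter is optional. The base URL and parameter names are hypothetical placeholders, not the actual Tsunami API endpoints; consult the API reference for the real ones.

```python
from urllib.parse import urlencode

BASE = "https://api.example.com/v1/logs"   # placeholder, not a real endpoint

def build_log_query(contract=None, topic0=None, block_range=None):
    """Compose a query URL; any combination of parameters may be omitted,
    mirroring the flexibility described above (no mandatory range)."""
    params = {}
    if contract:
        params["contract"] = contract              # hypothetical parameter name
    if topic0:
        params["topic_0"] = topic0                 # hypothetical parameter name
    if block_range:
        params["block_number_start"], params["block_number_end"] = block_range
    return f"{BASE}?{urlencode(params)}" if params else BASE

# Query by contract over a block range, with no event filter at all.
url = build_log_query(contract="0xabc", block_range=(0, 15_000_000))
print(url)
```

The point of the sketch is the shape of the interface: because the whole chain is indexed, filters narrow a query rather than being prerequisites for one.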

Unlike many of our competitors, the Tsunami API offers not only an impressive amount of indexed data and unmatched speed, but also a great deal of flexibility when it comes to querying the chains.

This will prove to be essential for projects that want to run analytics on their dApps (and what dApp wouldn’t?).

Helpful as this may already sound, the Tsunami API isn’t the limit of what we’re creating at PARSIQ. In conjunction with the Tsunami API are our Data Lakes. When utilized for the sake of analytics, Data Lakes will make the job of developers simpler than ever.

Essentially, Data Lakes are portions of data that have been cordoned off according to the specific needs of each individual dApp or protocol using our services. Each and every protocol could, in theory, have its own Data Lake. For example, there could be a Uniswap Data Lake containing all (and only!) the data related to Uniswap activity. Or, as we have already created, an NFT Data Lake, which contains every bit of data about NFTs — from price history to ownership lineage, and so on.

With Data Lakes, the flexibility is even greater than with the Tsunami API. This is because the data involved only includes information that is of interest to a platform, or data that has been generated by it. Put in the simplest terms: Data Lakes are smaller, localized reservoirs of data that ‘make sense’ to the project to which the lake belongs.
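The idea can be pictured as a simple filter over the full pool of logs: each lake keeps only the records that belong to one protocol. The contract addresses below are illustrative placeholders.

```python
UNISWAP = "0xuni"   # placeholder address for a protocol's contracts
AAVE = "0xaave"     # placeholder address for another protocol

# The full, undifferentiated pool of indexed logs.
all_logs = [
    {"contract": UNISWAP, "event": "Swap"},
    {"contract": AAVE, "event": "Deposit"},
    {"contract": UNISWAP, "event": "Mint"},
]

# A "lake": only the data that makes sense to this one project.
uniswap_lake = [log for log in all_logs if log["contract"] == UNISWAP]
print(len(uniswap_lake))  # 2
```

In practice the partitioning is done server-side at indexing time, so each project queries a pre-filtered reservoir instead of the whole chain.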

Now, when needing to piece together various types of analytics, developers will have no trouble getting ahold of the data they need. And their dApps will have no trouble incorporating it.

All in all, PARSIQ has sought to make it easy to get all historical and real-time log data for a specific contract or group of contracts. With PARSIQ this can all be done instantly!


As the blockchain expands, and as Web3 and DeFi become more and more common, so too will the need for a fast and reliable source for analytics data. What will be sought is a simple solution for both historical and real-time data. And this is exactly what PARSIQ provides.

Above, we discussed why analytics are important for dApps. We also discussed some of the important use-cases, like the various ways analytics are helpful for users of a platform, are necessary for developers wanting to improve their offerings, and are even useful for preventing things like fraud.

With our Tsunami API and our Data Lakes, PARSIQ has gone far to simplify the process of delivering Web3 data analytics.


PARSIQ is a full-suite data network for building the backend of all Web3 dApps & protocols. The Tsunami API provides blockchain protocols and their clients (e.g. protocol-oriented dApps) with real-time data and historical data querying abilities.

Website | Blog | Twitter | Telegram | Discord | Reddit | YouTube |


