PARSIQ Fireside Chat + Q&A
Special Tsunami API discussion with Tom, Danny and Anatoly
Hey all, the last few weeks for PARSIQ have been huge, with the launch of Tsunami API across Ethereum, Polygon, Avalanche and BNB Chain as well as some great feedback from our over three dozen alpha testers!
To celebrate the launch we had an extra special Q&A for you with a slightly different format. This time our CEO, Tom Tirman, was joined by Danny Romazanov (CTO) and, in a long-awaited video session, by PARSIQ’s Chief Blockchain Architect — Anatoly Ressin!
Tom: Hi everyone, welcome to PARSIQ’s livestream. Today we have a slightly different format from the standard AMAs: we’re doing a Q&A plus a fireside chat discussion. This will be with myself and our CTO Danny, and, due to popular demand (something the community has been waiting on for a long time), PARSIQ’s Chief Blockchain Architect Anatoly.
Danny: Hi, everyone!
Tom: And this session follows a really productive week with the launch of one of PARSIQ’s core products on the PARSIQ Network, the Tsunami API, which has now launched on 4 different blockchains and has had over 3 dozen alpha testers, with many more in the pipeline and really exceptional feedback. So, I just want to say I’m really proud of the team for delivering this milestone, and thank you to the community for supporting us on this change of the roadmap. I think it was the best decision. The feedback has been amazing and we’re on the right path here. So, without further delay I’ll jump into the pre-collected questions from the community. We’ll try to answer some of them as best we can, then we’ll dive into further discussion.
Q: This is a popular question about the Tsunami API and the whole new direction of the PARSIQ Network. Will the Smart Triggers and other core functionalities be discontinued? Danny, I think I’ll leave this one to you.
Danny: Alright, so we don’t have a plan to discontinue them completely. A lot of our clients still use the Smart Triggers, and there are a lot of smaller, B2C use cases as well. The new platform is, I would say, from developers to developers; even in a B2B context, it’s still from developers to developers. Our core, meaning the PARSIQ core platform, allows a simple no-code/low-code approach for smaller clients and easier use cases. So, I think it will not be going anywhere anytime soon. We will definitely try to re-onboard the bigger clients to our Tsunami API and new set of products.
Tom: And just to add to this, I think there was also a question somewhere about these real-time events. So push-based events, just like they were on the PARSIQ core platform, will be available on Tsunami as well. Right?
Danny: Yes, yes, they will. It will actually be slightly different from the platform, because the platform was done with a more human-readable approach, while Tsunami follows more of a developer approach. So, it will be a little bit different. But on the other hand, Tsunami will give some benefits over the old platform when it comes to push-based data, because it will not have a strict limitation on how many actions it produces and so on.
Tom: I think this is a good chance to ask Anatoly about his thoughts. I mean, we’ve discussed this a lot back when we started PARSIQ, you know, in 2018/2019. So, what is the, let’s say, differentiation, the advantages, the key differences between querying historical data (pull-based, pinging APIs) versus processing real-time streams of events with push-based delivery? What are your thoughts on it?
Anatoly: Yeah, I just want to add some comments on the fate of the Smart Triggers, because I would say that the technology that works under Smart Triggers is still used; it is still powering our ability to react to different events, to different situations on the blockchain. We also discovered that concentrating only on real-time notifications is not sufficient to fulfill the expectations of our clients. In most cases they want to process some amount, sometimes a big amount, of historical data and then switch to real-time updates of what they have already analyzed. That is how the Tsunami API was born.
So, we started to collect data, and as I remember, from the very first sketches of how PARSIQ should work, there was always the database. We have a data stream that is split into two directions. One direction fills our databases for all types of forensics, all types of things we can do with historical data, and the other stream is consumed by the platform, where users can attach to these events. But as with every startup, we are just following the needs of the market, and now we see the value of historical data collected in this way: how we prepared it, how we dissected all the blocks into events, into calls, in a much more detailed way in comparison to how our competitors do it, and the ability to deliver these things. I would say that this is really what our clients are expecting from us for now.
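The “process history first, then switch to live updates” pattern Anatoly describes can be sketched in a few lines. This is a toy, self-contained illustration, not the actual Tsunami implementation; the in-memory lists simply stand in for a blockchain event feed:

```python
from collections import deque

# Stand-in for a chain: already-indexed historical events plus a live feed.
historical = [{"block": b, "event": f"e{b}"} for b in range(1, 6)]
live_feed = deque([{"block": 6, "event": "e6"}, {"block": 7, "event": "e7"}])

def backfill_then_stream(historical, live_feed):
    """First replay all historical events (pull-based backfill),
    then hand off to the live feed (push-based updates),
    deduplicating on block number at the boundary."""
    last_seen = 0
    for ev in historical:            # 1) pull-based backfill
        last_seen = ev["block"]
        yield ev
    while live_feed:                 # 2) push-based live updates
        ev = live_feed.popleft()
        if ev["block"] > last_seen:  # skip anything already replayed
            last_seen = ev["block"]
            yield ev

blocks = [ev["block"] for ev in backfill_then_stream(historical, live_feed)]
print(blocks)  # [1, 2, 3, 4, 5, 6, 7]
```

The deduplication check at the handoff point is the subtle part in practice: without it, a consumer can double-process events that arrive on the live feed while the backfill is still running.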
Tom: Thanks Anatoly. Since you were on the topic of competition, other types of blockchain data solutions, and the key differences, I think we should take this question next.
Q: What’s the main advantage PARSIQ has over competition, that allows Tsunami to overcome them in speed and precision? What stops them (or at least makes it difficult for them) to catch up with Tsunami?
Anatoly: I would say that, firstly, we have our own proprietary format for how we maintain the data; maybe that is not very popular in a decentralized world, but I believe that at some moment we will even publish all the data structures, how we process that data and how we’re storing it. It really allows us to process data in an enlightened manner, I would say.
This is one of the most important things. I would say that we are even considering functionalities that are absolutely missing in big data, in blockchain-connected big data: for example, cases where you can fetch all the events from the transactions where particular other events happened. For example, we can answer: can you give me all transfers from the blocks, from the transactions, where a given event happened, where this given event is not a transfer but some semantic event? This feature is highly anticipated by users, but for now we are just experimenting with such features, because it requires functionality like joins, and joins are traditionally very hard to implement for big data providers. So for now, at least for ad-hoc client requests, we can organize such data streams, and it’s a very big difference.
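To make the cross-event “join” Anatoly describes concrete, here is a toy sketch over in-memory records: fetch all Transfer events from transactions that also contain some other semantic event. The event tuples and field names are hypothetical, not the actual Tsunami schema:

```python
# Toy event log: each record is (tx_hash, event_name, payload).
# In a real pipeline these records would come from an indexed blockchain store.
events = [
    ("0xaaa", "Transfer", {"from": "0x1", "to": "0x2", "value": 100}),
    ("0xaaa", "Swap",     {"pool": "ETH/USDC"}),
    ("0xbbb", "Transfer", {"from": "0x3", "to": "0x4", "value": 50}),
    ("0xccc", "Swap",     {"pool": "ETH/DAI"}),
]

def transfers_joined_on(events, trigger_event):
    """Return all Transfer events from transactions that also contain
    `trigger_event` -- effectively a join on transaction hash."""
    tx_with_trigger = {tx for tx, name, _ in events if name == trigger_event}
    return [
        (tx, payload)
        for tx, name, payload in events
        if name == "Transfer" and tx in tx_with_trigger
    ]

print(transfers_joined_on(events, "Swap"))
# Only the transfer in 0xaaa qualifies: 0xbbb has no Swap, 0xccc has no Transfer.
```

On a laptop-sized list this is trivial; the point Anatoly makes is that doing the same join server-side, across billions of rows and at query time, is what most big-data providers struggle to offer.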
Danny: That’s actually a great example. Because of the way we structured and processed the blockchain data, it is easy for us to create custom and more complex data cases. For example, when one entity depends on other entities with some additional business logic behind that, it is easy for us to work on these cases now.
Tom: Yeah, I would also like to add, in terms of competition, who the competitors are. It’s usually node APIs, sort of like Infura and others; they are not the same, they are proxies to the node. We have ready-made data that you can ask for and take at any point. With the Data Lakes we’ll have translated events, which means aggregated data, specific data, custom-tailored data for a specific use case like NFTs, like a certain protocol, like DeFi, like DEXs.
I mean, we do have the nodes just like those node API providers, and so much more. Then we have other competitors such as the big blockchain data indexers. And like Anatoly and Daniil mentioned, the differences in the architecture and the data structures come with really, really great speed, scalability and throughput, and distributed nodes and databases mean that no data gets lost, and so on and so forth.
We’ve had excellent feedback from alpha testers. A lot of them said it’s the fastest they have ever used; some said it provides types of data that other data providers do not provide. We’ve had those complaining that other APIs and data providers take minutes, hours or days to resync new blocks, resync the nodes in relation to the new blocks and so on. So, all feedback has been extremely positive. We are quite certain that we have a competitive market advantage with our technology. As for additional use cases, we don’t only provide historical data, but real-time streams with push-based delivery on top, and we will also have storage using Data Hubs, where you can maintain state and aggregate this data, with a lot of other perks that haven’t been announced yet.
Q: Is PARSIQ aggressively targeting particular projects to onboard onto Tsunami? It would be fantastic to be able to add a top-100 market cap project to the list of users.
Tom: We do not discriminate. So, if someone wants blockchain data, whether they’re big, small, DeFi, CeFi, they’re all welcome to use it. The list of alpha testers and the projects in the pipeline is all over the place. We have DeFi projects ranging from really big protocols, those with 300 million in TVL and more, down to small and medium-sized ones. There are centralized service providers, analytics platforms, CeFi companies, big and small. So yeah, we don’t discriminate. The idea is to give everyone easy access to blockchain data.
Q: Are PARSIQ able to disclose everyone outside of PARSIQ involved in the Alpha/Beta testing?
Tom: We put some initial names up on our website. There are obviously a lot more, so yeah, one by one I think you will be seeing official announcements regarding some of the companies, projects, protocols and apps using it. Of course, that depends on the commitment and whether they want to be revealed or not.
Q: Tsunami is now live on multiple chains, which chain appears to be the most popular in terms of usage so far?
Danny: I can probably take this one. I would say the two main chains would be Polygon and Ethereum, because serious, big projects tend to launch there nowadays. Back in the day there was no alternative to BNB, actually; if protocols wanted to launch on a chain where a transaction doesn’t cost like 100 bucks, it was BNB, but nowadays it mostly goes towards Polygon. And there are still a lot of projects on Ethereum, so I would say those are the two main chains so far.
Tom: Anatoly, what’s your opinion on adding more chains, EVM versus non-EVM and so on, so forth?
Anatoly: I would say that what I see is a very strange bias from non-EVM chains to implement EVM on top of them, and there is also a demand for monitoring these EVM versions implemented on top of non-EVM chains, like Aurora on NEAR, for example, or solutions that will work with Solana. But I will say non-EVM chains are harder to monitor because of the binary nature of their messages, of what they are transferring. EVM chains are more transparent in terms of what is happening and how it is being executed. I would say chains such as Solana and NEAR will probably be monitored in both versions, as EVM and non-EVM, but because of the high throughput of the corresponding blockchains, we are still working on the architectural solution for maintaining big data sets. We already tested our approach on Polygon, because Polygon itself is a rather quick blockchain, but of course things like Solana are much, much quicker. So, I would say that at first it will be easier for us to add monitoring for EVM-based blockchains, but as we see more traction for the non-EVM chains we will definitely add them. It is interesting that for now, most liquidity is just migrating from Ethereum to Polygon.
Tom: Definitely, and we will add more chains. EVM chains, like Anatoly said, are easier for us to integrate and index. Non-EVM is a bit tougher, but we’ll definitely do them as well. Another thing is that we will be working with layer-2 scaling solutions; that is in the plans. Danny, what do you think about subchains? We’ve seen Avalanche launch subnets, Polygon is doing supernets on Polygon Edge, and we’ve seen things like app-chains: dYdX launching their app on their own Cosmos chain, so basically one chain for one app, one chain for one game.
Danny: Yeah, I think there are two points of view that I have on that. First of all, from the decentralization perspective, it kind of kills the vibe, because most of these projects are going to own the data completely. It is unlikely that there will be a lot of validators for these networks, and mostly the projects themselves will be the ones to decide. Well, it depends on the model and type of blockchain, but still, this part is kind of a shady zone for me, at least for now.
But generally, nevertheless, all of these solutions will need monitoring. They will need data, and they will need real-time data as well. And we actually had some very initial conversations with Polygon; they were interested in whether we can bring Tsunami to Polygon Edge. It’s quite an interesting topic, because it would be a big achievement if we could actually automate these things and make Tsunami available on all Polygon and Polygon Edge subnets. If we can make an automated solution there, where rolling out a new chain almost instantly gets you Tsunami on it, that would be very interesting, and I think those things are doable.
Tom: Yeah, definitely. I imagine if you’re a big game or a big dApp and you require your own subnet, subchain or supernet dedicated only to your own dApp or game, then when you launch, Tsunami is ready in a matter of hours to support your subchain. So building this framework would be quite good if these subchains pick up.
Q: Any news on Uniswap or AAVE and whether they will use Tsunami?
Tom: So, we have this concept of Data Lakes: dedicated APIs for translated, protocol-specific, custom-tailored data. We will build dedicated Data Lakes for some of the really big protocols like Uniswap and AAVE. We have connected and worked with those teams before, so we will definitely be looking to work with them on these Data Lakes, because everyone benefits: their community, everyone using Uniswap data like price aggregators, analytics platforms, DEX traders and so on. Similarly for AAVE. Whether they will use it for their backend, that’s a good question. I guess that will depend on what their roadmap is, what the implementation looks like right now, and how well we build these Data Lakes.
Q: AAVE had custom triggers set up originally, will they move across to using Tsunami now?
Tom: So, the custom AAVE triggers are still live on the core PARSIQ platform. But using the AAVE Data Lake that I mentioned, you will be able to get this data as well, and much more.
Q: Have any DEXs or CEXs shown interest in using Tsunami?
Tom: Yeah, we have a couple of DEXs in the alpha tester pipeline, and not CEXs but we definitely have multiple CeFi companies. Some of them, quite large, have tested it and are happy.
Q: We need to be able to now act on our triggers! When will Ncase be available? Will it be included in Tsunami or be its own product?
Tom: Yeah, you can still use the triggers on the old platform, but once the real-time, push-based approach is implemented on Tsunami, you will be able to set them up there as well. When will Ncase be available? We have no set date right now, because, as you know, it’s all hands on deck with the PARSIQ Network, which is Tsunami, Data Lakes, Data Hubs and the SDK. The whole roadmap until next year has been published on our blog; it’s included in my CEO Letter.
Will Ncase be included in Tsunami or be its own product? Well, it is its own product, but it benefits a lot from a layer of blockchain data provided by Tsunami, because if you set up payment infrastructure, non-custodial payment infrastructure, you want to know immediately on your side what’s going on there, and that’s where Tsunami comes in.
Q: What’s the plan for social media content? Is PARSIQ going to now host a bunch of videos about Tsunami? I.e. use cases, performance, all would make great videos.
Tom: So, it’s quite a developer-oriented product. I think the main goal would be to publish and expand documentation, and do some educational content. And of course, once we implement the SDK, there could be some really good hackathons for people to build Data Lakes on top of PARSIQ. Danny, do you have anything to add?
Danny: I actually heard that we have some educational content about Tsunami on the way already. Generally, I would agree that for a developer product, the most important thing is probably good documentation. And yeah, we have some things coming, like the SDKs; it’s important to have both good documentation and understandable code there. I think we will get to the point where we focus more on educational content. And I think, definitely at some point, we will publish some performance comparisons with some of the other platforms, just to kind of show off the Tsunami API and Data Lakes.
Tom: Yep, and we’re planning a usage dashboard, so you can see the number of requests and calls and everything.
Danny: Yeah, you will generally have the whole overview of the system, you know: you can see the latest blocks, how many transactions and events there were, how much data was processed over the last 24 hours, this kind of stuff.
Q: I’m wondering if blockchain data becomes searchable, like the internet through Tsunami, what can be done with this? Any thoughts?
Anatoly: I would say that Tsunami, while being a developer-oriented thing, also deserves its own face. The best way the data could be presented is through our own block explorer. That will absolutely unveil the power of what we can do while processing blockchains. So yes, it should become surfable, like the internet.
Tom: Yeah and I’d like to add that we do have on the roadmap, something we call PARSIQ Atlas, which is pretty much like an application layer on top. Tsunami is mostly a B2B or B2D product and you know, we are planning to build something that anyone can use, even if you’re not a developer, not a technical person, to be able to get interesting data sort of by surfing different dApps and protocols and other types of applications on the blockchain and get relevant, insightful data from them.
Q: It has been alluded to that the PRQ token will be the utility token, or means of “use” by holding PRQ for access to API queries. Is this still true?
Tom: So yes, the same model that is implemented with the PARSIQ core platform will be implemented for the PARSIQ Network. But in addition, and I think this is much more interesting, we have the Data Lakes. Data Lakes are dedicated APIs for custom-tailored, aggregated data for specific use cases: an NFT Data Lake, Uniswap Data Lakes and so on, it could be countless. And there will be multiple types of Data Lakes: those that we build as the PARSIQ team, and also those that third parties can build using our SDK. The idea here is that you would have to stake a certain amount of PARSIQ tokens. If you don’t have tokens and don’t want to buy them, then you’re free to rent them from the IQ Protocol renting pool. But you have to have them, either rented or purchased, to be able to deploy and run the Data Lake on the PARSIQ Network. You can then make the Data Lake that you built and deployed on PARSIQ public. Or you can make it private and implement some kind of access fee, where a revenue share model will be implemented between the person who built it and the PARSIQ Network.
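The staking-and-revenue-share model Tom outlines can be sketched as a toy calculation. Every number here (required stake, fee split) is a made-up assumption for illustration only, not real PARSIQ or IQ Protocol parameters:

```python
# Hypothetical illustration of the Data Lake access model described above:
# a builder must hold PRQ (owned, or rented via IQ Protocol) to deploy,
# and a private Data Lake splits access fees between builder and network.

REQUIRED_STAKE = 10_000   # PRQ needed to deploy a Data Lake (illustrative)
BUILDER_SHARE = 0.70      # revenue share going to the builder (illustrative)

def can_deploy(owned_prq: int, rented_prq: int) -> bool:
    """Owned and rented tokens both count toward the required stake."""
    return owned_prq + rented_prq >= REQUIRED_STAKE

def split_fee(access_fee: float) -> tuple[float, float]:
    """Split a private Data Lake access fee between builder and network."""
    builder = access_fee * BUILDER_SHARE
    network = access_fee - builder
    return builder, network

print(can_deploy(owned_prq=4_000, rented_prq=6_000))  # True
print(split_fee(100.0))
```

The point of the rented-stake branch is the one Tom makes: someone with no PRQ at all can still deploy by renting the stake from the IQ Protocol pool.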
In addition, we have plans for a PARSIQ DAO where different types of voting can occur, including things like incentivizing developers to build on top of the PARSIQ network, grants and other things that are beneficial to the growth of the PARSIQ network.
Q: What kind of doors will Tsunami potentially open for the crypto space in terms of innovation and new products?
Anatoly: Hmm. It’s a question about the future. The ability to react quickly and precisely on top of the data is always a benefit. I believe this is a question about what types of new things you can expect in the near future. It’s an open question, but I believe that with such big amounts of data easily consumable, users will be able to create their own fast prediction mechanisms. For example, on such data you can easily apply different statistical techniques like principal component analysis: say you are monitoring hundreds of cryptocurrency rates, but when applying principal component analysis to them, you discover that there are only 3 underlying factors, and you can reconstruct the rates from them in real time. It’s just one example of an application, and it will probably also enable different types of derivatives, for example; a lot of different things.
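As a concrete sketch of Anatoly’s PCA example, here is a minimal, self-contained demonstration on synthetic data: 100 correlated “rate” series that are secretly driven by only 3 hidden factors, recovered via the eigendecomposition of the covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "market": 100 rate series driven by 3 hidden factors plus noise.
n_assets, n_samples, n_factors = 100, 500, 3
factors = rng.normal(size=(n_samples, n_factors))
loadings = rng.normal(size=(n_factors, n_assets))
rates = factors @ loadings + 0.01 * rng.normal(size=(n_samples, n_assets))

# Principal component analysis via the eigendecomposition of the
# covariance matrix of the centered series.
centered = rates - rates.mean(axis=0)
cov = centered.T @ centered / (n_samples - 1)
eigvals = np.linalg.eigvalsh(cov)[::-1]  # sort eigenvalues descending

# The top 3 components explain almost all of the variance.
explained = eigvals[:3].sum() / eigvals.sum()
print(f"variance explained by 3 components: {explained:.3f}")
```

On real cryptocurrency rates the separation would of course be far less clean, but the mechanics are the same: a few dominant eigenvalues mean a few factors are enough to reconstruct the whole panel approximately in real time.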
Tom: Yeah, as long as we’re on the topic of the future, there was a question about the roadmap going forward. We have a roadmap published up until a certain point, and that plan is still on track. Beyond that timeframe, we do have certain goals and ideas which we haven’t yet publicly disclosed; there will be a time for that. But the idea right now is to build out what we have planned: the Data Lakes, the SDK for third parties to be able to build a Data Lake, the marketplace where people can deploy, run and monetize their Data Lakes, and getting as many users as possible onto Tsunami, the Data Lakes and the Data Hubs.
What else? We have some interesting integrations planned: we will expand the blockchain integrations as well as integrations with some centralized software providers. We will also expand the product line vertically by building application layers on top of the data that we have; one is PARSIQ Atlas, and there’s more coming. So, I think there will be a time to dive into a more expanded roadmap.
Danny: I think while we’re on the subject, a good thing to mention is that we do have a lot of discussions; there are multiple ways you can go, and a crazy amount of ideas from the whole team. But what we want to do is actually focus on the data. Data is something we have expertise in, and we have a lot of data. I think we want to move in this direction, not trying to, you know, split our focus to make like 10 products in different kinds of niche industries.
Tom: Be the one stop shop for blockchain data, a full suite solution.
Anatoly: Just building a full view of all the data that has ever happened on the blockchain and the relations between things. There may be some economic relations between major players, so as you build more and more data, more and more connections and relations become visible. We want to discover them and pack them so they will be easily consumable by those who need them.
Q: Can you please describe the backend architecture behind Tsunami API?
Danny: I would say no. I mean, there are a lot of things going on. There’s a lot of software built around the nodes to keep the data consistent at all times, because even if you run a set or a pool of nodes in one location, it doesn’t necessarily mean they will all act the same: some will fall behind, some will have other errors. So, there is a lot built around that to make our data as fresh as possible, following the latest propagated block on the chain. There is a lot on the database layer: how we process blocks, how we store them, how we make it fast and efficient. There is a lot to the Tsunami API itself, and to the Data Lakes, the SDK and the Data Hubs SDK. So, there’s a lot to potentially talk about on the tech side; I’m not really going to be able to answer it quickly.
Q: Can the advantage of using Tsunami API be quantified in dev hours saved or maybe data storage costs?
Tom: Yeah, definitely. Compared to things like building your own solution, running your own nodes, or using node APIs and building non-scalable listeners and so on, I think a calculation could be done on dev-hours saved and how costs can be reduced.
Danny: There is a lot to that. It all depends on the use case. It can save you anything from something as small as just running a node, up to tens of thousands of dollars per month if you want to have access to a lot of data at all times. The costs increase greatly the more chains you have. So, some calculations can definitely be done, but yeah, it all heavily depends on the use case.
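The kind of back-of-the-envelope calculation Tom and Danny mention might look like the sketch below. Every figure here is an invented assumption for illustration (not real PARSIQ pricing, nor a real infrastructure quote); the point is only that self-hosted costs scale with the number of chains while a hosted API typically does not:

```python
# Illustrative comparison: running your own indexing infrastructure
# vs. consuming a hosted blockchain data API. All numbers are assumptions.

NODE_COST_PER_CHAIN = 1_500   # $/month: hardware, archive storage, ops
DEV_HOURS_PER_CHAIN = 60      # monthly maintenance of nodes + listeners
DEV_HOUR_RATE = 80            # $/hour for engineering time
HOSTED_API_COST = 2_000       # $/month flat fee (hypothetical)

def self_hosted_cost(chains: int) -> int:
    """Monthly cost of running nodes and indexers for `chains` chains."""
    return chains * (NODE_COST_PER_CHAIN + DEV_HOURS_PER_CHAIN * DEV_HOUR_RATE)

for chains in (1, 4):
    diy = self_hosted_cost(chains)
    print(f"{chains} chain(s): DIY ${diy:,}/mo vs hosted API ${HOSTED_API_COST:,}/mo")
```

Under these assumptions the self-hosted path costs $6,300/month for one chain and $25,200/month for four, which is the "costs increase greatly the more chains you have" effect Danny describes.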
Q: Will the Tsunami API be able to work on private (on-premise) blockchains? Is there any scope to do business here?
Danny: I can take this one. It all depends; there is no yes-or-no answer here. It once again depends on the use case, but it is possible. If those private blockchains want to collaborate with us, then technically everything is possible. It can definitely work on private chains if those chains want to integrate with us and collaborate on that.
Tom: Yeah, Anatoly and I were discussing this years back: whether it’s a viable business decision to license the software on-premise, and whether we want to integrate enterprise blockchains, private permissioned blockchains, Hyperledger Fabric and so on. I mean, it’s possible, it’s possible.
Anatoly: At least if those private blockchains are just instances of EVM chains, then why not. And I would say that one potential target we could explore is the European Blockchain Services Infrastructure, which is actually Ethereum-based. There is a lot of activity on that blockchain that may at some point be interesting to analyze, providing information about what is happening at that level. It’s just an EVM, so if there is information from them, then we can go and start to analyze it.
Q: What’s the feedback been like for the Tsunami API + PARSIQ Network?
Tom: All the feedback we’ve got, in terms of how we built the product, how it works, how fast it is, how it solves some of the problems that people have with existing solutions, the ease of access and everything else, has been super, super positive. I didn’t expect that, and it’s why it’s easier to sell to potential users and potential projects. The combination of indexed historical data, custom-tailored specific data, and real-time push-based events is a pretty killer use case that I don’t think anyone has implemented like this; I figure it’s a huge market.
Teams building out their protocols are coming in every day. CeFi companies are now looking more at DeFi; everyone needs blockchain data. It’s going to be an insanely huge market, and PARSIQ will take a good share. I’ve never been more certain than now, after we’ve done all this work, built the products and worked with the alpha testers. We even have companies with their own chains, some of which I’d never heard of, reaching out because they’re struggling with data and asking whether we can integrate their chain, which may have a really narrow use case. But potentially we can integrate any public chain that has usage, where it will bring value to the users.
So yeah, for me, I’m very hyped. I haven’t been this optimistic and motivated for a long time, and the whole team is like this.
So, I hope the community can also share some of that optimism, because things are looking really, really good. I’m very excited by what’s coming in the roadmap. And yeah, like I said, this is just the start; there’s so much more to do. Danny, maybe you can share what current or future implementation you’re looking forward to the most, and what has been, or will be, challenging.
Danny: Yeah, for me, I would say there are a couple of interesting parts here. Tsunami will definitely keep growing; we will be bringing in EVM and non-EVM chains as well. It’s not going to be one new chain every week or something like that; I don’t think we ever want to go in that direction. But I think we want to keep up with the market. New chains will appear, some chains will die. You cannot predict which chains will survive, but we will integrate many of them; some will probably not stay with us forever, many will. And for me, among the kind of unannounced things, the most interesting part is actually collaboration with the projects.
So, when we started this new venture at the beginning of this year, our motto was, and still is, “We are custom-tailored DeFi”, and that is what excites me the most. We can closely collaborate with other projects, whether blockchain protocols or even less blockchain-oriented projects moving from Web2 to Web3 and vice versa, and work with them to solve their struggles. The really exciting part is this collaboration with others, working on their business logic and on different use cases. I’m just happy that the new platform actually allows us to open our technology to various use cases, instead of trying to come up with some standardized way of serving blockchain data, because with the current state of things that is just not possible. Because blockchain, despite being around 14 years old, or around that, is just being born right now: actual projects and users are coming to blockchain, and the number of users and use cases grows over time. The interesting part of blockchain’s life will come at the moment it stops being just a financial instrument and gets into healthcare, governments, maybe voting and things like that. Real-life use cases, so there are a lot of things to be excited about.
Tom: Yep, you heard it from the CTO himself.
Q: The last question, to Anatoly: in general, how do you see PARSIQ in the long-term future? How do you see it positioned in the market? Seeing so many people building on decentralized networks and distributed ledger technology, and Web2 companies coming in to utilize this technology, everyone will need data. Do you think the market will be big enough, and what is the opportunity here for PARSIQ to capitalize on?
Anatoly: I would say that I keep the same vision for PARSIQ as when I was asked a year or two ago. It’s a complete solution, like a Google for blockchains, that aggregates all the data in both its statistical and dynamic aspects, and we are only at the very beginning of our journey. Blockchains, just as Daniil said, despite being with us for 14 years or more, are still young. They’re still in the stabilizing phase, where the world will select a set of technologies that will collaborate and become standard. We want to scan all this data, and not only the lower layer of what happens at the blockchain level, but also all the processes on top of these things.
For example, if we compare it to the internet, it is not very interesting to analyze TCP traffic for its own sake, because it’s a very basic level. What’s interesting is what’s built on top of it. We already see things like DeFi protocols, which are really the essence of blockchain, and we are now extracting more meaningful events from there and building entire maps of how everything is distributed and how things collaborate with each other. This data holds so many insights for new ideas and new businesses, so we will be the most complete solution for this type of data. I would say that this is PARSIQ’s future.
Tom: Thank you guys. Yeah, any closing statements?
Anatoly: Hmm, Let’s Pars’em! (Laughs)
Tom: Let’s Pars’em! Yeah, thank you Danny and Anatoly for being here on this very, very important week for PARSIQ. The Tsunami launch was a success, early testing was a success. Now it’s just about expanding and amplifying everything. Onwards and upwards!
To learn more about the Tsunami API visit these links:
PARSIQ is a full-suite data network for building the backend of all Web3 dApps & protocols. The Tsunami API, which launched in July 2022, provides blockchain protocols and their clients (e.g. protocol-oriented dApps) with real-time data and historical data querying abilities.
Website | Blog | Twitter | Telegram | Discord | Reddit | YouTube