PARSIQ Fireside Chat + Q&A

Special Tsunami API discussion with Tom, Danny and Anatoly

Introduction

Tom: Hi everyone, welcome to PARSIQ’s livestream. Today we have a slightly different format from the standard AMAs: we’re doing a Q&A plus a fireside chat discussion. This will be with myself and our CTO Danny, and, due to popular demand, something the community has been waiting for for a long time, we have PARSIQ’s Chief Blockchain Architect Anatoly.

Q: This is a popular question about the Tsunami API and the whole new direction of the PARSIQ Network. Will the Smart Triggers and other core functionalities be discontinued? Danny, I think I’ll leave this one to you.

Danny: Alright, so we don’t have a plan to discontinue them completely. A lot of our clients still use the Smart Triggers, and there are a lot of smaller, B2C use cases as well. The new platform is, I would say, from developers to developers; even in a B2B context, it’s still from developers to developers. The PARSIQ Core platform allows a simple no-code/low-code approach for smaller clients and easier use cases. So I think it will not be going anywhere anytime soon. We will definitely try to re-onboard the bigger clients to our Tsunami API and new set of products.

Q: What’s the main advantage PARSIQ has over competition, that allows Tsunami to overcome them in speed and precision? What stops them (or at least makes it difficult for them) to catch up with Tsunami?

Anatoly: I would say that, firstly, we have our own proprietary format for how we maintain the data, which may not be very popular in a decentralized world. But I believe that at some point we will even publish all the data structures, how we process that data and how we store it. It really allows us to process data in an enlightened manner, I would say.

Q: Is PARSIQ aggressively targeting particular projects to onboard onto Tsunami? It would be fantastic to be able to announce a top-100 market cap project joining the list of users.

Tom: We do not discriminate. So, if someone wants blockchain data, whether they’re big, small, DeFi, CeFi, they’re all welcome to use it. The list of alpha testers and the projects in the pipeline is all over the place: we have DeFi projects ranging from really, really big protocols, those with $300 million or more in TVL, down to medium-sized and small ones. There are centralized service providers, analytics platforms, CeFi companies, big and small. So yeah, we don’t discriminate. The idea is to give everyone easy access to blockchain data.

Q: Is PARSIQ able to disclose everyone outside of PARSIQ involved in the Alpha/Beta testing?

Tom: We put some initial names up on our website. There are obviously a lot more, so one by one I think you will be seeing official announcements regarding some of the companies, projects, protocols and apps who are using it. Of course, that depends on the commitment and whether they want to be revealed or not.

Q: Tsunami is now live on multiple chains, which chain appears to be the most popular in terms of usage so far?

Danny: I can probably take this one. I would say the two main chains would be Polygon and Ethereum, because serious, big projects tend to launch there nowadays. Back in the day there was no alternative to BNB, so if protocols wanted to launch on a chain where a transaction doesn’t cost 100 bucks, it was BNB, but nowadays it mostly goes towards Polygon. And there are still a lot of projects on Ethereum, so I would say those are the two main chains so far.

Q: Any news on Uniswap or AAVE and whether they will use Tsunami?

Tom: So, we have this concept of Data Lakes: dedicated APIs for translated, processed, customer-tailored data for specific use cases. We will build dedicated Data Lakes for some of the really big protocols like Uniswap and AAVE. We have connected and worked with those teams before, so we will definitely be looking to work with them on these Data Lakes, because everyone benefits: their communities, and everyone using Uniswap data, like price aggregators, analytics platforms, DEX traders and so on. It’s similar for AAVE. Whether they will use it for their backend, that’s a good question; I guess that will depend on their roadmap, what the implementation is right now, and how well we build these Data Lakes.

Q: AAVE had custom triggers set up originally, will they move across to using Tsunami now?

Tom: So the custom AAVE triggers are still live on the PARSIQ Core platform. But using the AAVE Data Lake that I mentioned, you will be able to get this data as well, and much more.

Q: Have any DEXs or CEXs shown interest in using Tsunami?

Tom: Yeah, we have a couple of DEXs in the alpha tester pipeline, and not CEXs but we definitely have multiple CeFi companies. Some of them, quite large, have tested it and are happy.

Q: We need to be able to now act on our triggers! When will Ncase be available? Will it be included in Tsunami or be its own product?

Tom: Yeah, you can still use the triggers on the old platform, but once the real-time, push-based approach is implemented on Tsunami, you will be able to set them up there as well. When will Ncase be available? We have no set date right now because, as you know, it’s all hands on deck with the PARSIQ Network, which is Tsunami, Data Lakes, Data Hubs, and the SDK. The whole roadmap through next year has been published on our blog; it’s included in my CEO Letter.

Q: What’s the plan for social media content? Is PARSIQ going to now host a bunch of videos about Tsunami? I.e. use cases, performance, all would make great videos.

Tom: So, it’s quite a developer-oriented product. I think the main goal would be to publish and expand documentation, and do some educational content. And of course, once we implement the SDK, there could be some really good hackathons for people to build Data Lakes on top of PARSIQ. Danny, do you have anything to add?

Q: I’m wondering if blockchain data becomes searchable, like the internet through Tsunami, what can be done with this? Any thoughts?

Anatoly: I would say that Tsunami, while being a developer-oriented thing, also deserves its own face. The best way the data could be presented is through our own block explorer. That will absolutely unveil the power of what we can do while processing blockchains. So yes, it should become surfable, like the internet.

Q: It has been alluded to that the PRQ token will be the utility token, or means of “use” by holding PRQ for access to API queries. Is this still true?

Tom: So yes, the same model that is implemented with the PARSIQ Core platform will be implemented for the PARSIQ Network. But in addition, and I think this is much more interesting, we have the Data Lakes. Data Lakes are dedicated APIs for custom-tailored, aggregated data for specific use cases: an NFT Data Lake, Uniswap Data Lakes and so on; they could be countless. And there will be multiple types of Data Lakes: those that we build as the PARSIQ team, and also those that third parties can build using our SDK. The idea here is that you would have to stake a certain amount of PRQ tokens. If you don’t have tokens and don’t want to buy them, then you’re free to rent them from the IQ Protocol renting pool. But you have to have them, either rented or purchased, to be able to deploy and run a Data Lake on the PARSIQ Network. The idea is that you can make the Data Lake that you built and deployed on PARSIQ public. Or you can make it private and implement some kind of payment fee for people to get access, where a revenue-share model will be implemented for the person who built it and for the PARSIQ Network.
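To make the gating rule Tom describes a bit more concrete, here is a toy sketch of the deploy check; the threshold and names are hypothetical assumptions chosen for illustration, not actual PARSIQ Network parameters:

```python
# Toy illustration of the rule described above: deploying a Data Lake requires holding
# enough PRQ, whether purchased/staked or rented through IQ Protocol.
# REQUIRED_PRQ and the function name are hypothetical, used only for this sketch.
REQUIRED_PRQ = 100_000

def can_deploy_data_lake(staked_prq: float, rented_prq: float) -> bool:
    # Either source of tokens counts; only the combined amount matters in this toy check.
    return (staked_prq + rented_prq) >= REQUIRED_PRQ

print(can_deploy_data_lake(staked_prq=40_000, rented_prq=70_000))  # True: enough PRQ held + rented
print(can_deploy_data_lake(staked_prq=10_000, rented_prq=0))       # False: below the requirement
```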

Q: What kind of doors will Tsunami potentially open for the crypto space in terms of innovation and new products?

Anatoly: Hmm, it’s a question about the future. The ability to react quickly and precisely on top of the data is always a benefit. I believe this is a question about what types of new things you can expect in the near future. It’s an open question, but I believe that with such big amounts of data being easily consumable, it will allow users to create their own fast prediction mechanisms. For example, on such data you can easily apply different statistical techniques like principal component analysis: say you are monitoring hundreds of cryptocurrency rates, but when applying principal component analysis to them you discover that there are only three underlying factors, and you can reconstruct the rates from those factors in real time. That is just one example of an application; it will probably also enable different types of derivatives, for example, and a lot of other things.
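As a rough illustration of the kind of analysis Anatoly mentions, here is a minimal principal component analysis sketch; the return matrix below is random placeholder data, and the choice of three factors is just for the example:

```python
# Minimal PCA sketch over a (placeholder) matrix of token returns:
# rows = time steps, columns = tokens. Real usage would feed in actual market data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
returns = rng.normal(size=(1_000, 100))   # 1,000 observations of 100 token return series

pca = PCA(n_components=3)                 # keep the 3 strongest common factors
factors = pca.fit_transform(returns)      # shape (1_000, 3): compact factor time series
approx = pca.inverse_transform(factors)   # reconstruct all 100 series from just 3 factors

print("variance explained by 3 factors:", pca.explained_variance_ratio_.sum())
```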

Q: Can you please describe the backend architecture behind Tsunami API?

Danny: I would say no. I mean, there are a lot of things going on. There’s a lot of software built around the nodes to keep them consistent at all times, because even if you run a set of nodes, or a pool of nodes, in one location, it doesn’t necessarily mean they will act the same; some will fall behind, some will have other errors. So there is a lot built around that to keep our data as fresh as possible and following the latest propagated block on the chain. There is a lot on the database layer: how we process blocks, how we store them, how we make it fast and efficient. There is a lot to do with the Tsunami API itself, and with the Data Lakes, the SDK and the Data Hubs. So there’s a lot to potentially talk about on the tech side; I’m not really going to be able to answer it quickly.

Q: Can the advantage of using Tsunami API be quantified in dev hours saved or maybe data storage costs?

Tom: Yeah, definitely. Compared to things like building your own solution, running your own nodes, or using node APIs and building non-scalable listeners and so on, I think a calculation could be done on dev-hours saved and how costs can be reduced.

Q: Will Tsunami API be able to work on private blockchains (on premise) is there any scope to do business here?

Danny: I can take this one. It all depends; there is no yes-or-no answer here. It once again depends on the use case, but it is possible. If those private blockchains want to collaborate with us, then technically everything is possible. It can definitely work on private chains if those chains want to integrate with us and collaborate on that.

Q: What’s the feedback been like for the Tsunami API + PARSIQ Network?

Tom: The feedback that we’ve gotten, in terms of how we built the product, how it works, how fast it is, how it solves some of the problems people have with existing solutions, the ease of access and everything else, has all been super, super positive. I didn’t expect that. That’s why it’s easier to sell to potential users and potential projects. The combination of getting indexed historical data along with custom-tailored, specific data and real-time, push-based events is a pretty killer use case that I don’t think anyone else has implemented like this. I figure it’s a huge market.
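For a sense of what that combination looks like from a client’s side, here is a minimal sketch of a historical query over HTTP; the base URL, path, parameters, and response shape are assumptions made purely for illustration and should not be read as PARSIQ’s documented interface:

```python
# Hypothetical sketch of fetching historical events for a contract over a block range.
# The endpoint, query parameters, and JSON layout are placeholders, not the real Tsunami API.
import requests

BASE_URL = "https://tsunami.example.invalid/v1"   # placeholder host
API_KEY = "YOUR_API_KEY"                          # placeholder credential

resp = requests.get(
    f"{BASE_URL}/events",
    params={
        "contract": "0x0000000000000000000000000000000000000000",  # contract of interest
        "block_gte": 15_000_000,                                    # start of the block range
        "block_lte": 15_001_000,                                    # end of the block range
    },
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
for event in resp.json().get("items", []):
    print(event)
```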

Q: The last question to Anatoly just in general, how do you see PARSIQ in the long-term future? How do you see it positioned in the market? Do you think by seeing so many people building on decentralized networks and distributed ledger technology, seeing Web2 companies come in and utilize this technology, everyone will need data, do you think the market will be big enough, what’s the opportunity here for PARSIQ to capitalize on?

Anatoly: I would say that I keep the same vision for PARSIQ as when I was asked a year or two ago. It’s a complete solution, like Google for blockchains, that aggregates all the data in both its statistical and dynamic aspects, and we are only at the very beginning of our journey. Blockchains, just as Daniil said, despite being with us for 14 years or more, are still young. They’re still in the stabilizing phase, where the world will select a set of technologies that will collaborate and become standard. We want to scan all this data, and not only the lower layer that happens on the blockchain level, but also all the processes on top of these things.

Conclusion

Tom: Thank you, guys. Yeah, any closing statements?

To learn more about the Tsunami API visit these links:

About PARSIQ

PARSIQ is a full-suite data network for building the backend of all Web3 dApps & protocols. The Tsunami API, which launched in July 2022, provides blockchain protocols and their clients (e.g. protocol-oriented dApps) with real-time data and historical data querying abilities.
