KodaDot 2.0 — Beta
Since we started KodaDot, we have had to take tough decisions. We recycled the codebase from our fondly remembered project vue-polkadot.js.org and recently said goodbye to that old code.
We will not forget where we came from, or who (Mattispaghetti) brought us up here. ❤️🔥
A new era has begun: KodaDot 2.0
The natural evolution of our codebase has pushed us here: we want to speed up load times and ship complex feature sets more easily. "Speed up" sounds simple, but it involves a lot of complex work in the background that is invisible to the user. That's the heavy lifting we do for you.
Our current single-page-application (SPA) bundle is quite big. The design we chose at the start works well for small and medium-sized apps, but things have changed rapidly: KodaDot has gained huge traction in terms of new users and features, along with a growing backlog. We currently have 256 open issues, and the codebase has grown significantly.
The current bundle, served from Netlify's CDN edge servers, costs your client 8.6MB on every load, which isn't small. Not everyone is on a 600Mbit fibre optic line with latency under 10ms. For people who want to use KodaDot, such as audiences landing on creators' profiles, that is a real struggle.
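To make the cost concrete, here is a quick back-of-the-envelope calculation (pure transfer-time arithmetic; latency, TLS handshakes, and CDN overhead are ignored):

```typescript
// Rough time to transfer the 8.6MB bundle over links of various speeds.
const bundleMB = 8.6;
const bundleMbit = bundleMB * 8; // ≈ 68.8 Mbit on the wire

function secondsToDownload(linkMbps: number): number {
  return bundleMbit / linkMbps;
}

secondsToDownload(600); // ≈ 0.11s on the fibre line mentioned above
secondsToDownload(5);   // ≈ 13.8s on a slow mobile connection
```

In other words, the same bundle that feels instant on fibre can take over ten seconds on a weak mobile link, before a single NFT image has even been requested.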
Rest assured, we are aware of it and we want to improve.
To tackle this, we have stepped up our game and started rewriting the app in Nuxt in the background. What it brings to the table: we can choose between statically generated, server-side rendered, or hybrid serving, and we are still experimenting to see which scenario fits our use case best. For example, instead of loading the whole bundle, you will get only the pre-rendered page you requested.
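As a rough illustration, the choice between those serving modes comes down to a couple of options in the Nuxt config. The option names below follow Nuxt 2's documented configuration; KodaDot's actual config may differ, so treat this as a sketch:

```javascript
// nuxt.config.js — illustrative only, not KodaDot's real configuration
export default {
  // ssr: true     → pages are rendered on the server
  // target: 'static' → pages are pre-rendered at build time and served from the CDN
  ssr: true,
  target: 'static',
  build: {
    // Split each page into its own chunk so the client downloads
    // only the code for the route it requested, not the whole bundle.
    splitChunks: {
      pages: true,
    },
  },
};
```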
We made the first public deployment today. You can help us by reporting bugs once you start using beta.kodadot.xyz, and you can learn more about our incentivized bug reporting process.
Agenda of anticipated upgrades
Cutting the client bundle size isn't the only magic we're trying to pull off right now; there is much more coming. Briefly, here is how it works in the background: your client fetches the bundle from Netlify, unpacks and renders it, and starts resolving functions. Those functions call remote indexer services and, based on the replies, other functions fetch your precious JPEG from storage (IPFS/Pinata/Arweave). That's it. As simple as it sounds.
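The indexer-then-storage flow above can be sketched in a few lines. The endpoint, the GraphQL query shape, and the gateway host below are illustrative assumptions, not KodaDot's real API:

```typescript
// Sketch of the client-side data flow: ask an indexer for NFT records,
// then resolve their ipfs:// pointers through an HTTP gateway.
interface NftRecord {
  id: string;
  metadata: string; // e.g. "ipfs://ipfs/Qm..." (shape is an assumption)
}

// Rewrite an ipfs:// URI to a gateway URL; the gateway host is an assumption.
function resolveIpfs(uri: string, gateway = "https://cloudflare-ipfs.com"): string {
  return uri.replace(/^ipfs:\/\/(ipfs\/)?/, `${gateway}/ipfs/`);
}

// 1) query the indexer, 2) fetch each image blob from storage.
async function loadImages(indexerUrl: string): Promise<Blob[]> {
  const res = await fetch(indexerUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: "{ nfts { id metadata } }" }),
  });
  const { data } = await res.json();
  return Promise.all(
    data.nfts.map((nft: NftRecord) =>
      fetch(resolveIpfs(nft.metadata)).then((r) => r.blob())
    )
  );
}
```

Every hop in that chain (CDN, indexer, storage gateway) adds latency, which is why each one is a separate target for improvement below.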
Now let's break down which components we want to improve, and why.
Upgrade to faster queries
Right now we are using SubQuery, and we are in touch with SubSquid. With both teams, we have aimed to get more RPC nodes around the world. In the past we had one in Tokyo, which was recently moved to Europe; you may have noticed faster load times. The goal is to have these RPC nodes distributed. In the background, subscription handling and routing inside the cluster are still in the works, and we expect major improvements at the end of the year, or in Q1. By integrating SubSquid, we can bring you new features where SubQuery doesn't fit, so we want to use the best of both to make browsing KodaDot comfortable.
Storage is harder than it sounds. We started with IPFS, which has a great addressing layer but terrible retrieval and access times. That's why we started working internally on PermaFrost. Apart from that, since the beginning of the year we have tried to scale with the Pinata team, who set up dedicated nodes for us in Europe and North America, but we believe it's not quite enough. Our goal is to cut access time for most of the world. Another limitation of our current IPFS setup is that we are billed heavily for what we store, which isn't efficient in web3 terms.
That's why we've chosen to integrate Estuary from Protocol Labs. It leverages the properties of Filecoin, and we are happy to integrate it. What it should bring to the table: access and retrieval times should be cut at least in half, and you will be able to upload files of up to 32G. We've been selected to experiment with it, and we are happy to ship this to the beta until we figure out a more native Filecoin integration. In the long run, we would like to combine the best of both: Arweave for small files that are cheap upfront, and two-year storage deals with a recurring subscription for files of up to 32G.
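For the curious, an Estuary upload is a single authenticated multipart request. The endpoint and response field below follow Estuary's public documentation at the time of writing, but verify them against the current docs before relying on this sketch:

```typescript
// Hedged sketch of uploading a file to Estuary (endpoint per Estuary docs).
const ESTUARY_ADD = "https://api.estuary.tech/content/add";

// The 32G cap mentioned above, as a client-side pre-check.
const MAX_UPLOAD_BYTES = 32 * 1024 ** 3;

function withinUploadLimit(bytes: number): boolean {
  return bytes <= MAX_UPLOAD_BYTES;
}

async function uploadToEstuary(file: Blob, apiKey: string): Promise<string> {
  if (!withinUploadLimit(file.size)) {
    throw new Error("file exceeds the 32G upload limit");
  }
  const form = new FormData();
  form.append("data", file);
  const res = await fetch(ESTUARY_ADD, {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` },
    body: form,
  });
  if (!res.ok) throw new Error(`Estuary upload failed: ${res.status}`);
  const { cid } = await res.json(); // content is pinned and replicated via Filecoin deals
  return cid;
}
```

The returned CID is the same content address you would use with any IPFS gateway, so the rest of the pipeline doesn't need to change.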
This is still back-office research, and we know there will be less consensus on such choices as KodaDot matures. That's why we want to keep a lean boilerplate codebase for everyone, since we are building this as a public good, and introduce complex feature sets through other channels. Stay tuned on this. Have a plugin in mind, but not sure everyone would benefit from it? Land it on our beta channel.
When will the beta be in production?
The thing is, since we are at the start of the web3 era, some things are not easy to tackle, and we want to set designs in stone with long-horizon decisions to avoid deep technical debt in the future. Simple.
The switch from beta to production will be announced when we feel confident it's ready. The transition will probably happen smoothly, in the background; you will just notice super fast loads. That's it.
We have committed to working on the Beta of the new KodaDot 2.0. We will keep bringing a few fixes from the future to fix the present, without the saturated fats. The primary goal is to deliver faster loads and stay lean with plugins.
Watch for the hints in the noise; we will be tweeting about it.
We are hiring — you and your frens
As a growing team, we need to grow our social capital to make our vision real. You and your frens can be part of it. Let us know if you know someone who would like to contribute to open-source web3 projects and change the world.
We are in it for open Metaverse
- Frontend Engineer — VueJS/Typescript
- Technical Product Manager
- Senior Business and Operations Manager
- Rust developer