Out of the Cave

Wyatt Meldman-Floch
5 min read · Jan 17, 2020


Hey Everybody!

It’s been a while since I’ve been active in the community, and for good reason: I’ve devoted my energy solely to implementing the vision that brings us all together. But as main net nears, I’m eager to emerge from the proverbial “cave” and rejoin a community that makes the hard work worth it. As a bit of an ice-breaker, I’m going to share some updates on protocol development, why they matter, and what I’m truly proud of. As you know, the Constellation Protocol was developed from scratch, from the ground up. No forks, no shortcuts; just math, and now code.

- What’s new and why does it matter?

There are many protocols with different approaches, so let me shed some light on the philosophy behind our protocol development.

The problem the data industry faces is similar to that of the early internet. On-prem mainframes and LAN networks were not complementary to CDNs or ISPs. ISPs solved the problem by connecting local computers to the wider internet by means of a standardized communications protocol. For the most part, this made LAN networks obsolete: they were replaced by a protocol that could scale globally.

The same is true of Big Data: multi-billion-dollar batch processing pipelines serving offline outputs of Data Mining 101 homework are not going to “fuel AI”. Any data engineer will tell you that to your face. And the solution literally requires a reformulation of the original decentralized network (read: the internet)*. But look, that’s why I’m here. Isn’t that what this entire space is about?

The main challenge of a decentralized network at a literally global scale is how you compromise. Provide scalability, interoperability and resilience, and you win, right? Unfortunately, it’s not that simple, and just as our theorem-friend CAP tells us, not without tradeoffs. Or… maybe not?

Let’s back up: wtf am I talking about? Basically, you get 3 and you choose 2; that’s CAP. Any distributed system is governed by these l̶a̶w̶s guidelines. Let me grab you an example: blockchain “scalability” is often touted in terms of transactions per second (TPS). It is comparatively easy to spin up a network with a small number of nodes and run consensus among them. From a technical standpoint, one could say this is a mildly more sophisticated version of an encrypted MySQL database. If that’s the case, then what’s the point? What’s the tradeoff? If I’m hosting a “secure” database and calling it a blockchain, how does that provide utility beyond existing tech? It matters if you achieve scalability, interoperability and resilience, right? But what does that even mean?

It means: your technology is a relative truth machine for the internet. Besides telling the truth about arbitrary data (consensus) [HARD], you need to 1) integrate with any application out there (yes, that’s actually what integrations mean), 2) fight off attacks (resilience), and most importantly 3) run on literally *every fucking machine known to fucking mankind*. Can your protocol flexibly scale with devices over time, like the internet itself? Thus my point: a number doesn’t evoke blockchain scalability; an equation does.

- The solution

The “Blockchain 2.0” movement was predicated on the “fat protocol”: a distributed system that operates in serial (lol). In a fat protocol, all the resources of the entire network are shared among all network participants. Thus, the more users, the fewer resources are available to each user.

What if this entire concept could be inverted? That is: how can we integrate blockchain data into existing systems while respecting industry-standard methods of scalability? Clearly the solution is algorithmic (read: an equation).

In classical DAG networks, TPS can be equated with a data “pipe-width” that is proportional to the number of transactions submitted to the network. In the Constellation DAG, the TPS pipe-width is proportional to the number of *nodes* times a concurrency factor (parallel consensuses). The concurrency factor is a multiplier that increases TPS according to the parallelism of consensus. In other words, we found an improvement that made our cluster “horizontally scalable” in the number of nodes and the block size.
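To make the contrast concrete, here’s a minimal sketch of the two throughput models described above. The function names and numbers are hypothetical, purely for illustration; they are not real protocol parameters.

```python
def classical_tps(submitted_tx_rate, pipe_width):
    # Classical model: throughput is capped by a single shared "pipe".
    # Adding nodes does not widen it.
    return min(submitted_tx_rate, pipe_width)

def horizontal_tps(num_nodes, per_node_tps, concurrency_factor):
    # Horizontally scalable model: pipe-width grows with the node count,
    # multiplied by the number of parallel consensus rounds.
    return num_nodes * per_node_tps * concurrency_factor

# The shared pipe saturates no matter how much traffic arrives:
classical_tps(1000, 50)       # capped at 50

# Doubling the node count doubles the pipe-width:
horizontal_tps(10, 10, 4)     # 400
horizontal_tps(20, 10, 4)     # 800
```

The point of the equation is exactly this: capacity is a function of cluster size, not a fixed constant of the network.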

- The latest milestones

I always do myself an injustice, but from what’s fresh in the noggin, I want to highlight a couple of the most important and recent milestones that we reached:

— Horizontal Scalability

This is what I’m most proud of. This is a fundamental reformulation of the limits of a decentralized global network. I’m publishing a video in the next few days demoing our solution to scalability by comparing how TPS and checkpoint blocks increase when additional nodes are added to the cluster. Stay tuned!

— Double spend protection

A major milestone for any consensus protocol.

It prevents accounts from “minting tokens” by sending conflicting transactions. Double spend protection is like a golden capstone on top of the other milestones, and we had to basically re-implement our entire first iteration to achieve it with a scalable approach. The question was how to minimize the data sent over the network to optimize throughput. The last few months have been spent implementing a new approach to double spend protection using a definite topological order between transactions, which is the “secret sauce” from which our success derives.
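The core idea of resolving conflicts with a definite ordering can be sketched as follows. This is an illustrative toy, not Constellation’s actual implementation: the transaction shape, the balance model, and the function name are all assumptions; the real protocol derives its order from the DAG’s topology.

```python
def resolve_double_spends(ordered_txs, balances):
    """Apply transactions in their definite (topological) order.

    A transaction that would overdraw its sender conflicts with an
    earlier-ordered spend and is rejected, so no tokens are minted.
    Each tx is a (sender, receiver, amount) tuple.
    """
    accepted, rejected = [], []
    for sender, receiver, amount in ordered_txs:
        if balances.get(sender, 0) >= amount:
            # First spend in the order wins and updates the ledger.
            balances[sender] = balances.get(sender, 0) - amount
            balances[receiver] = balances.get(receiver, 0) + amount
            accepted.append((sender, receiver, amount))
        else:
            # A later conflicting spend of the same funds loses.
            rejected.append((sender, receiver, amount))
    return accepted, rejected

# Alice holds 10 tokens but tries to spend them twice; only the
# transaction that comes first in the order is accepted.
acc, rej = resolve_double_spends(
    [("alice", "bob", 10), ("alice", "carol", 10)],
    {"alice": 10},
)
```

Because every node agrees on the same ordering, every node rejects the same conflicting transaction, without shipping the full conflict set over the network.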

— Reputation model & Cluster Alignment

We took a localized test environment to the level of a truly decentralized network operation. In that effort we created a data migration pipeline for nodes to send their data to the block explorer. The same holds true for log and metrics monitoring for decentralized main net nodes. Additionally, we implemented a load balancer that optimizes wallet and exchange integration. The path is clear to onboard further node batches after the mainnet swap. I’m also quite excited about the governance aspects going forward.

- Outlook

It only goes up from here; literally, the sky’s the limit. We will continue to improve and fine-tune the protocol and grow the network. I’m continuously inspired and motivated by the support, contributions and excitement of our engineering team and community members like Vito, the batch 0 node operators, and Coranos. There’s SO much work that isn’t really visible, and it takes more than a protocol to launch a main net. Without the support of our community, we wouldn’t be here. On a personal note: thank you for your support and for making my dreams come true.

Stay tuned for a video demo and I’m excited to share more updates with you in the coming weeks on a more regular basis again!

* realtalk: Google
