Joining CloudByte

Jeffry Molanus
CloudByte
Jul 20, 2017

I have joined CloudByte / OpenEBS as CTO for a bunch of reasons: some personal, some professional, but mostly technical. In this blog, I discuss our technology heritage and vision, since these, together with the team that has built them and has proven its ability to make the visionary real, drew me to CloudByte / OpenEBS.

While CloudByte today provides storage solutions built around open source technology like BSD and ZFS, a company has to do more than wrap up a bunch of well-known, well-proven open source technologies and build a sales team around them in order to create sustainable value. Systems businesses that endure deeply understand the subsystems of their products and use this understanding to improve and augment them, creating value that delivers a better experience for users and that cannot easily be mimicked by others.

One of our underpinnings that I know all too well is that the widely used ZFS file system does not handle IO problems caused by flaky hard drives, HBAs, or drivers. Instead, it assumes that IO passed to the underlying subsystems is handled correctly and that it will get a response back, as it should. Unfortunately, this is not always the case. A concrete manifestation of this experience lives in the ZFS source code in the form of the vdev deadman timer.
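To make the idea concrete, here is a minimal sketch in Go of the deadman pattern: record when each IO is issued, and have a watchdog flag any IO that has been outstanding longer than a threshold rather than waiting forever. This is illustrative only, not the actual ZFS implementation (which is written in C and can escalate from reporting to recovery); all names below are made up.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// inflightIO tracks when an IO was handed to the lower layers.
// Names here are illustrative, not taken from the ZFS sources.
type inflightIO struct {
	id     int
	issued time.Time
}

type deadman struct {
	mu       sync.Mutex
	inflight map[int]inflightIO
	timeout  time.Duration
}

func newDeadman(timeout time.Duration) *deadman {
	d := &deadman{inflight: make(map[int]inflightIO), timeout: timeout}
	go d.watch()
	return d
}

// issue records an IO before it is passed down the stack.
func (d *deadman) issue(id int) {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.inflight[id] = inflightIO{id: id, issued: time.Now()}
}

// complete removes an IO once the lower layers respond.
func (d *deadman) complete(id int) {
	d.mu.Lock()
	defer d.mu.Unlock()
	delete(d.inflight, id)
}

// watch periodically scans for IOs that never came back. The real
// deadman can trigger recovery or panic; here we just report.
func (d *deadman) watch() {
	for range time.Tick(time.Second) {
		d.mu.Lock()
		for _, io := range d.inflight {
			if time.Since(io.issued) > d.timeout {
				fmt.Printf("deadman: IO %d hung for %v\n", io.id, time.Since(io.issued))
			}
		}
		d.mu.Unlock()
	}
}

func main() {
	d := newDeadman(2 * time.Second)
	d.issue(1) // this IO will complete normally
	d.issue(2) // this IO hangs, simulating a flaky drive or HBA
	d.complete(1)
	time.Sleep(4 * time.Second) // let the watchdog catch IO 2
}
```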

While it may sound arcane, this deadman timer is a concrete example of why understanding the behavior of the underlying hardware, and how it is managed by the kernel and related subsystems, is crucial to using software to turn commodity hardware into enterprise-grade storage. If you are using a ZFS-based system that does not have a good approach to dealing with faulty IO at the hardware level, then, unfortunately, you may be experiencing instability or worse. This is why one paradox of Software Defined Storage (SDS) is that it still needs to deal with the hardware.

This deep storage experience and knowledge, of course, is not all that CloudByte has to offer. Since the very start of the company, the CloudByte team has believed in taking storage and containers and fusing the two.

ElastiStor today runs its storage services in containers (jails) and provides QoS by default, so you can run multiple tenants on the same storage controllers in isolation, with granular management capabilities. People who know storage will appreciate the complexity that comes with running storage services in jails or containers, let alone bringing QoS into the mix.
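To give a feel for what per-tenant QoS at the IO layer involves, here is a minimal sketch of a token-bucket limiter that caps each tenant's IOPS independently. The tenant names and rates are invented for illustration; ElastiStor's real QoS engine is of course far more involved.

```go
package main

import (
	"fmt"
	"time"
)

// tenantLimiter caps a tenant's IOPS with a simple token bucket.
// Purely illustrative; not ElastiStor code.
type tenantLimiter struct {
	tokens chan struct{}
}

func newTenantLimiter(iops int) *tenantLimiter {
	l := &tenantLimiter{tokens: make(chan struct{}, iops)}
	// Refill the bucket at the configured rate.
	go func() {
		for range time.Tick(time.Second / time.Duration(iops)) {
			select {
			case l.tokens <- struct{}{}:
			default: // bucket full, drop the token
			}
		}
	}()
	return l
}

// admit blocks until the tenant is allowed to issue another IO.
func (l *tenantLimiter) admit() {
	<-l.tokens
}

func main() {
	gold := newTenantLimiter(1000)  // e.g. 1000 IOPS for a gold tenant
	bronze := newTenantLimiter(100) // 100 IOPS for a bronze tenant

	start := time.Now()
	for i := 0; i < 50; i++ {
		bronze.admit() // a noisy bronze tenant is throttled...
	}
	fmt.Printf("bronze: 50 IOs in %v\n", time.Since(start))

	start = time.Now()
	for i := 0; i < 50; i++ {
		gold.admit() // ...without eating into the gold tenant's budget
	}
	fmt.Printf("gold: 50 IOs in %v\n", time.Since(start))
}
```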

At a high level, such containers both provide the file and block services and contain the actual data; this, as a whole, is the managed entity. This means that ElastiStor not only replicates the data from site A to site B but also replicates the storage services that go with it (and make that data more useful). As a result, we can implement the concept of DR VSMs on a per-workload, per-SLA basis, making sure we can fulfill the configured QoS levels as we replicate these instances across data centers. I want to emphasize that this is not a simple matter of abstracting shares and LUNs behind the look and feel of multi-tenancy, but actual containerized isolation.
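One rough way to picture the managed entity: the unit of replication is not a bare LUN but the whole VSM, that is, the data plus the services and the SLA that travel with it. The types below are hypothetical and invented for illustration; they are not the ElastiStor API.

```go
package main

import "fmt"

// Hypothetical types, for illustration only. The point is the unit of
// management: a VSM bundles data, services, and SLA, and is
// replicated as a whole.
type QoS struct {
	IOPS       int
	Latency    string
	Throughput string
}

type VSM struct {
	Name     string
	Pools    []string // ZFS pools backing the data
	Services []string // file/block services running in the container
	SLA      QoS
}

// replicate ships the whole VSM, not just its blocks, so the DR site
// can serve the same workload at the same QoS.
func replicate(v VSM, targetDC string) VSM {
	dr := v
	dr.Name = v.Name + "-dr@" + targetDC
	return dr
}

func main() {
	v := VSM{
		Name:     "payments",
		Pools:    []string{"pool-a"},
		Services: []string{"iscsi", "nfs"},
		SLA:      QoS{IOPS: 5000, Latency: "5ms", Throughput: "200MB/s"},
	}
	fmt.Printf("primary: %+v\n", v)
	fmt.Printf("dr copy: %+v\n", replicate(v, "site-b"))
}
```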

With the 2.1 release, ElastiStor customers also got initial support for synchronous replication across ZFS pools, continued performance improvements, and a number of other fixes, many in response to the requirements of our large-scale customers such as eSilicon and Netmagic.

In this blog I have gone through the heritage; in the next blog I will go into more detail on how we are breaking the status quo of storage technology. I'll tell you this much now, though: it is not going to be the same old scale-out-to-millions-of-IOPS, faster-than-Ceph distributed storage system, but something radically different, thanks in part to embracing containers, again, as our foundation. I would not have joined if we were doing more of the same in an already crowded storage market.

As a final note, thank you to the members of the OpenEBS and ElastiStor communities who have welcomed me to the team. I look forward to meeting more of you and collaborating with you. And, of course, many thanks to Uma, Kiran, Evan, and the entire ElastiStor team. As we like to say around here: Game On!
