Infrastructureless Computing

@ChrisMatthieu · Published in computes · 2 min read · Apr 2, 2018

Serverless computing (e.g. AWS Lambda and Azure Functions) is all the rage these days. The appeal of running code without provisioning or managing servers is obvious. What if you could accomplish the same goal within your own organization, without adding or managing infrastructure?

Computes, Inc. is excited to introduce what we call infrastructureless computing — everything computes everywhere — no infrastructure required!

We have built a decentralized, distributed mesh computing platform capable of running massively parallel computations as well as serial machine learning algorithms everywhere (cloud, fog, edge, and mist). This approach lets you run AI algorithms as close to the source of data as possible for real-time, low-latency results. Here’s a great post by Jade Meskill (our CTO) on Decoupling Computations from Data Transformation.

Computations (algorithms) run in Docker containers on Windows, Mac, and Linux workstations, servers, VMs, and even IoT devices. We call these Computes nodes nanocores. Our nanocores can discover other nanocore nodes on the same private mesh computer network and connect, communicate (peer-to-peer), and compute together as if they were physical cores within a single supercomputer.
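To make the idea concrete, here is a minimal Python sketch of that nanocore lifecycle (discover peers on the mesh, connect, compute). The class and method names are ours for illustration only, not the Computes SDK, and a real nanocore would run each computation inside a Docker container rather than in-process.

```python
# Hypothetical sketch of a nanocore's lifecycle: discover peers on the
# mesh, connect, and compute. Names are illustrative, not the Computes API.
class Nanocore:
    def __init__(self, node_id, mesh):
        self.node_id = node_id
        self.mesh = mesh      # shared, in-memory stand-in for the private mesh network
        self.peers = set()

    def discover_and_connect(self):
        # Find the other nanocores on the same mesh and "connect" peer-to-peer.
        self.peers = {n.node_id for n in self.mesh if n.node_id != self.node_id}

    def compute(self, task):
        # Run one computation; a real nanocore would do this in a Docker container.
        return task["fn"](*task["args"])


# Three nanocores behaving like cores of a single supercomputer.
mesh = []
mesh.extend(Nanocore(i, mesh) for i in range(3))
for node in mesh:
    node.discover_and_connect()
print(mesh[0].compute({"fn": sum, "args": ([1, 2, 3],)}))  # -> 6
```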

Nanocore workloads are managed by our decentralized queuing system, Lattice. Lattice distributes computations in the right order (serial or parallel) to the right core in the right location at the right time. Since our nanocores run everywhere (on any operating system and any platform — cloud, fog, edge, mist), they are essentially universal computes. This allows Lattice to run algorithms as close to data sources as possible rather than moving all of your data to the cloud to run algorithms in a serverless environment. Lattice lets you run your AI algorithms everywhere without worrying about the infrastructure.
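As a rough illustration of that scheduling idea (not the actual Lattice implementation), the Python sketch below fans parallel tasks out across available workers and runs serial steps in order, feeding them the preceding results. The task shapes and function names are assumptions made for this example.

```python
# Hypothetical sketch of Lattice-style dispatch: parallel tasks fan out
# across available cores, serial tasks run in order on the prior results.
from concurrent.futures import ThreadPoolExecutor


def dispatch(tasks, cores=4):
    results = []
    with ThreadPoolExecutor(max_workers=cores) as pool:
        for task in tasks:
            if task["mode"] == "parallel":
                # Fan the chunks out to whichever cores are free.
                results.append(list(pool.map(task["fn"], task["chunks"])))
            else:
                # Serial step: run it only after the previous result is ready.
                results.append(task["fn"](results[-1] if results else None))
    return results


tasks = [
    {"mode": "parallel", "fn": lambda c: sum(c), "chunks": [[1, 2], [3, 4], [5, 6]]},
    {"mode": "serial", "fn": lambda partials: sum(partials)},  # reduce the partial sums
]
print(dispatch(tasks))  # -> [[3, 7, 11], 21]
```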

Since Computes is decentralized, there is no additional infrastructure to deploy or manage; the nanocore nodes manage themselves. Unlike serverless, there is no upfront setup of algorithms or API gateways, and no short TTLs (times-to-live). Computes’ computations are truly dynamic: a task can even create additional tasks based on its results and feed its output into the next task as input.
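Here is a hedged sketch of what that dynamic chaining might look like: a task finishes, inspects its result, and enqueues a follow-up task that receives the result as input. The queue and task structures below are assumptions made purely for illustration.

```python
# Hypothetical sketch of dynamic task chaining: each task may spawn a
# follow-up task and feed it the previous output. Not the Computes API.
from collections import deque


def run_chain(first_task):
    queue = deque([first_task])
    while queue:
        task = queue.popleft()
        result = task["fn"](task["input"])
        print(f"{task['name']}: {result}")
        # A task can enqueue the next task and pass along its own result.
        follow_up = task.get("next")
        if follow_up is not None:
            queue.append({**follow_up, "input": result})


run_chain({
    "name": "square",
    "input": 7,
    "fn": lambda x: x * x,
    "next": {
        "name": "halve-if-large",
        # The follow-up decides what to do with the fed-in result.
        "fn": lambda x: x / 2 if x > 10 else x,
    },
})
# square: 49
# halve-if-large: 24.5
```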

A side effect of our decentralized queuing system is that our private mesh computer is essentially fault-tolerant! That’s right. As long as one node is running somewhere, Lattice automatically syncs the queues with each nanocore node as it re-establishes its connection to the Computes mesh network. Even if all nanocore nodes are taken offline, they will heal themselves and their queues as they come back online.
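A toy illustration of that self-healing behavior, under the assumption that pending work can be merged by task id when a node rejoins the mesh (the real Lattice sync protocol may differ):

```python
# Hypothetical sketch of queue self-healing: a rejoining nanocore merges a
# surviving node's queue into its own so no pending work is lost.
def sync_queues(local_queue, peer_queue):
    # Union the pending tasks, de-duplicated by task id, preserving order.
    merged = {task["id"]: task for task in peer_queue + local_queue}
    return list(merged.values())


surviving_node = [{"id": "t1", "op": "train"}, {"id": "t2", "op": "score"}]
rejoining_node = [{"id": "t2", "op": "score"}, {"id": "t3", "op": "publish"}]

print(sync_queues(rejoining_node, surviving_node))
# -> [{'id': 't1', 'op': 'train'}, {'id': 't2', 'op': 'score'}, {'id': 't3', 'op': 'publish'}]
```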

Contact us (hello@computes.com) for more details. Stay tuned to this blog for more exciting information about our new technology stack and development progress! You can also reach us on Twitter, Facebook, and GitHub.
