How to Refactor Public Cloud
Daniel Sarfati, Product Director
The global cloud market is expected to generate at least $206B in revenue this year, even as customers overwhelmingly fear overprovisioning and estimate that up to 32% of cloud spend goes to waste. It’s only natural to assume that rational actors will embrace new technologies and providers to lighten the load.
Thanks to the broad adoption of containerization, we’re headed for a fragmentation of the landscape, wherein more and more customers may turn to alternative infrastructure.
Containers Made Multicloud Possible
Nowadays, it’s rare to find a company without a hybrid or multicloud configuration, but that wasn’t always the case. Vendor lock-in used to be a fact of life.
Enterprises anxious to modernize during the heyday of Web 2.0 didn’t have the time to compare cloud solutions once they were up and running, and neither did the up-and-comers nipping at their heels. For enterprises and innovators alike, it was oftentimes easier to tack on a service from a current vendor than shop around for the most cost-effective solution.
Containerization changed all that. The portability and immutability of the format made it possible to package software together with its dependencies and run it as intended wherever it was sent.
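To make that concrete, the whole contract fits in a few lines of Dockerfile — a minimal, illustrative sketch (the base image, requirements.txt, and app.py are placeholder names, not a specific product's layout). The same artifact builds and runs identically on a laptop, a datacenter VM, or a rival cloud:

```dockerfile
# Base image pinned to a specific runtime version.
FROM python:3.12-slim

WORKDIR /app

# Dependencies travel inside the artifact, not on the host.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# The image starts the same way wherever it is scheduled.
CMD ["python", "app.py"]
```

Because everything the application needs is baked into the image, the provider running it becomes an interchangeable detail.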
As web applications decoupled, software developers began to take a renewed interest in optimizing cloud spending by aligning their resource strategies to discrete use cases. The benefits of immutability and portability translate to dramatically lower switching costs. Now nearly every cloud customer on the planet relies on multicloud infrastructure powered by containerization technologies and virtual machines.
By making better, more needs-based choices, cloud customers came to acknowledge a fundamental flaw in the value proposition of public cloud: complexity inevitably leads to overprovisioning.
Serverless Wasn’t Painless
In the last half decade, centralized cloud providers have tried to adapt to reality. The Big Three released a spate of so-called “serverless” container orchestration services that ostensibly sought to address a growing appetite for simplicity. Billed universally as out-of-the-box solutions, these products promised to streamline deployment by eliminating the operational burden of server management and allowing customers to focus solely on app development.
I assume we’re all pretty familiar with how marketing works.
Leaving aside, for the moment, that “serverless” is a horribly imprecise term of art, the first generation of fully managed orchestration services didn’t do enough to distinguish themselves as standalone solutions.
Added perks like on-demand elastic bursting, event-driven autoscaling, and pay-as-you-go pricing certainly sweetened the deal, but folks lost in the wildlands of “everything-as-a-service” couldn’t readily grok the value proposition. If the provider continued to manage the underlying infrastructure, wasn’t serverless just a nuanced version of the conventional cloud deployment workflow?
That’s not to say those products had no merit. Fully managed orchestration is a valuable service for specialized and discrete use cases, and reclaimed development time was a sufficient incentive for many notable enterprises to embrace it. But those initial offerings didn’t provide the cost optimization they were after.
Fully managed orchestration failed to win early favor because the first products on the market were too much like everything else. They came laden with fluctuating and extravagant costs, unforeseen operational overhead, and a tendency toward vendor lock-in. Worst of all, they didn’t actually alleviate pain points. They just redistributed them!
No category of cloud products has fallen short of its promise quite like fully managed serverless. Instead of streamlining deployment and unburdening developers, most of those early solutions inadvertently replaced the onus of server management with a multitude of additional tasks.
As Corey Quinn of Last Week in AWS explains:
“The bulk of your time building serverless applications will not be spent writing the application logic… Instead you’ll spend most of your time figuring out how to mate these functions with other services from that cloud provider.”
In practice, functions-as-a-service (FaaS) products like AWS Lambda require such declarative execution patterns that they transmute hours better spent on business logic into a full-time business of managing functions to get the job done as intended. (You could argue that Lambda does help to reduce a codebase to its essential business logic, but that’s like saying the monkey’s paw really did grant wishes.)
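A minimal sketch makes the imbalance obvious. Assuming the standard API Gateway proxy-integration event and response shapes (a JSON `body` in, a `statusCode`/`body` envelope out), the single line of business logic is dwarfed by envelope parsing and error plumbing:

```python
import json

def handler(event, context):
    # The actual business logic: one line.
    def add(a, b):
        return a + b

    # Everything below is glue for the API Gateway proxy integration:
    # unwrapping the event envelope, validating input, and re-wrapping
    # the result in the response shape the gateway expects.
    try:
        body = json.loads(event.get("body") or "{}")
        result = add(float(body["a"]), float(body["b"]))
    except (KeyError, TypeError, ValueError):
        return {
            "statusCode": 400,
            "body": json.dumps({"error": "expected numeric 'a' and 'b'"}),
        }
    return {"statusCode": 200, "body": json.dumps({"sum": result})}
```

And this is the simple case: chain a few such functions together with queues, permissions, and per-service event formats, and the glue becomes the job.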
Can you blame anyone for dismissing fully managed orchestration from the start? Detractors saw the gooey “serverless” moniker as a gateway drug to vendor lock-in, while more charitable critics begrudgingly admitted that it could be a stopgap solution for cases where your go-live was yesterday, but it wasn’t something you’d choose to rely on for the long haul.
I’m inclined to agree — at least as the assessment pertains to those early implementations. Engaging with centralized serverless invariably forced you to sacrifice autonomy, limit versatility, and incur unnecessary expense. Put another way, those early services suffered the classic ills of public cloud.
The CaaS Alternative
To my mind, every industry trend suggests a prevailing sense of cloud fatigue:
- Uncontrolled costs have shifted enterprise priorities from infrastructure provisioning to infrastructure reduction.
- Multicloud and hybrid setups rule the day as companies reinvest in personalized DevOps.
- Younger companies and “bruh-culture” startups, being naturally leery of certification-happy Amazonians, are perfectly content to dabble with alternatives.
Budgetary spillover has undermined the whole argument for leaving bare metal in the first place! Now that containers have opened the floodgates, customers are well positioned to divert resources to novel infrastructure that’s more optimized for containerized deployment. And there’s nothing more frightening to tier-one vendors than simple, affordable alternatives.
For evidence, you’d need only consider the newest fully managed products from the Big Three. Long gone are the daisy-chained function outputs and thorny, plug-and-play backends. Newer containers-as-a-service (CaaS) products like AWS Fargate and Azure Container Instances have reduced the feature set to exactly what’s on the box — namely infrastructure, orchestration, and event-driven scaling.
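For a sense of how pared down that surface is, consider an illustrative Fargate task definition (field names follow the ECS task-definition schema; the image reference and sizing values are placeholders). The customer declares an image, a resource envelope, and a launch target, and nothing else:

```json
{
  "family": "batch-worker",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "containerDefinitions": [
    {
      "name": "worker",
      "image": "registry.example.com/worker:latest",
      "essential": true
    }
  ]
}
```

No fleet to size, no nodes to patch: the provider schedules the container and bills for the declared CPU and memory while it runs.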
If that sounds like “Docker images on demand,” that’s because it is. It could even be the future.
Companies don’t want to spend inordinate amounts monitoring systems that cost peanuts to run. Fully managed CaaS orchestration is especially useful for discrete use cases that don’t require access to the underlying nodes, such as long-running data processing batch jobs or 3-D accelerated rendering queues, but it’s equally suited to autoscaling decoupled microservices.
In a recent survey by Datadog, a significant share of cloud customers signaled interest in migrating Kubernetes clusters to fully managed serverless. That’s great news for the paradigm as a whole, but the success of modern serverless shouldn’t be confused with a real shift in provider philosophies. All signs point to business as usual for the Big Three.
Problems of Centralized Cloud
The advent of multicloud may have forced big-box cloud providers to play nice, but their recent foray into fully managed orchestration should be viewed with caution. Like the patented snare of a free introductory tier, sales materials that scream “no infrastructure expertise required” make it plain that rapid onboarding and lock-in continue to be the design pattern. Plenty of younger companies will find themselves too far gone to turn around. (Just imagine riding shotgun in a car that requires “no driving expertise.”)
Then there are the unforeseen costs. Snappy lines like “scale down to zero” sound great, but the complexity of centralized vendor ecosystems inevitably flows downstream into pricing. Massive physical infrastructure directly inflates the cost of compute with allowances for operational overhead, site planning, and continuous hardware upgrades — not to mention some extra padding to subsidize the development of hundreds of productized services.
Even if a particular solution does facilitate development, customers end up overpaying to keep the monolith of public cloud on life support. The usage-based fees for a container product from AWS can be several times more costly than a comparable service from alternative providers. Why pay for a household name when you don’t need the full line of accessories?
If you listen to conventional wisdom, you might argue that you’re paying for the most secure cloud available. You’d be wrong, but it’s certainly a myth centralized providers like to promulgate. The fact is, the near-unavoidable overprovisioning that comes with public cloud introduces vulnerabilities and performance issues. Developers deserve a better solution.
Meet Salad Container Engine
Almost twenty years into the cloud experiment, a sea change in consumer computing hardware has virtually eliminated the strategic advantage of choosing a tier-one provider. Commercially available home operating systems now ship with native hypervisor isolation, virtualization support, and service-layer APIs to safely expose machine resources — which are all the ingredients you need to standardize a distributed compute environment for containers.
At Salad, we’ve built a decentralized infrastructure layer composed of heterogeneous consumer hardware. Each and every node on our network represents a real PC whose underutilized compute resources have been voluntarily contributed. Our network activates latent processing power and bandwidth as dedicated instances of elastic cloud compute.
With the help of hundreds of thousands of private individuals, Salad Container Engine (SCE) offers straightforward orchestration that performs as reliably as hyperscale cloud for less than half the cost.
To facilitate exits from expensive vendors, we’ve designed a fully managed orchestration service that:
- adapts to developer workflows, rather than coercing its own,
- optimizes performance with fully dedicated infrastructure,
- provides affordable solutions that aren’t prone to lock-in, and
- integrates with standard tooling and multicloud configurations.
You should never be hindered by your infrastructure. Developers using SCE can migrate and deploy in minutes, using their preferred and custom workflows. Our pricing is transparent, predictable, and fixed.
Remember this year’s forecast for cloud revenues? When you factor in $397.5B in end-user spending, that $206B projection makes for a cozy 51% margin across all providers.
Seems to me it’s time for a leaner cloud.
The Salad team is hiring. If you are a network or backend engineer interested in developing containerized orchestration solutions on a decentralized infrastructure layer, please visit our careers page to browse available positions.