The state of PaaS, 2016
If you’re ignoring PaaS because early offerings didn’t meet your needs or because you’re more focused on operations than developers, you should look again. PaaS helps ops efficiently give developers the tools they need while also managing the underlying container infrastructure.
The NIST Cloud Computing definition, which circulated in drafts beginning in 2009, used to be de rigueur in just about every cloud computing presentation. Among other terms, that document defined Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS), and, even as the technology has morphed and advanced, this is the taxonomy we still largely accept and adhere to today.
That said, PaaS was never as crisply defined as IaaS and SaaS because “platform” was never as crisply defined as infrastructure or (end-user) software.
For example, some platforms were specific to a SaaS, such as Salesforce.
Others, specifically the online platforms most associated with the PaaS term early on, were tied to particular languages and frameworks. These PaaS archetypes were very opinionated. For example, the original Google App Engine supported only an environment based on a Python variant, and Heroku was all about Ruby.
Heroku’s twelve-factor app manifesto was an additional type of opinion: write your apps this way or they won’t really be suitable for the platform. These platforms may not have been just for hobbyists, but they were certainly much better suited to developer prototyping and experimentation than to production deployments.
At the same time, the platform moniker was used more broadly to cover the integration of a range of middleware, languages, frameworks, tools, and architectural features (such as persistent storage) that a developer might use to create both web-centric and more traditional enterprise applications. Furthermore, a PaaS such as OpenShift not only remained polyglot but also allowed for an increasing range of deployment options: on-premises, in multi-tenant and dedicated online environments, and on developer laptops using the upstream open source OpenShift Origin project.
However, the various approaches to PaaS did have a common thread. They were bundles of technology largely framed as appealing to developers.
The developer angle was never the whole story though. Back in 2013, my Red Hat colleague Gunnar Hellekson talked with me about some of the operational benefits of a PaaS in government:
One of the greatest benefits of a PaaS is its ability to create a bright line between what’s “operations” and what’s “development.” In other words, what’s “yours” and what’s “theirs.”
Things get complicated and expensive when that line blurs: developers demand tweaks to kernel settings and specialized hardware — customizations that fly in the face of standardization and automation efforts. At the same time, ill-defined roles lead operations to create inflexible rules for development platforms that prevent developers from doing their jobs. PaaS decouples these two, and permits each group to do what they’re good at.
If you’ve outsourced your operations or development, this problem gets even worse because any idiosyncrasies on the ops or the development side create friction when sourcing work to alternate vendors.
A PaaS makes it perfectly clear who’s responsible for what: Above the PaaS line, developers can do whatever they like in the context of the PaaS platform; it will automatically comply with operations standards. Below the line, operations can implement whatever they like, choose whatever vendors they like, as long as they’re delivering a functional PaaS environment.
We spend a lot of time talking about why PaaS is great for developers. I think it’s even better for procurement, architecture, and budget.
Today, with the rise of DevOps on one hand and containers on the other, it’s increasingly clear that a PaaS can be the sum of parts that are of direct interest mostly to developers plus parts that are of direct interest mostly to operations.
In the context of PaaS, DevOps both leads to change and reflects change in a couple of major areas.
First is the number of tools that organizations are bringing into their DevOps (or DevSecOps if you prefer) software delivery workflow. Most obvious is the continuous integration/continuous delivery pipeline, most notably with Jenkins. But there are also any number of testing, source code control, collaboration, and monitoring tools that need to be integrated into the workflow. At the same time, developers still want their self-service provisioning with an overall user experience that’s tailored to how they work. A PaaS is an obvious integration and aggregation point for all this tooling.
DevOps is also changing the way in which developers and operations work with each other. Early DevOps discussions often focused on breaking down the wall between Dev and Ops. But this isn’t quite right. DevOps does indeed embody cultural elements such as collaboration and cooperation across teams — including dev and ops. But we should also recognize that the best form of communication is sometimes eliminating the need to communicate at all.
To the degree that ops can build a self-service platform for developers and get out of the way, that can be more effective than streamlining dev and ops interactions. I don’t want to communicate more effectively with a bank teller; I want to use an ATM (or skip cash entirely).
Containers have also influenced how some organizations are thinking about PaaS. Many PaaS solutions (including OpenShift) have been based on containers from the beginning. But each platform did its own implementation of containers: in OpenShift it was gears, in Heroku it was dynos, and in Cloud Foundry it was Warden (now Garden) containers.
As the industry moved to a container standard (the Docker format, with standardization through the Open Container Initiative (OCI)), OpenShift moved with it. Red Hat, along with many others, has helped drive that movement, though not all PaaS platforms have participated in the shift to standards.
With container formats, runtimes, and orchestration increasingly standardized through the OCI and the Cloud Native Computing Foundation (where Kubernetes is hosted), there’s growing interest from many ops teams in deploying a tested and integrated bundle of these technologies outside of any specific development environment initiatives within their companies.
That’s because the huge amount of technological innovation happening around containers and DevOps can be something of a double-edged sword. On the one hand, it creates enormous possibilities for new types of applications running on a very dynamic and flexible platform. On the other, channeling and packaging rapid change across a plethora of open source projects isn’t easy. Expending a lot of effort on customized infrastructure can also end up being a distraction from the ultimate business goals.
As a result, at Red Hat, we often talk to customers who view OpenShift primarily through the lens of a container management platform rather than the more traditional developer-centric PaaS view. There’s still a developer angle, of course; a platform isn’t much use unless you’re going to run applications on it. But sometimes developer tooling and workflows are already in place. In that case, the pressing need is to deploy a container platform built on Docker-format containers and Kubernetes orchestration, without having to assemble the pieces from upstream community bits and support them in-house.
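To make the orchestration piece a bit more concrete, here is a minimal sketch (my own illustration, not something from the original post) that uses the official Kubernetes Python client to create a small Deployment. The image name, labels, and namespace are placeholders, and it assumes a cluster reachable through a local kubeconfig; the point is simply that, because the orchestration API is standardized, the same request works against any conformant cluster, whether the distribution underneath is OpenShift or something else.

    # Illustrative only: create a simple Deployment via the Kubernetes API.
    # Assumes the official "kubernetes" Python client and a kubeconfig that
    # points at a conformant cluster; names and image are placeholders.
    from kubernetes import client, config

    config.load_kube_config()  # use local kubeconfig credentials
    apps = client.AppsV1Api()

    container = client.V1Container(
        name="hello",
        image="registry.example.com/hello-app:1.0",  # placeholder image
        ports=[client.V1ContainerPort(container_port=8080)],
    )

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="hello-app"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "hello"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "hello"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    # The same call works on any conformant Kubernetes cluster.
    apps.create_namespaced_deployment(namespace="default", body=deployment)

Nothing in that snippet is tied to one vendor’s platform, which is much of the appeal for ops teams that want a supported, integrated bundle rather than a bespoke stack.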
An integrated platform leads to real savings. For example, based on a set of interviews, IDC found that:
IT organizations that want to decouple application dependencies from the underlying infrastructure are adopting container technology as a way to migrate and deploy applications across multiple cloud environments and datacenter footprints. OpenShift provides a consistent application development and deployment platform, regardless of the underlying infrastructure, and provides operations teams with a scalable, secure, and enterprise-grade application platform and unified container and cloud management capabilities.
Among its quantitative findings was 35 percent less IT staff time required per application deployed.
In short, PaaS remains a central part of the cloud computing discussion even if the name is sometimes discarded for something more specific or descriptive such as container platform. What’s perhaps changed the most is the recognition that PaaS isn’t just a tool for developers. It’s also a way for ops to enable developers most efficiently and to manage the underlying container infrastructure.
Originally published at bitmason.blogspot.com on November 1, 2016.