SaaS 2.0 — What Container Technology Means for the Delivery of Software

Editor’s note: Dimitri is a seasoned product and go-to-market executive, serial entrepreneur, angel investor, and Work-Bench mentor. His most recent startup, Layer 7, was acquired by CA Technologies for $155 million.

Every few years the way software gets delivered undergoes a fundamental change. One of the biggest changes of the past decade was the delivery of software as a service (SaaS). SaaS made software consumable through a browser, without the need to install complex software or expensive servers locally. Today, SaaS is just one variant of “cloud,” all of which have upended the economics of how software gets delivered. With cloud there is a separation between the underlying compute infrastructure and the applications that run atop it. Compute is now frequently engineered as a shared resource, freeing the application builder to focus on functionality rather than on commodity services that are best shared across applications.

This ability to abstract (or virtualize) the infrastructure away from the functionality made it possible for developers not just to build software but also to deploy it into production on a continuous basis. With cloud there is no longer a need to separate development from the deployment and operation of applications. This new practice of developers building, deploying and running applications without needing support from IT has given rise to a style of creating and operating software called DevOps.

While DevOps in the cloud is transforming how software gets built and delivered as SaaS, it still faces a sticky problem: every cloud is different in how it virtualizes the underlying compute infrastructure. Today, writing software for one cloud infrastructure is different from writing it for another. That creates inefficiency, since it makes it hard to migrate software between clouds or to orchestrate software across them. It also forces developers to lock in to a specific cloud or virtualization framework, adding cost and complexity.

That’s all about to change. Software containers are about to give SaaS some new sass. Containers are a way to package small, self-contained software components into portable units that run directly on operating systems like Linux, with all the resource-sharing benefits of costly virtualization technology but without the overhead tax or the loss of portability.

Docker, the Tupperware of Software Containers

While the idea of using containers native to operating systems like Linux and Unix is not new, doing it in a simple way that DevOps teams can leverage took an accidental discovery by a company called Docker. Originally named dotCloud, the company set out to build a cloud to which developers could deploy any flavor of software they wanted to run. To do this, it created a framework for packaging software into self-contained, mutually isolated containers that could nevertheless share resources and be portable across environments. Released as an open-source project, this framework became Docker, and it has seen runaway success in the few years since it was introduced.
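To make the idea concrete, here is a minimal sketch of how a developer might start one of these packaged components from code. It assumes a local Docker Engine and the Docker SDK for Python are installed; the image name and command are illustrative only, not anything from this post.

```python
# A minimal sketch, assuming Docker Engine is running locally and the
# Docker SDK for Python ("pip install docker") is installed.
# The image name and command are illustrative only.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a small, self-contained image as an isolated container.
# The same image runs unchanged on a laptop, in a datacenter, or on any cloud.
output = client.containers.run("alpine:latest", ["echo", "hello from a container"])
print(output.decode().strip())
```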

It appears developers have fallen in love with Docker because it has done for software containers what Tupperware did for food containers: it opened them up to the masses in a simple, easy-to-consume way. If software is eating the world, container packaging technology like Docker is about to transform how the food is delivered.

Prepping a big meal with containers

Software containers solve some critical issues in building, deploying and operating software at scale. First, they eliminate the need for specialized virtualization technology like VMware, reducing the cost of delivering cloud-like software. Second, they port easily across clouds, so containers running in one datacenter can be moved to wherever they can run most cheaply or make the most impact. But containers don’t just make software more portable. They also make software more agile.

Over a decade ago a concept called “service orientation” was introduced in software design. Service orientation promoted the idea of breaking large, monolithic applications into discrete, single-purpose software components that could be stitched together more flexibly into a business process. The hope was that by decomposing big software into smaller, atom-sized pieces, new applications could be assembled on the fly, making it easier to respond to changing customer needs (in the same way that many molecular compounds can be assembled from a small number of atoms). The way service orientation was first implemented years ago, through a set of protocols collectively called “Web Services,” failed to deliver on that vision, however. The protocols were too complex to instrument, often incompatible between vendors, and failed to work on small devices like smartphones. But with the advent of lightweight, mobile-friendly API technology (my last company enabled organizations to build and manage APIs), a framework emerged for achieving service orientation at Internet scale.

APIs give software a standard way to expose its inner workings to other pieces of software. APIs let software communicate with other software, and when these API-oriented application services are combined with containers, true service orientation becomes possible. Containers provide the wrapper for discrete software components that expose their data and functionality to other containerized components through a discrete API opening. These movable, containerized components can then be assembled after the fact into applications in which each component talks to the others via an API. This new style of architecture, derived from service orientation and made practical by containers, is called “micro-services.” Containers make micro-services possible and practical, and micro-services enable the agile software delivery that simplifies the creation of the next great web, mobile and Internet of Things applications. Software is eating the world, and containers make it easier for developers to be fine cooks.
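As a rough illustration, mine rather than the author’s, the sketch below shows two hypothetical components in this style: one exposes a tiny JSON API over HTTP and a second consumes it. It uses only Python’s standard library, and every name, port and endpoint is invented for the example; in practice each component would ship in its own container.

```python
# A minimal sketch of two micro-service-style components, using only the
# Python standard library. Names, port and endpoint are illustrative.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class InventoryAPI(BaseHTTPRequestHandler):
    """A tiny component that exposes its data through one API endpoint."""

    def do_GET(self):
        if self.path == "/stock":
            body = json.dumps({"sku": "widget-42", "on_hand": 17}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo output quiet
        pass


def main():
    # Each component would normally live in its own container; here the
    # "inventory" service simply runs in a background thread to stay runnable.
    server = HTTPServer(("127.0.0.1", 8080), InventoryAPI)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # A second component talks to the first only through its API.
    with urllib.request.urlopen("http://127.0.0.1:8080/stock") as resp:
        print(json.loads(resp.read()))  # {'sku': 'widget-42', 'on_hand': 17}

    server.shutdown()


if __name__ == "__main__":
    main()
```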

A final word on security

Companies like Docker and CoreOS are putting into the hands of developers the tools to build and deliver better cloud software. But containerization also has the potential to change how software gets secured. An entire industry has grown up around protecting applications: network solutions like firewalls and IPS products protect communication to applications across organizational boundaries, while endpoint solutions protect the applications themselves. However, as the boundary between what is inside and outside the firewall has blurred, protecting the application has become both harder and more pressing. Containers offer the hope of a new application security paradigm for software.

As the traditional organizational edge becomes porous, a new border is needed to protect an application or application component. Containers, like cell membranes, provide a highly localized way to secure discrete software components, isolating each component from the outside world and regulating its interactions with other software at the API layer. While it is still early days for container security, the possibility exists that this paradigm shift will not just be ‘contained’ to how software gets built and deployed but will extend to how it gets secured. CoreOS has focused on building security into its offering from day one, and it will be interesting to see how enterprise adoption of both formats plays out.
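As a purely illustrative sketch, again mine rather than the author’s or any vendor’s, regulating interactions at the API layer can be as simple as a containerized component refusing any call that does not carry an expected credential. The header name and token below are invented for the example.

```python
# A minimal sketch of API-layer access control for a containerized component.
# The header name and token are invented for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer

EXPECTED_TOKEN = "demo-secret-token"  # in practice, injected into the container


class GuardedAPI(BaseHTTPRequestHandler):
    """Answers only callers that present the expected API token."""

    def do_GET(self):
        if self.headers.get("X-Api-Token") != EXPECTED_TOKEN:
            self.send_response(403)  # reject anything outside the "membrane"
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8081), GuardedAPI).serve_forever()
```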

If you enjoyed reading this post, sign up for our newsletter, follow us on Twitter, and connect on LinkedIn!

Note: CoreOS is a Work-Bench member company.
