It’s turtles all the way down for OpenStack

John Mark Troyer
The TechReckoning Dispatch

--

Hi friends,

It turns out I had some thoughts on OpenStack after co-hosting theCUBE at the latest summit in Boston a few weeks back and talking with a lot of interesting people there.

My tl;dr on OpenStack: It’s still hard to set up and manage. If the project improves in these areas, then the economics of productivity will drive more usage. If not, then people will use OpenStack only when they have very specific needs and skills. The project seems aware of these issues, but the platform is complex and there are a lot of stakeholders and use cases. As usual, “easy” is not simple.

Some observations on OpenStack

OpenStack is real. I talked with attendees with production deployments and others who were ready to start their brand new OpenStack deployments. The average age of a cloud in the latest OpenStack user survey was under 1.7 years, and the median company was using OpenStack for 60–80% of its cloud infrastructure. With about 5000 attendees, attendance was down, but it didn’t feel like some dying remnant of an old project. A few people apologized for OpenStack being “boring,” but as all infrastructure folks know, boring is good.

The OpenStack community is self-aware but still lacks a singular sense of purpose. It’s been a project notoriously plagued by questions of what they’re building and why. At the opening keynote, the foundation spent the first 15 minutes talking about how OpenStack wasn’t dead and what marketing and technical arterial stents it was inserting to keep the OpenStack pulse detectable. This could be seen as introspective and self-aware, or it could be seen as defensive, but it certainly was unlike the opening of any corporate keynote I’ve ever seen. It’s hard to tease out the ultimate OpenStack vision: “open infrastructure” is great but leaves a lot of details to be determined.

OpenStack is useful. There are increasingly clear models for deciding when to rent vs buy cloud capacity. I saw a few different drivers of OpenStack users at the conference:

  • Special requirements in latency, hardware, IOPS, etc. that are hard or very expensive to get in a public cloud
  • Need for the privacy, control, or compliance benefits of a non-public cloud
  • Cost reduction, especially for apps with large, steady resource or data needs
  • Academic research, especially when you want to get under the hood in a way you can’t in a public cloud

OpenStack on the Edge. This was the coolest thing I saw. Edge in this case has the telco meaning: Verizon has a box you can put in your own facilities where, instead of running an embedded OS or even a bare Linux kernel, they’re running a full (but stripped-down and containerized) OpenStack cloud. They can use this platform to manage services and push out entire new services (apps), and since they also run OpenStack in their network core, they can manage this cloud-of-clouds under the same umbrella. (Beth Cohen’s keynote segment, interview on theCUBE)

OpenStack can be used by non-rocket scientists. Using a strong partner, or especially a managed service from Rackspace, Platform9, Canonical, and others, makes OpenStack viable even if your team is not a bunch of wizards. Consuming platforms as managed services (deployed on either public or private clouds) is pretty hot in general — worth paying attention to.

OpenStack wants to integrate with other open source projects. OpenStack has been notoriously insular. An example is Keystone, its identity service (and curiously, its service catalog). As you might imagine, there are many other ways to do these things, and the world probably didn’t need another way to do it.

As an example of how things might work in the future, OpenStack will likely use etcd for distributed locking instead of writing its own distributed key-value store. (Etcd is used in many other places, including Kubernetes. Writing a new distributed key-value store in 2017 is about as crazy as writing your own encryption stack. It’s mostly a solved problem, and if you write a new one you’re going to screw it up.) That’s a positive sign.
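For the curious, the core trick behind an etcd-style lock is an atomic “create this key only if it doesn’t exist yet” operation. Here’s a toy sketch of that idea in Python — an in-memory dict stands in for a real etcd cluster, and the function names and key paths are illustrative, not etcd’s actual API:

```python
# Toy illustration of the compare-and-swap idea behind etcd-style
# distributed locks. A real deployment talks to an etcd cluster;
# here a plain dict stands in for the keyspace, and the names
# (acquire_lock, release_lock) are made up for this sketch.

store = {}  # stands in for the etcd keyspace


def acquire_lock(store, key, owner):
    """Create the lock key only if it does not already exist.
    (etcd does this atomically via a transaction: 'if the key's
    create revision is 0, put it'.)"""
    if key in store:
        return False  # someone else holds the lock
    store[key] = owner
    return True


def release_lock(store, key, owner):
    """Delete the key only if we still own it."""
    if store.get(key) == owner:
        del store[key]
        return True
    return False


# Two services racing for the same lock:
assert acquire_lock(store, "/locks/upgrade", "node-a")       # node-a wins
assert not acquire_lock(store, "/locks/upgrade", "node-b")   # node-b blocked
assert release_lock(store, "/locks/upgrade", "node-a")       # node-a releases
assert acquire_lock(store, "/locks/upgrade", "node-b")       # now node-b wins
```

The hard part — the reason you don’t write your own — is making that compare-and-swap atomic and durable across a cluster of machines that can crash or partition, which is exactly what etcd’s Raft-based replication provides.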

As a sign of this ecosystem friendliness, organizations like Cloud Foundry, CNCF, OpenDaylight, and OpenNFV were there at the show with “Open Source Day” sessions. The cynics view this as a weakness, but let’s view it as a strength and a good outcome.

OpenStack wants to be simpler and more modular. They are pulling back from their “big tent” stance and emphasizing the core modules, while deprecating others that aren’t as important. We’re still talking 6–10 projects in a typical installation, with lots of options, so the “simple” map they showed in the keynote still looks complicated to an outsider. They’d also like to see individual OpenStack projects be useful outside of OpenStack: e.g., using Cinder and its ecosystem of drivers as a block storage service for containers. I gather, however, that the individual OpenStack projects are still pretty interdependent.

OpenStack *and* containers, not *or* containers. Of course, the biggest open source ecosystem conversation was around containers and especially Kubernetes. OpenStack primarily concerns itself with managing infrastructure resources like storage and networking; container platforms mostly take those resources for granted. Containers, on the other hand, are a great way to package and distribute applications. Containerizing OpenStack can help with things like upgradability and resiliency of the OpenStack components themselves (at the cost of another layer of complexity). Questions remain around the final evolved form of this kaiju — does it use Kubernetes or “plain old containers,” does it containerize just the control plane or also the workloads, and where does it sit on the not-yet-emerged consensus on container storage and networking? But it will no doubt be monstrous and amazing.

On top of OpenStack, it’s looking like some form of container system, probably Kubernetes-based, will be how apps will be deployed and managed. So yes, we will end up with some sort of multiply nested set of VMs and containers, but hey, it’s 2017 and we are OK with our infrastructure being an infinite stack of Gamera monster turtles all the way down.

OpenStack may have a skills issue. Because it is customizable and modular, each OpenStack installation is different. Everybody in the room should be thoughtfully nodding and saying ‘uh-oh’ at this point. Companies want to be able to hire people with transferable skills, and if each OpenStack engineer has worked with a different networking or storage setup, that common skill set may not be present. The trend in infrastructure is towards more automation, more value right off the shelf rather than custom, more converged and pre-packaged solutions, fewer nerd knobs — in short, more cars and fewer pallets of car parts. If OpenStack remains an enterprise architect’s custom hot rod, it will be hard to fit it into the modern IT assembly plant.

OpenStack and ideology. A few speakers did preach the “open source good, commercial software bad” thing. I love open source for many reasons, but I find this characterization unhelpful. Open source has shaken up proprietary software business models, but vendors still need to get paid — very few companies should be taking a complex open source project for mission-critical use without a vendor to package, integrate, offer training, and support it.

Surprise keynote guest Edward Snowden also made the connection between the current centralized internet services (like Facebook, Google, Amazon, and Microsoft) and a lack of privacy and control in society in general.

Security shouldn’t be an issue in the public cloud — your apps should be more secure running in AWS than they are in your marginally-equipped data center with your surly security staff. But you’ve got to admit that putting all our eggs in just a few giant cloud baskets gives those basket owners — a few giant commercial and governmental actors — a lot of power over your apps and data, given what we know in a post-Snowden world. Monoculture agricultural crops are biologically unsustainable, and there’s an argument that monoculture clouds may be as well.

(Funny story behind Snowden’s appearance and a domain name — see Mark Collier’s story in this video.)

Takeaway. There are real reasons to choose putting apps in a private cloud, and companies are going to continue to use a mix of apps hosted all over the place. OpenStack is a very viable choice if you need a private cloud, but you’ve got to really want to use it. If there is going to be a “universal deployment platform” it’s looking more like Kubernetes than VMware or OpenStack.

As Ben Kepes said in this DockerCon recap, “OpenStack as a project is a real good thing. OpenStack as a business maybe not so much.”

As far as keeping it easy to consume, though, keep your eye on “managed private cloud” that is hosted either publicly or privately — this could be an increasingly important consumption model because it’s easy to get started and easy to run. Please throw out any old-fashioned IaaS, PaaS, SaaS assumptions you’re working with. There are many kinds of aaSes these days.

Thanks to Stu and the whole SiliconANGLE and Wikibon gang for inviting me to be part of theCUBE!

Originally published as part of The TechReckoning Dispatch, Vol. 4, № 3, May 30, 2017. Subscribe here.

--


John Mark Troyer
The TechReckoning Dispatch

Techie, talker, influencemarketingcouncil.com, Chief Reckoner at techreckoning.com, Geek Whisperers podcast. Enterprise tech is the best tech.