Kubernetes Trends at KubeCon 2017

James Bowes
manifoldco
Dec 19, 2017

I attended KubeCon 2017 in Austin, Texas (Go Bruins!). This post is a summary of my experience there. If there’s anything not covered that you are curious about, just ask 🤓

KubeCon Was Huge

The fact that the conference sold out, with over 4,200 attendees, kept coming up from speakers and in hallway conversations. The last KubeCon, in Europe, was around 1,000 people. This year's attendance was more than all previous KubeCons combined. Everyone was blown away by the growth in attendance.

While it is impressive, it mirrors the uptake of Kubernetes itself, so it shouldn’t be too shocking. I won’t be surprised if attendance doubles for KubeCon 2018 in North America.

Trending: Service Brokers

I kept a running tally of how often Open Service Broker API was mentioned in talks. It came up a lot! It felt almost like Google, Microsoft, and IBM representatives were required to mention it. The Kubernetes service-catalog component is how they’re planning to upsell you on their cloud-specific offerings, so I guess that makes sense.

A lot of the speakers mentioning the OSB (Open Service Broker) API seemed to misunderstand how it's being implemented. In talks and in write-ups, you'll see mention of how it helps avoid vendor lock-in. The pitch goes like this:

Your application just asks for a database and it gets it! Then you can copy your application to another cloud, and it will get another database there!

The truth is that your application doesn't ask for a "database"; it asks for a Google Cloud Platform CloudSQL MySQL database. That's not portable! Hopefully, in the future, the OSB API will gain a taxonomy for common types, allowing your application to just ask for a MySQL database and have the kind and size determined automatically by the environment you're running in and some policy set by your team administrator.
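To make the lock-in concrete, here's a rough sketch of the provision request a platform sends to a broker. The field names come from the OSB API spec, but the service, plan, and parameters are invented for illustration; the real values come from one specific broker's catalog, which is exactly why the request isn't portable.

```go
// A minimal sketch of an OSB API provision request body. The service and
// plan identifiers are made up; real ones come from a particular broker's
// /v2/catalog and mean nothing to any other broker.
package main

import (
	"encoding/json"
	"fmt"
)

// ProvisionRequest mirrors (a subset of) the body of
// PUT /v2/service_instances/{instance_id} in the Open Service Broker API.
type ProvisionRequest struct {
	ServiceID  string                 `json:"service_id"` // broker-specific catalog identifier
	PlanID     string                 `json:"plan_id"`    // e.g. a CloudSQL tier, not "a database"
	Parameters map[string]interface{} `json:"parameters,omitempty"`
}

func main() {
	// You aren't asking for "a MySQL database"; you're asking for the GCP
	// broker's CloudSQL offering by its identifier. Point the same request
	// at another cloud's broker and it means nothing.
	req := ProvisionRequest{
		ServiceID:  "gcp-cloudsql-mysql", // hypothetical
		PlanID:     "db-n1-standard-1",   // hypothetical
		Parameters: map[string]interface{}{"region": "us-central1"},
	}

	body, _ := json.MarshalIndent(req, "", "  ")
	fmt.Println(string(body))
}
```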

Now, even if there were a taxonomy for common types, Google, Microsoft, etc. do not push for you to use things like MySQL through the OSB API. They want you to use Bigtable instead of MySQL. This is visible in most of their examples, and in the order in which they're implementing services in their service brokers.

CoreOS also announced their Open Cloud Services, which, while it is a catalog of services, does not use OSB API.

What CoreOS is offering is a way to run and manage some popular Open Source projects within your cluster. The part of this that the OSB API could provide is minimal. Really, they're competing with something like Ansible to handle proper upgrades and changes to running systems, and with Real Human Sysadmins. Still, it would be nice if Open Cloud Services could use the OSB API where applicable.

Trending: Service Meshes

Background

Before the Istio mini-summit on Tuesday I was lukewarm on service meshes.

The selling points I had heard advertised for service meshes were that, without modifying your application, you would automatically get:

  • telemetry (request times, error rates)
  • tracing (request X went through service Y then Z)
  • resiliency (retries and backoff in the face of errors from a remote service)

The secret is that you do need to modify your application to get tracing, and meaningful telemetry needs some application-level info (tracking by raw URL isn't useful: most REST APIs have a unique URL for each "thing" they store, and you want to track all of the related things together).
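To make "modify your application" concrete: with the Zipkin-style tracing most meshes use, your service has to copy the trace headers from each incoming request onto every outgoing request it makes, or the spans never get stitched together. Here's a minimal sketch, assuming the B3 header set Istio's docs ask applications to forward (your tracer may use different headers, and the service URL is made up):

```go
// A minimal sketch of the change a service mesh still needs from your app
// for tracing: forward the trace headers from the inbound request to any
// outbound calls so the proxies can stitch the spans together.
package main

import (
	"log"
	"net/http"
)

// traceHeaders is the B3/Zipkin set commonly propagated in Istio setups.
var traceHeaders = []string{
	"x-request-id",
	"x-b3-traceid",
	"x-b3-spanid",
	"x-b3-parentspanid",
	"x-b3-sampled",
	"x-b3-flags",
}

// callBilling makes an outbound request on behalf of the inbound one,
// carrying the trace context along. The URL is hypothetical.
func callBilling(in *http.Request) (*http.Response, error) {
	out, err := http.NewRequest("GET", "http://billing.internal/v1/invoices", nil)
	if err != nil {
		return nil, err
	}
	for _, h := range traceHeaders {
		if v := in.Header.Get(h); v != "" {
			out.Header.Set(h, v)
		}
	}
	return http.DefaultClient.Do(out)
}

func main() {
	http.HandleFunc("/customers", func(w http.ResponseWriter, r *http.Request) {
		resp, err := callBilling(r)
		if err != nil {
			http.Error(w, "billing unavailable", http.StatusBadGateway)
			return
		}
		resp.Body.Close()
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```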

So, given that you still need to modify your application, I've been happy using libraries within the application to get these features when they're needed.
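For example, here's roughly what in-application resiliency looks like: a retry with exponential backoff wrapped around an HTTP call, no mesh required. The URL and retry counts are made up.

```go
// A minimal sketch of doing resiliency in the application itself:
// retry an HTTP call with exponential backoff instead of relying on a
// sidecar proxy to do it for you.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// getWithRetry retries on connection errors and 5xx responses, doubling
// the wait between attempts.
func getWithRetry(url string, attempts int) (*http.Response, error) {
	backoff := 100 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil
		}
		if err != nil {
			lastErr = err
		} else {
			resp.Body.Close()
			lastErr = fmt.Errorf("server error: %s", resp.Status)
		}
		time.Sleep(backoff)
		backoff *= 2
	}
	return nil, fmt.Errorf("giving up after %d attempts: %v", attempts, lastErr)
}

func main() {
	// Hypothetical internal service.
	resp, err := getWithRetry("http://store.internal/v1/products", 4)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```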

I still feel this way, but now I've learned about the coolest features of service meshes: trusted workload identity and application-level policy.

Workload identity (via SPIFFE) allows any workload (application) to verify the identity of a requestor within the service mesh. This means that a billing microservice could know for certain that a request to look up some customer that appears to come from the store microservice actually is from the store microservice. This might sound easy, but doing it in an automated way is notoriously difficult. SPIFFE provides a pattern for doing this.
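Here's a rough sketch of what that looks like from the billing service's side, assuming the workload terminates mutual TLS itself (say, with a SPIRE-issued certificate) rather than sitting behind a sidecar. The trust domain, paths, and certificate files are made up.

```go
// A rough sketch of SPIFFE-style workload identity from the receiving
// side: over mutual TLS, the caller presents a certificate whose URI SAN
// is its SPIFFE ID, so the billing service can see exactly who is calling.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
)

// callerID returns the SPIFFE ID from the client certificate, or "" if
// no client certificate was presented.
func callerID(r *http.Request) string {
	if r.TLS == nil || len(r.TLS.PeerCertificates) == 0 {
		return ""
	}
	for _, uri := range r.TLS.PeerCertificates[0].URIs {
		if uri.Scheme == "spiffe" {
			return uri.String()
		}
	}
	return ""
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/customers/", func(w http.ResponseWriter, r *http.Request) {
		// e.g. "spiffe://cluster.local/ns/shop/sa/store" (hypothetical)
		log.Printf("request from %q", callerID(r))
		fmt.Fprintln(w, "customer record")
	})

	srv := &http.Server{
		Addr:    ":8443",
		Handler: mux,
		TLSConfig: &tls.Config{
			// Require a verified client certificate so callerID is trustworthy.
			// ClientCAs should be your mesh/SPIRE CA bundle (omitted here).
			ClientAuth: tls.RequireAndVerifyClientCert,
		},
	}
	// Certificate file names are hypothetical.
	log.Fatal(srv.ListenAndServeTLS("server.crt", "server.key"))
}
```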

With Istio knowing exactly which two services are involved in a request, down to the application/Docker container level, and operating at the application layer, you're able to define policies like the following (there's a rough sketch of the idea in code below):

  • only the store microservice can create new charges in billing
  • only the billing service is allowed to speak to Stripe (egress rules based on DNS)

This is pretty cool compared to operating at the IP address level, and only on ports!
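With Istio, rules like these live in mesh configuration and are enforced by the sidecar proxies, not in your code. But to make the idea concrete, here's the first rule expressed as a plain Go check, with made-up identities and routes.

```go
// A minimal sketch of an identity-based, application-level policy:
// only the store workload may create new charges in billing. In a real
// mesh this is declared in configuration and enforced by the proxies.
package main

import "fmt"

// rule says which caller may perform which operation.
type rule struct {
	caller string // SPIFFE ID of the workload allowed to call
	method string
	path   string
}

var policy = []rule{
	{caller: "spiffe://cluster.local/ns/shop/sa/store", method: "POST", path: "/v1/charges"},
}

// allowed checks a request against the policy. The caller identity is
// assumed to have been established already, e.g. from the mTLS
// certificate as in the earlier sketch.
func allowed(caller, method, path string) bool {
	for _, r := range policy {
		if r.caller == caller && r.method == method && r.path == path {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(allowed("spiffe://cluster.local/ns/shop/sa/store", "POST", "/v1/charges"))   // true
	fmt.Println(allowed("spiffe://cluster.local/ns/shop/sa/website", "POST", "/v1/charges")) // false
}
```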

The Trend

We even got a new service mesh! Conduit was announced. It’s from the makers of Linkerd (one of the first service meshes, if not the first). It’s still early days for Conduit; you can’t really use it in production, but you could certainly make some meaningful contributions to the project 🙂.

In addition to featuring in talks, service meshes were all over the expo hall: many vendors were showing off application-layer (layer 7 in the OSI model) routing and policy features, either via Istio or through their own home-spun solutions. If it had to do with network security, someone was claiming a service mesh would replace it.

You don’t need a VPN to connect your data centers; use a service mesh!

Throw away that expensive barracuda firewall appliance; use a service mesh!

None of the claims were outlandish, but all of the excitement and bold claims were still a bit funny.

I wager service meshes and their security features will become a big trend in enterprise sales and services in the coming years, supplanting traditional firewall software/appliances and existing Software Defined Networking products.

Trending: Kubernetes Extensibility

Kubernetes has always had a good story for extensibility at its lower levels. You are able to use plug-in interfaces to connect to different cloud providers, use different networking layers, or run a different container runtime.

Only recently (within the last year, maybe) has higher-level extensibility become a focus, and it's just now becoming usable in things like Custom Resource Definitions (which we use for our Manifold kubernetes-credentials integration).
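For a taste of what building on CRDs looks like, here's roughly the shape of the Go types a controller registers for a custom resource. The Credential kind and its fields are hypothetical stand-ins, not our actual kubernetes-credentials types.

```go
// A rough sketch of the Go types behind a hypothetical custom resource,
// the sort of thing a CRD lets you add without forking Kubernetes.
package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Credential is a namespaced custom resource asking for a set of
// credentials from an external provider.
type Credential struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   CredentialSpec   `json:"spec"`
	Status CredentialStatus `json:"status,omitempty"`
}

// CredentialSpec is what the user asks for.
type CredentialSpec struct {
	// Resource names the external resource the credentials belong to.
	Resource string `json:"resource"`
}

// CredentialStatus is filled in by a controller watching these objects.
type CredentialStatus struct {
	Ready bool `json:"ready"`
}

// CredentialList is the list type the API server returns for the resource.
type CredentialList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`

	Items []Credential `json:"items"`
}
```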

Before, if you wanted to extend Kubernetes to add custom features like a new workload type, you’d have to fork the code to add it. This is exactly what Red Hat has done with OpenShift (though I gather they are working towards extracting out their unique code).

The sentiment among members of the Kubernetes community is that Kubernetes can now be a boring stable core. On top of this core, other developers are free to build new products or custom solutions. Below this core, other developers can broaden the scope of platforms that Kubernetes runs on. They are pretty adamant that Kubernetes should focus on only adding changes that allow other developers to build new features on top of Kubernetes, instead of building these features in Kubernetes itself.

On the other hand, few vendors in the hall, or talks in the "Extending Kubernetes" track, took advantage of the new extensibility mechanisms like CRDs; most people are still building scripting beside Kubernetes rather than logic that runs within it. My hypothesis is that, beyond these mechanisms being very new, people aren't using them because they're trying to support other systems like Mesos and Docker Swarm as well.

Bats!

Did you know Austin has a giant bat colony under one of the bridges? I didn't! Mama bats come to Austin from Mexico in the spring, give birth to their babies, and raise them on tasty Austin bugs over the summer. Unfortunately, the mama and baby bats all flew back to Mexico before I got there.

Food!

Not pictured: genuine Texas Red Chili

It was great to try authentic Texas BBQ. It was very delicious, but after my time living in Raleigh, I’ll always be an eastern North Carolina BBQ lover first and foremost. Can’t beat that vinegar sauce!

KubeCon was a great time. I look forward to what next year brings for Kubernetes, and hope to see you at KubeCon 2018!
