Reflections on KubeCon 2018

George Braxton IV
cloud native: the gathering
5 min read · Dec 20, 2018

Reflections on KubeCon 2018, Seattle, WA

KubeCon + CloudNativeCon returned to Seattle this year. Outside the conference center was a continual light rain; inside was a torrential storm of developers and industry professionals excited about the ever-growing portfolio of cloud-native open source technologies, particularly its flagship, Kubernetes. The CNCF saw its membership grow, with high-profile companies such as Capital One joining industry giants like Google, Red Hat, Cisco, AWS, IBM, and Intel. Now well over 300 members strong, this unification creates a wave that shapes the direction the industry moves in cloud-related technology.

The conference itself has come a long way, seeing exponential growth in attendance: up to 8000 from just 4000 last year. As a Kubernetes enthusiast, I found it energizing to be surrounded by so many people who share the same excitement for Kubernetes and its future.

The Buzz

The buzz around service mesh continued from its initial explosion last year in Austin. Last year, the consensus was “I’m interested and plan to experiment with service mesh in 2018.” This year, the consensus was “I’ve already experimented with it; now how can I expand my usage of it ‘the right way’ and get it into production?” There were close to two dozen talks, plus a full day of EnvoyCon, centered around service meshes this year. Notably, there were many more user journeys explaining companies’ experiences getting service meshes into production and troubleshooting them. Companies like Lyft, Stripe, Square, Alibaba, Pinterest, and eBay gave great insight into their use of service meshes, including unique troubleshooting techniques, approaches to multi-cluster setups using traffic shaping, and use of reactive streams.

Another topic of focus this year was serverless on Kubernetes. While several solutions for running serverless workloads in Kubernetes exist, Knative seemed to blast out in front, with several sessions and keynotes singing its praises. Knative’s flexibility was highlighted through its extensive set of buildpacks, its robust eventing system, and the fact that multiple cloud providers, in addition to Kubernetes, support Knative functions, buildpacks, and events.

Running machine learning workloads on Kubernetes also made its presence known in this year’s buzz. Kubeflow has taken off as the de facto solution for running ML on k8s. There were sessions dedicated not just to running ML workloads but also to CI/CD for ML, monitoring ML, and integration with other tooling such as Knative to create an AI CI pipeline.

Custom resource definitions (CRDs), custom controllers, admission controllers, and operators were the other hot topics of the conference. There were many examples of using CRDs to represent components external to Kubernetes, paired with controllers that modify those external systems to match the desired state defined in the CRDs. While I was aware of the built-in admission controllers and their purpose, I hadn’t thought about practical use cases for custom admission controllers, such as automatically adding annotations to ‘LoadBalancer’ service types so that they are always internal only. This sort of use case, enforcing company policy, led to the development of the Open Policy Agent, a policy engine that can run as an admission controller and be configured with custom resources, so you don’t have to code a full admission controller just to enforce a policy.
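As a rough sketch of the CRD half of that pattern, a CRD simply registers a new resource type with the API server; a custom controller then watches instances of it and reconciles some external system to match. The resource names below are hypothetical, and the manifest uses the `v1beta1` apiextensions API that was current at the time:

```yaml
# Hypothetical CRD declaring a 'Database' resource. A custom controller
# could watch Database objects and create/update the real external
# database to match the spec each one declares.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com
spec:
  group: example.com
  version: v1alpha1
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
```

Once applied, `kubectl get databases` works like any built-in resource, which is exactly the extensibility story the keynote roadmap promised to strengthen.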

Keynotes

Speaking of custom resources, one of the keynotes announced that the Kubernetes roadmap aims to make built-in resources work the same as custom resources. This ensures that all the features supported by built-in resources will also be supported by CRDs — a huge advancement toward the original goal of keeping Kubernetes extensible.

Another huge announcement out of the keynotes was the donation of etcd to the CNCF. This ensures that etcd develops in the direction of the community as a whole instead of a single company. I’m not sure whether concern about the various mergers involving this key Kubernetes dependency’s previous shepherds played a role.

The keynotes wouldn’t be the same without a Kelsey Hightower demo, and as usual, he delivered. This year he showed how easy it could be (or is) to run a container workload as a serverless workload. As a side note, I wasn’t aware of the tool dive, https://github.com/wagoodman/dive, until Kelsey’s demo. This tool lets you view what is inside each layer of a container image using just the image tag.

Sessions

The range of topics for the sessions I attended was all over the map. I avoided many of the ‘introduction to…’ type topics in favor of just reading the docs for tech I wasn’t too familiar with. I tended towards niche sessions that shared some inside knowledge on debugging a particular component, such as “Debugging Etcd” by Joe Betz and Jingyi Hu from Google. That session explained the inner workings of etcd, which gave me a better understanding of how performance can suffer and of some pitfalls that can cause an etcd cluster to fail.

I did attend a few deep dives into security topics, like the sessions that focused on identity management with SPIFFE. SPIFFE is a standard for defining the identity of an application and the trust domains that serve as the authority for it. I found out at one of these sessions that I’ve already come across its use in things like Kubernetes service accounts. I also learned about SPIRE, an identity trust tool that exposes an API for establishing trust using SPIFFE and allows trust to be established between multiple clusters and systems outside of Kubernetes.
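For context, a SPIFFE identity is just a URI that names a workload within a trust domain. The example below is illustrative: the trust domain is a placeholder, and the `ns/<namespace>/sa/<serviceaccount>` path mirrors the convention SPIRE’s Kubernetes integration commonly uses to encode a pod’s namespace and service account:

```
spiffe://example.org/ns/default/sa/my-service
```

Anything that can prove it holds this identity (typically via an SVID certificate issued by SPIRE) can be trusted across clusters and even outside Kubernetes, which is what makes the standard useful beyond a single cluster.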

Sandboxing also gained a lot more attention this year, which led me to a couple of sessions that took a deep dive into Kata Containers as well as gVisor. One of these sessions compared both technologies in depth and explained the advantages and disadvantages of each, doing a good job of showing which system calls are intercepted versus virtualized by each technology. Some of the lower-level content went over my head, but the comparisons as a whole made sense. If your organization is looking for a more secure or isolated container runtime, search for some of these sessions online.

Wrap-Up

The sheer volume of useful knowledge and information at this conference exceeded what I could consume in one week, but luckily every single session, keynote, and lightning talk is available to stream here: https://www.youtube.com/playlist?list=PLj6h78yzYM2PZf9eA7bhWnIh_mK1vyOfU. Even though I attended close to 40 sessions, I have a backlog of an additional 40 that I wish I had attended but will have to watch online. The growth of this conference can be seen on many levels: the increases in attendance, CNCF membership, sessions and workshops, and vendor booths are the obvious examples. Coffee was finally ‘highly available’, box lunches were plentiful, and the conference party at the Space Needle definitely topped last year’s pub crawl.

I want to take a moment to thank the vendors who hosted the various lounges and parties, and a very special thank you to Mesosphere for hosting ‘Ice-Kube Con’, featuring Ice Cube of course (never has there been a cooler play on words).

See you next year at KubeCon in San Diego!
