The Helm Summit was held in Amsterdam last week. It was a great gathering of around 150 Helm enthusiasts representing various Kubernetes users and providers. Here are five key takeaways from the summit.
1. Helm 3.0 is coming with better security, better CRD support, and some breaking changes
Some of the key members of Helm’s core team were present throughout the summit. Through their presentations and hallway conversations, it was clear that Helm 3.0 is going to be an important milestone for the project. Most of you have probably heard that Tiller, the server-side component of Helm, is going away in Helm 3.0. But other important changes are coming as well, such as better security controls and better support for Custom Resource Definitions (CRDs). Here are the three aspects that were most prominent:
- First, security: the set of pre-configured permissions for users is going to be minimal by default. Unlike Tiller, which had cluster-wide admin privileges by default, Helm 3.0 will require you to explicitly grant any needed permissions to the User or Service Account configured to use Helm. This change ensures that cluster administrators make conscious decisions about the security of their clusters.
- The second major change is better support for CRDs. In the current version of Helm, CRD installation is supported through the crd-install hook, defined as an annotation. Not all CRD and Operator developers use it, though, which makes their Helm charts susceptible to installation errors: installing a CRD before any Custom Resource manifests that depend on it is critical for correct chart installation. Helm 3.0 makes CRD support explicit. There is going to be a ‘crd’ directory inside the chart directory, and all CRD YAMLs will need to be placed there. Helm will process this directory before installing any other part of the chart, ensuring that all CRDs are installed before any Custom Resource manifests.
- Third, there are going to be some breaking changes to the CLI experience. For example, currently when a chart is installed, a random release name is generated unless a name is provided as input. In Helm 3.0, this behavior is reversed: the name parameter becomes compulsory. If you like the current behavior of getting random names for your releases, you will need to request it explicitly by passing a new flag as input.
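To illustrate the first point, here is a minimal sketch of the RBAC objects a cluster administrator might grant to a Helm 3.0 user instead of Tiller’s cluster-admin access. The Service Account name, namespace, and resource list are all hypothetical and should be tailored to what your charts actually deploy:

```yaml
# Hypothetical minimal, namespace-scoped RBAC for a Helm 3.0 user.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: helm-deployer
  namespace: apps
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: helm-deployer
  namespace: apps
rules:
  # Grant only the resources your charts actually create.
  - apiGroups: ["", "apps"]
    resources: ["deployments", "services", "configmaps", "secrets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: helm-deployer
  namespace: apps
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: helm-deployer
subjects:
  - kind: ServiceAccount
    name: helm-deployer
    namespace: apps
```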
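The chart layout implied by the CRD change might look roughly like this (chart and file names are illustrative, and the exact directory name could still change before the 3.0 release):

```
mychart/
  Chart.yaml
  values.yaml
  crd/                       # processed before everything else
    mycustomresource-crd.yaml
  templates/
    deployment.yaml
    custom-resource.yaml     # Custom Resource instances stay in templates/
```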
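As a sketch of the CLI change, assuming a hypothetical opt-in flag along the lines of `--generate-name` for getting random release names back:

```shell
# Today (Helm 2): the release name is optional; a random one is generated.
helm install stable/nginx-ingress

# Helm 3.0: the release name becomes a required argument.
helm install my-ingress stable/nginx-ingress

# Opting back into generated names requires an explicit flag
# (the flag name here is illustrative).
helm install stable/nginx-ingress --generate-name
```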
2. Consolidation of cloud native artifacts
There were several sessions that focused on the problems related to Helm chart storage. Josh Dolitsky, author of ChartMuseum, led these sessions, presenting the problem, existing solutions, and how the broader thinking is evolving in this space. The main takeaway is that the various artifacts one deals with in a cloud native approach, such as Docker images, Helm charts, and Kustomize patch files, need to be handled in a uniform way. A community project called ‘ORAS’ (OCI Registry As Storage) has been started for storing all these artifacts in a single registry. For Kubernetes users this is definitely a step in the right direction, as it will consolidate various artifacts in a single registry with support for things like repository segregation and access controls.
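For instance, the ORAS CLI can push and pull arbitrary files to and from an OCI-compliant registry; the registry address, repository, and file name below are placeholders:

```shell
# Push a packaged Helm chart to an OCI registry as a generic artifact
# (registry address, repository, and tag are placeholders).
oras push localhost:5000/charts/mychart:0.1.0 mychart-0.1.0.tgz

# Pull the same artifact back down elsewhere.
oras pull localhost:5000/charts/mychart:0.1.0
```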
3. Helm and Operators
There were several talks about Custom Controllers and Operators. CloudARK’s talk focused on best practice guidelines for creating Helm charts for Operators. Red Hat’s team presented on Operators and Operator Hub. Sessions from Workday, Weaveworks, and the University of Notre Dame discussed the Kubernetes native approach of continuously reconciling your Helm chart releases in a cluster through a process called GitOps. The key takeaway from all these presentations was that Helm and Operators are complementary: one is focused on templatization and ease of managing artifacts (Helm), while the other is focused on managing the day-2 operations of third-party software, such as relational and non-relational databases, running on a Kubernetes cluster (Operators).
4. Helm chart management issues
When it comes to large enterprise applications, a single Helm chart is not enough; you will need several charts. GitLab’s presentation was an eye-opener in this regard: they maintain many charts, and the average line count of a chart is quite large (several thousand lines). Managing all these charts is a problem in itself, and two interesting presentations addressed different aspects of it. At one end was a presentation from the IBM team about their internal tool that simplifies searching for Helm charts using different criteria. Their focus seemed to be on solving the problem of DevOps engineers who select and install charts in their clusters. At the other end was a presentation from the Replicated team, who are trying to solve the problem of managing customizations to Helm charts without creating copies or forks. Their approach separates out the base Helm chart and combines it with the patch-files approach from kustomize to create custom Helm charts. We expect to see significant activity in this space as different providers focus on different aspects of Helm chart management. For instance, our focus at CloudARK is specifically on Operator Helm charts that are annotated with platform-as-code annotations to enable discovery and ease of use of Custom Resources.
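As a rough sketch of that base-plus-patches idea (all file and directory names here are hypothetical), the upstream chart could be rendered with `helm template` into a base manifest, and a kustomize overlay could then layer local changes on top without forking the chart:

```yaml
# overlay/kustomization.yaml -- hypothetical layout.
# base/all.yaml is assumed to be the rendered output of the
# upstream chart, e.g. `helm template <chart> > base/all.yaml`.
resources:
  - ../base/all.yaml
patchesStrategicMerge:
  # Local customization kept as a small patch file instead of a fork.
  - increase-replicas.yaml
```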
5. It is a welcoming community
Helm maintainers and key members of the community were very welcoming. They were approachable and open to all kinds of discussions and questions, such as the Helm graduation timeline, thoughts about Helm and kustomize, and co-locating the event with KubeCon. They also talked about the process of contributing to Helm, which didn’t seem too complicated. The Helm project has not yet adopted the KEP (Kubernetes Enhancement Proposal) process, though this might change after the project graduates from incubating status.
CloudARK at Helm Summit
Our presentation at the summit focused on guidelines and best practices for creating Helm charts for Operators. At CloudARK, our iPaaS offering enables DevOps teams to assemble their custom Kubernetes platform stacks without any tie-in to a Kubernetes provider or proprietary interfaces. Required CRDs/Operators are packaged as Helm charts with a special focus on interoperability between Custom Resources from different Operators. Our presentation was based on lessons learned from developing several Operators ourselves and from analyzing more than a hundred community Operators, toward the goal of delivering an enterprise-ready Kubernetes native custom platform layer.
The conference venue was scenic, overlooking one of Amsterdam’s numerous waterways.
The Helm project is on the verge of becoming a top-level CNCF project. It has matured over the last several years and has a strong community and wide adoption. If you are not using it yet, give it a try: it provides one of the simplest ways to templatize and manage your Kubernetes artifacts. If you are already using it, Helm 3.0 should address many of your concerns around security and provide explicit support for Kubernetes extensibility through CRDs.