Making Kubernetes Developer-Friendly

Francis Lee
AI+ Enterprise Engineering
Mar 23, 2021

I’ve been given different titles for my work in the Cloud era (Cloud Consultant, Cloud Architect, Cloud Advisor, etc.), but fundamentally I am a software engineer in the Cloud-era domain.

“I code. Period. That’s my core and essence. I solve business problems through programming.”

And Cloud is supposed to be about software. Virtualization, containers, etc. are all about software mimicking hardware and networking. Everything is an API — that’s the Cloud mantra. Kubernetes has made it easy to manage the container lifecycle, and OpenShift orchestrates deployment consistently across cloud platforms, whether on Public Cloud Service Providers (CSPs) or on-premise private clouds. IBM/Red Hat have been engaging with enterprises to deploy OpenShift and Kubernetes, bringing improved container management, resiliency and portability.

“So why is it still so difficult to code for the Cloud?”

I am still spending a lot of effort on scaffolding and plumbing for the infrastructure instead of developing code to solve the business problem. Yes, I acknowledge that Kubernetes and OpenShift do eliminate a lot of the foundational work for me, but I still have to spend time configuring and managing Kubernetes for infrastructure deployment.

“As a software engineer, I should not need to manage infrastructure. That’s the Cloud Service Provider’s job. I should spend my effort focusing on business challenges, not infrastructure operations like scalability, resiliency and keeping things running. I am not a Sys-Admin.”

Eventually, the Cloud Service Providers (CSPs) heard us. Serverless computing is the answer. AWS Lambda, Azure Functions, Google Cloud Functions and IBM Cloud Functions were born for this situation. You just add your business-logic code; the CSP takes care of the rest. No need to worry about infrastructure. This is Functions-as-a-Service (FaaS). Serverless and FaaS give the developer new tooling when designing solutions.
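
To make “just add code” concrete, here is a minimal sketch of what such a business-logic snippet looks like on AWS Lambda’s Python runtime; the greeting logic and event shape are purely illustrative, only the handler signature is Lambda’s convention.

```python
# lambda_function.py: a minimal AWS Lambda handler (Python runtime).
# The platform invokes this entry point and supplies `event` and `context`;
# there is no server, process manager or scaling logic to write here.
import json


def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {name}!"}),
    }
```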

“Sounds great … But is there a catch?”

I wouldn’t call it a catch. Simply put, the CSPs package the relevant infrastructure and software-application runtimes so that you can ‘simply add code’. What this means is that you need to develop within the resource definitions (or limitations, depending on your point of view) that the CSP has allocated for its FaaS. It could be CPU, memory or even the time in which your code/function must finish its work. For example, AWS Lambda gives your code 15 minutes before it times out; IBM Cloud Functions has a time-out of 10 minutes. Also, each CSP’s version of FaaS has its own programming model, so the code you develop isn’t exactly portable, and you need to adhere to the FaaS’s specific supported runtimes. Different CSPs support different runtimes.
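
The same greeting written for IBM Cloud Functions (Apache OpenWhisk) already follows a different contract: the entry point is main, it receives a single dict of parameters, and it returns a plain dict instead of an HTTP-style envelope. A rough sketch:

```python
# __main__.py: the same logic as an IBM Cloud Functions / Apache OpenWhisk Python action.
# OpenWhisk calls main() with one dict of parameters and expects a
# JSON-serializable dict in return; the platform wraps the HTTP response itself.
def main(params):
    name = params.get("name", "world")
    return {"greeting": f"Hello, {name}!"}
```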

“You mean my code execution needs to complete within some time limit, and I can’t run the same serverless code on different CSPs? So I will be locked into a CSP?”

Essentially you are developing snippets of code functionality to be used on a CSP’s constructed FaaS platform, using its supported runtimes. You have no choice but to use the CSP’s programming model, which isn’t portable to other CSPs without some code modification and re-modelling. For example, you will need to change your IBM Cloud Functions code (based on the open-source Apache OpenWhisk) before it can be used on the proprietary AWS Lambda. Fundamentally, if you use AWS Lambda or Azure Functions, you are using a proprietary platform and your serverless code isn’t easily transportable.
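
Even when the business logic itself is portable, such a port needs at least a thin adapter around it. A hypothetical sketch, assuming the OpenWhisk-style main from above lives in a module called action; the event-to-parameters mapping is illustrative, and triggers, packaging and permissions still have to be re-modelled separately.

```python
# lambda_adapter.py: hypothetical shim to reuse an OpenWhisk-style action on AWS Lambda.
import json

from action import main  # the OpenWhisk-style entry point sketched earlier (assumed module name)


def lambda_handler(event, context):
    # Lambda delivers an event envelope; the OpenWhisk action expects a flat params dict.
    params = event.get("queryStringParameters") or event
    return {"statusCode": 200, "body": json.dumps(main(params))}
```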

“That’s not really cool. Why can’t there be a standardized approach to running serverless, regardless of the CSP or even on-premise?”

Well, you’re in luck. There is another avenue. Google, IBM, Red Hat, Pivotal and SAP got together to collaborate on an open-source serverless technology based on Kubernetes: Knative. It is a serverless-container approach with a programming model (Serving/Eventing) based on CloudEvents.io that is cross-platform and portable to any CSP, as long as they run Kubernetes. It comes packaged with Istio as the service mesh, so the code in your containers can communicate with each other.
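
To give a feel for that programming model, here is a minimal sketch of a service that consumes CloudEvents the way a Knative Eventing sink receives them over HTTP. It assumes the cloudevents Python SDK and Flask are installed; the route and the logging are illustrative.

```python
# consumer.py: a minimal sketch of an HTTP endpoint that accepts CloudEvents,
# the delivery format Knative Eventing uses when it invokes a sink.
from cloudevents.http import from_http
from flask import Flask, request

app = Flask(__name__)


@app.route("/", methods=["POST"])
def receive_event():
    # Parse the incoming request (headers + body) into a CloudEvent object.
    event = from_http(request.headers, request.get_data())
    print(f"Received {event['type']} from {event['source']}: {event.data}")
    return "", 204


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```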

“So as long as the CSP has Kubernetes, Knative will give me the serverless construct? What about the time limitations that were mentioned earlier?”

As Knative isn’t exactly a managed FaaS but rather sits on top of Kubernetes as serverless containers, you won’t be constrained by CSP resource limitations such as execution-time duration. You are also not bound to specific runtimes as you are with FaaS. By leveraging OpenShift and Knative, developers can now deploy serverless code consistently to any cloud platform, whether it is hosted by a CSP or on-premise, just as OpenShift does for container deployments in a Hybrid-Cloud approach. In fact, OpenShift Serverless is Knative on OCP (OpenShift Container Platform). With OpenShift Serverless, enterprise developers have a consistent, standardized abstraction and programming model (CloudEvents.io) for deploying their enterprise logic, while their counterparts in systems administration and operations look after OCP, whether it runs on on-premise resources or on a CSP’s services. To the development team, this is the utopia of application portability across all hybrid/multi-cloud environments.
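
What the developer actually ships is an ordinary container image that serves HTTP on the port Knative injects. A minimal sketch using only the Python standard library; the PORT convention is Knative’s, the rest is illustrative.

```python
# app.py: a minimal HTTP service suitable for packaging as a Knative /
# OpenShift Serverless container. Knative tells the container which port to
# listen on via the PORT environment variable; scale-to-zero, routing and
# revisions are handled by the platform, not by this code.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from a serverless container\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), Handler).serve_forever()
```

Built into an image, the same service can then be deployed unchanged to any Knative-enabled cluster, on-premise or at a CSP.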

If you prefer the abstraction of FaaS over a serverless container, you can deploy Apache OpenWhisk to run on top of Knative.

With the cross-cloud-platform capabilities of OpenShift/Knative, developers can now better design their solutions to use containers, serverless containers or serverless functions, without limiting their designs to a proprietary platform and without the artificial resource limits that CSPs impose.

IBM’s envisioning of how OpenWhisk and Knative work together

Depending on your CSP, Knative may be offered differently. IBM Cloud offers Knative as a managed service called IBM Cloud Code Engine. Google offers a managed service built on Knative called Cloud Run. Knative on AWS and Azure at the moment requires some extra steps, such as installing Knative on top of AWS EKS or Azure AKS. The open-source Knative documentation provides steps for installing Knative onto Kubernetes if your CSP does not already offer it as a managed service.

Summary:

Enterprise developers code, and they focus on the software architecture, the algorithms and the data structures required to solve business problems. Operating and managing Kubernetes is hard and not for mere mortals. OpenShift/Knative provides a cross-platform (Hybrid-Cloud-wise) serverless-container mechanism with a consistent programming model for developers, enabling them to concentrate on coding and solving business problems, which is what matters to them. OpenShift/Knative flattens the infrastructure differences and the container-management complexity.
