Comparing Kubernetes to Pivotal Cloud Foundry — A Developer’s Perspective

Oded Shopen
Dec 29, 2017 · 21 min read

Author’s disclosure: I’m currently a Sr. Platform Architect at Pivotal; however, I wrote this article almost a full year before joining the company. It is based on my own independent experience working with the platform as a vendor for a third party.


I’ve been developing on Pivotal Cloud Foundry for several years. Working with the Spring Boot stack, I was able to create CI/CD pipelines very easily, and deployments were a breeze. I found it to be a truly agile platform (is that still a word in 2018?).

On the other hand, Kubernetes is gaining serious traction, so I’ve spent a few weeks trying to get a grasp on the platform. I’m by no means as experienced with Kubernetes as I am with Cloud Foundry, so my observations are strictly those of a novice to the platform.

In this article I will try to explain the differences between the two platforms as I see them from a developer’s perspective. There will not be a lot of internal under-the-hood architecture diagrams here; this is purely a user experience point of view, specifically for a Java developer who is new to either platform.

Such a comparison is also inherently subjective, so although I will try to provide an objective comparison for each aspect, I will also give my personal thoughts on which approach works best for me. These may or may not be conclusions you would agree with.


Overview

Cloud Foundry

Cloud Foundry is first and foremost an application runtime: you give the platform your application code or build artifact, and it takes care of containerizing and running it for you. Pivotal Cloud Foundry (PCF) is Pivotal’s commercial distribution of the open-source project.

Kubernetes

Kubernetes is first and foremost a container runtime. Although not limited to Docker, it is mostly used to run Docker containers. There are several solutions that offer a PaaS experience on top of Kubernetes, such as Red Hat OpenShift.

Similarities

  • Both Cloud Foundry and Kubernetes are designed to let you run either on public cloud infrastructure (AWS, Azure, GCP etc.) or on-prem using solutions such as VMware vSphere.
  • Both solutions offer the ability to run in hybrid/multi-cloud environments, allowing you to spread your availability among different cloud providers and even split the workload between on-prem and public cloud.
  • Starting with the latest release of Pivotal Cloud Foundry, both solutions support Kubernetes as a generic container runtime. More on that below.

PaaS vs IaaS+

Cloud Foundry

As a developer, the biggest differentiator for me is how Cloud Foundry takes a very Spring-like, opinionated approach to development, deployments and management.

If you have ever used Spring Boot, you know that one of its strengths is the ability to auto-configure itself just by looking at your Maven/Gradle dependencies. For example, if you have a dependency on the MySQL JDBC driver, Spring Data will auto-configure itself to use MySQL. If no driver is provided, it will fall back to an H2 in-memory database.

As we’ll see in this article, PCF seems to take a similar approach for application deployment and service binding.

Kubernetes

Kubernetes, by contrast, sits closer to the infrastructure (the “IaaS+” in the title above): it gives you generic, powerful building blocks and expects you to be explicit about how they are assembled, as we’ll see throughout this article.

Supported Containers

Kubernetes

Anyone who has had a chance to write a Dockerfile knows it can either be a trivial task of writing a few lines of descriptor code, or it can get complicated rather quickly. Here’s an example I pulled off GitHub, and it’s a fairly simple one:

Running Node.js and MongoDB in a Docker container
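The sample itself appears as an image in the original post. Purely as an illustration (the base image, package choices and file names such as server.js are my own assumptions, not the original sample), such a Dockerfile might look like this:

# Illustrative sketch: a Node.js app and MongoDB packed into a single container
FROM ubuntu:16.04

# Install Node.js (from NodeSource) and MongoDB (from the Ubuntu repositories)
RUN apt-get update && \
    apt-get install -y curl gnupg && \
    curl -sL https://deb.nodesource.com/setup_8.x | bash - && \
    apt-get install -y nodejs mongodb && \
    rm -rf /var/lib/apt/lists/*

# MongoDB expects its default data directory to exist
RUN mkdir -p /data/db

# Copy the application and install its dependencies
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .

# The Node.js app is assumed to listen on port 3000
EXPOSE 3000

# Start MongoDB in the background, then run the Node.js application
CMD mongod --fork --logpath /var/log/mongod.log && node server.js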

This example should not seem intimidating to the average developer, but it does immediately show you there is a learning curve here. Since Docker is a generic container solution, it can run almost anything. It is your job as the developer to define how the operating system inside the container will execute your code. It is very powerful, but with great power comes great responsibility.

Cloud Foundry

Cloud Foundry itself actually predates Kubernetes: the first release was in 2011, while Kubernetes has been available since 2014.

More important than which container runtime is used is how you create a container.

Let’s take the example of a developer that needs to host a Spring Boot Java application.

With Docker, you have to define a Dockerfile to support running a Java-based application. You can define this container in many different ways: you can choose different base operating systems, different JDK versions from different providers, you can expose different ports and make different security assumptions about how your container will be accessible. There is no standard for what a Java-based Spring Boot application container looks like.

In Cloud Foundry, you have one baseline buildpack for all Java-based applications, and this buildpack is provided by the vendor. A buildpack is a template for creating an application runtime in a given language. Buildpacks are managed by Cloud Foundry itself.

Cloud Foundry takes the guesswork that is part of defining a container out of the hands of the developer. It defines a “standard” for what a Java-based container should look like, and all developers, DevOps teams and IT professionals can align on this template. You can rest assured that your container will run just like the containers provided by other developers, whether in your existing cluster or if you move to a public cloud solution tomorrow.

Of course, sometimes the baseline is not enough. For example, you might want to add your own self-signed SSL certificates to the buildpack. You can extend the base buildpack in these scenarios, while still using a shared default as the baseline.

Continuing with the opinionated approach, Cloud Foundry can identify which buildpack to use automatically, based on the contents of the provided build artifact. This artifact might be a jar file, a PHP folder, a Node.js folder, a .NET executable and so on. Once identified, Cloud Foundry will create the Garden container for you.


All this means that with PCF, your build artifact is your native deployment artifact, while in Kubernetes your deployment artifact is a Docker image. With Kubernetes, you need to define the template for that Docker image yourself in a Dockerfile, while in PCF you get the template automatically from a buildpack.

Management Console

Cloud Foundry

PCF provides two separate web consoles, each aimed at a different audience:

  • Ops Manager is targeted at the IT professional responsible for setting up the virtual machines or hardware that will be used to create the PCF cluster.
  • Apps Manager is targeted at the developer responsible for pushing application code to testing or production environments. The developer is completely unaware of the underlying infrastructure that runs the PCF cluster. All they can really see are the quotas assigned to their organization, such as memory limits.

Kubernetes

The web UI for Kubernetes is the Kubernetes Dashboard:

(Screenshot: the Kubernetes Dashboard, with its navigation menu on the left-hand side.)

As you can see from the left-hand side, there is a lot of data to process here. You have access to persistent volumes, daemons, role definitions, replication controllers and so on. It’s hard to tell which of these are the developer’s needs and which are the IT needs. Some might tell you this is the same person in a DevOps culture, and that’s a fair point. Still, in reality it is a more confusing paradigm compared to a simple application manager.

Command Line Interface

Cloud Foundry

The command line interface for Cloud Foundry is the cf cli. For example, if you are in a folder that contains a Spring Boot jar file called myapp.jar, you can deploy the application to PCF with the following command:

cf push myapp -p myapp.jar

That’s it! That’s all you need. PCF will look at the current working directory and find the jar executable. It will then upload the bits to the platform, where the Java buildpack will create a container, calculate the required memory settings, deploy it to the currently logged-in org and space in PCF, and set a route based on the application name:

wabelhlp0655019:test odedia$ cf push myapp -p myapp.jar
Updating app myapp in org OdedShopen / space production as user…
OK
Uploading myapp…
Uploading app files from: /var/folders/_9/wrmt9t3915lczl7rf5spppl597l2l9/T/unzipped-app271943002
Uploading 977.4K, 148 files
Done uploading
OK
Starting app myapp in org OdedShopen / space production as user…
Downloading pcc_php_buildpack…
Downloading binary_buildpack…
Downloading python_buildpack…
Downloading staticfile_buildpack…
Downloading java_buildpack…
Downloaded binary_buildpack (61.6K)
Downloading ruby_buildpack…
Downloaded ruby_buildpack
Downloading nodejs_buildpack…
Downloaded pcc_php_buildpack (951.7K)
Downloading go_buildpack…
Downloaded staticfile_buildpack (7.7M)
Downloading ibm-websphere-liberty-buildpack…
Downloaded nodejs_buildpack (111.6M)
Downloaded ibm-websphere-liberty-buildpack (178.4M)
Downloaded java_buildpack (224.8M)
Downloading php_buildpack…
Downloading dotnet_core_buildpack…
Downloaded python_buildpack (341.6M)
Downloaded go_buildpack (415.1M)
Downloaded php_buildpack (341.7M)
Downloaded dotnet_core_buildpack (919.8M)
Creating container
Successfully created container
Downloading app package…
Downloaded app package (40.7M)
Staging…
-----> Java Buildpack Version: v3.18 | https://github.com/cloudfoundry/java-buildpack.git#841ecb2
-----> Downloading Open Jdk JRE 1.8.0_131 from https://java-buildpack.cloudfoundry.org/openjdk/trusty/x86_64/openjdk-1.8.0_131.tar.gz (found in cache)
Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (1.1s)
-----> Downloading Open JDK Like Memory Calculator 2.0.2_RELEASE from https://java-buildpack.cloudfoundry.org/memory-calculator/trusty/x86_64/memory-calculator-2.0.2_RELEASE.tar.gz (found in cache)
Memory Settings: -Xmx681574K -XX:MaxMetaspaceSize=104857K -Xss349K -Xms681574K -XX:MetaspaceSize=104857K
-----> Downloading Container Security Provider 1.5.0_RELEASE from https://java-buildpack.cloudfoundry.org/container-security-provider/container-security-provider-1.5.0_RELEASE.jar (found in cache)
-----> Downloading Spring Auto Reconfiguration 1.11.0_RELEASE from https://java-buildpack.cloudfoundry.org/auto-reconfiguration/auto-reconfiguration-1.11.0_RELEASE.jar (found in cache)
Exit status 0
Uploading droplet, build artifacts cache…
Uploading build artifacts cache…
Uploading droplet…
Staging complete
Uploaded build artifacts cache (109B)
Uploaded droplet (86.2M)
Uploading complete
Destroying container
Successfully destroyed container
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
1 of 1 instances running

Although you can start with barely any intervention, that doesn’t mean you give up control. There are plenty of customizations available in PCF: you can define your own routes, set the number of instances, maximum memory and disk space, environment variables and so on. All of this can be done via the cf cli or by passing a manifest.yml file as a parameter to the cf push command. A typical manifest.yml file can be as simple as the following:

applications:
- name: my-app
  memory: 512M
  instances: 2
  env:
    PARAM1: PARAM1VALUE
    PARAM2: PARAM2VALUE
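With such a manifest sitting next to your artifact, the push itself stays a one-liner. As a sketch, cf picks up a file named manifest.yml from the current directory by default, or you can point it at one explicitly:

# Push using the manifest in the current directory, or name one explicitly with -f
cf push
cf push -f manifest.yml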

The main takeaway is this: with PCF, you provide the information you know, and the platform infers the rest. Cloud Foundry’s haiku is:

Here’s my code

Run it on the cloud for me.

I don’t care how.

Kubernetes

For starters, a basic assumption is that you have a private Docker registry available and configured (unless you only plan to deploy images available on public registries such as Docker Hub). Once you have that registry up and running, you need to push your Docker image to it.
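As a sketch (the registry host and image tag here are hypothetical), getting an image into that registry typically looks like this:

# Build the image locally, tag it for the private registry and push it there
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0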

Now that the registry contains your image, you can issue kubectl commands to deploy it. The Kubernetes documentation gives the example of starting up an nginx server:

# start the pod running nginx
$ kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"
deployment "nginx-app" created

The above command only spins up a Kubernetes pod (wrapped in a deployment) and runs the container; nothing is exposed yet.

A pod is an abstraction that groups one or more containers under the same network IP and storage. It is the smallest deployable unit in Kubernetes. You can’t access a Docker container directly; you only access its pod. Usually, a pod contains a single Docker container, but you can run more. For example, an application container might be accompanied by a monitoring daemon container in the same pod.
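As a sketch (the image names are hypothetical), a pod descriptor with an application container and a monitoring sidecar might look like this:

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  # The main application container
  - name: app
    image: registry.example.com/myapp:1.0
    ports:
    - containerPort: 8080
  # A monitoring sidecar that shares the pod's network and lifecycle
  - name: monitoring-agent
    image: registry.example.com/monitoring-agent:1.0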

In order to make the container accessible to other pods in the Kubernetes cluster, you need to wrap the pod with a service:

# expose a port through a service
$ kubectl expose deployment nginx-app --port=80 --name=nginx-http
service "nginx-http" exposed

Your container is now accessible inside the kubernetes cluster, but it is still not exposed to the outside world. For that, you need to wrap your service with an ingress.

Note: Ingress is still considered a beta feature!

I could not find a simple command to expose an ingress at this point (please correct me if I’m wrong!). It appears that you must create an ingress descriptor file first, for example:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80

Once that file is available, you can create the ingress by issuing a command:

kubectl create -f my-ingress.yaml

Note that unlike the single manifest.yml in PCF, the deployment yml files in Kubernetes are separate: there is one for pod creation, one for service creation and, as you saw above, one for ingress creation. A typical descriptor file is not entirely overwhelming, but I wouldn’t call it the most user-friendly either. For example, here’s a descriptor file for an nginx deployment:

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

All this to say: with Kubernetes, you need to be specific. Don’t expect deployments to be implied. If I had to create a haiku for Kubernetes, it would probably be something like this:

Here’s my code

I’ll tell you exactly how you should run it on the cloud for me

And don’t you dare make any assumptions about the deployment without my written consent!

Zero Downtime Deployments

2019 update: As of Pivotal Cloud Foundry 2.4, native zero-downtime deployments are available out of the box!

Cloud Foundry

  • Starting point: you have myApp in production, and you want to deploy a new version of this app — v2.
  • Deploy v2 under a new application name, for example — myApp-v2
  • The new app will have its own initial route — myApp-v2.mysite.com
  • Perform testing and verification on the new app.
  • Map an additional route to the myApp-v2 application, using the same route as the original application. For example:
cf map-route myApp-v2 mysite.com --hostname myApp
  • Now requests to your application are load balanced between v1 and v2. Based on the number of instances available for each version, you can perform A/B testing. For example, if you have 4 instances of v1 and 1 instance of v2, 20% of your clients will be routed to the new codebase.
  • If you identify issues at any point, simply remove v2. No harm done.
  • Once you are satisfied, scale up the number of available instances of v2, and reduce or completely delete the instances of v1.
  • Remove the myApp-v2.mysite.com route from v2 of your application. You have now fully migrated to the new codebase with zero downtime, including a sanity-testing phase and potentially an A/B testing phase. The corresponding cf commands are sketched below.
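Here is that flow as a sketch of cf commands (the application, host and domain names are illustrative):

# Deploy the new version alongside the old one, under its own temporary route
cf push myApp-v2 -p myApp-v2.jar -n myApp-v2 -d mysite.com

# After verification, add the production route to the new version
cf map-route myApp-v2 mysite.com --hostname myApp

# Shift traffic: scale v2 up, then remove v1 once you are satisfied
cf scale myApp-v2 -i 4
cf delete myApp -f

# Finally, remove the temporary route from v2
cf unmap-route myApp-v2 mysite.com --hostname myApp-v2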

Note: The cf cli supports plugin extensions, some of which provide automated blue-green deployments, such as blue-green-deploy, autopilot and zdd. I personally found blue-green-deploy to be very easy and intuitive, especially thanks to its support for running automated smoke tests as part of the deployment.
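For reference, a sketch of the plugin flow (the plugin comes from the community repository, and its flags may differ between versions):

# Install the community blue-green-deploy plugin and run an automated deployment with a smoke test
cf install-plugin blue-green-deploy -r CF-Community
cf blue-green-deploy myapp --smoke-test ./smoke-test.sh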

Kubernetes

Kubernetes handles zero-downtime deployments natively through rolling updates. For example:

kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jon/kubernetes-bootcamp:v2

The command above tells Kubernetes to perform a rolling update of all pods of the kubernetes-bootcamp deployment, from the current image to the new v2 image. During the rollout, your application remains available.

Even more impressive: you can always revert to the previous version by issuing the undo command:

kubectl rollout undo deployments/kubernetes-bootcamp
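A couple of related commands are handy while a rollout is in flight; a sketch using the same deployment name:

# Watch the rollout progress and inspect the revision history
kubectl rollout status deployments/kubernetes-bootcamp
kubectl rollout history deployments/kubernetes-bootcamp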

External Load Balancing

If we take an external view of the levels of abstraction needed to reach your application, we can describe them as follows:

Kubernetes

ingress → service → pod → container

Cloud Foundry

route → container

Internal Load Balancing (Service Discovery)

Cloud Foundry

  • Route-based load balancing, in the traditional server-side configuration. This is similar to external load balancing mentioned above, however you can specify certain domains to only be accessible from within PCF, thereby making them internal.
  • Client side load balancing by using Spring Cloud Services. This set of services offers features from the Spring Cloud frameworks that are based on Netflix OSS. For service discovery, Spring Cloud Services uses Netflix Eureka.

Eureka runs as its own service in the PCF environment. Other applications register themselves with Eureka, thereby publishing themselves to the rest of the cluster. Eureka server maintains a heartbeat health check of all registered clients to keep an up-to-date list of healthy instances.

Registered clients can connect to Eureka and ask for the available endpoints of a service id (the value of spring.application.name in the case of Spring Boot applications). Eureka returns a list of all available endpoints, and that’s it: it is up to the client to actually call one of them. That is usually done with frameworks like Ribbon or Feign for client-side load balancing, but that is an implementation detail of the application, not of PCF itself.
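As a sketch, the client side of this is mostly configuration (the values here are hypothetical; when you bind your app to PCF’s Service Registry, the Eureka connection details are injected for you):

# application.yml of a Spring Boot service registering with Eureka
spring:
  application:
    name: my-service            # the service id other clients will look up
eureka:
  client:
    serviceUrl:
      defaultZone: https://eureka.example.com/eureka/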

Client-side load balancing can theoretically scale better since each client keeps a cache of all available endpoints, and can continue working even if Eureka is temporarily down.

If your application already uses Spring Cloud and the Netflix OSS stack, PCF fits your needs like a glove.

Kubernetes

The major benefit of Kubernetes’ load balancing is that it does not require any special client libraries. Eureka is mostly targeted at Java-based applications (although solutions exist for other languages, such as Steeltoe for .NET). With Kubernetes, you can make load-balanced HTTP calls to any Kubernetes service that exposes pods, regardless of how the client or the server is implemented. The load-balancing domain name is simply the name of the service that exposes the pods. For example:

  • You have an application called my-app in namespace zone1
  • It exposes a GET /myApi REST endpoint
  • There are 10 pods of this container in the cluster
  • You created a service called my-service that exposes this application to the cluster
  • From any other pod inside the namespace, you can call:
GET https://my-service/myApi
  • From any other pod in any other namespace in the cluster, you can call:
GET https://my-service.zone1/myApi

And the API would load-balance over the available instances. It doesn’t matter if your client is written in Java, PHP, Ruby, .NET or any other technology.
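As a sketch, the my-service definition behind this example can be as small as the following (the labels and ports are my own assumptions):

apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: zone1
spec:
  selector:
    app: my-app          # matches the pods running the my-app container
  ports:
  - port: 443            # the port clients call inside the cluster
    targetPort: 8443     # the port the container actually listens on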

2018 update: Pivotal Cloud Foundry now supports polyglot, platform-managed service discovery similar to Kubernetes, using Envoy proxy, apps.internal domain and BOSH DNS.

Marketplace

Cloud Foundry

PCF offers a marketplace of services that can be provisioned and bound to your applications. Some of the services provided by Pivotal include:

  • Spring Cloud Services, which provides access to Eureka, a Config Server and a Hystrix Dashboard.
  • RabbitMQ message broker.
  • MySQL.

Third-party vendors can implement their own services as well. Some of the vendor offerings include MongoDB Enterprise and Redis Labs for the Redis in-memory database. Here’s a screenshot of the available services on Pivotal Web Services:

(Screenshot: the Pivotal Web Services marketplace.)

IBM Bluemix is another Cloud Foundry provider that offers its own services such as IBM Watson for AI and machine learning applications.

Every service has different plans available based on your SLA needs, such as a small database for development or a highly-available database for a production environment.

Last but not least, you have the option to define user-provided services. These allow you to bind your application to an existing service you already have, such as an Oracle database or an Apache Kafka message broker. A user-provided service is simply a group of key-value pairs that you can inject into your application as environment variables. This offloads environment-specific configuration such as URLs, usernames and passwords to the environment itself, since services are bound to a given PCF space.
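As a sketch (the credentials here are placeholders), creating and binding a user-provided service looks like this:

# Create a user-provided service holding the connection details, bind it and restage the app
cf create-user-provided-service my-oracle-db -p '{"url":"jdbc:oracle:thin:@db.example.com:1521/ORCL","username":"app_user","password":"secret"}'
cf bind-service myapp my-oracle-db
cf restage myapp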

Kubernetes

Note that since Kubernetes can run any Docker container, Docker Hub can be considered a Kubernetes marketplace of sorts: you can basically run anything that can run in a container.

Kubernetes does have a concept similar to user-provided services. Any configuration or environment variables can live in ConfigMaps, which allow you to externalize configuration artifacts away from your container, making it more portable.
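As a sketch (the names and values are hypothetical), a ConfigMap is just another YAML resource:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  PARAM1: PARAM1VALUE
  PARAM2: PARAM2VALUE

A deployment can then pull the whole map in as environment variables via an envFrom/configMapRef entry on the container, keeping the image itself free of environment-specific values.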

Configuration

In Kubernetes, you could still run a config server as a container, but that would probably add unneeded operational overhead since you already have built-in support for ConfigMaps. Use the native solution of whichever platform you go with.

Storage Volumes

In PCF, your application is fully stateless. PCF follows the twelve-factor app model, and one of those factors is that your application keeps no local state. You should theoretically be able to take the same application that runs today in your on-prem data center, move it to AWS and, provided there is adequate connectivity, it should just work. Any storage should be offloaded either to a PCF service or to a storage solution outside the PCF cluster itself. This may be regarded as an advantage or a disadvantage depending on your application and architecture; for stateless application runtimes such as web servers, it is always a good idea to decouple them from any internal storage facility.

Onboarding

Kubernetes

Just taking a look at the excellent Kubernetes Basics interactive tutorial shows the level of knowledge the platform requires. A basic onboarding takes six steps, each containing quite a few commands and terms you need to understand. Trying to follow the tutorial on a local Minikube VM instead of the pre-configured online cluster is quite difficult.

Cloud Foundry

If we take a look at PCF’s “Getting Started with Pivotal Cloud Foundry” guide, it’s almost comical how little is required to get something up and running. When you need more complex interaction, it’s all available to you, either in the cf cli, as part of a manifest.yml or in the web console, but none of it prevents you from getting started quickly.

If you mainly develop server-based applications in Java or Node.js, PCF gets you to the cloud more simply, quickly and elegantly.

Vision

Kubernetes

With such a diverse and thriving ecosystem, though, the path forward could go in any of several different directions. It really does feel like a Google product in a way: maybe it will remain supported by Google for years, maybe it will change with barely any backwards compatibility, or maybe they’ll kill it and move on to the next big thing (does anyone remember Google Buzz, Google Wave or Google Reader?). Any AngularJS developer who has tried to move to Angular 5 can tell you that backwards compatibility is not a top priority.

Cloud Foundry

If Kubernetes is Google, then PCF is Apple.

A little more of a walled garden: more controlled, with a better design and experience layer, and a commitment to delivering a great product. I feel the platform is more focused, and focus is critical in my line of work.

PKS

As mentioned above, the latest Pivotal Cloud Foundry release adds the Pivotal Container Service (PKS), bringing a supported Kubernetes container runtime into the platform alongside the Cloud Foundry application runtime.

Conclusion

The main conclusion is that this is definitely not an either/or discussion. Since both the application runtime and the container runtime now live side by side in the same product, the future seems promising for both.

Thank you for reading, and happy coding!

Oded Shopen

Visit my homepage: http://odedia.org

Follow me on twitter: https://twitter.com/odedia

Follow me on LinkedIn: https://www.linkedin.com/in/odedia/

Follow me on 500px: https://500px.com/odedia
