The new managed Kubernetes era

Ángel Barrera Sánchez
Published in K8Spin
4 min read · May 3, 2020

It has been a long time since our last publication. Now we have something really interesting to share with you.

We love oneinfra

Introduction

A few weeks ago we found a project named oneinfra (developed by Rafael Fernandez, a Kubernetes and kubeadm project contributor).

oneinfra aims to democratize managed Kubernetes services like GKE, EKS, AKS, or IKS by making the Kubernetes control plane super easy to operate.

The K8Spin.cloud team saw its value from day zero. As you surely know, the big cloud providers are not the only ones offering this kind of service; smaller ones like DigitalOcean, Civo, Scaleway, and even 1&1 are in the race too.

We decided to implement oneinfra in K8Spin.cloud to give our users the ability to create not only namespaces but an entire Kubernetes control plane.

It took us some time to understand the architecture behind oneinfra, but we managed to create the required infrastructure to make it work (thanks to Rafael for the incredible support).

Finally, we modified our internal software to create a one-click control-plane deploy experience.

The Terraform code we use to manage oneinfra, both for hypervisors and workers, will be released as an OSS module so you can start playing with oneinfra as soon as possible.

Demo it!

Enter K8Spin.cloud and you will find a new button: New ControlPlane.

Updated interface to provide not only namespaces but also control planes

After clicking this new button, you just need to fill the form with the control-plane name and submit it.

Detailed information about the control plane

Once you select your control plane in the left column (Medium-demo), you will find a new page with some useful details about the requested control plane.

You can download the admin kubeconfig to interact with the cluster (the same way you do with your namespaces) or delete the control plane.

After no more than five minutes you will find control plane details like the API endpoint, the token, or the cluster CA certificate.
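Those three values are exactly what a kubeconfig file is made of. As a rough sketch (the endpoint is the one from this demo; the CA data and token are placeholders, not real credentials), the admin kubeconfig you download is structured roughly like this:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: medium-demo
  cluster:
    server: https://34.89.247.120:30001      # API endpoint shown in the UI
    certificate-authority-data: <base64-CA>  # cluster CA certificate (placeholder)
contexts:
- name: medium-demo
  context:
    cluster: medium-demo
    user: admin
current-context: medium-demo
users:
- name: admin
  user:
    token: <token>                           # token shown in the UI (placeholder)
```

This is the standard kubeconfig layout, so you can also rebuild the file by hand from the details page if you ever lose the download.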

Control plane ready to use

You can access the API with the downloaded kubeconfig:

[angel@elitebook example]$ export KUBECONFIG=medium-demo.config 
[angel@elitebook example]$ kubectl cluster-info
Kubernetes master is running at https://34.89.247.120:30001
CoreDNS is running at https://34.89.247.120:30001/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[angel@elitebook example]$ kubectl get nodes
No resources found.

At this moment you have a ready-to-use Kubernetes control plane.

Workers

What makes oneinfra different from other managed Kubernetes services like GKE, EKS, AKS… is that it lets you join your own instances to the cluster, no matter where they are.

Our team is building an OSS Terraform module to easily create workers that auto-join a target cluster:

main.tf example (see the module repository at github.com/k8spin)

The module used in this project is currently available at github.com/k8spin.
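A minimal main.tf using such a module might look like the following. This is an illustrative sketch only: the module source path and variable names are assumptions based on the GCP resources visible in the apply output below, not the module's actual interface.

```hcl
# Hypothetical usage sketch; the source path and every variable name
# here are assumptions, not the real module interface.
module "k8spin_oneinfra_workers" {
  source = "github.com/k8spin/<worker-module>" # placeholder path

  project      = "k8spin-demos"   # GCP project (as seen in the demo output)
  zone         = "europe-west3-a" # GCP zone (as seen in the demo output)
  worker_count = 1                # number of oneinfra worker instances
}

output "worker_ips" {
  value = module.k8spin_oneinfra_workers.worker_ips
}
```

Based on the apply output, the module creates the worker instances plus the firewall rules (SSH and CNI traffic) they need to join the control plane.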

Apply the terraform project:

[angel@elitebook example]$ terraform apply --auto-approve
module.k8spin_oneinfra_workers.google_compute_firewall.oneinfra_worker_ssh: Creating...
google_compute_firewall.k8spin_oneinfra_cni: Creating...
module.k8spin_oneinfra_workers.google_compute_instance.oneinfra_worker[0]: Creating...
module.k8spin_oneinfra_workers.google_compute_firewall.oneinfra_worker_ssh: Still creating... [10s elapsed]
google_compute_firewall.k8spin_oneinfra_cni: Still creating... [10s elapsed]
module.k8spin_oneinfra_workers.google_compute_instance.oneinfra_worker[0]: Still creating... [10s elapsed]
google_compute_firewall.k8spin_oneinfra_cni: Creation complete after 12s [id=projects/k8spin-demos/global/firewalls/k8spin-oneinfra-cni]
module.k8spin_oneinfra_workers.google_compute_firewall.oneinfra_worker_ssh: Creation complete after 12s [id=projects/k8spin-demos/global/firewalls/oneinfra-worker-ssh]
module.k8spin_oneinfra_workers.google_compute_instance.oneinfra_worker[0]: Creation complete after 13s [id=projects/k8spin-demos/zones/europe-west3-a/instances/oneinfra-worker-0]
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Outputs:

worker_ips = [
  "34.89.207.149",
]

Wait a couple of minutes for the new node to join the cluster:

[angel@elitebook example]$ export KUBECONFIG=medium-demo.config
[angel@elitebook example]$ kubectl get nodes

NAME                                                       STATUS     ROLES    AGE   VERSION
oneinfra-worker-0.europe-west3-a.c.k8spin-demos.internal   NotReady   <none>   85s   v1.18.2

Now, you can deploy your preferred CNI (flannel is used in this example):

[angel@elitebook example]$ export KUBECONFIG=medium-demo.config
[angel@elitebook example]$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
[angel@elitebook example]$ kubectl get pods -A
NAMESPACE     NAME                          READY   STATUS    RESTARTS   AGE
kube-system   coredns-99c6f9775-vndmz       1/1     Running   0          58m
kube-system   kube-flannel-ds-amd64-dd7rt   1/1     Running   0          49s
kube-system   kube-proxy-6jrq2              1/1     Running   0          4m27s
[angel@elitebook example]$ kubectl get nodes
NAME                                                       STATUS   ROLES    AGE    VERSION
oneinfra-worker-0.europe-west3-a.c.k8spin-demos.internal   Ready    <none>   5m4s   v1.18.2

Now you have a one-node cluster running against K8Spin.cloud and your Google Cloud account.

Status

We consider this integration to be in beta, meaning that:

  • All control planes are free (free tier included), limited to one control plane running simultaneously per account.
  • All control planes may be destroyed without notice after no more than three days.

Be careful

Having a Kubernetes control plane (from K8Spin.cloud or in your own infrastructure) is just the beginning of the adventure.

If you plan to run different kinds of workers across multiple locations or cloud providers, you have to solve the networking and storage problems yourself.

Final notes

We built this integration to explore new possibilities at K8Spin.cloud, so all feedback is welcome.

We are working very closely with Rafael, providing him feedback on his project.

If you find any issues or don't know how to test oneinfra at K8Spin.cloud, don't hesitate to ping us in the #k8spin channel of the Kubernetes Slack.
