Running Spark Jobs with Kubernetes on DigitalOcean High CPU Droplets
With the release of high CPU Droplets on DigitalOcean, running data workflows (streaming, processing, and so on) can be considerably more efficient. One approach to deploying these data tools is as a containerized service, and tools like Apache Spark can be highly automated, with resources scheduled by Kubernetes.
Deploying Kubernetes on DigitalOcean
There are a few fully automated options here with excellent control planes; if you're looking for a managed solution, my favorites are Platform9 Managed Kubernetes, Containership, and Stackpoint. However, these require a subscription, which may not be ideal for a testing/development environment. If you already use a method to deploy a cluster, you can reuse it, substituting the node size for one of the now-available high CPU sizes for this use case. In the case of my example, I've written a Terraform deployment, available on GitHub, that automates a standard kubeadm cluster on DigitalOcean.
Kris Nova has also added a DigitalOcean profile to the excellent kubicorn tool, which will deploy Kubernetes on DigitalOcean with an encrypted VPN service mesh; she has a walkthrough (covering Kubernetes v1.7.3 with private networking on DigitalOcean) at www.nivenly.com.
Spark on Kubernetes
This segment comes from an excellent blog post on the subject, and I recommend reading it for the full context, but I'll re-post the highlights here to get up and running with an example Spark job. The goal is to run these Spark jobs on the high CPU worker nodes, and with this approach to running Spark, that's as easily done as said.
With the above Master IP noted, you can grab the Spark package:
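A sketch of that step, assuming the v2.1.0-kubernetes-0.1.0-rc1 release from the apache-spark-on-k8s project (the release tag, URL, and tarball name here are assumptions; check the project's releases page for the current artifacts):

```shell
# Assumed release tag and tarball from the apache-spark-on-k8s project --
# verify against the project's releases page before running.
RELEASE="v2.1.0-kubernetes-0.1.0-rc1"
TARBALL="spark-2.1.0-k8s-0.1.0-rc1-bin-2.7.3.tgz"
wget "https://github.com/apache-spark-on-k8s/spark/releases/download/${RELEASE}/${TARBALL}"
tar -xzf "${TARBALL}"
# The tarball extracts to a directory named after itself, minus the extension
cd "${TARBALL%.tgz}"
```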
and from the extracted package's directory, submitting a job can be done like this (the final argument is the application jar as it exists inside the driver image; adjust the path to match the examples jar shipped with your distribution):

bin/spark-submit \
--deploy-mode cluster \
--class org.apache.spark.examples.SparkPi \
--master k8s://https://YOUR_MASTER_IP:PORT \
--kubernetes-namespace default \
--conf spark.executor.instances=2 \
--conf spark.app.name=spark-pi \
--conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.1.0-rc1 \
--conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.1.0-rc1 \
local:///opt/spark/examples/jars/spark-examples.jar
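Once the job is submitted, the driver and executors run as pods in the target namespace, so you can follow them with kubectl (the driver pod name below is hypothetical; find the real one, derived from spark.app.name, in the pod listing):

```shell
# Watch the driver and executor pods come up in the target namespace
kubectl get pods -n default --watch

# Tail the driver's logs to see the job output (pod name is illustrative)
kubectl logs -f spark-pi-driver -n default
```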
Making the most of Kubernetes
To schedule your Spark jobs on the high CPU worker nodes specifically (if, for example, you run an environment with mixed Droplet sizes and workloads), or to isolate workload traffic, you can use nodeSelector and the various affinity options to control where these jobs run after submission.
For example, a node affinity rule like this can be used to match a grouping of hosts (the key and value here are placeholders for your own node labels):

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: "whatever"
          operator: In
          values:
          - "highcpu"
This is great and provides some redundancy to a fairly robust system, so the obvious remaining point of failure is the Droplet fleet making up the cluster itself. Since pods (and other controllers) can, theoretically, be provisioned on the same node in a cluster, losing a node for whatever reason can result in an outage or, in the case of pods, complete pod loss. You can further target your deployment by:
- Labeling your nodes (as in the example above)
- Using the nodeSelector key in your configuration to manually target pods to specific nodes, keeping them relatively isolated from the rest of the cluster's workloads so that, in this case, they remain online if a worker node drops out of the Kubernetes cluster.
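As a sketch, assuming a high CPU node has been labeled with something like `kubectl label nodes your-worker-01 workload=spark` (the node name and label are placeholders), a pod spec can then target it with nodeSelector:

```yaml
# Pod spec sketch: schedule only onto nodes carrying the assumed
# workload=spark label (image is the shuffle service from the kubespark
# images used above)
apiVersion: v1
kind: Pod
metadata:
  name: spark-shuffle
spec:
  nodeSelector:
    workload: "spark"
  containers:
  - name: spark-shuffle
    image: kubespark/spark-shuffle:v2.1.0-kubernetes-0.1.0-rc1
```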
Another (more automated) approach is to effectively reserve the node's resources: if you want to avoid deploying to a specific node, you can use the taint feature in kubectl on that node to prevent Kubernetes from (re)scheduling onto that cluster member unless a pod matches the behavior defined in your taint command (for example, reserving it for specific namespaces, which is helpful if your cluster is mixed-use across a number of workloads that may not require the same scope as your Spark jobs).
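For example, after tainting a node with `kubectl taint nodes your-worker-01 dedicated=spark:NoSchedule` (the node name and key/value are placeholders), only pods carrying a matching toleration will be scheduled there:

```yaml
# Pod spec fragment: tolerate the assumed dedicated=spark taint so this pod
# can land on the reserved node; pods without this toleration are kept off it
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "spark"
  effect: "NoSchedule"
```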
Kubernetes, like Docker Swarm, has multiple affinity options, meaning how pod containers are scheduled adheres to algorithmically defined behavior (like the binpack method in Swarm, but with a little more flexibility out of the box).
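As one sketch of those options, pod anti-affinity can spread pods of the same app across nodes, effectively the opposite of binpacking (the label key and value below are illustrative):

```yaml
# Pod spec fragment: prefer not to co-locate pods of the same app on one node
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: "app"
            operator: In
            values:
            - "spark-pi"
        topologyKey: "kubernetes.io/hostname"
```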
There are many other amazing things that can be done with Kubernetes, and if this is your first exposure to the ecosystem, I recommend checking out these resources:
- Udacity's free introductory Kubernetes course (www.udacity.com)
- Kubernetes By Example, a hands-on introduction that browses through pods, labels, replication controllers, deployments, and more (kubernetesbyexample.com)
Both are excellent, efficient resources to get you up and running on Kubernetes.