Carbon’s GKE Best Practices

Recommendations for architecting Kubernetes on GCP for high performance, cost-effectiveness, and security.

Olawale Olaleye
Carbon Engineering & Data Science
4 min read · Jan 21, 2020


One striking tale from our DevOps transition journey: we embarked on an adventure with Kubernetes on-premises to run some of our infrastructure-driven workloads, such as an ELK stack and a customized RouterOS for disaster recovery. While we counted many successes on those quests, a critical assessment of the whole undertaking made us set our gaze on an infrastructure platform where convenience and speed mattered.

Designing and implementing Kubernetes architectures on Google Cloud is a lot more fun than on-premises, based on our experience at Carbon. This may not be the case for the many who dabbled in deploying K8s on the cloud without prior awareness of best practices. When adopting any technology, my primary principle has always been: first, learn the best way to get the best out of that technology. Skipping this step is like buying an advanced gadget without going through the manufacturer’s manual. Yeah, I know, we are all guilty of this act🤪. The same principle applies to Kubernetes on GCP. In this article, inspired by our experiences with Kubernetes on Google Cloud and GCP’s best practice recommendations, I highlight some ways to adopt best practices while deploying on GKE.

About GKE

Google Kubernetes Engine (GKE) is a managed, production-ready environment for deploying, managing, and scaling your containerized applications on Google infrastructure. It combines developer productivity, resource efficiency, automated operations, and open-source flexibility to accelerate your time to market.

Some Benefits to Carbon

It is cost-effective, gives the business value for money in terms of deployment speed, and lets us architect for failure to ensure business continuity. That is what Carbonetes means to us: Carbon + Kubernetes.

We hope to maximize our infrastructure deployments on Kubernetes across our multiple cloud platforms (AWS and GCP) in the coming months. Watch out for more interesting articles and Carbonetes stories. We are also recruiting Infrastructure Engineers, DevOps Engineers, Cloud Architects, and Cloud Engineers seeking to join an incredible team.

Kubernetes workloads

Useful GKE best practices

  • For more complex analysis and long-term storage, export log data from Stackdriver to BigQuery or Cloud Storage. This is especially useful for heavily regulated businesses and frequently audited firms.
  • If your organization has multiple GCP projects, the recommended best practice for logging and monitoring is to set up a host project for Stackdriver and use it to monitor the other projects. Stackdriver also helps you adopt Site Reliability Engineering (SRE) principles; SRE is Google’s way of doing DevOps.
  • Use managed services over unmanaged services, for example, Cloud SQL over MySQL.
  • For media storage, data lakes, backup, and archiving, it is best to use Google Cloud Storage rather than Persistent Volumes or Persistent Volume Claims.
  • For disaster recovery, or to roll back to a previous working version when a new release fails, use the kubectl rollout undo command to return to the previous revision.
  • You should always create a service before creating any workloads that need to access that service.
  • Exercise the principle of least privilege. Use RBAC to define who can view or change Kubernetes objects inside your cluster, and Cloud IAM to define who can view or change the configuration of your GKE clusters.
  • Use Container-Optimized OS as the node OS.
  • Run private clusters and enable master authorized networks.
  • Enable automatic node upgrades
  • Use secrets for sensitive information
  • Assign roles to groups, not users
  • Don’t enable the Kubernetes dashboard. Use the GCP console’s built-in GKE dashboard or kubectl commands instead.
  • Use labels extensively for monitoring purposes.
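A few of the practices above can be sketched concretely. The cluster name, deployment name, CIDR ranges, namespace, and group below are all hypothetical placeholders, and the gcloud/kubectl calls assume an authenticated environment (they are stubbed out here when the CLIs are missing, so the sketch can be dry-run locally):

```shell
# Stub the CLIs when absent so this sketch can be dry-run locally.
if ! command -v gcloud  >/dev/null; then gcloud()  { echo "[dry-run] gcloud $*"; }; fi
if ! command -v kubectl >/dev/null; then kubectl() { echo "[dry-run] kubectl $*"; }; fi

# Private cluster with master authorized networks, Container-Optimized
# OS nodes, and automatic node upgrades (names/CIDRs are placeholders).
gcloud container clusters create hardened-cluster \
  --enable-ip-alias \
  --enable-private-nodes \
  --master-ipv4-cidr=172.16.0.0/28 \
  --enable-master-authorized-networks \
  --master-authorized-networks=203.0.113.0/24 \
  --image-type=COS \
  --enable-autoupgrade

# Roll back a failed release to the previous working revision.
kubectl rollout history deployment/my-app
kubectl rollout undo deployment/my-app

# Keep sensitive values in a Secret rather than in plain config.
kubectl create secret generic db-credentials \
  --from-literal=username=appuser \
  --from-literal=password='s3cr3t'

# Least privilege with RBAC: read-only access to Pods, bound to a
# group rather than an individual user.
cat > pod-reader.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: staging
  name: read-pods
subjects:
- kind: Group
  name: dev-team@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl apply -f pod-reader.yaml
```

Binding the Role to a group rather than a user means access follows team membership, which matches the “assign roles to groups, not users” bullet above.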

Additional giveaway: Tip for Using GKE with other GCP Services

Endeavor to use managed services over unmanaged services. While using managed services, you can design an application that requires other GCP resources and deploy it to run in GKE. The steps below describe how to securely connect your GKE application to other managed GCP services:

  1. Create a new service account.
  2. Choose a role based on the service and the use case.
  3. Create a credential and key file.
  4. Use Kubernetes secrets to store these credentials.
  5. Use the secrets to make API calls to the GCP services.
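The five steps above can be sketched as follows. The project ID, service account name, and role are hypothetical placeholders (the sketch grants read-only Cloud Storage access), and the CLIs are stubbed when missing so it can be dry-run locally:

```shell
# Placeholders -- substitute your own project and naming conventions.
PROJECT_ID="my-project"
SA_NAME="gke-app-sa"
SA_EMAIL="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"

# Stub the CLIs when absent so this sketch can be dry-run locally.
if ! command -v gcloud  >/dev/null; then gcloud()  { echo "[dry-run] gcloud $*"; }; fi
if ! command -v kubectl >/dev/null; then kubectl() { echo "[dry-run] kubectl $*"; }; fi

# 1. Create a new service account.
gcloud iam service-accounts create "${SA_NAME}" \
  --display-name="GKE app service account"

# 2. Grant a role matched to the service and the use case
#    (read-only Cloud Storage access in this sketch).
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/storage.objectViewer"

# 3. Create a credential key file.
gcloud iam service-accounts keys create key.json \
  --iam-account="${SA_EMAIL}"

# 4. Store the key in a Kubernetes secret.
kubectl create secret generic gcp-sa-key --from-file=key.json

# 5. Mount the secret into your Pod and point client libraries at it,
#    e.g. GOOGLE_APPLICATION_CREDENTIALS=/var/secrets/google/key.json
```

In step 5, the application reads the mounted key through the standard GOOGLE_APPLICATION_CREDENTIALS environment variable, so client libraries pick it up without code changes.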

Note: Secret Manager and Cloud Key Management Service (KMS) can be used to manage and rotate secrets, including Cloud IAM authentication credentials.
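As a minimal sketch of that note, a credential can be stored and fetched with Secret Manager like so. The secret name is hypothetical, and on early gcloud releases these commands live under "gcloud beta secrets"; gcloud is stubbed here when missing so the sketch can be dry-run locally:

```shell
# Stub gcloud when absent so this sketch can be dry-run locally.
if ! command -v gcloud >/dev/null; then gcloud() { echo "[dry-run] gcloud $*"; }; fi

# Store a credential in Secret Manager (name is a placeholder).
printf '%s' 's3cr3t' | gcloud secrets create db-password --data-file=-

# Fetch the latest version at deploy time instead of baking the value
# into container images or manifests.
DB_PASSWORD="$(gcloud secrets versions access latest --secret=db-password)"
```

Fetching the secret at deploy time keeps the value out of source control and lets rotation happen centrally in Secret Manager.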

Thanks for reading another Carbonetes tale!!!
