Automating Liqo: Welcome to the Terraform provider!

Alessandro De Crecchio
Published in The Liqo Blog
Apr 12, 2023

Terraform is among the most appreciated solutions for Infrastructure as Code (IaC). Liqo is one of the prominent technologies for multi-cluster and multi-cloud. Why not take the best of both worlds?

Here it is: this tutorial shows how to create and manage multi-cluster and multi-cloud applications simply and declaratively, with our brand-new Terraform provider for Liqo.
You will create a virtual cluster by peering two Kubernetes clusters and offload a namespace using the Generate, Peer, and Offload resources provided by the Liqo Terraform provider. Afterwards, you will deploy a simple application by means of Terraform itself.


Terraform

Terraform is an open-source Infrastructure as Code (IaC) tool that allows users to manage their infrastructure on popular cloud platforms such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) using a declarative configuration language.
Terraform’s features enable users to automate the deployment, management, and scaling of their infrastructure in a consistent and repeatable way.
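
As a taste of the declarative style, here is a minimal, hypothetical sketch (using the same kubernetes provider we will rely on later): you simply declare the desired state, and Terraform figures out the API calls needed to reach it.

# A minimal, hypothetical example: declare a namespace and let Terraform create it.
resource "kubernetes_namespace" "example" {
  metadata {
    name = "example"
  }
}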

Liqo

Liqo is an open-source project that enables seamless and secure resource sharing between multiple Kubernetes clusters.
Liqo allows users to extend their own Kubernetes clusters, creating a single virtual cluster that spans across multiple physical ones.
This virtual cluster can be used to run workloads across multiple locations, providing increased flexibility and scalability.

The infrastructure

In this example, we will create two KinD clusters, install Liqo, and establish an outgoing peering from the local to the remote cluster.
Furthermore, we will offload a namespace to the remote cluster. This allows the namespace to leverage resources available on both the local and remote clusters, following the default offloading policy. Finally, we will deploy a simple application to test the infrastructure itself.

This example is provisioned on KinD, a tool for running local Kubernetes clusters using Docker container “nodes”: since it requires no particular configuration (e.g., concerning accounts), it can be executed on a local machine at no cloud cost. Yet, all the presented functionalities also work on other clusters, e.g., those operated by public cloud providers.
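
Before starting, make sure the required tools are available on your machine (a quick, optional sanity check):

terraform version          # Terraform CLI
docker version             # KinD runs its cluster nodes as Docker containers
kubectl version --client   # used later to inspect the clusters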

Building the infrastructure code (main.tf)

Let’s write the code step by step to provision the infrastructure.
First of all, we need to declare the required providers (kind, helm, kubernetes, liqo) with the following block:

terraform {
  required_providers {
    kind = {
      source  = "tehcyx/kind"
      version = "0.0.15"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "2.7.1"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.16.1"
    }
    liqo = {
      source = "liqotech/liqo"
    }
  }
}

Then we need to initialize each provider; in particular, we need two provider instances for helm and two for liqo: one for the local cluster (rome) and one for the remote cluster (turin).
Each provider instance is initialized with an alias (where needed) and with the kubeconfig file exported by the corresponding kind_cluster resource (introduced below), so that it operates on the right cluster.

provider "helm" {
alias = "rome"
kubernetes {
config_path = kind_cluster.rome.kubeconfig_path
}
}
provider "helm" {
alias = "turin"
kubernetes {
config_path = kind_cluster.turin.kubeconfig_path
}
}
provider "kubernetes" {
config_path = kind_cluster.rome.kubeconfig_path
}
provider "liqo" {
alias = "rome"
kubernetes = {
config_path = kind_cluster.rome.kubeconfig_path
}
}
provider "liqo" {
alias = "turin"
kubernetes = {
config_path = kind_cluster.turin.kubeconfig_path
}
}

Now we need to add the resources and configuration parameters to create the KinD clusters, each with two nodes (one control plane and one worker), and to install Liqo (through Helm):

resource "kind_cluster" "rome" {
name = "rome"
node_image = "kindest/node:v1.25.0"
wait_for_ready = true
kind_config {
kind = "Cluster"
api_version = "kind.x-k8s.io/v1alpha4"
networking {
service_subnet = "10.90.0.0/12"
pod_subnet = "10.200.0.0/16"
}
node {
role = "control-plane"
}
node {
role = "worker"
}
}
}
resource "helm_release" "install_liqo_rome" {
provider = helm.rome
name = "liqorome"
repository = "https://helm.liqo.io/"
chart = "liqo"
namespace = "liqo"
create_namespace = true
set {
name = "discovery.config.clusterName"
value = "rome"
}
set {
name = "discovery.config.clusterIdOverride"
value = "cbea6d94-5d1e-4f48-85ad-7eb19e92d7e9"
}
set {
name = "discovery.config.clusterLabels.liqo\\.io/provider"
value = "kind"
}
set {
name = "auth.service.type"
value = "NodePort"
}
set {
name = "gateway.service.type"
value = "NodePort"
}
set {
name = "networkManager.config.serviceCIDR"
value = "10.90.0.0/12"
}
set {
name = "networkManager.config.podCIDR"
value = "10.200.0.0/16"
}
}
resource "kind_cluster" "turin" {
name = "turin"
node_image = "kindest/node:v1.25.0"
wait_for_ready = true
kind_config {
kind = "Cluster"
api_version = "kind.x-k8s.io/v1alpha4"
networking {
service_subnet = "10.90.0.0/12"
pod_subnet = "10.200.0.0/16"
}
node {
role = "control-plane"
}
node {
role = "worker"
}
}
}
resource "helm_release" "install_liqo_turin" {
provider = helm.turin
name = "liqoturin"
repository = "https://helm.liqo.io/"
chart = "liqo"
namespace = "liqo"
create_namespace = true
set {
name = "discovery.config.clusterName"
value = "turin"
}
set {
name = "discovery.config.clusterIdOverride"
value = "36148485-d598-4d79-86fe-2559aba68d3c"
}
set {
name = "discovery.config.clusterLabels.liqo\\.io/provider"
value = "kind"
}
set {
name = "auth.service.type"
value = "NodePort"
}
set {
name = "gateway.service.type"
value = "NodePort"
}
set {
name = "networkManager.config.serviceCIDR"
value = "10.90.0.0/12"
}
set {
name = "networkManager.config.podCIDR"
value = "10.200.0.0/16"
}
}

We can now proceed to add the Liqo provider resources to play with Liqo features.
The following blocks enable Terraform to perform the peer and offload operations between the local and remote clusters, using the Liqo default offloading policy.
The “peer” resource takes as input the parameters provided by the “generate” resource. At the same time, the “liqo-demo” namespace is created; then the “offload” resource, applied on the “rome” cluster (i.e., through the “liqo.rome” provider), offloads it to the virtual cluster.

resource "liqo_generate" "generate" {
depends_on = [
helm_release.install_liqo_turin
]
provider = liqo.turin
}
resource "liqo_peer" "peer" {
depends_on = [
helm_release.install_liqo_rome
]
provider = liqo.rome
cluster_id = liqo_generate.generate.cluster_id
cluster_name = liqo_generate.generate.cluster_name
cluster_authurl = liqo_generate.generate.auth_ep
cluster_token = liqo_generate.generate.local_token
}
resource "kubernetes_namespace" "namespace" {
depends_on = [
kind_cluster.rome
]
metadata {
name = "liqo-demo"
}
}
resource "liqo_offload" "offload" {
depends_on = [
helm_release.install_liqo_rome,
kubernetes_namespace.namespace
]
provider = liqo.rome
namespace = "liqo-demo"
}
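
Optionally, you can also expose the paths of the kubeconfig files generated by the kind provider as Terraform outputs. This is just a convenience sketch, reusing the same kubeconfig_path attribute already passed to the providers above, so that you know at a glance which files to export later:

# Optional convenience outputs (not required by the tutorial).
output "rome_kubeconfig_path" {
  value = kind_cluster.rome.kubeconfig_path
}

output "turin_kubeconfig_path" {
  value = kind_cluster.turin.kubeconfig_path
}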

Now everything is ready to add the deployment of a simple application, consisting of two nginx pods and one service.
One pod (nginx-local) will be scheduled on the local cluster (“rome”) and the other one (nginx-remote) on the remote cluster (“turin”).
This is achieved with the “node_selector_term”, which includes/excludes the nodes labelled as “virtual-node”.

resource "kubernetes_pod" "pod_nginx_local" {
depends_on = [
liqo_peer.peer,
liqo_offload.offload
]
metadata {
labels = {
app = "liqo-demo"
}
name = "nginx-local"
namespace = "liqo-demo"
}
spec {
affinity {
node_affinity {
required_during_scheduling_ignored_during_execution {
node_selector_term {
match_expressions {
key = "liqo.io/type"
operator = "NotIn"
values = [
"virtual-node",
]
}
}
}
}
}
container {
name = "nginx"
image = "nginxdemos/hello"
image_pull_policy = "IfNotPresent"
port {
container_port = 80
name = "web"
}
}
}
}
resource "kubernetes_pod" "pod_nginx_remote" {
depends_on = [
liqo_peer.peer,
liqo_offload.offload
]
metadata {
labels = {
app = "liqo-demo"
}
name = "nginx-remote"
namespace = "liqo-demo"
}
spec {
affinity {
node_affinity {
required_during_scheduling_ignored_during_execution {
node_selector_term {
match_expressions {
key = "liqo.io/type"
operator = "In"
values = [
"virtual-node",
]
}
}
}
}
}
container {
name = "nginx"
image = "nginxdemos/hello"
image_pull_policy = "IfNotPresent"
port {
container_port = 80
name = "web"
}
}
}
}
resource "kubernetes_service" "service_liqo_demo" {
depends_on = [
liqo_peer.peer,
liqo_offload.offload
]
metadata {
name = "liqo-demo"
namespace = "liqo-demo"
}
spec {
port {
name = "web"
port = 80
protocol = "TCP"
target_port = "web"
}
selector = {
app = "liqo-demo"
}
type = "ClusterIP"
}
}

Now our main.tf file is complete; the next step is to run it.

Running main.tf

Move inside the folder containing the above file and then run the following commands provided by the Terraform CLI:

terraform init
terraform apply
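
Optionally, terraform plan can be run before the apply to preview the changes without creating anything:

terraform plan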

Once the apply completes, run the following commands to check the status of the pods running in the (offloaded) liqo-demo namespace:

export KUBECONFIG="$PWD/rome-config"
kubectl get pod -n liqo-demo -o wide

The output should be similar to the one below:

NAME           READY   STATUS    RESTARTS   AGE   IP            NODE          NOMINATED NODE   READINESS GATES
nginx-local    1/1     Running   0          10s   10.200.1.11   rome-worker   <none>           <none>
nginx-remote   1/1     Running   0          9s    10.202.1.10   liqo-turin    <none>           <none>
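
You can also check that the remote cluster shows up in the local cluster as a virtual node, identified by the same liqo.io/type=virtual-node label used in the node affinities above:

kubectl get nodes -l liqo.io/type=virtual-node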

You can also test that the infrastructure you have just created behaves correctly by querying the two nginx pods, first “nginx-local” and then “nginx-remote”:

export KUBECONFIG="$PWD/rome-config"

LOCAL_POD_IP=$(kubectl get pod nginx-local -n liqo-demo --template={{.status.podIP}})
REMOTE_POD_IP=$(kubectl get pod nginx-remote -n liqo-demo --template={{.status.podIP}})

kubectl run --image=curlimages/curl curl -n default -it --rm --restart=Never -- curl ${LOCAL_POD_IP}
kubectl run --image=curlimages/curl curl -n default -it --rm --restart=Never -- curl ${REMOTE_POD_IP}
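
Finally, when you are done experimenting, you can tear down the whole infrastructure (Liqo peering, clusters, and all) with a single command:

terraform destroy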

Conclusions

This post shows how to create a virtual cluster and offload a namespace using Liqo via the IaC paradigm. In addition, you have also seen how to deploy a simple application in the namespace you just offloaded.

That’s all folks!
You can get more details about the code by reading the corresponding section of the Liqo docs.
If you like the project, please do not forget to “star” it on GitHub!
