A cloud native brew with Oracle Database, Helidon and Kubernetes — Part 4 — GitOps Deploy

Using GitOps, ora-operator to manage the lifecycle of Oracle Databases on Oracle Cloud and Kubernetes

Ali Mukadam
Oracle Developers
12 min read · Mar 26, 2024


In Part 3, we were mostly setting up the management infrastructure to get ourselves ready for deployment. In this article, we’ll do the actual deployment. Recall that this is what we want to achieve:

GitOps with Cluster API, ora-operator, Oracle Autonomous Database and Helidon

Deploying Autonomous Database with GitOps and Argo CD

First, let’s create the manifest for our Autonomous Database:

# Copyright (c) 2022, Oracle and/or its affiliates.
# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl.
#
apiVersion: database.oracle.com/v1alpha1
kind: AutonomousDatabase
metadata:
  name: r1-adb
  namespace: r1
spec:
  details:
    # Update compartmentOCID with your compartment OCID.
    compartmentOCID: ocid1.compartment.oc1...
    # The dbName must begin with an alphabetic character and can contain a maximum of 14 alphanumeric characters. Special characters are not permitted. The database name must be unique in the tenancy.
    dbName: r1adb
    displayName: r1-adb
    cpuCoreCount: 1
    adminPassword:
      ociSecret:
        # The OCID of the OCI Secret that holds the password of the ADMIN account. It should start with ocid1.vaultsecret... .
        ocid: ocid1.vaultsecret.oc1...
    dataStorageSizeInTBs: 1
    networkAccess:
      accessType: PRIVATE
      privateEndpoint:
        subnetOCID: ocid1.subnet.oc1...
        nsgOCIDs:
          - ocid1.networksecuritygroup.oc1...
      isMTLSConnectionRequired: true

Check this into a git repo, in a branch, say adb. If your repo is private, you’ll need to connect it in Argo CD. You can then create an Argo CD application pointing to your repo. One of the many things I like about Argo CD is its intuitive UI, so this should be straightforward. You can then sync and watch your Autonomous Database getting created:
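If you prefer to define the application declaratively rather than clicking through the UI, a minimal sketch looks something like this (the application name, repo URL and branch are placeholders for your own setup):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: r1-adb                # hypothetical name for this example
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/your-repo.git   # placeholder: your git repo
    targetRevision: adb                                   # the branch created above
    path: .
  destination:
    server: https://kubernetes.default.svc                # the hub cluster itself
    namespace: r1                                         # matches the ADB manifest
  syncPolicy:
    syncOptions:
      - CreateNamespace=true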

In the OCI Console, we can also see the Autonomous Database being created:

Once the Autonomous Database is created, check its network configuration and confirm that it has been placed in the dedicated VCN:

Creating the OKE workload cluster

We’ll now create the OKE workload cluster. At this point, we have two options:

  1. Pre-create the core infrastructure (VCN, subnets, NSGs, route tables, etc.) separately and have CAPI reuse it by providing the OCIDs.
  2. Use CAPI to create everything required for the workload cluster to connect to the Autonomous Database, including the DRG and RPC.

For the purpose of this article, we’ll go with the second option:

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: r1-syd
    take-along-label.capi-to-argocd.workload: "true"
    workload: "true"
  name: r1-syd
  namespace: capi
spec:
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: OCIManagedCluster
    name: r1-syd
    namespace: capi
  controlPlaneRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: OCIManagedControlPlane
    name: r1-syd
    namespace: capi
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: OCIManagedCluster
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: r1-syd
  name: r1-syd
spec:
  compartmentId: "ocid1.compartment.oc1.."
  definedTags:
    cn:
      ora: ora-operator
  region: ap-sydney-1
  identityRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: OCIClusterIdentity
    name: cluster-identity
    namespace: capi-system
  networkSpec:
    apiServerLoadBalancer:
      name: ""
    vcnPeering:
      drg:
        # true, a new DRG will be created.
        manage: true
        name: "r1-syd-drg"
      # The CIDR ranges specified below will be added to the workload cluster VCN
      # with the target as DRG. This route rule will make sure that traffic to management
      # cluster VCN will be directed to the DRG.
      peerRouteRules:
        - vcnCIDRRange: "10.10.0.0/16" # db vcn
      remotePeeringConnections:
        - managePeerRPC: true
          peerDRGId: "ocid1.drg.oc1.ap-sydney-1."
          peerRegionName: "ap-sydney-1"
    vcn:
      cidr: 10.0.0.0/16
      networkSecurityGroup:
        list:
          - egressRules:
              - egressRule:
                  description: Allow Kubernetes API endpoint to communicate with OKE.
                  destination: all-syd-services-in-oracle-services-network
                  destinationType: SERVICE_CIDR_BLOCK
                  isStateless: false
                  protocol: "6"
              - egressRule:
                  description: Path Discovery.
                  destination: all-syd-services-in-oracle-services-network
                  destinationType: SERVICE_CIDR_BLOCK
                  icmpOptions:
                    code: 4
                    type: 3
                  isStateless: false
                  protocol: "1"
              - egressRule:
                  description: Allow Kubernetes API endpoint to communicate with worker nodes.
                  destination: 10.0.64.0/20
                  destinationType: CIDR_BLOCK
                  isStateless: false
                  protocol: "6"
                  tcpOptions:
                    destinationPortRange:
                      max: 10250
                      min: 10250
              - egressRule:
                  description: Path Discovery.
                  destination: 10.0.64.0/20
                  destinationType: CIDR_BLOCK
                  icmpOptions:
                    code: 4
                    type: 3
                  isStateless: false
                  protocol: "1"
            ingressRules:
              - ingressRule:
                  description: Kubernetes worker to Kubernetes API endpoint communication.
                  isStateless: false
                  protocol: "6"
                  source: 10.0.64.0/20
                  sourceType: CIDR_BLOCK
                  tcpOptions:
                    destinationPortRange:
                      max: 6443
                      min: 6443
              - ingressRule:
                  description: Kubernetes worker to Kubernetes API endpoint communication.
                  isStateless: false
                  protocol: "6"
                  source: 10.0.64.0/20
                  sourceType: CIDR_BLOCK
                  tcpOptions:
                    destinationPortRange:
                      max: 12250
                      min: 12250
              - ingressRule:
                  description: Path Discovery.
                  icmpOptions:
                    code: 4
                    type: 3
                  isStateless: false
                  protocol: "1"
                  source: 10.0.64.0/20
                  sourceType: CIDR_BLOCK
              - ingressRule:
                  description: External access to Kubernetes API endpoint.
                  isStateless: false
                  protocol: "6"
                  source: 0.0.0.0/0
                  sourceType: CIDR_BLOCK
                  tcpOptions:
                    destinationPortRange:
                      max: 6443
                      min: 6443
            name: control-plane-endpoint
            role: control-plane-endpoint
          - egressRules:
              - egressRule:
                  description: Allow pods on one worker node to communicate with pods on other worker nodes.
                  destination: "10.0.64.0/20"
                  destinationType: CIDR_BLOCK
                  isStateless: false
                  protocol: "all"
              - egressRule:
                  description: Allow worker nodes to communicate with OKE.
                  destination: all-syd-services-in-oracle-services-network
                  destinationType: SERVICE_CIDR_BLOCK
                  isStateless: false
                  protocol: "6"
              - egressRule:
                  description: Path Discovery.
                  destination: 0.0.0.0/0
                  destinationType: CIDR_BLOCK
                  icmpOptions:
                    code: 4
                    type: 3
                  isStateless: false
                  protocol: "1"
              - egressRule:
                  description: Kubernetes worker to Kubernetes API endpoint communication.
                  destination: 10.0.0.8/29
                  destinationType: CIDR_BLOCK
                  isStateless: false
                  protocol: "6"
                  tcpOptions:
                    destinationPortRange:
                      max: 6443
                      min: 6443
              - egressRule:
                  description: Kubernetes worker to Kubernetes API endpoint communication.
                  destination: 10.0.0.8/29
                  destinationType: CIDR_BLOCK
                  isStateless: false
                  protocol: "6"
                  tcpOptions:
                    destinationPortRange:
                      max: 12250
                      min: 12250
            ingressRules:
              - ingressRule:
                  description: Allow pods on one worker node to communicate with pods on other worker nodes.
                  isStateless: false
                  protocol: "all"
                  source: 10.0.64.0/20
                  sourceType: CIDR_BLOCK
              - ingressRule:
                  description: Allow Kubernetes API endpoint to communicate with worker nodes.
                  isStateless: false
                  protocol: "6"
                  source: 10.0.0.8/29
                  sourceType: CIDR_BLOCK
              - ingressRule:
                  description: Path Discovery.
                  icmpOptions:
                    code: 4
                    type: 3
                  isStateless: false
                  protocol: "1"
                  source: 0.0.0.0/0
                  sourceType: CIDR_BLOCK
              - ingressRule:
                  description: Load Balancer to Worker nodes node ports.
                  isStateless: false
                  protocol: "6"
                  source: 10.0.0.32/27
                  sourceType: CIDR_BLOCK
                  tcpOptions:
                    destinationPortRange:
                      max: 32767
                      min: 30000
            name: worker
            role: worker
          - egressRules:
              - egressRule:
                  description: Load Balancer to Worker nodes node ports.
                  destination: 10.0.64.0/20
                  destinationType: CIDR_BLOCK
                  isStateless: false
                  protocol: "6"
                  tcpOptions:
                    destinationPortRange:
                      max: 32767
                      min: 30000
            ingressRules:
              - ingressRule:
                  description: Accept http traffic on port 80
                  isStateless: false
                  protocol: "6"
                  source: 0.0.0.0/0
                  sourceType: CIDR_BLOCK
                  tcpOptions:
                    destinationPortRange:
                      max: 80
                      min: 80
              - ingressRule:
                  description: Accept https traffic on port 443
                  isStateless: false
                  protocol: "6"
                  source: 0.0.0.0/0
                  sourceType: CIDR_BLOCK
                  tcpOptions:
                    destinationPortRange:
                      max: 443
                      min: 443
            name: service-lb
            role: service-lb
      subnets:
        - cidr: 10.0.0.8/29
          name: control-plane-endpoint
          role: control-plane-endpoint
          type: public
        - cidr: 10.0.0.32/27
          name: service-lb
          role: service-lb
          type: public
        - cidr: 10.0.64.0/20
          name: worker
          role: worker
          type: private
---
kind: OCIManagedControlPlane
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
metadata:
  name: r1-syd
  namespace: capi
spec:
  version: "v1.28.2"
  clusterPodNetworkOptions:
    - cniType: "FLANNEL_OVERLAY"
---

Notice the section under networkSpec:

networkSpec:
  apiServerLoadBalancer:
    name: ""
  vcnPeering:
    drg:
      # true, a new DRG will be created.
      manage: true
      name: "r1-syd-drg"
    # The CIDR ranges specified below will be added to the workload cluster VCN
    # with the target as DRG. This route rule will make sure that traffic to management
    # cluster VCN will be directed to the DRG.
    peerRouteRules:
      - vcnCIDRRange: "10.10.0.0/16" # db vcn
    remotePeeringConnections:
      - managePeerRPC: true
        peerDRGId: "ocid1.drg.oc1.ap-sydney-1."
        peerRegionName: "ap-sydney-1"

If you configure this, CAPI will automatically peer the two VCNs for us. If not, you must configure the peering yourself in the OCI Console. Note that there are many peering variations you can perform, and these are beyond the scope of this article. Instead, we’ll simply assume one DRG and one RPC for each workload VCN.

We can also then specify the node pools:

apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  name: r1-syd-mp-0
  namespace: capi
spec:
  clusterName: r1-syd
  replicas: 2
  template:
    spec:
      clusterName: r1-syd
      bootstrap:
        dataSecretName: ""
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
        kind: OCIManagedMachinePool
        name: r1-syd-mp-0
      version: v1.28.2
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: OCIManagedMachinePool
metadata:
  name: r1-syd-mp-0
  namespace: capi
spec:
  version: "v1.28.2"
  nodeShape: "VM.Standard.E4.Flex"
  sshPublicKey: ""
  nodeSourceViaImage:
    bootVolumeSizeInGBs: 50
  nodeShapeConfig:
    ocpus: "2"
  nodePoolNodeConfig:
    nodePoolPodNetworkOptionDetails:
      cniType: "FLANNEL_OVERLAY"
---

Notice how we have also set the OCI defined tags on the cluster:

spec:
  compartmentId: "ocid1.compartment.oc1.."
  definedTags:
    cn:
      ora: ora-operator

This ensures the workload cluster is also part of the same dynamic group we created earlier and saves us the trouble of using key-based authentication.

Check these into the same git repo but a different branch, say oke, and create another Argo CD application. Sync and watch your cluster get created. If you don’t want to keep syncing manually every time, you can set the sync policy to automated.
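For reference, the automated sync policy is just a small addition to the Application spec (a sketch):

syncPolicy:
  automated:
    prune: true      # remove resources that were deleted from git
    selfHeal: true   # revert manual drift back to the state in git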

OKE Workload Cluster being created by Argo CD and Cluster API

Similarly, in the OCI Console, we can see the OKE workload cluster being created:

While the OKE cluster is being created, use the OCI Network Visualizer to verify that the two VCNs have been peered:

While this is happening, let’s proceed to the next step and make sure ora-operator gets deployed to the OKE workload cluster.

Automatically registering workload cluster with Argo CD

Argo CD can handle deployment to multiple clusters. All you need to do is register the target Kubernetes cluster with it, and you can then use Argo CD to deploy applications to it. How does Argo CD know about the clusters? Well, when you register a cluster, Argo CD creates a cluster credential and stores it in a Kubernetes Secret for each cluster. When you create an application targeting the newly registered cluster, Argo CD uses this credential to authenticate itself and deploy the application. Your management cluster must therefore be able to reach the API server of your workload cluster; in this article, our workload clusters are public.
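Under the hood, a registered cluster is nothing more than a Secret in the argocd namespace labelled argocd.argoproj.io/secret-type: cluster. A hand-written one would look roughly like this (a sketch with placeholder values; the exact config block depends on how you authenticate to the target cluster):

apiVersion: v1
kind: Secret
metadata:
  name: my-workload-cluster            # placeholder
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: my-workload-cluster
  server: https://<workload-api-server>:6443   # placeholder
  config: |
    {
      "tlsClientConfig": {
        "caData": "<base64-encoded CA certificate>"
      }
    }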

But hang on: we are already deploying our workload clusters with Cluster API, and the kubeconfigs are already retrievable via Cluster API:
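For example, CAPI stores each workload cluster’s kubeconfig in a <cluster-name>-kubeconfig secret, so a minimal sketch of retrieving it (using the cluster name and namespace from the manifests above) is:

# retrieve the workload cluster's kubeconfig generated by Cluster API
clusterctl get kubeconfig r1-syd -n capi > r1-syd.kubeconfig
kubectl --kubeconfig r1-syd.kubeconfig get nodes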

So this should be automatic, right? Not so fast, unfortunately. As of now, Argo CD cannot yet recognize the kubeconfig secrets created by CAPI and use them as target clusters. There are at least four community efforts, with varying levels of commitment and maturity, to automate this:

  1. https://github.com/a1tan/argocdsecretsynchronizer
  2. https://github.com/dmolik/automent
  3. https://github.com/dntosas/capi2argo-cluster-operator
  4. https://github.com/lknite/daytwo

The last one seems very promising but has a further dependency on Pinniped, so we’ll go with the third one instead, which has more or less the same functionality. Not that I have anything against Pinniped, but I don’t feel like looking into yet another tool to solve a small issue. Let’s install the capi2argo operator in the hub cluster:

helm repo add capi2argo https://dntosas.github.io/capi2argo-cluster-operator/
helm repo update
helm upgrade -i capi2argo capi2argo/capi2argo-cluster-operator

When the OKE workload cluster is created, the capi2argo operator will register it automatically as a target cluster for deployment by Argo CD. We can now see that the cluster secret has also been automatically created:

kubectl -n argocd get secrets
NAME                          TYPE     DATA   AGE
argocd-initial-admin-secret   Opaque   1      6d1h
argocd-notifications-secret   Opaque   0      6d1h
argocd-secret                 Opaque   5      6d1h
cluster-r1-oke                Opaque   3      7m32s

and the workload cluster automatically registered with Argo CD:
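If you prefer the command line, the same can be verified with the argocd CLI (assuming you are logged in to the Argo CD server):

# the newly registered workload cluster should show up alongside the in-cluster entry
argocd cluster list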

If we extract the cluster-r1-oke secret as YAML, we can see that the “workload” label has been copied over:

kubectl -n argocd get secret cluster-r1-oke -o yaml
apiVersion: v1
data:
  config:
  name: Y2x1c3Rlci1jczJkNHVjb3JjYQ==
  server:
kind: Secret
metadata:
  creationTimestamp: "2024-03-25T07:47:12Z"
  labels:
    argocd.argoproj.io/secret-type: cluster
    capi-to-argocd/cluster-namespace: capi
    capi-to-argocd/cluster-secret-name: r1-oke-kubeconfig
    capi-to-argocd/owned: "true"
    taken-from-cluster-label.capi-to-argocd.workload: ""
    workload: "true"
  name: cluster-r1-oke
  namespace: argocd
  resourceVersion: "3438098"
  uid: a3758587-7965-4f25-af4c-f5d5ce95e147
type: Opaque

Having the label carried over is important as it will allow us to use Argo CD generators later on.

Deploying ora-operator to workload clusters

Now that we can automatically register workload clusters, we also want to automatically deploy other applications to them. To begin with, we want to make it possible to deploy ora-operator automatically in workload clusters. We do so by creating the following manifest:

apiVersion: addons.cluster.x-k8s.io/v1alpha1
kind: HelmChartProxy
metadata:
  name: ora-operator
  namespace: capi
spec:
  clusterSelector:
    matchLabels:
      workload: "true"
  repoURL: oci://path/to/charts
  chartName: ora-operator
  version: 0.0.1
  releaseName: ora-operator

Notice the cluster selector and the workload label. This ensures that ora-operator is deployed only to clusters registered with this label.

Check this into another git branch, say ora-operator, and create another Argo CD application for it. Argo CD, Cluster API and the latter’s Helm addon will do the deployment for us:

Automating ora-operator deployment via Argo CD and Cluster API

If we inspect the status of ora-operator, we can see r1-oke is a matching cluster:
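You can also check this from the hub cluster’s CLI; the Cluster API Helm addon creates a HelmReleaseProxy for every matching cluster (a sketch, assuming the addon’s CRDs are installed in the hub cluster):

# the HelmChartProxy lists its matching clusters in the status,
# and each matching cluster gets its own HelmReleaseProxy
kubectl -n capi get helmchartproxy ora-operator -o yaml
kubectl -n capi get helmreleaseproxy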

Finally, if we check the operator pod status in r1-oke:

kubectl -n oracle-database-operator-system get pods

NAME                                                            READY   STATUS    RESTARTS   AGE
oracle-database-operator-controller-manager-5b7f86dd87-6xb45   1/1     Running   0          3m54s
oracle-database-operator-controller-manager-5b7f86dd87-pqs89   1/1     Running   0          3m54s
oracle-database-operator-controller-manager-5b7f86dd87-xd2rp   1/1     Running   0          3m54s

we see that the operator has now been deployed in r1-oke too.

Binding the Autonomous Database with the workload cluster

Now that ora-operator is deployed to our workload cluster, we also want to bind the ADB to it. Let’s define our binding manifest:

apiVersion: database.oracle.com/v1alpha1
kind: AutonomousDatabase
metadata:
  name: r1-adb
spec:
  details:
    autonomousDatabaseOCID: ocid1.autonomousdatabase.oc1...
    wallet:
      # Insert a name of the secret where you want the wallet to be stored. The default name is <metadata.name>-instance-wallet.
      name: clouddb-wallet
      password:
        ociSecret:
          # The OCID of the OCI Secret that holds the password of the ADMIN account. It should start with ocid1.vaultsecret... .
          ocid: ocid1.vaultsecret.oc1...

And check it into an adb-bind branch. But unlike the others, we are not going to create an Argo CD application out of this one. We are not? Bear with me.

The reason is that we want to be selective about how our clusters are targeted for deployment by Argo CD. In this minimal example, all we have to identify workload clusters is the key-value label pair:

workload: "true"

But you can imagine that in a more complex environment you’ll have more complex needs: you want to ensure you handle the bind appropriately, that you have the necessary permissions to use this particular database and its OCI secrets, and so on. For this, Argo CD provides a more sophisticated resource: ApplicationSets.

ApplicationSets allow us to use generators as well as specify remote git repos. Specifically, we’ll use cluster generators, which are basically a way to dynamically determine, using labels, which clusters a particular application should be deployed to. So we’ll create an ApplicationSet that uses a cluster generator:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: appset-adb-bind
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
    - clusters:
        selector:
          matchLabels:
            workload: "true"
  template:
    metadata:
      name: '{{.name}}-adb-bind'
    spec:
      project: "default"
      source:
        repoURL: <replaceme>
        targetRevision: adb-bind
        path: .
      destination:
        server: '{{.server}}'
        namespace: r1
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true

It points to the git branch (adb-bind) where we checked in the manifest to bind our Autonomous Database. We’ll check the ApplicationSet itself into a new branch, say app-deploy, from which we’ll create an Argo CD application. Now, when an OKE workload cluster is created, the capi2argo operator automatically registers it with Argo CD, which in turn deploys ora-operator and binds the OCI Autonomous Database to it. In Argo CD, you’ll see that a new Argo CD application was automatically generated, in this case to bind the Autonomous Database:

If we browse to the generated application, we can see the ADB resource as well as the newly created secret containing the wallet:

Deploying Helidon

The final step is to also seamlessly deploy our Helidon application using the same GitOps approach as above. We’ll do this in the next article as it looks like this one is already long enough and Medium is indicating this to me by making any further writing painfully slow.

Summary

In this article, we continued our journey with ora-operator and used it with GitOps tools like Argo CD and Cluster API to automate the deployment and lifecycle of an Oracle Database. We used the Autonomous Database, but in future articles we’ll also look at using other flavours of the Oracle Database supported by the operator.

To make our setup more secure, we used instance_principal instead of key-based authentication and OCI Vault instead of Kubernetes secrets to store and access the database admin and wallet passwords.

To decouple our database from our application infrastructure, we ran the database in a dedicated VCN, separate from the one where the OKE workload cluster is running. Finally, we used the Cluster API Helm addon to automatically install ora-operator in our workload cluster. Using ora-operator, we could then bind the existing database to the cluster so that our application can connect to it.

I’ll conclude here and thank my colleagues Julian Ortiz, Shyam Radakrishan and Shaun Levey for their contributions to this article, as well as the fantastic crew developing ora-operator.
