Database CI/CD with the Oracle DB Operator for Kubernetes, GitHub Actions and Liquibase

Norman Aberin
Nov 29, 2023


Application development often requires schema changes against pre-provisioned, local or shared databases. These changes must then be propagated to other environments, or to a new database provisioned for a development team. In all cases, there is a dependency on human operators or database administrators.

In this blog, we will explore another approach using containerized databases on Kubernetes clusters with the Oracle DB Operator for Kubernetes, Liquibase and GitHub Actions. Built around the following use case:

  • Provision a new database, loaded with the latest schemas and necessary data, each time a feature branch is created

the blog demonstrates the automated creation, configuration and teardown of containerized databases based on ongoing development needs, triggered by GitHub events.

About The Oracle DB Operator and Liquibase

The Oracle DB Operator for Kubernetes (OraOperator) is an operator that automates and manages the deployments of stateful Oracle Databases as observable and operable objects on Kubernetes. The Oracle DB Operator (now in production release version v1.0.0) introduces multiple Custom Resource Definitions (CRDs) and supports different Oracle Database configurations with different lifecycle operations per database.
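For illustration, a trimmed SingleInstanceDatabase manifest might look like the sketch below. The field values (name, namespace, secret and image references) are placeholders, and the exact fields should be checked against the operator's sample manifests:

apiVersion: database.oracle.com/v1alpha1
kind: SingleInstanceDatabase
metadata:
  name: db
  namespace: feature
spec:
  sid: XE
  edition: express
  adminPassword:
    secretName: db-admin-secret
  image:
    pullFrom: container-registry.oracle.com/database/express:latest
  replicas: 1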

Liquibase is an open-source tool for database schema change management that enables developers to version and deploy database code more easily.
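As a minimal example (the table here is purely illustrative), a Liquibase XML changelog holds versioned changesets that Liquibase applies exactly once per database:

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-latest.xsd">
  <!-- Each changeSet is tracked by id and author and runs only once -->
  <changeSet id="1" author="dev">
    <createTable tableName="countries">
      <column name="id" type="number">
        <constraints primaryKey="true"/>
      </column>
      <column name="name" type="varchar2(100)"/>
    </createTable>
  </changeSet>
</databaseChangeLog>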

With both the OraOperator and Liquibase, we have the necessary tools to manage database lifecycles (create, delete) with kubectl and to set up the database schemas and seed data programmatically on GitHub Actions.

Prerequisites

Before we dive deeper, the workflows in this blog require a few components:

  • A working Kubernetes cluster with the OraOperator installed
  • Oracle Cloud Infrastructure access permissions (policies, secrets)
  • The necessary GitHub secrets listed here
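
The secrets referenced by the workflows can be stored with the GitHub CLI. A sketch with placeholder values follows; the secret names match the env block used later:

# Store OCI credentials and the cluster OCID as repository secrets
gh secret set OCI_CLI_USER --body "ocid1.user.oc1..example"
gh secret set OCI_CLI_TENANCY --body "ocid1.tenancy.oc1..example"
gh secret set OCI_CLI_FINGERPRINT --body "aa:bb:cc:dd:..."
gh secret set OCI_CLI_KEY_CONTENT < ~/.oci/oci_api_key.pem
gh secret set OCI_CLI_REGION --body "us-ashburn-1"
gh secret set OKE_CLUSTER_OCID --body "ocid1.cluster.oc1..example"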

Creating Databases with GitHub Actions

This blog uses GitHub Actions, a powerful CI/CD tool on GitHub with an excellent collection of marketplace actions. We have two workflows, create-branch and delete-branch, which respectively create and clean up containerized Oracle Database resources in the Kubernetes cluster when feature branches are created and deleted. These events are filtered with the following syntax:

on:
  create:
    branches:
      - 'feature/**'

In this example, feature branches are prefixed with feature/ followed by an ID that identifies the database and the Kubernetes resources to be created. To keep things simple, this ID is assumed to be unique.

Upon creating a feature branch, the create.yml workflow in the repository checks out the code from GitHub and runs a Python script to retrieve the unique ID.
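The script itself is not reproduced here. As a sketch of the idea, the same ID could be derived in a shell step from the branch ref exposed by the event; the getname step id matches the references in later snippets, but this shell variant is an illustration rather than the workflow's actual Python script:

- name: Get unique ID
  id: getname
  run: |
    # BRANCH_REF is set in the env block shown below, e.g. "feature/<id>"
    echo "idname=${BRANCH_REF##*/}" >> "$GITHUB_OUTPUT"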

Kubectl is then configured to connect to a working Oracle Kubernetes Engine (OKE) cluster using the official oracle-actions/configure-kubectl-oke action provided by Oracle on the GitHub Marketplace.

env:
  OCI_CLI_USER: ${{ secrets.OCI_CLI_USER }}
  OCI_CLI_TENANCY: ${{ secrets.OCI_CLI_TENANCY }}
  OCI_CLI_FINGERPRINT: ${{ secrets.OCI_CLI_FINGERPRINT }}
  OCI_CLI_KEY_CONTENT: ${{ secrets.OCI_CLI_KEY_CONTENT }}
  OCI_CLI_REGION: ${{ secrets.OCI_CLI_REGION }}
  BRANCH_REF: ${{ github.event.ref }}

steps:
  # Get the OKE credentials so we can deploy to the cluster
  - name: Configure Kubectl
    uses: oracle-actions/configure-kubectl-oke@v1.3.2
    id: test-configure-kubectl-oke-action
    with:
      cluster: ${{ secrets.OKE_CLUSTER_OCID }}

Using kustomize, a tool for customizing Kubernetes objects, we generate the YAML from the base file for creating containerized Oracle Databases, suffixing the resource name and namespace with the unique ID, and pipe the output directly into kubectl apply -f to create the resource.

# Set up kustomize
- name: Set up Kustomize
  run: |-
    curl -sfLo kustomize https://github.com/kubernetes-sigs/kustomize/releases/download/v3.1.0/kustomize_3.1.0_linux_amd64
    chmod u+x ./kustomize

# Run Kustomize and Deploy
- name: Deploy SIDB
  run: |-
    ./kustomize create --resources base/sidb-xe
    ./kustomize edit set namesuffix ${{ steps.getname.outputs.idname }}
    ./kustomize edit set namespace feature-${{ steps.getname.outputs.idname }}
    ./kustomize build . | kubectl apply -f -

The rest of the job waits for the single instance database to be fully provisioned and, once it is ready, retrieves its connect string (stored in the ipaddress output).

# Wait for database to get ready
- name: Wait for Database
  id: getready
  run: |-
    ./scripts/checkdb.sh ${{ steps.getname.outputs.idname }}

- name: Retrieve IP
  id: getaddress
  run: echo "ipaddress=$(kubectl get -n feature-${{ steps.getname.outputs.idname }} singleinstancedatabase db-${{ steps.getname.outputs.idname }} -o 'jsonpath={.status.pdbConnectString}')" >> "$GITHUB_OUTPUT"

The next job makes use of the liquibase-github-actions/update action from the official Liquibase repositories to update the database as the system user and create the schemas using the changelog file.

# Setup DB Schemas/Users
setup_users:
  # ...
  steps:
    # Run LB as system
    - name: "Run LB as system"
      uses: liquibase-github-actions/update@v4.23.2
      with:
        changelogFile: "changelog.xml"
        url: ${{ format('jdbc:oracle:thin:@//{0}', env.DB_IP) }}
        username: "system"
        password: ${{ env.DB_PWD }}
        searchPath: "/github/workspace/liquibase/admin"

Once the schemas are created, the final job of the workflow, setup_tables, goes through each schema and adds the tables and data required for development. Inside the liquibase directory of the repository, besides the admin directory, each schema has its own directory holding its changesets as SQL files. To create the necessary tables within each schema, the following step runs a Python script that lists the schema directories to loop through.

- name: list
  id: getschemas
  run: echo "schemas=$(python scripts/run_util.py schemas liquibase)" >> "$GITHUB_OUTPUT"

The schemas output value is retrieved in the setup_tables job and used as the variable for GitHub Actions' matrix strategy. With a matrix of schemas, we use Liquibase's update action once more to connect to each schema by proxy through the system user and run an update with each schema's changelog file.

# Setup Schema Tables and Lookup Table Data
setup_tables:
  # ...
  strategy:
    matrix: ${{ fromJson(needs.setup_users.outputs.schemas) }}

  steps:
    # ...
    - name: "Run LB as User"
      uses: liquibase-github-actions/update@v4.23.2
      with:
        changelogFile: "changelog.xml"
        url: ${{ format('jdbc:oracle:thin:@//{0}', env.DB_IP) }}
        username: ${{ format('system[{0}]', matrix.schemas) }}
        password: ${{ env.DB_PWD }}
        searchPath: ${{ format('/github/workspace/liquibase/{0}', matrix.schemas) }}
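
The system[schema] username relies on Oracle proxy authentication, so the admin changelog must have authorized system to proxy into each schema beforehand, along these lines (countries_schema is a hypothetical schema name):

-- Allow connecting as the schema user through the system account
ALTER USER countries_schema GRANT CONNECT THROUGH system;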

Once the database is fully provisioned and configured, you can change the default database password set by the workflow.

Cleaning Up with GitHub Actions

Once feature development is done and the isolated database is no longer needed, the delete-branch workflow cleans up the resources. It uses the same oracle-actions/configure-kubectl-oke action as before; the difference is that it is triggered when a feature branch is deleted and runs kubectl delete to terminate the resources created earlier.
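The trigger mirrors the one used for branch creation:

on:
  delete:
    branches:
      - 'feature/**'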

# Run kubectl commands to delete the database and its namespace
- name: Delete resources
  run: |-
    kubectl delete singleinstancedatabase db-${{ steps.getname.outputs.idname }} -n feature-${{ steps.getname.outputs.idname }}
    kubectl delete namespace feature-${{ steps.getname.outputs.idname }}

What is next?

This blog has shown basic integrations of the OraOperator and Liquibase to manage database lifecycles and schema changes when a feature branch is created or deleted. Depending on your use cases, you can certainly add more workflows for different events: for example, a pull_request workflow to run other Liquibase commands such as diff-changelog and tag, or a push to main to run an update against a higher database environment. In all cases, the same building blocks, interfaces (kubectl), commands and actions (liquibase-github-actions/update) may be used to provision, clean up and update Oracle databases.
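As a sketch of the push-to-main case (the TEST_DB_IP and TEST_DB_PWD secrets are hypothetical), the same update action could promote changes to a shared test database:

on:
  push:
    branches:
      - main

jobs:
  update_test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Apply the same changelog to the higher environment
      - name: "Run LB against the test database"
        uses: liquibase-github-actions/update@v4.23.2
        with:
          changelogFile: "changelog.xml"
          url: ${{ format('jdbc:oracle:thin:@//{0}', secrets.TEST_DB_IP) }}
          username: "system"
          password: ${{ secrets.TEST_DB_PWD }}
          searchPath: "/github/workspace/liquibase/admin"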

You can also use self-hosted GitHub Actions runners to pre-install the necessary software, avoiding the overhead of installing the tools in every job, or to reach a private cluster or databases in a private network.
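Switching a job to a self-hosted runner is a one-line change:

jobs:
  setup_users:
    # Runs on a registered self-hosted runner inside the private network
    runs-on: [self-hosted, linux]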

Integration with Jenkins

A working example integrating the Oracle DB Operator with Jenkins can be found in one of our LiveLabs workshops that you can try out for yourself. An upcoming version of that workshop will include integration with Liquibase. Meanwhile, another LiveLab already demonstrates a working integration of Liquibase with Jenkins, but does not include the OraOperator.

Oracle LiveLabs is a free platform where you can find a large collection of workshops to learn more about Oracle technologies and different integrations.

Note that the labs are configured to work with an Oracle Cloud account or a valid Oracle Cloud Free Tier account (30-day free trial).

Final Words

With the right tools, databases can be part of application development and deployment CI/CD pipelines. Using Liquibase and the Oracle Database Operator for Kubernetes (OraOperator), we can bring Oracle databases to Kubernetes, simplify database lifecycle management with an operator, and manage database schema changes.

Thank you for reading! If you have any feedback or questions, leave a comment below or chat with us on Discord! https://discord.gg/WQR5u5R5gG
