How to Manage Oracle PL/SQL Database Enterprise Edition CI/CD Deployment for 100+ System Identifiers (SIDs)

Abhinav korpal
Airtel Digital
Published Dec 5, 2022 · 8 min read

Introduction

At Airtel, we manage billing data for millions of customers every day using Oracle Database Enterprise Edition, and we use open-source Jenkins to perform CI/CD deployments for it. In our CI/CD deployments we deal with database objects such as tables, views, sequences, procedures, packages, and triggers, issuing CREATE, INSERT, UPDATE, DELETE, RENAME, ALTER, TRUNCATE, and DROP statements through Oracle's DML/DDL sub-languages. Through CI/CD we modify the object structures running in the database. Our Oracle Databases run on physical machines in the Airtel Data Center (DC). The whole implementation and algorithm described in this blog are written in Groovy and shell script, using SQL*Plus as the CLI.

Background

The approach described in this document has been used directly in deployments to hundreds of Oracle Database Enterprise Edition instances on the Airtel Private Cloud.

Who Should Read this Document

Any DevOps engineer, developer, OPS engineer, or SRE who deploys or manages Oracle Database Enterprise Edition on a private cloud.

Code Structure

We have kept the directory structure of our code repository the same as the object hierarchy in the Oracle SQL Developer user interface for all PL/SQL objects.

We keep the whole codebase in Bitbucket in the format depicted in the snapshot below.

For CI/CD deployment we use the SQL*Plus CLI from the build machine, which acts as the source during deployment.

Deployment System Design for Oracle Database CI/CD

Three inputs are supplied before triggering the Jenkins build: the Bitbucket branch on which the DB change or deploy activity is to be performed and validated; the JIRA ID for the task, which developers use as the comment on their Bitbucket commits; and the DB_SIDs, aliases of the database IPs, which are selected so that we know which databases the activity will run against.

Pipeline Flow

It has been divided into 3 components as below:

Jira Integration → Parent Pipeline Process → Child Pipeline Process

Jira Integration

The developer, TPM, and Release Manager control the deployment through components on the JIRA ticket, which mirror the Jenkins build parameters.

Parent Pipeline Process

Process A (Validate JIRA_ID): Checks whether the given JIRA_ID is accepted for deployment. The pipeline supports and checks only certain specific projects.

Process B (Checkout for JIRA_ID): Clones the Bitbucket code for that project using the branch given in the Jenkins Build with Parameters, then checks out only the specific changes mapped to the JIRA_ID that the developer provided in the commit comments for that change.
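A minimal sketch of this commit-to-JIRA mapping in shell, assuming (as the post describes) that developers put the JIRA ID in every commit message; the function name and output format are illustrative, not the actual Airtel code:

```shell
#!/bin/sh
# Sketch: list the files touched by commits whose message mentions a JIRA ID.
# Assumes the repository is already cloned on the chosen branch.

jira_changed_files() {
  jira_id="$1"
  # --grep filters commits by message; --name-only prints the touched paths;
  # an empty --pretty format suppresses the commit header lines.
  git log --grep="$jira_id" --name-only --pretty=format: | sort -u | sed '/^$/d'
}
```

The resulting file list is what the pipeline carries forward as "the changes for this JIRA_ID".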

Process C (Upload Artifacts to Nexus): All the code checked out for the given JIRA_ID now sits in one workspace. That code is bundled and pushed to Nexus, giving durable storage of the checked-out code and single-place visibility of all the committed changes being deployed.
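A hedged sketch of such a bundle-and-upload step. The Nexus URL, the raw repository name db-artifacts, and the path layout are assumptions for illustration, not the actual Airtel setup; credentials would come from the Jenkins credential store:

```shell
#!/bin/sh
# Sketch: tar the checked-out workspace and PUT it to a raw Nexus repository.

nexus_artifact_url() {
  # Build the upload path: <nexus>/repository/db-artifacts/<jira_id>/<build_id>.tar.gz
  printf '%s/repository/db-artifacts/%s/%s.tar.gz' "$1" "$2" "$3"
}

upload_bundle() {
  nexus_url="$1"; jira_id="$2"; build_id="$3"; workspace="$4"
  tar -czf "/tmp/${jira_id}-${build_id}.tar.gz" -C "$workspace" .
  # -u sends user:password; --upload-file PUTs the bundle to the computed path.
  curl -sf -u "$NEXUS_USER:$NEXUS_PASS" \
       --upload-file "/tmp/${jira_id}-${build_id}.tar.gz" \
       "$(nexus_artifact_url "$nexus_url" "$jira_id" "$build_id")"
}
```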

Process D (Finding DB_SIDs for DB Script Execution): If the DB_SIDS Jenkins parameter is not selected, the pipeline works out all the databases (DB_SIDs) applicable to the checked-out code from Bitbucket; in this scenario, the changes are replicated or deployed to every database that supports them. The selection depends on the PL/SQL business type, i.e., whether the change is for ADMIN, CATALOG, CUSTOMER, etc.

Similarly, if the DB_SID build parameter is selected, the checked-out code changes or PL/SQL objects are deployed only to those selected databases (DB_SIDs).
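The business-type-to-DB_SID resolution can be sketched as a simple lookup. The sid_map.csv format (business_type,sid per line) and the function name are assumptions used for illustration:

```shell
#!/bin/sh
# Sketch: resolve which DB_SIDs a business type maps to when the user
# left the DB_SIDS build parameter empty.

sids_for_business_type() {
  btype="$1"; map_file="$2"
  # Print every SID whose first column matches the business type, comma-joined.
  awk -F, -v t="$btype" '$1 == t { print $2 }' "$map_file" | paste -sd, -
}
```

The pipeline would then iterate over the returned comma-separated list exactly as it does for a user-selected DB_SIDS value.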

Process E (Deploy of DB Objects): Another pipeline, CHILD_PIPELINE, is invoked in this stage to provision the PL/SQL on the chosen DB_SIDs. This stage runs until the CHILD_PIPELINE Jenkins pipeline has fully executed.

From Process E we invoke CHILD_PIPELINE, which deploys the PL/SQL objects to all user-selected or dynamically selected databases (DB_SIDs) sequentially.

Child Pipeline Process

It first calculates the number of database IPs (DB_SIDs) to be deployed to.

This pipeline is written entirely in shell script and acts as the child of the parent pipeline.

Build Parameters of Child Pipeline inherited from Parent Pipeline

i) db_user — common user for all DB_SIDs (Oracle Databases), used to log in to the database and execute the PL/SQL scripts.

ii) db_pwd — common password for all DB_SIDs (Oracle Databases), used to log in to the database and execute the PL/SQL scripts.

[Note: we keep all passwords masked and in a vault.]

iii) db_port — common port for all DB_SIDs (Oracle Databases), used to log in to the database.

iv) db_sid — contains all the DB_SIDs (database aliases) in comma-separated format, as used in the pipelines. Each DB_SID is taken one by one and the PL/SQL script is executed against it.

v) jira_id — used to find the defect name, which serves as a file path or name in this pipeline. These files store the previous state of the PL/SQL objects taken into consideration for the current deployment, and those artifacts are pushed to Nexus.

vi) PARENT_BUILD_WORKSPACE — used to enter the parent pipeline workspace, i.e., PARENT_BUILD_WORKSPACE, where the deployable code is present.

The logic behind this Child Pipeline

i) Enter the parent pipeline's build workspace, i.e., PARENT_BUILD_WORKSPACE.

ii) Loop over each fix module where the business-logic PL/SQL scripts are present:

for {

a) Copy the module into the CHILD_PIPELINE workspace.

b) Start an inner loop over each DB_SID fetched from the db_sid build parameter:

for {

a) Find the database IP or host mapped to the DB_SID and store it in a variable, db_host.

b) Find all the supported PL/SQL object file paths going into the current deployment and store them in a file, ack_files.txt.

c) Check which PL/SQL business type the chosen DB_SID belongs to (CUSTOMER, etc.). Based on this, the pipeline keeps only the script file paths supported by that type, storing them in separate files, ${Business_Type_Alias}_scripts.txt.

d) With the per-DB_SID script lists in place, execution starts sequentially in a fixed order: tables, queues, types, and sequences are executed first in a linear way, followed by the rest of the PL/SQL objects.

e) The step above runs via the SQL*Plus CLI, where all the build parameters mentioned earlier are used for login and execution.

f) After all the PL/SQL script execution completes, the pipeline moves to recompilation: if any object is left in a compilation-error state, its script is executed again at the end to fix the compilation issue. This is checked using Oracle's dba_objects view, which holds every object's current state.

g) After recompilation, the pipeline stores its results (JOB_NAME, BUILD_ID, OBJECT_NAME, OBJECT_TYPE, CREATE_DATE, STATUS, DEPL_STATUS, DB_NAME, etc.) in a DevOps monitoring database, jenkins_release_reg, which takes its values from Oracle's dba_objects view.

h) Finally, it validates the count of successfully deployed PL/SQL objects stored in jenkins_release_reg against the total number of files checked out for deployment. If the counts match, the deployment is considered successful; otherwise it is a failure.

}

}

iii) Step ii above keeps running until all modules are covered.
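The inner-loop ordering and execution steps can be sketched in shell as follows. The directory naming convention (TABLE/, QUEUE/, etc.) and the connection variables are assumptions for illustration, not the actual Airtel layout:

```shell
#!/bin/sh
# Sketch of steps d) and e): order the scripts so tables, queues, types, and
# sequences run first, then execute each one through SQL*Plus.

deploy_order() {
  # Rank each path by object type, sort by rank, then strip the rank prefix.
  while IFS= read -r f; do
    case "$f" in
      */TABLE/*)    echo "1 $f" ;;
      */QUEUE/*)    echo "2 $f" ;;
      */TYPE/*)     echo "3 $f" ;;
      */SEQUENCE/*) echo "4 $f" ;;
      *)            echo "5 $f" ;;
    esac
  done | sort -n | cut -d' ' -f2-
}

run_script() {
  # One SQL*Plus session per script; WHENEVER SQLERROR surfaces failures to
  # the pipeline as a non-zero exit code. db_user/db_pwd/db_port/db_sid are
  # the build parameters described earlier.
  db_host="$1"; script="$2"
  sqlplus -s "$db_user/$db_pwd@//$db_host:$db_port/$db_sid" <<EOF
WHENEVER SQLERROR EXIT SQL.SQLCODE
@$script
EXIT
EOF
}
```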

Finally, the child process returns a status value to its parent. When the child pipeline finishes executing its final statement, it terminates via an exit system call; at that point it returns a status value (typically an integer) to the parent pipeline.
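A minimal sketch of this status hand-off, tying the final count check to the exit code the parent reads; the function name and variables are illustrative:

```shell
#!/bin/sh
# Sketch: the child succeeds only when every checked-out script was deployed;
# the parent reads the exit status to mark the build SUCCESS or FAILURE.

child_step() {
  deployed="$1"; expected="$2"
  # Mirror the final count validation: non-zero exit on any mismatch.
  [ "$deployed" -eq "$expected" ]
}

if child_step 10 10; then parent_status=SUCCESS; else parent_status=FAILURE; fi
```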

The below diagram depicts the Actual Functionality of the Process:

Each machine takes a few seconds to receive, commit, and compile the data, so we have arranged our hosts in sequential order. Each object a machine receives is immediately moved on to the next machine in the ring.

Blue/Green Deployment

In our Blue/Green deployment strategy we maintain separate but identical environments. In plain terms, with hundreds of servers, you can decide on which server you want to test and run the code first.

Termination can occur in other circumstances as well:

- DB connectivity issues due to a password mismatch or expiry. [Note: we keep all passwords masked and in a vault.]

- Database queries get stuck when the database is locked from the back end while it is busy executing; in this case a default 10-minute timeout kicks in.

- Wrongly written queries for PL/SQL objects, due to which the SQL*Plus CLI throws an error/exception and terminates the pipeline process.

- Some PL/SQL objects (committed changes on Bitbucket) deploy successfully and some do not. This causes a mismatch between the count in the DevOps-owned Jenkins build database and the number of deployable scripts.
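Two of these guards can be sketched together: a hard timeout around the SQL*Plus call so a locked database cannot hang the build, and error-exit so a bad query terminates the pipeline. The 600-second value matches the default 10-minute timeout described above; the connection variables are assumptions:

```shell
#!/bin/sh
# Sketch: wrap the SQL*Plus execution in timeout(1); it kills the command and
# returns exit code 124 when the limit is hit, which the pipeline can report.

run_with_timeout() {
  timeout 600 sqlplus -s "$db_user/$db_pwd@//$db_host:$db_port/$db_sid" <<EOF
WHENEVER SQLERROR EXIT FAILURE
@$1
EXIT
EOF
  rc=$?
  [ "$rc" -eq 124 ] && echo "Timed out after 10 minutes: $1" >&2
  return "$rc"
}
```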

The parent pipeline may terminate the execution of the child pipeline for a variety of reasons, such as:

- Nexus Repository is down due to which artifacts are not uploaded successfully.

- Wrong DB_SID selection in the parameters.

- The developer missed committing their change with the proper JIRA ID, which leads to failure.

Future System Design Plans

The work of a DevOps team is never complete. We are brainstorming rollback strategies for the deployment process implemented above, as we are dealing with huge amounts of data across hundreds of databases.

The Team Building Great Things Together

We focused on expanding our use-case footprint. This wouldn't be possible without the hard work and great contributions of the team, building impactful systems that move our business forward and bring DevOps engineering solutions.

Special Thanks to the members of various teams: Nathan Phipps and Anubhav Yadav

Author of this whole implementation and DevOps Practices/System Design For DevOps: Abhinav korpal

Special Thanks and credits should go to stunning colleagues for direct collaboration: Gaurav Walecha, Anshika Mishra, Praveen, Arzaw Tiwari
