Supercharged Automation: An Introduction to Dynamic CI/CD for GCP with Jenkins Pipeline

Venkatesh R
Niveus Solutions
6 min read · May 14, 2024


Dynamic CI/CD: Streamlining Delivery with Intelligent Automation

Continuous Integration and Continuous Delivery (CI/CD) has become the cornerstone of modern software development. While traditional CI/CD pipelines offer automation, they can become rigid as projects evolve. This is where Dynamic CI/CD steps in, bringing intelligence and adaptability to the process.

What is Dynamic CI/CD?

Dynamic CI/CD pipelines leverage code and scripting languages alongside static configurations. This allows workflows to adapt based on factors such as the following (see the sketch after the list):

  • Branch: Different tests or deployments might be needed for development, staging, and production branches.
  • Code changes: Specific changes can trigger targeted builds or deployments.
  • Environment: Resource allocation or testing steps can be adjusted for different environments.
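As a concrete example, consider a branch-aware deployment stage in a Jenkins declarative pipeline. This is a minimal sketch: in a multibranch pipeline, env.BRANCH_NAME exposes the branch being built, and the deploy.sh script and environment names are hypothetical placeholders.

stage('Deploy') {
    steps {
        script {
            // Pick the target environment from the branch being built
            if (env.BRANCH_NAME == 'main') {
                sh './deploy.sh production'    // hypothetical deploy script
            } else if (env.BRANCH_NAME == 'staging') {
                sh './deploy.sh staging'
            } else {
                sh './deploy.sh development'
            }
        }
    }
}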

Benefits of Dynamic CI/CD

  • Reduced Costs: Dynamic pipelines eliminate the need for managing multiple static configurations. This saves development time and reduces maintenance overhead.
  • Faster Delivery: By automating decisions based on context, dynamic pipelines streamline the CI/CD process, leading to faster deployments.
  • Increased Flexibility: Dynamic pipelines can adapt to changing project requirements without manual intervention.

Cost Analysis: Static vs. Dynamic CI/CD

Compared with maintaining a separate static configuration for every environment or branch, a single dynamic pipeline cuts configuration duplication and maintenance overhead. By embracing Dynamic CI/CD, development teams can achieve significant cost savings, accelerate delivery cycles, and maintain a highly adaptable development process.

In the fast lane of modern development, manual infrastructure management is a roadblock. Dynamic CI/CD pipelines come to the rescue, automating resource creation, deployments, and even teardown, all without human intervention. This blog post takes a deep dive into building such a pipeline for Google Cloud Platform (GCP), encompassing GCE instances, GKE clusters, Cloud SQL databases, and data loading from GCS storage. We’ll also provide a comprehensive Proof of Concept (POC) with a detailed Jenkinsfile, covering common scenarios and stage breakdowns.

The Power of Dynamic CI/CD

  • Reduced Manual Work: Say goodbye to manual infrastructure provisioning and configuration, freeing your team for innovation.
  • Enhanced Efficiency: Streamline deployments and testing, accelerating development lifecycles.
  • Improved Reliability: Automate configurations, minimizing human error and inconsistencies.
  • Unmatched Scalability: Effortlessly handle changes in resource requirements as your project grows.

Building a Robust Pipeline

Here’s a breakdown of the pipeline’s core components (a skeleton Jenkinsfile tying them together follows this list):

  1. Version Control System (VCS): The foundation — store your code and infrastructure configurations (e.g., Terraform scripts) in a VCS like Git. Here, collaboration and version history thrive.
  2. CI/CD Tool: Choose your weapon — Cloud Build or a third-party solution like Jenkins. This is the conductor of your automation orchestra.
  3. Infrastructure Provisioning: Leverage tools like Terraform or Cloud Deployment Manager scripts. These scripts create and manage GCE instances, GKE clusters, and Cloud SQL databases based on configurations residing in your VCS repository.
  4. Containerization and Deployment: Embrace containerization with Docker to build container images for your application. Powerful tools like kubectl then handle deployment to your GKE cluster.
  5. Database Schema Management: Automate database schema migrations using tools like Flyway or Liquibase. These tools ensure your database schema stays in sync with your code.
  6. Data Loading: Access and load data from GCS storage buckets into your Cloud SQL database using SQL commands or the Cloud SQL Admin API. Streamline data movement for a seamless flow.
  7. Testing Integration: Integrate automated testing frameworks like JUnit or Selenium to test your application within the pipeline. This ensures quality throughout the development process.
  8. Optional Teardown: For ultimate resource optimization, configure the pipeline to automatically delete resources (GCE instances, GKE clusters, Cloud SQL) after successful testing or deployment.
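To make these components concrete before diving into the POC, here is a minimal, runnable Jenkinsfile skeleton with each stage stubbed out by an echo placeholder; the full stage implementations follow below.

pipeline {
    agent any
    stages {
        stage('Clone Repository')                     { steps { echo 'Checkout code and configs from VCS' } }
        stage('Terraform - Provision Infrastructure') { steps { echo 'terraform init and apply' } }
        stage('Build Docker Images')                  { steps { echo 'docker build' } }
        stage('Deploy to GKE')                        { steps { echo 'kubectl apply' } }
        stage('Database Migrations (Flyway)')         { steps { echo 'flyway migrate' } }
        stage('Data Loading (GCS to Cloud SQL)')      { steps { echo 'gsutil cp + mysql import' } }
        stage('Run Tests')                            { steps { echo 'mvn test' } }
        stage('Optional Teardown')                    { steps { echo 'terraform destroy' } }
    }
}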

POC with Jenkinsfile: Orchestrating Automation on GCP

Let’s leverage the power of Jenkins with a comprehensive Jenkinsfile to create a dynamic CI/CD pipeline for a real-world scenario. We’ll assume you’re building a microservices application and utilizing the “Building a Serverless Recommendation Engine” lab from GCP Cloud Skill Labs as a reference.

Here’s the breakdown of the Jenkinsfile stages:

Stage 1: Clone Repository

stage('Clone Repository') {
    steps {
        git branch: 'main',
            credentialsId: 'your-git-credentials-id',
            url: 'https://github.com/your-organization/your-repo.git'
    }
}

This stage clones your code repository from a version control system like GitHub. Replace placeholders such as your-git-credentials-id and https://github.com/your-organization/your-repo.git with your specific details.

Stage 2: Terraform - Provision Infrastructure (GCE, GKE, Cloud SQL)

stage('Terraform - Provision Infrastructure') {
    steps {
        sh 'terraform init'
        sh 'terraform apply -auto-approve'
    }
}

This stage utilizes Terraform to provision the required GCP infrastructure. The sh step executes shell commands within the Jenkins environment: the first command initializes the Terraform working directory, and the second applies the infrastructure configuration defined in your Terraform scripts.
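To make this stage environment-aware, one option is Terraform workspaces. The following is a sketch of a variant of the same stage; it assumes a TARGET_ENV environment variable (for example, derived from the branch name) that selects which workspace, and therefore which state, to use.

stage('Terraform - Provision Infrastructure') {
    steps {
        sh '''
            terraform init -input=false
            # Reuse the workspace for this environment, or create it on the first run
            terraform workspace select "$TARGET_ENV" || terraform workspace new "$TARGET_ENV"
            terraform plan -out=tfplan -input=false
            terraform apply -input=false tfplan
        '''
    }
}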

Stage 3: Build Docker Images

stage('Build Docker Images') {
    steps {
        sh 'docker build -t gcr.io/your-project-id/recommendation-engine:latest .'
        // Add similar steps for other microservices if applicable
    }
}

This stage builds Docker images for your application (recommendation engine in this example) using the Dockerfile located in your repository. Replace gcr.io/your-project-id/recommendation-engine:latest with the appropriate image name for your project. Similar steps can be added to build images for other microservices in your application.
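Note that GKE can only pull images that have been pushed to a registry; building alone is not enough. Here is a minimal sketch of the push step, assuming gcloud is installed on the Jenkins agent and registered as Docker's credential helper:

stage('Push Docker Images') {
    steps {
        sh '''
            # Let Docker authenticate to gcr.io through gcloud
            gcloud auth configure-docker --quiet
            docker push gcr.io/your-project-id/recommendation-engine:latest
        '''
    }
}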

Stage 4: Deploy to GKE

stage('Deploy to GKE') {
    steps {
        withCredentials([file(credentialsId: 'your-gcp-service-account-key', variable: 'GC_KEY')]) {
            sh '''
                # Authenticate with a service-account key file stored in Jenkins
                gcloud auth activate-service-account --key-file="$GC_KEY"
                # Fetch kubeconfig credentials for the target cluster
                gcloud container clusters get-credentials your-cluster-name --zone your-zone --project your-project-id
                kubectl apply -f deployment.yaml
                kubectl apply -f service.yaml
                # Add similar steps for deployments of other microservices if applicable
            '''
        }
    }
}

This stage deploys the container images to your GKE cluster. It retrieves a service-account key file stored in Jenkins using the withCredentials block; gcloud has no non-interactive username/password login, so a key file is used instead of a username/password pair. Replace your-gcp-service-account-key with the ID of your file credential, and your-cluster-name, your-zone, and your-project-id with your cluster details. The script authenticates with gcloud auth activate-service-account, fetches cluster credentials with gcloud container clusters get-credentials, and then deploys the application deployment and service YAML configurations (replace deployment.yaml and service.yaml with your actual file names) using kubectl. Similar steps can be added to deploy configurations for other microservices.
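As a quick post-deployment sanity check, you can block until the rollout finishes. The deployment name recommendation-engine below is an assumption based on the image name used earlier; substitute the name from your deployment.yaml.

stage('Verify Rollout') {
    steps {
        // Fail the build if the rollout does not complete within the timeout
        sh 'kubectl rollout status deployment/recommendation-engine --timeout=120s'
    }
}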

Stage 5: Database Migrations (Flyway) (Optional)

stage('Database Migrations (Flyway)') {
    steps {
        sh '''
            flyway -user=your_db_user -password=your_db_password -url="jdbc:mysql://<your_cloud_sql_instance_ip>/<your_database_name>" migrate
        '''
    }
}

This stage utilizes Flyway to manage database schema migrations. The script invokes the flyway command with -user, -password, and -url arguments in Flyway's -key=value format (replace the placeholders with your details), followed by the migrate command, which applies any pending schema changes.
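Hardcoding database credentials in the Jenkinsfile is risky. A safer sketch of the same stage pulls them from a Jenkins username/password credential; the ID your-db-credentials-id is a hypothetical placeholder.

stage('Database Migrations (Flyway)') {
    steps {
        withCredentials([usernamePassword(credentialsId: 'your-db-credentials-id', usernameVariable: 'DB_USER', passwordVariable: 'DB_PASS')]) {
            // Groovy single quotes: the shell, not Jenkins, expands the secret variables
            sh 'flyway -user="$DB_USER" -password="$DB_PASS" -url="jdbc:mysql://<your_cloud_sql_instance_ip>/<your_database_name>" migrate'
        }
    }
}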

Stage 6: Data Loading (GCS to Cloud SQL)

stage('Data Loading (GCS to Cloud SQL)') {
    steps {
        sh '''
            gsutil -m cp gs://your-gcs-bucket/data.sql /tmp/data.sql
            mysql -h <your_cloud_sql_instance_ip> -u your_db_user -p'your_db_password' <your_database_name> < /tmp/data.sql
        '''
    }
}

This stage loads data from a GCS bucket into your Cloud SQL database. The script uses gsutil to copy the data dump file (data.sql) from your GCS bucket to a temporary location on the Jenkins agent. It then utilizes the mysql command to connect to your Cloud SQL database and execute the SQL statements from the data dump file. Remember to replace placeholders with your specific details.
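Alternatively, Cloud SQL can import the dump directly from GCS without staging it on the Jenkins agent, using gcloud sql import sql. In this sketch the instance, bucket, and database names are placeholders, and the Cloud SQL instance's service account must have read access to the bucket.

stage('Data Loading (GCS to Cloud SQL)') {
    steps {
        // Server-side import: Cloud SQL reads the dump straight from GCS
        sh 'gcloud sql import sql your-instance-name gs://your-gcs-bucket/data.sql --database=your_database_name --quiet'
    }
}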

Stage 7: Run Tests

stage('Run Tests') {
    steps {
        sh 'mvn test' // Replace with your test runner command if using a different framework
    }
}

This stage executes automated tests using a testing framework like JUnit (the example uses mvn test for Maven projects). This ensures your application functions as expected after deployment.
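To surface results in the Jenkins UI, you can publish the JUnit XML reports with the junit step. This sketch assumes the JUnit plugin is installed and Maven's default surefire report location:

stage('Run Tests') {
    steps {
        sh 'mvn test'
    }
    post {
        always {
            // Publish test results even if some tests failed
            junit 'target/surefire-reports/*.xml'
        }
    }
}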

Stage 8: Optional Teardown

stage('Optional Teardown (After Successful Tests)') {
    when {
        // Only run if the previous stages succeeded
        expression { currentBuild.currentResult == 'SUCCESS' }
    }
    steps {
        sh 'terraform destroy -auto-approve'
    }
}

This optional stage demonstrates tearing down the provisioned infrastructure after successful testing. It checks currentBuild.currentResult in a when block so the destroy only runs when the previous stages succeeded, then uses the terraform destroy command with -auto-approve to remove the resources created in the Terraform - Provision Infrastructure stage. Note that this is an optional stage and should be used with caution in production environments.
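Where an unattended destroy feels too aggressive, a common guard is a manual approval gate using the input step, as in this sketch of the same stage:

stage('Optional Teardown (After Successful Tests)') {
    when {
        expression { currentBuild.currentResult == 'SUCCESS' }
    }
    steps {
        // Pause the pipeline until a human confirms the teardown
        input message: 'Tear down the provisioned infrastructure?'
        sh 'terraform destroy -auto-approve'
    }
}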

Remember: This is a comprehensive example, and you may need to adapt the Jenkinsfile to your specific project requirements, tools, testing frameworks, and security best practices.

This POC with a detailed Jenkinsfile showcases how dynamic CI/CD can be achieved for various scenarios on GCP. By combining infrastructure as code with Terraform, containerization with Docker, and automated database management, you can establish a robust, fully automated deployment pipeline.

Remember to secure your credentials and configure access controls appropriately for production environments. Embrace dynamic CI/CD and empower your development process with automation!
