Our Custom-Built DevOps Tools Enable Us to Deploy Code in Production in Just Two Clicks!

Jevgeni Demidov
Published in Pipedrive R&D Blog
8 min read · Sep 2, 2021

Introduction

Pipedrive provides SaaS CRM software to over 98,000 companies across the globe. We deploy up to 350 times per day to deliver new features to customers on time.

To achieve such velocity, we’ve created custom-built tools with process visibility at their center: an in-house deployment orchestrator (Rakett), a unified repository view (Single Point of Truth) and a mission tracking tool that holds us accountable.

And guess what? Using these tools, it takes us two clicks and 15 minutes to test and push changes to production. In this article (and a set of upcoming ones), we will spill all our secrets: from our deployment process and testing methodology to our custom-built DevOps tools. Let’s dive in!

How Do We Deploy Code to Production in Just 15 Minutes?

What’s in Our Tech Stack?

Before we dive into our deployment and testing process, let’s have a look at our tech stack:

At our core, we use a microservice architectural pattern. Why? Well, it increases our deployment speed while allowing us to deliver failure-resistant services that can be scaled independently. To support this, our codebase is divided across 1000+ repositories, and our product consists of over 500 services.

To build and ship our services, we use Docker. We use Docker images at every step, whether on developers’ local machines or in our production environment.

Speaking of our CI/CD pipeline, we use GitHub as SCM, Jenkins as our CI/CD tool and Codeship to perform code quality checks. To further power our deployment pipeline, we use our in-house deployment orchestrator — Rakett, and a custom framework for running pipelines in Jenkins (check out how we built our own CI/CD framework from scratch).

Most importantly, our DevOps Tooling team supports our engineering team by providing:

  • Full-blown CI/CD for testing, development and deployment
  • Local and remote development environments
  • On-demand sandbox environments
  • Testing frameworks
  • On-call-related tools
  • Tools for monitoring and alerting

How Do We Test and Deploy at Pipedrive?

Before we dive into our deployment process, here’s what you should know:

  • At Pipedrive, developers test their code changes themselves
  • We don’t have dedicated testers. Yes, you heard that right! Our testing process is fully automated
  • If you want to learn how we maintain quality without a dedicated testing department, you can find more insights here

You might wonder — how do we test and deploy without employing any dedicated testers?

Here’s how:

  1. First, developers make code changes and push commits to GitHub, where basic linting tests and unit tests are automatically triggered for each commit.
  2. Next, developers add a “test-in-sandbox” label to the GitHub pull request. This deploys the development branch to an isolated on-demand sandbox environment and triggers automated integration tests.
  3. After the code changes pass the integration tests, developers push the changes toward production by adding a “ready-for-deploy” label to the GitHub pull request. This label triggers our in-house orchestration tool to deploy the code changes first to the test environment and then to production (see the sketch after this list).
  4. As soon as the “ready-for-deploy” label is added on GitHub, a Docker image is created and deployed to the test environment, where we run automated smoke tests for regression testing.
  5. Once the container image passes the regression tests, the CI/CD pipeline merges the code changes into the master branch and deploys the container image to production environments across several data centers.
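
To make the label trigger in step 3 concrete, here is a minimal sketch, in TypeScript with Express, of how a service could react to the GitHub webhook that fires when a label is added to a pull request. This is our own illustration rather than Pipedrive’s actual implementation, and startDeployment is a hypothetical placeholder for handing the PR over to the orchestrator:

    // Minimal sketch of a label-triggered deploy hook (illustration only).
    // GitHub sends a "pull_request" webhook with action "labeled" whenever
    // a label is added to a PR.
    import express from "express";

    const app = express();
    app.use(express.json());

    app.post("/github-webhook", (req, res) => {
      const { action, label, pull_request } = req.body;
      if (action === "labeled" && label?.name === "ready-for-deploy") {
        // Hand the PR over to the deployment orchestrator (hypothetical helper).
        startDeployment(pull_request.number, pull_request.head.ref);
      }
      res.sendStatus(204);
    });

    function startDeployment(prNumber: number, branch: string): void {
      console.log(`Queueing deployment for PR #${prNumber} (${branch})`);
    }

    app.listen(3000);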

Guess what? This entire chain, from untested code changes to working code in production, requires only two clicks and takes around 15 minutes. All thanks to our efficient workflow and our custom-built DevOps tooling.

Wait, what if something goes wrong during testing? What happens then?

In our organization, each engineering team independently maintains the services they develop.

If some errors pop up while testing, all alerts and issues are forwarded to the on-call person of the engineering team who owns the service. Upon receiving the alerts, the assigned engineer solves the issues or delegates them to someone else.

Fixing the error includes writing a “blameless postmortem”, which consists of a high-level summary, a root cause analysis and steps on how to prevent that specific problem from recurring.

Now that you know our deployment and testing process, let’s discuss our in-house DevOps tools.

What Tools Has Our DevOps Tooling Team Developed?

Rakett – The Deployment ROCKET

Having been in the DevOps space for over five years, we know for a fact that giving developers complete visibility is the key to fast-paced deployments. Since there was no out-of-the-box solution for this, we custom-built an orchestrator called Rakett (“rocket” in Estonian).

What Is Rakett?

Rakett is a deployment orchestrator tool built on top of Jenkins that manages and speeds up automated deployments.

It comes into the picture in the final stage of our deployment cycle.

As discussed, Rakett builds a Docker image from the changes headed to production, deploys it to a test environment and executes tests to validate it. Once the validation is complete, Rakett merges the PR and deploys the image to our production environment.
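
To illustrate this order of operations, here’s a hedged TypeScript sketch of the sequence an orchestrator like Rakett automates. Every function, stub and data-center name below is a placeholder of ours, not Rakett’s actual API:

    // Stubbed steps; real versions would drive Jenkins, a Docker registry,
    // GitHub and the production clusters.
    interface PullRequest { number: number; branch: string; }

    const buildDockerImage = async (branch: string) => `registry/app:${branch}`;
    const deployTo = async (image: string, env: string) => { console.log(`${image} -> ${env}`); };
    const runSmokeTests = async (_image: string) => true;
    const mergePullRequest = async (pr: PullRequest) => { console.log(`merged #${pr.number}`); };

    async function orchestrateDeploy(pr: PullRequest): Promise<void> {
      const image = await buildDockerImage(pr.branch); // 1. build the candidate image
      await deployTo(image, "test");                   // 2. roll it out to the test environment
      if (!(await runSmokeTests(image))) return;       // 3. stop if validation fails
      await mergePullRequest(pr);                      // 4. merge the PR only after validation
      for (const dc of ["dc-eu", "dc-us"]) {           // 5. ship to several data centers
        await deployTo(image, dc);
      }
    }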

Essentially, Rakett puts the engineering team in the driver’s seat, enabling them to deliver changes to end customers effectively. Most importantly, Rakett displays detailed progress reports across various CI/CD pipelines, helping our engineers remain accountable for the tasks they complete.

Rakett Key Features

  • Visualizes development targets and their history
  • Rolls back changes for broken components without any hassle
  • Pauses/releases components as required
  • Provides accurate time estimates for deploying changes
  • Simplifies navigation between deployers, pull requests, dashboards and other deployment artifacts
  • Provides detailed insights about every deployment
  • Highlights deployment failures and exposes links to logs and failure reasons

Rakett Advantages

  • No repetitive manual steps. Rakett delivers code in a straightforward manner
  • Rakett leaves no room for errors as it thoroughly checks and verifies every step
  • Managing deployments is a breeze with Rakett. All thanks to its easy-to-use interface and detailed progress reports

Single Point of Truth (SPoT) — Bringing All Repositories Together in One Place

At Pipedrive, we follow a microservice architecture to deliver quality applications at a comfortable speed.

Our codebase is divided into over 1000 repositories, while our product consists of over 500 services. Using these repositories and CI/CD pipelines, we deploy up to 350 times per day.

To manage such a massive repository collection and provide more clarity to our engineers, we developed the “Single Point of Truth”.

What Is a Single Point of Truth?

Written in NodeJS, Single Point of Truth is a web service that aggregates information about Pipedrive from several sources, such as GitHub, Kubernetes and SonarQube. The major highlight is that it stores daily snapshots in ArangoDB and displays them on an easy-to-read timeline, helping our engineers make better data-driven decisions.
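
As a rough illustration of the snapshot idea, here’s what writing one repository’s daily snapshot to ArangoDB could look like with the arangojs driver. The collection name and document shape are our assumptions; the only given is that SPoT stores daily snapshots in ArangoDB:

    // Sketch: persisting one repository's daily snapshot (assumed schema).
    import { Database } from "arangojs";

    const db = new Database({ url: "http://localhost:8529", databaseName: "spot" });
    const snapshots = db.collection("daily_snapshots"); // assumes the collection exists

    async function storeSnapshot(repo: string): Promise<void> {
      await snapshots.save({
        repo,
        date: new Date().toISOString().slice(0, 10), // one snapshot per day
        sonarqube: { qualityGate: "OK", coverage: 87.5, linesOfCode: 12400 },
        kubernetes: { replicaSets: 2, pods: 6 },
        dependencies: { outdated: ["left-pad@1.1.0"] },
      });
    }

A timeline view is then just a query over these documents, ordered by date.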

What Information Does Single Point of Truth Store?

  • General — Repository owner, type, status, etc.
  • SonarQube — Quality gate status, unit test coverage and lines of code
  • Dependencies — External, internal, outdated packages
  • Deployment — Versions, active regions, links to logs, etc.
  • Kubernetes — ReplicaSets, pods, datacenters, actions with resources
  • Other sources — Our dashboard also stores data such as relationships between services, as well as information from sources like Confluence, Consul and other data storages

Advantages of Single Point of Truth

  • Stores linked data for services, Kubernetes resources, software dependencies and dependencies between Pipedrive microservices
  • Gathers all data in one place, making it easier to provide data to consumers using API or UI
  • The dashboard tracks several metrics, thereby helping us make data-driven decisions
  • Makes engineering onboarding a breeze for newcomers

Mission Tracking Tool — Organizing Business Goals

Before we dive into our Mission Tracking Tool, let’s first understand our product development team hierarchy.

At Pipedrive, product development is assigned to engineering Tribes that hold expert knowledge of their product area. Each Tribe is led by an Engineering Manager and has up to 20 software engineers, among other roles.

Tribes are further divided into smaller Mission Teams, each of which is assigned a single business goal (mission). A Mission Team consists of a Product Manager, Designer and several developers.

This detailed division of labor has helped us complete over 500 engineering missions and counting. Nonetheless, our missions lacked visibility and were unstructured.

To address this and take our product development a level higher, we developed a Mission Tracking Tool (MTT).

What Is Our Mission Tracking Tool (MTT)?

As its name suggests, the Mission Tracking Tool keeps close tabs on all our engineering missions, increasing visibility into our business goals. MTT highlights each mission’s purpose, helps launch missions and displays what each Tribe is working on.

Most importantly, MTT gamifies missions using achievement badges.

How Does It Work?

MTT calculates and shows engineer availability, upcoming mission requirements and the staffing needed for missions. In short, it helps us make organizational decisions.

To compile this information, MTT gathers data from the following tools (a sketch of the aggregation follows the list):

  • Bamboo — For listing tribe members and other specifications
  • Confluence — For mission list and other related data
  • OpsGenie — For scheduling on-call duty
  • Airtable — For product vision, impact and goal visualization
  • 7Geese — For objectives and related results
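
As a hedged sketch of the kind of aggregation MTT performs, the snippet below combines people data, on-call schedules and mission assignments into a single availability view. All fetch functions and field names are hypothetical placeholders for the integrations listed above:

    interface Engineer { name: string; tribe: string; }

    // Stubbed integrations; real versions would call the tools listed above.
    const fetchTribeMembers = async (): Promise<Engineer[]> =>
      [{ name: "Mari", tribe: "Core" }];                         // from Bamboo
    const fetchOnCall = async (): Promise<string[]> => ["Mari"]; // from OpsGenie
    const fetchMissionAssignments = async (): Promise<Record<string, string>> =>
      ({});                                                      // from Confluence

    async function availableEngineers(): Promise<Engineer[]> {
      const [members, onCall, missions] = await Promise.all([
        fetchTribeMembers(), fetchOnCall(), fetchMissionAssignments(),
      ]);
      // Available = neither on call nor already staffed on a mission.
      return members.filter(e => !onCall.includes(e.name) && !(e.name in missions));
    }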

Advantages

  • Gamifies missions
  • A single interface that organizes all missions across their different stages
  • Up-to-date data with clear goals, visions and challenges
  • Inspires teams to work toward the bigger picture

Wrapping Up

To streamline your workflow and maximize efficiency, you can use custom-built tools, open-source, or paid DevOps solutions.

Nonetheless, it can make sense to develop an in-house tool, as only then will it fulfill all of your requirements. Off-the-shelf solutions are built with a broader audience in mind, whereas a DevOps application you develop yourself can easily be customized to fit your needs.

And yes, you don’t always need to build custom solutions from scratch: you can use open-source solutions as Lego bricks to create your own unique tool.

Interested in working at Pipedrive?

We’re currently hiring for several positions across different countries and cities.

Take a look and see if something suits you.

Positions include:

  • Software Engineer in DevOps Tooling
  • Full-Stack Developer
  • Information Security Risk Analyst
  • Software Developer
  • Data Product Manager
  • And several more
