DevOps for Devs

iamprovidence
15 min read · Apr 17, 2024


You may not be a DevOps engineer; however, it is still important for every developer to know and understand DevOps principles. They not only enhance the efficiency of the software development lifecycle but also directly impact how we reason about code. Don’t believe me? Read this to the very end to see for yourself.

In today’s article, we will review CI/CD practices. You will learn what Infrastructure as Code is and discover different deployment patterns. But most importantly, you will see how those affect our code!

Without any further ado, let us begin.

DevWhat?

In case you are not familiar with the term, let me explain.

DevOps is a set of practices that aims to streamline the software delivery process, automate infrastructure deployment, and enhance the overall efficiency and quality of software development and deployment cycles.

Long story short, DevOps allows us to deliver code from Git to the cloud while avoiding as many impediments as possible.

The person responsible for that is called a DevOps engineer. However, as often happens, customers do not have the money to hire one, so we developers are forced to do their job 😬 But don’t worry, DevOps is quite an interesting and exciting practice 🙃.

CI/CD

The best-known DevOps practices are continuous integration (CI) and continuous delivery (CD). Together they make CI/CD.

Continuous Integration (CI) is a practice that aims to detect errors early by verifying newly created pull requests before merging.

So, when a developer creates a PR, it must pass some automated checks before it can be merged. Those checks are called steps. All steps together form a CI pipeline.

The steps can be different in each project, but typically they include the following actions:

  • restoring dependencies such as libraries, NuGet packages, etc
  • an automated build/code compilation
  • unit test execution
  • code quality checks
  • sending a message to Slack or email if the build fails

Continuous Deployment (CD) extends CI by automatically deploying all code changes to the production environment.

The steps involved in CD include:

  • artifact creation. After passing all tests, the application is packaged into deployable artifacts
  • environment promotion. The artifacts are promoted through different environments, such as development, staging, and production
  • database migrations. The database schema is modified to align with the new structure or version
  • automated deployment. The artifacts are deployed to the target environment(s) automatically

Usually, CI and CD are done together. However, it is totally fine to have them separated. Let’s say the CI part is in GitHub, while the CD part is in Azure DevOps. It is also totally fine to have one part but not the other. For example, you don’t use a cloud and host on-premises but still want to benefit from CI.

Time for a bit of practice. To write those steps, you often need to know scripting and be able to execute the commands locally in a console.

Let’s say my team is using Bitbucket for hosting the Git repository. I want to make sure that the code compiles and all unit tests run successfully on each pull request.

Locally it can be done with those commands:

# compile
dotnet build ./src/Example.sln

# run unit tests
dotnet test ./src/Example.sln --no-build

But to run it in Bitbucket, you need to add a bitbucket-pipelines.yml file to your repository with the following content:

image: mcr.microsoft.com/dotnet/sdk:5.0  # use the .NET SDK image

# define a pipeline with two steps
pipelines:
  default:
    - step:
        name: Build
        script:
          - dotnet build ./src/Example.sln
        artifacts:
          # each step runs in a fresh container,
          # so pass the build output to the next one
          - src/**/bin/**
          - src/**/obj/**

    - step:
        name: Tests
        script:
          - dotnet test ./src/Example.sln --no-build

All popular version control platforms like GitHub, Azure DevOps, Bitbucket, GitLab, etc usually provide ways to easily enable CI/CD pipelines. The problem is that each of them has its own syntax.

DevOps engineers like to master one dedicated CI/CD platform, like AppVeyor, Jenkins, TeamCity, CircleCI, etc, so they can apply it in every project, or simply because of its simplicity or feature-rich capabilities.

Let’s say you would like to use AppVeyor for your CI. For that, you need an appveyor.yml file with the following content:

image: Visual Studio 2022

build:
  project: ./src/Example.sln

test: on

I would also suggest reading about Cake. In short, it allows you to write CI/CD in C#:

var target = Argument("target", "Test");

Task("Build")
    .Does(() =>
    {
        DotNetBuild("./src/Example.sln");
    });

Task("Test")
    .IsDependentOn("Build")
    .Does(() =>
    {
        DotNetTest("./src/Example.sln", new DotNetTestSettings
        {
            NoBuild = true,
        });
    });

RunTarget(target);

There are a few benefits:

  • your code and your automation are written in the same language
  • you can execute CI locally
  • you are independent of CI/CD servers and can easily migrate from one to another

Infrastructure as Code (IaC)

As was mentioned before, CD will deploy your code to the cloud.

If you have worked with the cloud before, you know you cannot just “deploy the code”. First, you need to create an account, a subscription, a resource group, App Services, a database, etc. Those are usually done through a graphical user interface.

While it may be entertaining the first time, it quickly becomes an annoying routine. Not only that, but doing everything by hand is bug-prone. You need to set up multiple cloud resources. You need to do the same for the client from another region. You have microservices, and now everything should be done for each microservice 😖.

If only we could automate it 😔. Wait a second, we can, with infrastructure as code 😃.

Infrastructure as code (IaC) is an approach in which infrastructure resources, like resource groups, servers, databases, etc, are provisioned and managed with code.

Different cloud providers, like Azure, provide IaC utilities.

Let’s say you want to create a resource group with code. In Azure, it can be done using JSON-based ARM templates ✋. Note that resource groups are created at the subscription scope, which is why the template uses the subscription-level schema:

{
  "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Resources/resourceGroups",
      "apiVersion": "2021-04-01",
      "name": "MyGroupName",
      "location": "West Europe"
    }
  ]
}

There is also an alternative approach that offers a cleaner and more concise syntax compared to ARM templates. It is called Bicep💪.

targetScope = 'subscription'  // resource groups live at the subscription scope

resource rg 'Microsoft.Resources/resourceGroups@2021-04-01' = {
  name: 'MyGroupName'
  location: 'West Europe'
}

DevOps engineers are just as lazy as we are, so they would prefer to learn a single IaC tool that works across multiple cloud providers, including AWS, Azure, Google Cloud, and others.

Something like Terraform will do the job.

# define the Terraform provider - Azure in our case
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "my-resource-internal-id" {
  name     = "MyGroupName"
  location = "West Europe"
}

I would also suggest checking out Pulumi. It allows us to write infrastructure as code in C#:

using System.Threading.Tasks;
using Pulumi;
using Pulumi.AzureNative.Resources;

class MyStack : Stack
{
    public MyStack()
    {
        var rg = new ResourceGroup("MyGroupName", new ResourceGroupArgs
        {
            Location = "West Europe",
        });
    }
}

class Program
{
    public static Task<int> Main(string[] args)
    {
        return Deployment.RunAsync<MyStack>();
    }
}

Remember that each tool has its pros and cons. Choose wisely 🧙‍♂️.

Deployment patterns

After all the required infrastructure in the cloud has been created, it is time to deploy the code.

There are multiple ways to do it. Some approaches aim to minimize downtime, while others aim to release new features progressively.

We will discuss them one by one.

Recreate

The most simple deployment strategy is called recreate:

  1. Let’s say your environment is live
  2. A new release is ready
  3. We stop the application
  4. Deploy a new version
  5. Start the application again

At step 3, users would usually see some kind of wait banner.

Despite its simplicity, this approach implies downtime, causing an unpleasant user experience.

Rolling deployment

In a rolling deployment (aka phased deployment), an application’s new version gradually replaces the old one.

  1. Let’s say your environment is live
  2. A new release is ready
  3. We deploy the code, migrate the DB, update other components, etc, while the application is still running

This often leads to an unstable environment, bugs, and may impact data consistency. However, this approach is unavoidable in distributed systems, like microservices, since you cannot update all the components at exactly the same time.

Blue-green deployment

To reduce downtime and keep your environment stable, you can apply a blue-green deployment strategy.

  1. You are supposed to have two identical environments, called blue and green
  2. At any time, only one of the environments is live. Let’s say blue is live
  3. A new release is ready
  4. We deploy it to the green environment
  5. Finally, we just switch the load balancer so that all incoming requests go to the green environment
  6. Green stays live, while blue goes down

Blue-green deployment also gives us a fast way to do a rollback. If anything goes wrong in the green environment, we just switch the router back to the blue one.

Even though it sounds straightforward, in practice having two environments causes challenges, since you have to support backward compatibility. We will discuss that later.
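To make the switch tangible, here is a minimal C# sketch of the idea. The class and the environment URLs are hypothetical; in practice the switch happens in a load balancer or router, not in application code:

class BlueGreenRouter
{
    // hypothetical environment addresses
    private readonly Uri _blue = new("https://blue.example.com");
    private readonly Uri _green = new("https://green.example.com");

    // which environment is live right now
    private volatile bool _greenIsLive;

    // every incoming request goes to the live environment
    public Uri Route() => _greenIsLive ? _green : _blue;

    // the "switch" is a single flip...
    public void SwitchToGreen() => _greenIsLive = true;

    // ...and so is the rollback
    public void RollbackToBlue() => _greenIsLive = false;
}

The point is that going live, and rolling back, is just flipping one pointer, which is exactly what makes this strategy fast.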

Canary releases

With this deployment strategy, you expose new features to users gradually.

  1. You are supposed to have two identical environments, let’s say v1 and v2
  2. At the beginning of a release, both of the environments are live until we make sure that the new release is stable
  3. A new release is ready
  4. We deploy it to the v2 environment
  5. With load balancer help, a small number of users are redirected to the v2 environment
  6. We monitor what happens when we release the feature. If the v2 has problems, we redirect users back to v1. If everything is correct we continue redirecting users to v2
  7. When all users are redirected, v1 is disposed

This strategy allows you to identify potential problems early without exposing all users to the issue. Additionally, note that releasing a new version won’t complete in a second; it may take a few days or even weeks.

Progressive-exposure deployment

Progressive-exposure deployment (ring-based deployment) is another way to limit how changes affect users while making sure those changes are valid in a production environment.

  1. You are supposed to have two identical environments, let’s say v1 and v2
  2. At the beginning of a release, both of the environments are live until we make sure that the new release is stable
  3. A new release is ready
  4. We deploy it to the v2 environment
  5. Firstly, risk-tolerant users are redirected to the v2 environment
  6. We monitor what happens when we release the feature. If the v2 has problems, we redirect users back to v1
  7. If everything is correct we continue redirecting more users to v2
  8. We continue progressively rolling out the feature to a larger set of users until all users are redirected
  9. At the very end, v1 is disposed

Rings are basically an extension of the canary release. However, while in a canary deployment new features are made available arbitrarily, in a ring-based deployment we have multiple user categories (rings) based on their pricing tier, risk tolerance, urgency, etc.
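Under the hood, both strategies boil down to deciding, per user, which environment to hit. Here is a hedged C# sketch of that decision; the user id, percentage, and ring set are made up, and real setups usually do this at the load balancer or feature-management level:

// canary: deterministically send a percentage of users to v2,
// hashing the user id so the same user always lands in the same bucket
static bool RouteToV2(Guid userId, int rolloutPercentage)
{
    var bucket = (uint)userId.GetHashCode() % 100;
    return bucket < rolloutPercentage;
}

// ring-based: explicit user groups go first, instead of an arbitrary percentage
static bool RouteToV2(Guid userId, ISet<Guid> riskTolerantRing)
    => riskTolerantRing.Contains(userId);

Raising rolloutPercentage from 5 to 100 over a few days, or moving from the inner ring outwards, is exactly the gradual exposure described above.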

Dark launches

The goal of a dark launch is to observe whether users like the feature or not. It is like an experiment where users don’t even know they are participating. This is why it is called a dark launch.

  1. You are supposed to have two identical environments, let’s say v1 and v2
  2. At the beginning of a release, both of the environments are live until we finish examining our feature
  3. A new release is ready
  4. We deploy it to the v2 environment
  5. A small number of users are redirected to the v2 environment
  6. We monitor users’ activity. If they don’t like the feature, we redirect users back to v1. If the feature is widely used, we redirect all the users to v2
  7. When all users are redirected, v1 is disposed

It is similar to canary deployment but pursues other objectives.

This approach is useful when you have a risky feature that changes the way your users interact with the site and your goal is to verify the feature’s performance.

A/B testing

This strategy helps to compare and determine which feature performs better.

  1. At the beginning of a release, you have two environments running simultaneously called A and B
  2. A new release is ready
  3. We deploy to only one environment
  4. Some users have access to environment A, while others to environment B
  5. Then we use statistical analysis to decide which variation performs better for our goals, let’s say it is environment B. All users are redirected to that environment
  6. When all users are redirected, environment A is disposed

Imagine the following scenario: the marketing team has two versions of a banner for your company’s website, and they want to know which version produces more clickthroughs. This is a typical use case for the A/B deployment strategy.

Feature flags

The deployment strategies discussed so far relied on having multiple environments. However, often the same can be achieved with feature flags.

Feature flags (feature toggle, feature switch, feature flipper, conditional feature, etc) allow us to hide the feature from our users.

  1. You have an environment
  2. A new release is ready
  3. You deploy a new release with any strategy you like
  4. From the user’s point of view, no features were added. We can flip the switch to “on” at any time and expose the feature.

The feature flags are useful not only during deployment.

Your client could enable some features based on pricing tier or user permissions.

Developers can benefit from feature flags too. When working on a huge feature, instead of creating a massive pull request with 500+ file changes, you can merge it in small parts, simply hiding your feature.

Feature flags can be stored in the database or in configuration; they may hide only the UI, or backend logic too. Regardless of the implementation, a feature flag is just a simple if/else statement:

if (featureManager.IsFeatureEnabled("news-page"))
{
    renderNews();
}
else
{
    // nothing here
}
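In .NET specifically, you don’t have to hand-roll the featureManager above. The Microsoft.FeatureManagement package reads flags from configuration; here is a minimal sketch of what wiring it up could look like (the endpoint and flag names are illustrative):

using Microsoft.FeatureManagement;

var builder = WebApplication.CreateBuilder(args);

// reads flags from the "FeatureManagement" section of appsettings.json:
// { "FeatureManagement": { "news-page": true } }
builder.Services.AddFeatureManagement();

var app = builder.Build();

app.MapGet("/news", async (IFeatureManager featureManager) =>
    (await featureManager.IsEnabledAsync("news-page"))
        ? Results.Ok("latest news")
        : Results.NotFound());

app.Run();

Flipping the flag in configuration then exposes or hides the feature without a redeployment.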

However, remember that overusing the approach may complicate your code or even lead to dead code.

Backward compatibility during deployment

Imagine you are working on a small project. Nothing overcomplicated. Just a server, a database, and a service bus.

Most of the deployment patterns we have discussed rely on a second environment. However, we cannot have a second database or a second service bus. Basically, anything that persists data should exist in a single instance and always be live.

At some point in time, you will have users working with the old codebase but with an already migrated database. This is why your schema migration operations should be backward compatible.

Let’s see why it is important and how it can be achieved.

Database schema migrations

Backward-compatible schema-changing operations are modifications that do not break compatibility with the existing codebase.

Here are some examples:

  • adding columns
  • adding tables
  • creating indexes
  • creating views

However, several operations can break backward compatibility, causing disruptions to the old codebase:

  • deletion of a column
  • deletion of a table
  • renaming columns
  • renaming tables
  • altering data type

With them, you can encounter the following issue while migrating the database.

Imagine you have been working for a few weeks, and it is about time to do a release. During this sprint, you have successfully implemented new features, but you also refactored some code. It turned out the refactored code does not need one of the tables, so you added a migration that deletes it.

Let’s see what happens with the blue-green deployment pattern:

  1. You have two environments: blue and green. Blue has old code, while green is empty for now
  2. A new release is ready
  3. We start deployment. Remember, the delay between the next steps may be up to a few hours: first, we migrate the database, delete outdated tables, provision infrastructure, start delivering the new codebase to the green environment, then …
  4. 💥 We receive a complaint from users that the website does not work. No wonder: they are still using the old codebase, which relies on a deleted table

This is why it is important for a developer to know which deployment strategy is applied on the project. Let’s start again, but this time keeping in mind that the migration should be backward compatible.

After refactoring, you realize that one of the tables is no longer needed. Instead of adding a migration that deletes it, you do nothing!

Let’s see what happens during the release:

  1. You have two environments: blue and green
  2. A new release is ready
  3. We start deployment. First, we migrate the database, keep outdated tables, and deliver a new codebase to the green environment
  4. Everything is done. There are still no complaints from users, the old code is still working
  5. We switch users to the green environment where the table is no longer used in the code
  6. The release has been finished

In the next sprint, you add a migration script that deletes the table. This time the operation can be performed safely since no code is using it.

A backward-incompatible migration should be done in two releases:
- in the first release, you update the code to stop using the data
- in the second release, you add the migration

However, the exact steps are not the same for every operation. Incompatible update operations (renaming a column/table, altering a column’s type) should be replaced with compatible ones, as sketched below:
- in the first release, create a new column/table and copy the data there
- in the second release, delete the old column/table
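For instance, here is a sketch of how such a two-release rename could look with EF Core migrations. The Users table and the FullName to DisplayName rename are hypothetical:

using Microsoft.EntityFrameworkCore.Migrations;

// release 1: add the new column and backfill it - the old code keeps working
public partial class AddDisplayName : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.AddColumn<string>(
            name: "DisplayName", table: "Users", nullable: true);

        migrationBuilder.Sql("UPDATE Users SET DisplayName = FullName");
    }
}

// release 2: once no deployed code reads FullName, dropping it is safe
public partial class DropFullName : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DropColumn(name: "FullName", table: "Users");
    }
}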

Messaging infrastructure migrations

You will face similar issues with messaging infrastructure like Service Bus, RabbitMQ, etc. Even though there is no strict schema, modifications should be made in a way that allows existing clients to continue functioning without interruption.

Here are backward-compatible operations:

  • adding a new queue
  • adding a new topic
  • adding a new subscription to a topic
  • extending a message payload with nullable properties
  • adding metadata to messages
  • removing a property from a message

While those are typically not backward compatible:

  • removing a queue
  • removing a topic
  • removing a subscription from a topic
  • renaming a queue
  • renaming a topic
  • renaming a subscription in a topic
  • extending a message payload with properties that are required for deserialization
  • renaming message property
  • renaming message (in case the name of the message is crucial for the routing or serialization logic)

Removing or renaming a queue/topic can be done similarly to what we have seen with databases:

- in the first release, remove the code that publishes messages, and keep the code that handles them
- in the second release, when all messages have been processed, remove the old queue/topic and the code that handles the messages

Changing message format can be achieved with versioning or by adding a new message type. This way, clients can determine how to handle messages based on their compatibility with the message schema version.

[Obsolete("Use UserCreatedV2")]
class UserCreated
{
    public int UId { get; set; }
}

// should be published instead of <see cref="UserCreated" />
class UserCreatedV2
{
    // public Version Version { get; set; } = Version.V2;
    public int UserId { get; set; }
}
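On the consuming side, the handler has to understand both versions during the transition. A sketch of what that dispatching could look like (the OnUserCreated method is illustrative, and real routing depends on your bus and serializer):

void Handle(object message)
{
    switch (message)
    {
        case UserCreatedV2 v2:
            OnUserCreated(v2.UserId);
            break;
        case UserCreated v1:
            // legacy messages may still sit in the queue during the release
            OnUserCreated(v1.UId);
            break;
    }
}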

The only real difficulty is renaming a subscription on the same topic. If you add a new subscription, a message published to the topic will be handled twice: by the old and the new code.

You cannot simply remove the old subscription either. That would lead to message loss.

Therefore, consider implementing idempotent consumers.
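An idempotent consumer remembers which messages it has already processed, so a duplicate delivery becomes a no-op. A minimal sketch follows; the in-memory set stands in for a persistent store, such as a table keyed by message id:

class IdempotentConsumer
{
    // processed message ids; a real system would persist these
    private readonly HashSet<Guid> _processed = new();

    public void Consume(Guid messageId, Action handle)
    {
        // HashSet.Add returns false if the id was already recorded
        if (!_processed.Add(messageId))
            return; // duplicate - safely ignored

        handle();
    }
}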

Alternatively, you can use feature flags.

if (featureManager.IsFeatureEnabled("IsNewSubscription") == false)
{
    // consume a message with an old subscription
}
else
{
    // consume a message with a new subscription
}

This allows us to handle messages accumulated during the release process with the old subscription, and then start consuming with the new one. At some point, you just switch the flag, and in the next release you delete the old subscription.

Wrapping it up

You may not be directly responsible for the DevOps part of your project, but you still cannot stand aside. Knowing the full project lifecycle may not only improve collaboration and communication in the team but also bring resilience and stability.

I hope this article shed some light on the topic for you. So the next time a customer asks whether you have experience with DevOps, you can loudly say “Yes” and get stuck configuring the pipeline 😁 After all, that is how every developer ends up learning it 🙃.

Let me know in the comment section how often you have to deal with DevOps 💬
Give this article a clap or a few if you like it 👏
You can support me with a link below ☕️
Don’t forget to follow to receive more of those✅
See you later, alligators 🐊


iamprovidence

👨🏼‍💻 Full Stack Dev writing about software architecture, patterns and other programming stuff https://www.buymeacoffee.com/iamprovidence