Architecting for Growth Series: Cloud-Native Adoption

Growth Acceleration Partners
9 min read · Aug 22, 2024


by Santiago Arango, Software Architect at Growth Acceleration Partners

The Wonders of IaaS and PaaS in a Single Flavor

No one wants to run a virtual machine anymore or deal with its associated nuances. There are cases where you need to, or even must. But ask almost any technical person, and they will tell you they would rather not.

Of course, this feeling arises in a context where we have numerous options that can eliminate or streamline many of these manual operations — tasks that are time-consuming, or even costly from the perspective of budget managers.

The cloud can be complex, and with so many options it is hard to choose a good server solution.

To align with the description in the preceding paragraph — and to do justice to some of the available options — we have services like containers, Kubernetes, Serverless and Web Apps, all of which abstract away the underlying infrastructure. These offerings fall into categories with their own shorthand names in the cloud landscape, most commonly IaaS (Infrastructure as a Service) and PaaS (Platform as a Service). While IaaS focuses on providing infrastructure, PaaS offers a platform for development. In between these two, we find CaaS (Containers as a Service).

Why does CaaS occupy this middle ground between PaaS and IaaS? Because CaaS provides a platform whose internals remain hidden from you: you do not need to manage it directly, yet you still make infrastructure-level choices.

For instance, consider Azure Container Instances and Container Apps, which leverage Kubernetes in the background. As a user, you do not need to worry about managing the cluster or its resources. However, you do retain the flexibility to choose the underlying infrastructure. You can select the type of virtual machine that aligns with your requirements, specifying CPU, RAM and other parameters.

Additionally, you have the freedom to determine the number of replicas for the same compute, as well as the availability of regional or global copies to serve your users. All of this happens without the administrative burden of handling a virtual machine — you do not need to patch, update or troubleshoot it. If it crashes, fails or goes offline, the cloud provider seamlessly replaces it.

The Cloud Is Not for Everyone

Please do not hate me for the title, but it is what it is. One of the undeniable effects of moving to the cloud is that it compels your organization to undergo a 180-degree transformation. Well, that is, if you genuinely want to embrace the cloud.

If you believe transitioning from on-premises means merely moving your 50 services running on 20 virtual machines, you might want to reconsider. The journey to the cloud extends far beyond a basic infrastructure migration.

You have a couple of options here. First, you can adapt your existing solution to be cloud-ready, meaning it can be deployed to truly cloud-native resources like containers, Serverless and isolated Web Apps. Alternatively, you can rebuild those solutions — assuming you have the budget, time and patience. This second approach is often more relevant for very old applications that are challenging to migrate to cloud environments.

As you navigate this transition, challenges will inevitably arise. Your development teams, infrastructure experts, DBAs and other technical roles within your organization may resist change. The shift might lead some to believe that their roles are no longer necessary, but that is not necessarily true. There is a significant difference between not being needed and needing to transform oneself. While I will only dedicate this small paragraph to the topic, I promise to return with another post about cloud transformation.

For now, let us circle back to Cloud Native solutions.

Benefits of Cloud-Native Solutions

All right, let us dive in. When we talk about Cloud Native solutions, we are referring to a set of tools that simplify development, deployment and feedback cycles through features like logs, monitoring and tracing. Some of these tools include Kubernetes, Docker and Serverless for compute options, and Kafka for building apps around event streams shared between services or microservices. These tools allow your team to create applications that truly embrace the cloud-native concept.

The biggest benefits of cloud-native applications become evident when you consider the advantages of the tools you are adopting. Each of the services mentioned has specific features. They ensure high availability, provide tools to set up your application properly and enhance efficiency. With cloud-native approaches, you can deliver frequent changes to your product using CI/CD and deployment strategies. Additionally, these solutions offer a cost-effective structure, allowing you to scale your services while staying within budget — assuming you configure things correctly.

Your Product Can Now Be Shipped: Containers

Containers represent one of the most groundbreaking technologies of the past decade. They allow you to package your solution, together with its runtime and dependencies, into a portable format — without the overhead of a full virtual machine (VM). These containers can run on any modern operating system, including those provided by major cloud vendors like Azure, AWS and GCP.

The benefits of container portability extend seamlessly to the cloud. Cloud providers offer managed solutions for running and hosting containers, with additional advantages. Here is what you gain:

  1. Cost Control: You can tailor your compute resources to your needs. Whether it is CPU, RAM, storage or other basic resources, you choose what suits your requirements and expectations.
  2. High Availability: Designing for high availability becomes straightforward. Deploying multiple replicas of your containers across various locations — within a country, across continents or even globally — ensures resilience based on your business needs.
  3. Managed Infrastructure: Cloud providers offer services that handle the underlying infrastructure for your containers. By providing the container definition (often a Dockerfile or a Containerfile), you can focus on your code and project. Services like Azure Container Apps/Instances or AWS ECS take care of the heavy lifting, abstracting away the infrastructure management.

In summary, containers empower you to build, deploy and manage applications with agility, scalability and cost efficiency. They truly bridge the gap between development and cloud deployment.

Your Product Can Now Be Shipped, with Your Own Rules: Kubernetes

If you enjoyed what you read above, you might find the following equally interesting. Kubernetes is an orchestration tool that enables you to deploy multiple services on a centralized control plane. To put it simply, you can manage various container definitions from a single location.

Kubernetes offers several advantages when deployed on the cloud. It provides scalability, economies of scale and simplified configuration from the Infrastructure as a Service (IaaS) perspective. Cloud providers like Azure even allow you to use Serverless on Kubernetes, also known as Virtual Nodes.

Additionally, you can choose your preferred compute strategy by creating a node pool containing one or more virtual machines to handle all your deployments. Furthermore, you can start with a small number of virtual machines and dynamically scale up as demand increases, leveraging the cloud’s elasticity.

Kubernetes integrates well with cloud-native concepts. Applications deployed on Kubernetes can self-heal — when a container or pod becomes unhealthy, the platform automatically replaces it with a new one.

Scalability is another key feature: you can have multiple replicas for a single deployment, such as having five copies of an authentication service to handle user requests. As demand grows, you can adjust the number of replicas accordingly. If you encounter resource constraints during scaling, simply increase your compute capacity to meet the demand.

Moreover, Kubernetes provides in-house network, security and Role-Based Access Control (RBAC) setups. These features not only protect your products, but also allow you to expose them to end users. Additionally, Kubernetes is highly extensible — you can create custom definitions and components to manage deployments, networking, data storage, security and more.

Kubernetes Might Cause Lock-in

To conclude our discussion on Kubernetes, let us explore some nuances associated with choosing this platform. Kubernetes primarily relies on YAML definitions, which your developers will need to write and maintain.

Whenever you deploy something, you build a YAML file. Whether it is creating a network, configuring storage or defining roles within your Kubernetes cluster, YAML is the language of choice. While tools like Helm can simplify or even hide direct YAML usage, configuration and tuning remain necessary either way.
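To make this concrete, here is an illustrative Deployment manifest for the kind of replicated authentication service described earlier — the name, image and resource figures are hypothetical placeholders:

```yaml
# Illustrative Deployment: five replicas of a hypothetical auth service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-service
spec:
  replicas: 5                  # scale by changing this number
  selector:
    matchLabels:
      app: auth-service
  template:
    metadata:
      labels:
        app: auth-service
    spec:
      containers:
        - name: auth-service
          image: registry.example.com/auth-service:1.0.0  # hypothetical image
          ports:
            - containerPort: 8080
          resources:
            requests:          # what the scheduler reserves per replica
              cpu: "250m"
              memory: "256Mi"
```

If one of the five pods crashes, Kubernetes replaces it automatically — the self-healing behavior described above.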

Using Kubernetes introduces additional management requirements, but the benefits justify this investment. Scalability, self-healing and extensibility are among the compelling reasons to opt for Kubernetes.

Now, getting back to the title, let us discuss “lock-in.” In the world of successful platforms and products, trade-offs are inevitable. You do not entirely avoid lock-in; instead, you choose where it occurs — whether it is tied to a specific technology, a particular cloud vendor or even your chosen tech stack. When adopting Kubernetes, consider the trade-offs and the lock-ins you are willing to endure.

Going Serverless

Let us revisit the opening paragraph. Many companies today no longer want the burden of managing servers and virtual machines. However, do not be misled by the term “serverless” — it does not mean there are no servers involved. Instead, it refers to a deployment strategy where you relinquish certain choices.

By “relinquishing choices,” I mean that you no longer need to concern yourself with details like CPU, RAM or storage. The cloud provider handles all of that behind the scenes. Your main task is to write your code and select a serverless flavor — whether it is Azure Functions, AWS Lambda, GCP Cloud Functions or similar. You can think of it as “Function as a Service” (FaaS).

But there is a caveat. While this approach simplifies infrastructure management, it comes with nuances. Cloud vendors have specific requirements for your code or functions. You must adhere to their prescribed code structures, design patterns, decorators and descriptions to fit their concept of a function.

Sample of an Azure function that reacts to an event when adding a new blob to a Storage Account.
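Since the original sample image does not survive here, below is a hedged sketch of what such a function might look like in Python using the Azure Functions v2 programming model. The container path, function name and connection setting are assumptions for illustration:

```python
import logging

import azure.functions as func  # requires the azure-functions package

app = func.FunctionApp()

# React whenever a new blob lands in the (hypothetical) "samples" container.
# "AzureWebJobsStorage" is the conventional connection-string setting name.
@app.blob_trigger(arg_name="blob",
                  path="samples/{name}",
                  connection="AzureWebJobsStorage")
def on_blob_added(blob: func.InputStream):
    logging.info("New blob received: %s (%s bytes)", blob.name, blob.length)
```

Note the vendor-prescribed shape — the `FunctionApp` object, the trigger decorator, the `InputStream` parameter. This is exactly the structural conformance, and the soft lock-in, discussed above.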

Once again, let us address the potential vendor lock-in. It might seem that you are inadvertently locking yourself into a specific cloud provider when you write code for AWS Lambda or Azure Functions. If you decide to switch from one to the other, you will need to rewrite that layer responsible for exposing your functions.

However, do not panic about causing lock-in. Choosing the right strategy involves evaluating options, and every choice entails some level of lock-in.

For instance, if you opt for Kubernetes, it is easy to migrate between cloud providers. Yet, keep in mind that your deployments and definitions will still tie you to YAML files or third-party tools for handling deployments. Even writing container files locks you into certain technologies. Ultimately, selecting the right technology always involves trade-offs.

Serverless and Economics

One of the biggest factors in moving to serverless is the economics. As mentioned in the previous section, when you go serverless, you do not have to manage any infrastructure or choose a VM flavor to handle your applications. Instead, every time your function is called, your code is placed on compute provisioned by the provider and runs there.

The good thing is that you are billed only for compute usage while your code runs. However, there is a catch: functions are stateless and short-lived — some are constrained to a maximum of 10 to 15 minutes of execution.

This means functions can scale down to zero at any time, resulting in zero cost while they are fully idle. Serverless also comes with strong autoscaling support, so when more requests reach your system, functions scale up automatically.

If you size usage to demand and design your code accordingly, you can save a significant amount on your bill and keep those responsible for the budget happy.
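To see why pay-per-use can be attractive, here is a back-of-the-envelope cost model in Python. The pricing constants are illustrative placeholders, not any vendor's actual rates:

```python
def serverless_cost(invocations: int, avg_duration_s: float, memory_gb: float,
                    price_per_gb_second: float = 0.0000167,
                    price_per_million_requests: float = 0.20) -> float:
    """Rough pay-per-use bill: you pay while code runs, plus a per-request fee."""
    gb_seconds = invocations * avg_duration_s * memory_gb
    return gb_seconds * price_per_gb_second + (invocations / 1e6) * price_per_million_requests

# A service handling 1M requests/month at 200 ms each with 512 MB of memory:
monthly = serverless_cost(1_000_000, 0.2, 0.5)
print(f"~${monthly:.2f}/month")

# The same code with zero traffic costs nothing, since cost tracks usage:
idle = serverless_cost(0, 0.2, 0.5)
print(f"${idle:.2f}/month when idle")
```

Contrast this with a VM that bills around the clock whether or not it serves traffic — that gap is what keeps the budget owners happy.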

So, Containers, Kubernetes, Serverless?

Choosing between standard containers, Kubernetes or even serverless is not a random or naive decision. You must weigh your organization's and your platform's requirements carefully. Do not just follow technology trends or the hot topics on that YouTube channel you love.

Not all products are designed to run well on serverless; not all products are ready to be — or should be — packaged as containers; and even fewer belong on Kubernetes if they do not fit in a container.

Be flexible: if your product is built from several services, you can mix these approaches. You might deploy some parts to Functions, which connect easily to other cloud services. For apps that require file reading, continuous processing or other long-running work, consider containers. And if you have an application with several microservices that need to interconnect, Kubernetes with a service mesh can make things easier, simpler and more cost-effective.

What Is Coming for Architecting for Growth Series

My name is Santiago Arango. I have been working in the software industry for almost 13 years. In 2024, I have the chance to share insights gained from working as a technical lead and helping evolve systems architecture. I will be writing several blog posts under the title ‘Architecting for Growth Series,’ which will hopefully help you when making decisions about the cloud.

This is the first chapter of the series. In the next post, we will discuss how to leverage Kubernetes and its ecosystem to architect successful apps.

Thanks for reading, have a good one!
