15 Best Practices to Design Cloud-Native Modern Applications

There are two types of applications in this world: Cloud-native applications and non-cloud-native applications. In this story, we are going to discover the best practices to design and build the first type of applications.

Aymen El Amri
Aug 21 · 13 min read

Let’s be Clear

To be clear about the term “Cloud”, I’d like to start with this quote from Paul Maritz, former CEO of VMware and Pivotal:

The cloud is about how you do computing, not where you do computing.

Michael Dell, the founder of Dell Technologies also said:

The cloud isn’t a place, it’s a way of doing IT.

So let’s make sure from the beginning that the word Cloud in this story is not about Google or AWS Cloud, but about a philosophy and a set of concepts.

We may have different opinions about whether “cloud-native” is the same as “cloud-ready”, but one thing is sure, cloud-native applications are cloud-ready.

Think of cloud-native as an advanced stage in the evolution of software over the last decades. Cloud-native approaches, like any other approach in software development, should not be seen as a replacement for traditional software, but as better adapted to the very different environment of the cloud.

Beak evolution — Darwin’s finches by John Gould (source: Wikipedia)

The long beak variation is an adaptation for extracting food out of narrow openings. Birds with short beaks would slowly die off because of their inability to access their food. Software follows the same analogy.

To produce each of these species, nature follows certain patterns. In this story, I will talk about the equivalent patterns in software engineering, more specifically the patterns used to design and create cloud-native applications.

During the evolution of modern software engineering, many approaches to developing and deploying software were conceived and adapted: Netflix's successful DevOps implementations, Heroku's Twelve-Factor App, the Cloud Native Computing Foundation's graduated and incubated projects, Mesosphere's 13 Factor App, Pivotal's Beyond the Twelve-Factor App, Web-scale IT (Google, Amazon), and so on.

In this essay, I will go through the most important and common patterns in the approaches mentioned above, in order to “extract” the best practices.

The Golden Triangle: People, Process, Technology

I settled on 15 patterns, all of which can be classified into three categories.

Think of this as an equation with three variables.

From a business view, your goal should be resolving the equation.

  • People: The most important part of the equation, as this involves leadership and a vision.
  • Processes: These refer to the business goals that must be considered to help people drive successful changes in the business.
  • Technologies: The technology aspect of this triangle should be determined after the people and processes are in place. A common mistake is trying to retrofit people and processes around technologies, when it should be the other way around.

1 — Products (Over) / Projects

Software development treated as a project, budgeted and delivered within a limited time slot, doesn't fit the needs of modern business.

The mental model of a project, where something is planned, executed, then delivered within defined time and budget slots, was challenged by the Agile methodology.

Up-front determination should be replaced by an ongoing discovery and continuous optimization model.

2 — Features (Over) / Releases

As with the first principle, prioritizing releases over features can have negative consequences on people, processes, and the use of technologies.

Thinking about features makes implementing proven methodologies like “Feature Teams” and “Two Pizza Team” easier.

A feature team is a long-lived, cross-functional, cross-component team that completes many end-to-end customer features — one by one (featureteams.org)

3 — Re-usability

Reusability is well known in software development; think of functions and routines. The same principle can now be applied to infrastructure (Infrastructure as Code) and to software builds.

Docker images are reusable: they build on base images, which may themselves use other base images, and so on.
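The same layering idea can be sketched in plain code: small, well-tested building blocks are reused rather than re-implemented, much like images reuse base images. The function names below are purely illustrative.

```python
# A minimal sketch of reusability: small building blocks composed into
# larger ones, the way Docker images are layered on base images.
# All names here are illustrative, not from any real library.

def slugify(text: str) -> str:
    """Reusable building block: normalize a string into a URL slug."""
    return "-".join(text.lower().split())

def article_url(base: str, title: str) -> str:
    """Reuses slugify instead of re-implementing normalization."""
    return f"{base}/{slugify(title)}"

print(article_url("https://example.com/blog", "Cloud Native Patterns"))
# → https://example.com/blog/cloud-native-patterns
```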

4 — Technology Agnosticism

As the variety of computing devices and needs continues to expand, business demands become more complex and customers more demanding, and the software that supports those capabilities must stay ahead of that growth.

Polyglot microservices are a form of technology agnosticism in which the team responsible for a given set of services decides which tech stack it will use to solve its problems.

This approach, even if it has some drawbacks, increases developer productivity and autonomy, and therefore decreases time-to-market.

5 — Portability

Portability is usually attributed to a computer program that can run on different operating systems. Its counterpart in the DevOps world is a platform that can run on different infrastructures and cloud providers without requiring a major rework.

When you containerize your applications, you can be sure that you will be able to run them on Kubernetes or Docker Swarm without changing the images.

Also, if you choose to orchestrate your containers using Kubernetes, you are free to deploy your cluster to GCP, AWS, or any other cloud.

6 — Abstraction

Abstraction is best known from object-oriented programming: a programmer can hide all but the relevant data about an object in an abstract class in order to reduce complexity and increase efficiency.

If we look at this block diagram, we can see that most networking code today runs as Virtual Network Functions (VNFs), which can run on top of OpenStack or Kubernetes. Kubernetes, in turn, can run on bare metal or any public cloud.

source: cncf

In the future, the CNCF expects many of those network functions to be repackaged, not in virtual machines as is the case today, but in containers, becoming CNFs, or Cloud-Native Network Functions.

This is how Kubernetes can become a universal abstraction layer that allows all kinds of workloads to run on top of it.

7 — Cost

No one can say that reducing costs is an afterthought or a secondary topic. From a business point of view, sales margins increase when costs are reduced. In recent years, new disciplines like FinOps have emerged to manage the variable cost of cloud and distributed systems.

Choosing one technology over another can also reduce costs without reducing quality. Since the democratization of containers, developers have been able to run multiple containers in the same virtual machine, which helps IT organizations reduce the number of machines and the number of operating systems, which is also a cost-cutter.
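A back-of-the-envelope calculation illustrates the consolidation effect. All the figures below (VM price, service count, packing density) are made-up assumptions for illustration only:

```python
# Hypothetical cost sketch: consolidating services into containers
# reduces the number of VMs (and operating systems) to run and pay for.
# Every number here is an assumption, not a real price.

vm_monthly_cost = 70.0       # cost of one VM per month
services = 12                # services to deploy

# One service per VM (the pre-container model):
cost_one_service_per_vm = services * vm_monthly_cost

# Several containers share one VM:
containers_per_vm = 4
vms_needed = -(-services // containers_per_vm)   # ceiling division
cost_containerized = vms_needed * vm_monthly_cost

print(cost_one_service_per_vm, cost_containerized)  # → 840.0 210.0
```

Under these assumptions, packing four containers per VM cuts the monthly bill to a quarter of the one-service-per-VM layout.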

8 — Speed

Developing at a fast pace is one of the core values of most methodologies and implementations like ITIL, DevOps, and Agile.

Serverless is a computing model in which application developers don't have to provision servers or manage scaling for their apps.

Instead, those tasks are abstracted away by the cloud provider (AWS Lambda, GCP Cloud Functions), allowing developers to deliver code to production much faster than in traditional models.
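As a minimal sketch, here is what such a function can look like in the style of an AWS Lambda Python handler. Only the `(event, context)` signature follows the Lambda convention; the event shape and the function body are illustrative assumptions:

```python
# A minimal sketch of a serverless function, AWS Lambda style.
# The event shape below is an assumption for illustration; only the
# (event, context) handler signature follows the Lambda convention.

import json

def handler(event, context):
    # Every request carries its own input; there is no server to manage.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }

# Local invocation for testing, no provisioning or scaling involved:
print(handler({"queryStringParameters": {"name": "cloud"}}, None))
```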

9 — Scalability (Business)

A monolithic application is scalable when it allows new features to be added and old features to be adapted to new business needs without a major rework.

Note that a system becomes more complex every time code is added to the application.

Software entropy refers to the tendency for software, over time, to become difficult and costly to maintain. A software system that undergoes continuous change, such as having new functionality added to its original design, will eventually become more complex and can become disorganized as it grows, losing its original design structure. (webopedia.com)

10 — Self-service

Adopting self-service platforms within an organization helps teams deliver software continuously.

Let’s take the very common example of managing different environments within a single team:

Deployment of different environments like development, testing and staging environments should be automated. Using tools like Terraform, Ansible and cloud VM instances on-demand, any developer should be able to create and destroy an environment.
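A self-service wrapper around such tooling can be sketched as follows. The Terraform commands are only planned (returned as strings) so the developer-facing logic can be shown end to end; a real wrapper would execute them with `subprocess.run(cmd, check=True)`. The `env` variable name is an assumption:

```python
# A hedged sketch of a self-service environment workflow. The commands
# are returned instead of executed; a real wrapper would run them via
# subprocess.run(cmd, check=True). The "env" variable is an assumption.

def plan(cmd: list) -> str:
    """Return the command that would be executed."""
    return " ".join(cmd)

def create_env(name: str) -> list:
    """Any developer can spin up an isolated environment on demand."""
    return [
        plan(["terraform", "workspace", "new", name]),
        plan(["terraform", "apply", "-auto-approve", f"-var=env={name}"]),
    ]

def destroy_env(name: str) -> list:
    """...and tear it down when it is no longer needed."""
    return [plan(["terraform", "destroy", "-auto-approve", f"-var=env={name}"])]

print(create_env("staging"))
```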

11 — Elasticity

Wikipedia defines elasticity as:

The degree to which a system is able to adapt to workload changes by provisioning and de-provisioning resources in an autonomic manner, such that at each point in time the available resources match the current demand as closely as possible

Let’s take the example of time-based elasticity and volume-based elasticity.

Time-based elasticity means turning off resources that are not in use (e.g. development environments during non-business hours).

Volume-based elasticity means matching scale to the intensity of demand, whether compute cores, storage size, or throughput (e.g. AWS Elastic File System scales its storage capacity dynamically, so no provisioning effort is needed).
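The time-based case boils down to a simple scheduling decision. A sketch, where the 8:00 to 19:00 weekday window is an arbitrary assumption:

```python
# A sketch of time-based elasticity: keep development resources off
# outside business hours. The 8:00-19:00 weekday window is an
# assumption for illustration.

def should_run(hour: int, weekday: int, env: str) -> bool:
    """weekday: 0 = Monday .. 6 = Sunday."""
    if env == "production":
        return True                      # production is always on
    business_hours = 8 <= hour < 19
    business_day = weekday < 5           # Monday through Friday
    return business_hours and business_day

print(should_run(hour=22, weekday=2, env="development"))  # → False
print(should_run(hour=10, weekday=1, env="development"))  # → True
```

A scheduler (cron, a cloud function, etc.) would call this periodically and start or stop the environment accordingly.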

12 — Resiliency

The system stays responsive in the face of failure.

A failure can be produced by many scenarios like the incapacity to handle a greater workload/traffic or the sudden termination of a process.

Let’s compare monolithic applications with container-based applications (microservices):

Each service is isolated in a way that prevents the entire application from going down when that service fails.

Monolithic applications have multiple single points of failure, and if any one of them is hit, the whole application runs a greater risk of being impacted too.
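One common resiliency building block is retrying transient failures with exponential backoff, so one flaky downstream call doesn't take the caller down. A minimal sketch (delays are kept tiny here; real services would use seconds, a cap, and jitter):

```python
# A minimal resiliency sketch: retry a flaky call with exponential
# backoff instead of failing on the first transient error. Delays are
# kept tiny for illustration; real systems add a cap and jitter.

import time

def with_retry(fn, attempts=3, base_delay=0.01):
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise                    # out of retries: surface the error
            time.sleep(base_delay * 2 ** attempt)

# A fake dependency that fails twice, then succeeds:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retry(flaky))  # → ok (after two transient failures)
```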

13 — Security

Security is not an afterthought. This should be known to all developers. Security should be part of the design; it is not a layer to add at the end of a project or a feature.

14 — Observability

Observability is the property that makes a system monitorable. It enables “white box monitoring”: understanding a system’s behavior through its internals.

Whitebox vs Blackbox Monitoring
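In code, white-box monitoring means the service exposes its own internals. A sketch, where the metric names are assumptions and a real system would export them to a backend such as Prometheus:

```python
# A sketch of white-box observability: the service exposes internal
# counters and timings instead of being a black box. Metric names are
# assumptions; a real service would export them to a backend like
# Prometheus.

import time
from collections import Counter

metrics = Counter()

def handle_request(path: str) -> str:
    start = time.perf_counter()
    metrics["requests_total"] += 1       # how many requests were served
    try:
        return f"response for {path}"
    finally:
        # how much time was spent serving them
        metrics["request_seconds_sum"] += time.perf_counter() - start

handle_request("/health")
handle_request("/orders")
print(metrics["requests_total"])  # → 2
```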

15 — Simplicity

With the complexity of distributed computing and cloud models, running an efficient computing infrastructure requires a spirit of radical simplification.

From Patterns to Practices

Let’s be clear: using containers or load balancers is not a goal; they are tools, or means.

In the patterns section, I tried to list and briefly explain the characteristics of a cloud-native modern application.

I started with the WHY now I’m moving to the HOW.

So how do you build applications that implement the 15 patterns described above?

I created a two-dimensional table with the patterns in the rows and the practices that implement them in the columns.

This is what I found:

Cloud-native may look purely technical, but the human side is important. There is a link between your teams' structure and culture and what they produce as a result. This is what Melvin Conway observed:

Organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.

Build microservices and decompose your monolith into simple, lightweight, loosely coupled services that can be developed and released independently of each other.

“I think the automation of vision is a much bigger deal than the invention of perspective.” ~ Trevor Paglen

Automation has always been one of the good practices when building infrastructures and online platforms. Automating cloud-native systems includes:

  • Infrastructure, using tools like Terraform, Ansible, and SaltStack
  • CI and CD using tools like Jenkins
  • System recovery and scalability

Make sure to simplify and secure before automation.

Stateless services are “smarter” than stateful services. They allow these features to be implemented easily:

  • Scaling up/down
  • Graceful termination for replacement (repair)
  • Rollback tasks
  • Load balancing

One may ask what happens to a service that relies on session persistence between the client and the server.

Well, in a stateless design, no state is kept between client requests. Every client request carries sufficient information to perform the requested action.
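The contrast can be sketched in a few lines. In the stateless version, the request itself carries the caller's identity, so any replica can serve it; the handler names and request shape below are illustrative assumptions:

```python
# A sketch contrasting stateful and stateless request handling.
# Names and request shapes are illustrative assumptions.

# Stateful style: the server remembers who is logged in. A request
# routed to a replica that lacks this dict cannot be served.
sessions = {}

def stateful_handler(session_id: str) -> str:
    user = sessions[session_id]
    return f"cart of {user}"

# Stateless style: every request carries the identity itself (e.g. a
# validated token), so any replica can handle it, and replicas can be
# scaled, replaced, or load-balanced freely.
def stateless_handler(request: dict) -> str:
    user = request["token"]["user"]
    return f"cart of {user}"

print(stateless_handler({"token": {"user": "alice"}}))  # → cart of alice
```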

As it’s described in The 12 Factor App: Keep development, staging, and production as similar as possible. Dev/Prod parity makes processes like CI and CD easy.

Cloud-native applications are designed to run without affinity to a particular server or operating system.

As described in the Reactive Manifesto, reactive systems are responsive, which means they must have two features: elasticity and resilience. This is implemented using message-driven applications (e.g. streaming logs).

Reactive Systems rely on asynchronous message-passing to establish a boundary between components that ensures loose coupling, isolation and location transparency

Cloud-native services communicate using lightweight APIs based on protocols and technologies like REST, NATS, or gRPC.

Develop and deploy your services using highly automated platforms that provide a service abstraction. Abstraction and automation are highly related.

Abstracting data, for example, and separating it from the compute and business-logic layers enables automation and makes operations on data, like caching, easier and more performant.

DevOps is about aligning teams around common business goals, but this will not happen unless you design accordingly. So create business capability teams and deploy business services.

Developers used to SSH into servers to continually update the deployed artifacts; from a cloud-native perspective, this is an artisanal way of doing things.

Immutability in this context means replacing the old version of an application by building new servers from an artifact and completely replacing the old ones.
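The core of the immutable model fits in a few lines: a server is created once from a versioned artifact and never edited in place; deploying means swapping the whole server. The names and artifact tags below are illustrative assumptions:

```python
# A sketch of immutable deployment: servers are built from versioned
# artifacts and replaced wholesale, never patched over SSH. All names
# and artifact tags here are illustrative assumptions.

def build_server(artifact):
    """A 'server' is created once from an artifact and never mutated."""
    return {"artifact": artifact}

def deploy(live, new_artifact):
    """Build a fresh server and switch traffic to it; the old server
    (if any) is discarded, not edited in place."""
    new_server = build_server(new_artifact)
    return new_server

live = deploy(None, "app:v1")
live = deploy(live, "app:v2")   # v1 is replaced, not patched
print(live["artifact"])  # → app:v2
```

Rolling back is just another deploy of the previous artifact, which is one of the main operational benefits of this model.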

Design observable applications that can interface with observability tools and give insights (metrics, logs, traces, etc.) into what your code does in production (or any other environment).

Distinguishing signal from noise when designing for observability makes things easier.

Develop with security in mind and secure by design. Security is not an afterthought.

Simple can be harder than complex, but keep in mind that in order to build complex systems that work, you need to build them from simple subsystems that work.

The Unix philosophy, originated by Ken Thompson, can be an inspiration for building complex systems like Linux while keeping a spirit of minimalism in mind:

  • Write programs that do one thing and do it well.
  • Write programs to work together.
  • Write programs to handle text streams, because that is a universal interface.
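The three rules above can be illustrated in a few lines of Python: each function does one thing, they work together by composition, and their common interface is a stream of text lines. The function names are illustrative:

```python
# A tiny illustration of the Unix philosophy: small functions that each
# do one thing, composed over a stream of text lines (like cat | grep |
# wc -l). Function names are illustrative.

def lines(text):                       # do one thing: split into lines
    return text.splitlines()

def grep(pattern, stream):             # do one thing: filter lines
    return [line for line in stream if pattern in line]

def count(stream):                     # do one thing: count items
    return len(list(stream))

log = "ok\nerror: disk full\nok\nerror: timeout"
print(count(grep("error", lines(log))))  # → 2
```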

Nature has 13 billion years of experience in research and development on her CV.

The dinosaurs, weighty and slow creatures, have disappeared and left the earth to smaller, more agile, and more intelligent species.

This is how software engineering is evolving: monolithic applications are giving way to distributed systems composed of lighter and more sophisticated components.


Don’t think that monoliths will totally disappear; they will continue to support the IT ecosystem for years. The fact that more sophisticated approaches have been invented doesn’t mean the instantaneous disappearance of other approaches.

When a whale dies, it supports a community of organisms and provides life for hundreds of marine animals for more than 50 years.

Another way nature can inspire us is the “optimize afterward” approach. Remember: dinosaurs evolved into birds.

I am not an expert in this area, but during the transmission of DNA, mutations can happen, and in some cases they give birth to new phenotypes that are more sophisticated and evolved.

This is how nature works, things are never optimized from the beginning, they are post-optimized.

Richard P. Gabriel suggests that a key advantage of Unix was that it embodied a design philosophy he termed “worse is better”, in which simplicity of both the interface and the implementation are more important than any other attributes of the system — including correctness, consistency, and completeness. Gabriel argues that this design style has key evolutionary advantages. (Wikipedia)

Practically in the case of cloud-native, you can, for example, adopt the Monolith First approach.

Experts recommend taking the same approach: build a monolithic application at the beginning, even if your intention is to create a microservices architecture.

This will help you understand the domain and the business boundaries of your application first, making it easier to decompose the application later without disrupting the organization.

Microservices and cloud-native are to the monolith what birds are to dinosaurs: if you can’t start with them, you can evolve into them.

Similar Stories

If you liked this story, you will like other stories I wrote about similar topics:

Connect Deeper

Make sure to follow me on Medium / Twitter to receive my future articles. You can also check my online training Painless Docker and Practical AWS.

Unlike most stories on Medium, this one is free and not behind a paywall. If you liked this work, you can support it by buying me a coffee here.
