Microservices Explained: Why Every Developer Should Care

Nadar Alpenidze
12 min read · Jul 22, 2024



Imagine a crowded city where each building serves a unique purpose, yet together, they create a functional community. That’s the essence of microservices: an architectural approach where software applications are constructed as a collection of independent services that work together. Before the era of microservices, we had the monolith — akin to a large, imposing building housing everything under one roof. This structure typically comprises three layers: presentation (or UI), services (the business logic), and the database. As technology evolved and the monolith’s growing complexity became more painful, the industry yearned for more flexible design strategies. Enter Service Oriented Architecture (SOA), which set the stage for microservices by advocating for distinct, service-based components in software design.

At its core, microservices architecture is about building software as a network of autonomous services. Each service operates independently, typically running in its own process and often on a separate server. Services communicate through various protocols: synchronous ones like HTTP (REST) and RPC for immediate interactions, and asynchronous methods such as message brokers (e.g., AWS SQS, Kafka) for deferred communication.

The attractiveness of microservices lies in their plethora of benefits: horizontal scaling, resilience, improved availability, reduced latency, focused functionality, fault tolerance, streamlined deployments, and the freedom to use diverse technologies. However, these perks come with their own set of challenges. Microservices introduce complexities such as increased operational demands, intricate data management, and dependency on network reliability.

Despite these hurdles, the balance often tips in favor of microservices. In the past decade, tech giants like Amazon, Netflix, Google, Facebook, eBay, Apple, and many others have transitioned to microservices, reaping the benefits of this modern architectural paradigm.

The Monolith

Figure 1. Monolithic three-tier application

We briefly touched upon the monolith, the architectural design that dominated the software industry for many years. Why did its popularity decline, and what drove the shift to more modular approaches?

At its core, a monolith combines all application components into a single unit, including the user interface, business logic, and data access code. These components run in the same process space and communicate via function calls. For instance, in an E-commerce application, typical components include users, orders, shipping, sales, payments, etc. Drawing from our earlier city analogy, a monolith is akin to a building that houses every essential service, such as grocery stores, hospitals, schools, and workplaces.

Originally, monoliths were preferred for their straightforward development, testing, and deployment processes. A unified codebase allowed for rapid changes, greatly benefiting early-stage products still in the process of defining their service boundaries. However, this architecture presents significant challenges.

In a monolithic setup, all teams within an organization work on the same codebase, deploying the application as a single binary unit. Consequently, any modification or bug fix by one team necessitates redeploying the entire application, affecting all teams. This approach is not only error-prone but also sluggish. To manage this, organizations often resort to infrequent “deployment windows,” limiting their agility and speed.

Scaling is another major challenge. If the application struggles to keep up with specific processes like order handling, scaling up the entire application is the only solution. This means launching more instances of the monolith, demanding significant resources, and incurring high costs, just to scale out a single component.

Figure 2. Scaling monolithic applications

Technological adaptability is also hindered in a monolithic architecture. For example, if there’s a need to optimize a critical part of the application, like the order service, using a different technology becomes impractical. The entire application must adhere to the same tech stack since it operates within a single process space. Thus, transitioning just the order service to a more efficient technology like Rust isn’t feasible.

The emergence of Service Oriented Architecture (SOA) and subsequently, microservices, was a direct response to these limitations. While there’s debate over the distinctness of SOA and microservices, both fundamentally embrace the concept of breaking down a large application into smaller, independent services with specific contexts. The advent of cloud computing further propelled this shift, providing scalable, on-demand resources that mesh seamlessly with the microservices paradigm.

Microservices

Microservices are a collection of autonomous services that interact to form a larger software application. The term “autonomous” indicates that each service runs in its own process, communicating with others over the network, and is deployed independently. In contrast, the monolithic architecture described earlier typically relies on software libraries for separation of concerns. In the microservices approach, separation of concerns is achieved by dividing the application into distinct services, which run as separate components and interact using out-of-process communication over the network.

Figure 3. Microservices architecture

Service Communication and Integration

Services in microservices architecture expose APIs (Application Programming Interfaces) for communication. Collaborating services use these APIs, typically over HTTP, to interact across the network. These APIs are language-agnostic, often using textual formats like JSON or XML for their payloads. Some organizations opt for binary protocols, such as Protocol Buffers (protobuf), to reduce network overhead, minimize packet size, and decrease both marshalling/unmarshalling time and latency. Think of these APIs as akin to interfaces in your codebase — they specify the functions a microservice offers, their input requirements, and expected outputs while keeping implementation details abstracted, facilitating service evolution.
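
To make the “API as interface” analogy concrete, here is a minimal sketch in Python. The service and field names (`CreateOrderRequest`, `customer_id`, and so on) are illustrative assumptions, not from any real system; the point is that the request/response shapes and the JSON wire format are the public contract, while the implementation behind them stays hidden.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical contract for an "orders" microservice. The shape of the
# request and response is the public API; how the service fulfils it
# stays hidden behind the network boundary.
@dataclass
class CreateOrderRequest:
    customer_id: str
    item_ids: list

@dataclass
class CreateOrderResponse:
    order_id: str
    status: str

def encode(request: CreateOrderRequest) -> str:
    # Textual wire format (JSON): language-agnostic and human-readable,
    # so a Java or Go service can consume it just as easily.
    return json.dumps(asdict(request))

def decode(payload: str) -> CreateOrderRequest:
    return CreateOrderRequest(**json.loads(payload))
```

Because only the serialized shape crosses the network, either side can change its internals freely as long as the contract holds.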

Services communicate across networks, whether within an organization’s intranet or across the global internet. This communication can be synchronous or asynchronous.

Synchronous Communication: This approach, involving the request-response pattern, is a blocking operation but straightforward to understand. It provides immediate feedback on operation success or failure. For instance, when uploading pictures to AWS S3, a synchronous request will immediately confirm whether the upload succeeded or failed, indicated by response codes like 200 for success or 4xx/5xx for failure. However, the calling service is blocked until a response is received. REST and RPC are common examples of synchronous communication. The main drawback is the dependency on downstream service availability; if a relied-upon service (like S3) is down, it affects the functionality and potentially the entire service chain.
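
The blocking nature of this pattern can be sketched with Python’s standard library. The endpoint and function names here are illustrative assumptions; the key behavior is that the caller waits for the response and learns the outcome immediately from the status code.

```python
import urllib.request
import urllib.error

def upload_object(url: str, data: bytes, timeout: float = 5.0) -> int:
    """Send a blocking PUT request. The caller is suspended until the
    response (or an error) arrives — this is the synchronous model."""
    req = urllib.request.Request(url, data=data, method="PUT")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status          # e.g. 200 on success
    except urllib.error.HTTPError as err:
        return err.code                 # 4xx/5xx surfaced immediately

def succeeded(status: int) -> bool:
    # 2xx codes signal success; everything else is treated as failure.
    return 200 <= status < 300
```

The immediate feedback is the upside; the downside, as noted above, is that the caller is stuck (and may time out) whenever the downstream service is slow or unavailable.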

Asynchronous Communication: This communication pattern involves the sender issuing a message and receiving immediate acknowledgment, with actual processing occurring later. This method is often implemented using message buses like AWS SQS, Kafka, or RabbitMQ. It’s particularly effective for tasks not requiring immediate responses, such as order processing. A message indicating a completed order is sent to a bus, to be processed by a consumer service later.

Asynchronous messaging offers significant advantages. It decouples microservices, ensuring that if one service is down, others aren’t necessarily affected. It also manages back pressure effectively — when one service sends requests faster than another can process — and enables the “fan-out” pattern, where multiple downstream services simultaneously process a single request.
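
The decoupling described above can be sketched in-process with a thread and a queue standing in for a real message bus like SQS or Kafka (the message fields are illustrative assumptions). Notice that the producer’s `put` returns immediately — the “acknowledgment” — while processing happens later on the consumer’s own thread.

```python
import queue
import threading

# In-process stand-in for a message bus: the producer gets an immediate
# acknowledgment (put returns at once), while a consumer processes the
# message later, at its own pace.
bus: "queue.Queue[dict]" = queue.Queue()
processed = []

def consumer() -> None:
    while True:
        message = bus.get()        # blocks until a message arrives
        if message is None:        # sentinel: shut down the worker
            break
        processed.append({**message, "status": "PROCESSED"})
        bus.task_done()

worker = threading.Thread(target=consumer, daemon=True)
worker.start()

# Producer side: fire-and-forget. If the consumer were down, the
# message would simply wait on the bus instead of failing the caller.
bus.put({"order_id": "o-42", "event": "ORDER_PLACED"})
bus.put(None)                      # tell the worker to stop
worker.join()
```

A real broker adds durability, redelivery, and fan-out to multiple consumers, but the decoupling principle is the same.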

Figure 4. Synchronous and asynchronous communication patterns

Choosing between synchronous and asynchronous communication depends on various factors, including the nature of the task, performance requirements, reliability needs, and overall system design. There’s no one-size-fits-all answer; it requires engineering discretion to weigh the pros and cons. For instance, an asynchronous approach is impractical for a SQL query, where immediate results are expected. Conversely, for an Amazon purchase, immediate order processing is less critical than receiving confirmation that the order was successfully placed and will arrive by the due date.

Service Boundaries

A fundamental question in microservice architecture is: “How big should a service be?” or “What should comprise a service?”

Split Based on Business Capability (Domain): Primarily, microservices should be split around business capabilities. Taking the classic example of E-commerce, key business capabilities include users, orders, shipment, and tax. Services should therefore be organized around these specific capabilities.

When deciding on splitting a larger software system into services, the following principles should be considered:

  • Single-Responsibility Principle: Each service should have one primary responsibility and a single reason to change. For instance, if we are working on a microservice related to sales calculation, changes in the orders service should not necessitate modifications in the sales calculation service. These two should remain independent, interacting through APIs as needed.
  • Autonomy: Services should be as autonomous as possible, minimizing dependencies on other services. For example, if the orders service experiences an outage, other services should be designed to continue functioning to maintain overall availability. Utilizing a message bus can help decouple the services.

The term “service granularity” refers to the size and scope of a microservice within an application. Fine-grained services are narrowly focused on performing a specific function. They offer flexibility but can increase complexity in terms of inter-service communication and coordination. Conversely, coarse-grained services handle a broader range of functionalities. While they reduce inter-service communication overhead, they risk evolving into mini-monoliths, potentially losing some benefits like independent scalability.

In designing services and defining their responsibilities, it’s crucial not to over-engineer. Often, it’s more manageable to start with fewer services and decompose them further if necessary, rather than initially creating an overly complex system with significant drawbacks.

A popular approach to establishing service boundaries is the bounded context pattern from Domain-Driven Design. A bounded context establishes the boundaries of a specific domain model, including its entities, services, and the shared (ubiquitous) language within a particular context. This concept is instrumental in defining clear boundaries for microservices: each service is aligned with a specific bounded context, encapsulating all the logic and data pertinent to that domain.
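
A small sketch of the idea, with illustrative (assumed) context and field names: two bounded contexts can model the “same” real-world concept with entirely different attributes, sharing only an identifier across the boundary.

```python
from dataclasses import dataclass

# Catalog context: what a product *is* to shoppers.
@dataclass
class CatalogProduct:
    sku: str
    title: str
    price_cents: int

# Shipping context: what a product *is* to the warehouse.
@dataclass
class ShippingProduct:
    sku: str
    weight_grams: int
    dimensions_cm: tuple

# Only the shared identifier (sku) crosses the boundary; each service
# keeps the attributes its own domain language cares about, so a change
# to catalog pricing never touches the shipping model.
```

This is why bounded contexts map so naturally onto service boundaries: each service owns one model and one vocabulary.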

Figure 5. Microservices bounded context illustration

Deployment and CI/CD

A key advantage of microservices architecture is its facilitation of rapid software delivery to customers. This is achieved through an incremental and safe approach, underpinned by a well-designed CI/CD strategy.

Continuous Integration (CI) is the practice of frequently merging code changes into a project’s (service’s) repository. Each proposed change (commit) is verified by an automated build and testing process, quickly identifying errors. A detected error causes the build to fail, preventing the code from being merged into the master branch and thereby keeping the branch safe and production-ready. Following a successful merge into the master branch (typically via a pull-request process), Continuous Deployment (CD) takes over: it automatically deploys every change that passes the build to testing and/or production environments. Numerous tools exist to facilitate CI/CD, including AWS CodePipeline, Jenkins, CircleCI, and GitHub Actions.

Microservices greatly benefit from CI/CD, largely due to the architectural advantages previously discussed. Firstly, the independence of each service, with its own CI/CD pipeline, permits isolated building, testing, and deployment. This means that issues in one service’s CI/CD pipeline, such as broken dependencies or failing tests, do not hinder the continuous deployment of other teams’ services.

Secondly, the smaller codebases inherent in microservices result in quicker deployment cycles. Builds, typically faster than those in monolithic applications, require building only the specific service’s code, not the entire application.

Moreover, the independent deployment of services enables more sophisticated strategies for testing and safety in production environments. Notable among these are Blue/Green and Canary deployments. For instance, canary deployment is a method aimed at minimizing the risk of introducing problematic software versions in production. This is achieved by gradually rolling out changes to a limited user subset before a full-scale launch.
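A canary rollout needs a way to decide, per request or per user, which version to hit. One common approach (sketched below under assumed names; real systems usually do this in a load balancer or service mesh) is stable hash-based bucketing, so the same user consistently lands on the same version instead of flip-flopping between them.

```python
import hashlib

def routes_to_canary(user_id: str, canary_percent: int = 5) -> bool:
    """Stable per-user bucketing: hashing the user id gives a bucket in
    0..99, and the same user always lands in the same bucket, so a
    canary rollout doesn't bounce users between versions."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = digest[0] % 100          # roughly uniform over 0..99
    return bucket < canary_percent

def pick_version(user_id: str) -> str:
    return "canary" if routes_to_canary(user_id) else "stable"
```

Ramping the rollout is then just raising `canary_percent` step by step while watching the canary’s error rates, and dropping it back to 0 to roll back.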

Figure 6. CI/CD Pipeline diagram

Despite the expedited development and deployment advantages brought by microservices and CI/CD, it’s crucial to recognize potential pitfalls. For example, with services independently deployed and interacting via APIs, there’s an elevated risk of inadvertently introducing API-breaking changes. These may not be detectable at build time, as they would be in monolithic architectures. Thus, comprehensive end-to-end testing is essential to identify integration issues before they reach production. When introducing breaking changes, API versioning should be employed.

Lastly, it’s important to acknowledge that CI/CD pipelines can also be implemented in monolithic architectures. However, in such setups, the CI/CD process typically involves a single pipeline for deploying the entire software application. This often results in prolonged build processes that can span several hours, hampering team agility. Additionally, the shared repository model frequently leads to build failures due to code changes across various teams interfering with one another. In contrast, microservices enable a significantly smoother and faster software development and deployment cycle.

Key Strategies for Robust Microservices Architecture

I want to outline some additional considerations to keep in mind when building software using the microservices architecture. Each topic mentioned here could potentially be expanded into a full chapter, but for the sake of brevity and readability, we’ll briefly touch upon them:

  1. Design for Failure: Services must be designed with resilience in mind, capable of functioning even when other services fail. Designing for failure is crucial; otherwise, many benefits of microservices architecture could be lost. This involves implementing mechanisms such as circuit breakers, adding fallback procedures, incorporating timeouts and retries, and ensuring robust test coverage.
  2. Decentralized Data Management: This approach involves each service owning and managing its own data, leading to a more resilient and loosely coupled design. Decentralized data management is a key feature of microservices that enhances their autonomy and scalability.
  3. API Versioning: Critical for allowing services to evolve without breaking existing clients. Strategies for API versioning include:
    - URI Versioning: For instance, using /v1/orders
    - Parameter Versioning: Such as /orders?version=1
    - HTTP Header Versioning: Where version information is included in HTTP headers.
  4. API Gateway Pattern: The API Gateway serves as a unified point of entry into the system for clients. Instead of clients needing to know and discover the URI of each service, they can send requests to the API gateway (e.g., www.myproduct.com/orders), which then identifies the appropriate service and routes the request accordingly. API Gateways also manage cross-cutting concerns like authentication, authorization, rate limiting, load balancing, and caching.
  5. Service Discovery: This enables microservices to dynamically discover and interact with each other. Given the prevalent use of horizontal scaling in microservices, where new instances are frequently spun up or down, service discovery is crucial. It allows microservices to register themselves in a service registry upon startup. When a service needs to connect to another, it dynamically resolves the IP address by retrieving the list of IPs associated with a specific service from the service registry. Service discovery models include:
  • Client-Side Discovery: Where the client (another service in this context) queries the service registry and manages routing.
  • Server-Side Discovery: Where clients make requests via a load balancer or API gateway, which then handles queries to the service registry and routing.
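
To make the first point above concrete, here is a minimal circuit-breaker sketch (illustrative, not production-ready; real systems typically reach for a library such as resilience4j or a service mesh). After a few consecutive failures the circuit “opens” and calls fail fast to a fallback instead of hammering a downstream service that is already struggling.

```python
import time

class CircuitBreaker:
    """Minimal circuit-breaker sketch: after `max_failures` consecutive
    failures the circuit opens and calls fail fast to the fallback;
    after `reset_after` seconds it half-opens and allows one probe."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, fallback=None):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback          # open: fail fast, don't call fn
            self.opened_at = None        # half-open: allow one probe
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback              # degrade gracefully
        self.failures = 0                # success closes the circuit
        return result
```

The fallback might be a cached response or a degraded default; the important property is that a downstream outage costs the caller a fast local return instead of a pile-up of timed-out requests.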

These considerations are fundamental to successfully implementing and benefiting from a microservices architecture.

Conclusion

As we’ve journeyed through the intricate landscape of microservices, it’s clear that this architectural style is more than just a buzzword in the tech community — it’s a paradigm shift in how we build, deploy, and maintain software. From enabling agility and resilience to fostering a culture of technological innovation, microservices are at the forefront of modern software architecture.

Whether you’re a developer, a software architect, or simply a tech enthusiast, the world of microservices offers a plethora of opportunities and challenges. It encourages us to think differently about problem-solving and to embrace the complexities of distributed systems.

However, the shift to microservices isn’t just a technological decision; it’s a strategic one. It requires careful consideration, thorough planning, and a willingness to adapt. As with any significant change, there are hurdles to overcome, but the potential rewards — scalability, flexibility, and efficiency — are immense.

Remember, the future of software development is modular, resilient, and adaptable. And it’s within your reach. Start small, think big, and embrace the microservices revolution!

About Me — Let’s Connect

Nadar Alpenidze

Hi and thank you for reading! I’m Nadar Alpenidze, a software developer at AWS. With a passion for knowledge sharing and improving others, I am committed to helping fellow developers grow and excel in their fields. Feel free to connect with me on LinkedIn — I welcome your questions, insights, or even a casual chat about all things software development. Let’s build a vibrant community together!


Written by Nadar Alpenidze

Software engineer at AWS. 💡My mission is to help you expand your knowledge and become a more exceptional engineer. https://www.linkedin.com/in/nadar-alpenidze