Software Architecture Patterns

Anvesh · Published in SilentTech · Jul 14, 2024 · 5 min read

In this article we will discuss the following architecture patterns:

Service Oriented Architecture
Event Driven Architecture
Microservice Architecture
Serverless Architecture
Containerized Architecture
Microkernel Architecture
Pipe and Filter Architecture

Service Oriented Architecture

Imagine a world before APIs (I use the term loosely here): how would different applications communicate, and how tightly coupled would the layers of an application be? Everything was monolithic then, and it was genuinely hard to communicate with applications developed in a different language.

Service-oriented architecture (SOA) is a type of software design that makes software components reusable using service interfaces that use a common communication language over a network.

Services can be called with little or no knowledge of how they are implemented underneath, reducing the dependencies between applications.

Applications behind the service interface can be written in Java, .NET or any other programming language.

The services are exposed using standard network protocols — such as the Simple Object Access Protocol (SOAP) over HTTP, or RESTful HTTP (JSON/HTTP) — to send requests to read or change data.

An SOA implementation gives us reusability and the ability to use legacy functions in modern applications.

An SOA approach makes it easier to adapt to changing business needs and to integrate systems that provide related functionality.

For instance, if you need to create a new application, you can use the Google Maps service to show a map interface, or add social sign-in by integrating with Facebook, Google, or LinkedIn login services.

Service-oriented architecture (SOA) focuses on building functional, scalable software systems from individual components, called services. Services can interact with one another to perform tasks and access a variety of business applications.
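The key idea above — a consumer that knows only the service contract, not the implementation — can be sketched as follows. This is a minimal illustration with a hypothetical `customer_service`; in a real SOA the call would go over the network (SOAP or JSON/HTTP), and the implementation behind the contract could be Java, .NET, or anything else.

```python
import json

# The service exposes a language-neutral contract: JSON in, JSON out.
# The consumer depends only on this interface, never on the implementation.
def customer_service(request_json: str) -> str:
    """Hypothetical customer-lookup service; the body is an internal detail."""
    request = json.loads(request_json)
    customers = {"42": {"name": "Alice", "tier": "gold"}}  # stand-in data store
    record = customers.get(request["customer_id"], {})
    return json.dumps({"status": "ok" if record else "not_found",
                       "customer": record})

# Consumer side: all it knows is the JSON contract.
response = json.loads(customer_service(json.dumps({"customer_id": "42"})))
print(response["customer"]["name"])  # Alice
```

Because both sides agree only on the JSON contract, the service implementation can be rewritten in another language without the consumer noticing.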

Event Driven Architecture

An event-driven architecture uses events to trigger and communicate between decoupled services. It has three key components: event producers, an event router/broker, and event consumers. The producer pushes an event to the router (typically a message broker); the router filters and pushes the event to a consumer, or in some cases the consumer pulls the message from the router.

The main advantage of this pattern is that we don't expect the consumer to always be available; the consumer can read messages whenever it is up.
This architecture also enables us to decouple services: the producer doesn't need to wait until the operation is completed.
For example, a user triggers an action on an online banking portal requesting an e-statement by email. The user doesn't wait on the portal until the email arrives; they simply trigger the event and move on, while the backend systems read the message from the broker, build an email template, and send it to the user's email address.

By decoupling your services, they are only aware of the event router, not each other. This means that your services are interoperable, but if one service has a failure, the rest will keep running. The event router acts as an elastic buffer that will accommodate surges in workloads.

Event-driven architecture is often described as "asynchronous" communication: the sender and recipient don't have to wait for each other before moving on to their next task, so the systems are not dependent on each other.
This pattern replaces the traditional "request/response" architecture, where services had to wait for a reply before they could move on to the next task.
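The producer → router → consumer flow described above can be sketched with an in-process queue standing in for a real message broker (such as Kafka or RabbitMQ); the names here are illustrative, not a real broker API.

```python
import queue

# In-process stand-in for a message broker.
broker = queue.Queue()

def producer(event):
    broker.put(event)          # fire-and-forget: the producer does not wait

def consumer():
    events = []
    while not broker.empty():  # the consumer reads whenever it is available
        events.append(broker.get())
    return events

# The producer emits events and moves on immediately.
producer({"type": "e_statement_requested", "user": "u123"})
producer({"type": "e_statement_requested", "user": "u456"})

# Later, independently, the consumer drains and processes the backlog.
print(consumer())
```

Note how the producer returns as soon as the event is queued — exactly the e-statement scenario, where the portal user moves on while the backend handles the email later.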

Microservice Architecture

Microservices are small, independent, and loosely coupled. Each service can be developed and managed by a small team, and each service can be deployed and scaled independently.
Services are usually focused on a specific objective and are decoupled along business boundaries.
Each service is responsible for persisting its own data.
Services communicate with each other through APIs; state changes are triggered by events and managed via a message broker.
Each service can be written in its own tech stack.
Changes or updates to an individual service can be tested and deployed immediately.
Only the services that need extra performance have to be scaled.
An API gateway can be used to communicate with the backend services.
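The API gateway idea can be sketched as a simple router that maps path prefixes to services. This is a toy sketch: each "service" is just a function here, where in reality each would be a separately deployed process with its own data store, and the route table and service names are invented for illustration.

```python
# Each microservice owns one business capability and its own data.
def orders_service(path):
    return {"service": "orders", "path": path}

def users_service(path):
    return {"service": "users", "path": path}

# The gateway is the single entry point clients talk to.
ROUTES = {"/orders": orders_service, "/users": users_service}

def gateway(path):
    # Dispatch on path prefix; unknown prefixes get a 404-style reply.
    for prefix, service in ROUTES.items():
        if path.startswith(prefix):
            return service(path)
    return {"error": "not found"}

print(gateway("/orders/17"))  # routed to the orders service
```

Because clients only know the gateway, individual services can be redeployed, rewritten in another tech stack, or scaled out without any client change.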

Serverless Architecture

Serverless architecture enables us to develop applications by focusing on business functionality without having to manage the underlying infrastructure.
Applications are hosted by third-party services.

One implementation of serverless is FaaS (Function as a Service); we can use Azure Functions or AWS Lambda to implement it. These functions are billed for their running time and can be started by a number of built-in triggers.

Characteristics of Serverless applications
Hostless
Stateless
Elastic
Event-driven
High-Availability
Usage-based Cost

With PaaS, we deploy our application as a single unit. With FaaS, we compose our application from individual autonomous functions, and the way we trigger them also differs: FaaS functions run on demand, started by triggers.
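An individual FaaS function typically looks like the sketch below, modeled on the AWS Lambda handler shape (an event object in, a response object out); the event fields are made up for illustration. The platform decides when and where it runs — our code is only the handler.

```python
# FaaS-style function: one autonomous unit of business logic.
# In production, a trigger (HTTP request, queue message, timer)
# would invoke this; locally we can simply call it.
def handler(event, context=None):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

print(handler({"name": "Anvesh"}))
```

Billing then follows execution: we pay only while the handler is actually running, which is the "usage-based cost" characteristic listed above.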

Containerized Architecture

A containerized architecture makes it possible to package software and its dependencies in an isolated unit, called a container, which can run consistently in any environment.
Containers are truly portable, unlike traditional software deployment, in which software could not be moved to another environment without errors and incompatibilities.

Containers are similar to virtual machines in a traditional virtualized architecture, but they are more lightweight – they require less server resources and are much faster to start up. Technically, a container differs from a virtual machine because it shares the operating system kernel with other containers and applications, while a virtual machine runs a full virtual operating system.

Containerization involves encapsulating an application and its dependencies into a container — a lightweight, standalone, executable package. Unlike virtual machines, which include an entire operating system, containers share the host system’s kernel but package the application code, runtime, system tools, libraries, and settings. This distinction makes containers more efficient, portable, and resource-friendly than traditional virtual machines.
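The packaging step described above is usually expressed as a container image definition. As a sketch, a hypothetical Dockerfile for a small Python service might look like this (file names and base image are assumptions, not from the article):

```dockerfile
# Bundle the runtime, libraries, and application code into one image
# that runs identically on a laptop, a CI runner, or a cluster node.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Note there is no operating system installation here: the container shares the host kernel, which is exactly why it starts faster and uses fewer resources than a virtual machine.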

Microkernel Architecture

Microkernel architecture is also known as plug-in architecture. The pattern has two major components: a core system and plug-in modules.
The core system handles the fundamental and minimal operations of the application.
The plug-in modules handle the extended functionalities (like extra features) and customized processing.

A simple example that helps in understanding the microkernel architecture pattern is an IDE such as VS Code, Eclipse, or Notepad++.

The core system contains the minimal functionality needed to run the system. In other architectural patterns, if we replace, add, or change a rule in a system the whole system is affected. In microkernel architecture, this does not happen because we divide the rules into plug-in components. The plug-in modules include additional functionality and are isolated and independent of each other.

The core system needs to know which plug-ins are available, so it keeps track of them via a registry. When a component is plugged into the core system, the registry is updated with information such as the name, location, data contract, and contract format of the plug-in; when the component is removed, that information is removed from the registry accordingly.
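The core-plus-registry relationship can be sketched as below. This is a toy model (class and plug-in names are invented): the core only consults its registry, so plug-ins can be added or removed without touching the core.

```python
# Minimal microkernel sketch: a core system with a plug-in registry.
class Core:
    def __init__(self):
        self.registry = {}  # plug-in name -> metadata and entry point

    def register(self, name, handler, contract="text-in/text-out"):
        self.registry[name] = {"handler": handler, "contract": contract}

    def unregister(self, name):
        self.registry.pop(name, None)

    def run(self, name, data):
        # The core delegates extended functionality to whichever
        # plug-in is currently registered; no plug-in, no feature.
        plugin = self.registry.get(name)
        return plugin["handler"](data) if plugin else None

core = Core()
core.register("spell_check", lambda text: text.replace("teh", "the"))
print(core.run("spell_check", "teh core system"))  # the core system
```

Unregistering `spell_check` removes the feature cleanly — the core keeps running, which is the isolation the pattern promises.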

Pipe and Filter Architecture

Pipe and Filter is another architectural pattern with two kinds of independent entities: filters (components), which transform the data and process the input they receive, and pipes (connectors), which carry the stream of data from one filter to the next in the pipeline.

Each filter processes data and passes it to the next filter via a pipe.
This architecture enables us to decompose a task that performs complex processing into a series of separate, reusable elements.

The time it takes to process a single request depends on the speed of the slowest filters in the pipeline. One or more filters could be bottlenecks, especially if a high number of requests appear in a stream from a particular data source. The ability to run parallel instances of slow filters enables the system to spread the load and improve throughput.
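A compact way to express pipes and filters is with generators: each filter consumes a stream and yields a transformed stream, and chaining them plays the role of the pipes. The filter names below are illustrative.

```python
# Two independent, reusable filters.
def strip_blanks(lines):
    for line in lines:
        if line.strip():
            yield line

def to_upper(lines):
    for line in lines:
        yield line.upper()

def pipeline(source, *filters):
    stream = source
    for f in filters:  # connect each filter's output to the next one's input
        stream = f(stream)
    return list(stream)

print(pipeline(["hello", "", "world"], strip_blanks, to_upper))  # ['HELLO', 'WORLD']
```

Because every filter has the same stream-in/stream-out shape, filters can be reordered, reused in other pipelines, or run in parallel instances to relieve a bottleneck, as noted above.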

Thank you for reading.
You can follow me on LinkedIn and Medium.


Sr. SW Engineer/ Blogger 🔔Follow me to support my journey on Medium/ 🌐LinkedIn: http://www.linkedin.com/in/anveshsalla