Why today’s software development can’t keep up with release cycles

Mayank Chander
Engineering the Skies: Qantas Tech Blog
7 min read · Apr 18, 2024

Imagine driving a sports car fitted with an old engine. The pace is there, much like our current release cycles, but the engine was never built for that kind of speed. That's the issue with today's software development.

Our digital world is expanding. We’re not just linking systems within companies but across them, creating a complex web of interactions. What happens when these systems are built on outdated, monolithic structures? They can’t keep up. Like tweaking one gadget in a smart home and accidentally triggering something else, outdated software systems create unforeseen complications.

The Bottom Line

We need agile, responsive software that aligns with our business goals. This means adopting microservices to allow independent updates and applying continuous integration and deployment for faster, reliable changes.

If we don’t update our development strategies, we risk falling behind in the fast-paced digital race. Are your systems built to keep up?

Why Clear System Definitions and Mental Models Matter in Software Development

It’s essential not only to define systems by their service boundaries but also to ensure that the software itself reflects our mental model of these systems.

Here’s the catch: a service that begins lean can easily balloon into a cumbersome monolith. Imagine starting with a streamlined service handling simple operations but ending up with a giant, tangled system that manages everything from shopping cart functionalities and user authentication to payment processing; it strangles flexibility and agility.

This invites lots of problems around delivery contention.

Lack of clear service boundaries and coordination leads to conflicts, duplications, and dependencies

When delivery teams are aligned around product lines and services are aligned around the business domain, it becomes easier to clearly assign ownership to these product-related delivery teams.

Clear service boundaries with dedicated teams for each product line

The Solution

In software development, think of introducing a new system as layering it gently over the existing one. This overlay strategy lets both systems coexist seamlessly, providing a safety net as the new system gradually matures and stands ready to ultimately take the reins.

Our redesigned approach transforms an overloaded service into a streamlined proxy that efficiently routes events to specialised microservices. This system leverages lambda functions for enhanced scalability and flexibility. It significantly boosts performance without overburdening any single component, ensuring that each part of our horizontally scalable architecture functions optimally. This proof of concept demonstrates the user journey flow, showcasing how easily it can be expanded to handle various operations consistently across the platform.

Key Components:

  • Feature Flags: Contains an individual feature flag mapping for each operation, allowing us to gradually roll out the new implementation in production.
  • Item Service Lambda: Acts as a bridge that manages communication, data transformation, execution, and error handling for various order-related operations.
  • Operation Strategy Class: Defines a common interface for all operation strategies and serves as a central point for declaring flag logic. This ensures that each strategy class implements the specific methods required to execute operations, such as executeLocalFunction and executeRemoteFunction.

Example: Strategy and Factory in Action

The Strategy pattern allows us to encapsulate operation-specific logic in distinct strategy classes. These classes handle operations by either executing the current implementation or invoking lambda services, guided by feature flags. The OperationsStrategy class orchestrates which strategy to use based on the event type.

export class OperationsStrategy<T extends OperationsResult> implements Operations<T> {
  private readonly operationStrategies: Record<OperationType, OperationStrategy<T>> = {
    [OperationType.AddItem]: new AddItemOperation(),
    [OperationType.RemoveItem]: new RemoveItemOperation(),
  };

  async executeOperation<U extends T>(args: any, ctx: any, operation: OperationType): Promise<U> {
    const strategy = this.operationStrategies[operation];

    if (!strategy) {
      // No strategy registered for this event type: fail fast
      throw new Error(`Unsupported operation: ${operation}`);
    }

    const { ENABLE_OPERATIONS_FLAG } = process.env;

    // Each operation has its own entry in the flag mapping,
    // so the rollout can be controlled per operation
    const featureFlag = (await getFeatureFlag(ENABLE_OPERATIONS_FLAG))[operation];

    if (featureFlag) {
      // Flag on: route the operation to the remote lambda implementation
      return (await strategy.executeRemoteFunction(args, ctx)) as U;
    } else {
      // Flag off: fall back to the existing local implementation
      return (await strategy.executeLocalFunction(args, ctx)) as U;
    }
  }
}

The AddItemOperation class handles the addition of items to a service, offering methods for both local and remote execution.

export class AddItemOperation<T extends ServiceResult> implements OperationStrategy<T> {
  async executeLocalFunction(
    args: {
      addItemRequest: util.ItemDraft;
    },
    ctx: any,
  ): Promise<T> {
    // execute current implementation
  }

  async executeRemoteFunction(args: any, ctx: any): Promise<T> {
    // Build the payload the remote lambda expects for this operation
    const payload = createServicePayload(args, ctx, Operations.ADD_ITEM);

    const response = await RemoteService.invokeRemote(payload);

    if (!response) {
      logger.error('invokeRemoteService: response undefined');
      return new ServiceError('invokeRemoteService: response undefined') as T;
    }

    return RemoteService.processRemoteResponse<T>(response);
  }
}
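
With both classes in place, the caller only needs to know the operation type. A hypothetical invocation might look like this (itemDraft and requestContext are illustrative placeholders, not names from the codebase):

const operations = new OperationsStrategy<OperationsResult>();

// Routes to AddItemOperation, which the feature flag then steers
// to either the local or the remote implementation
const result = await operations.executeOperation(
  { addItemRequest: itemDraft }, // args for the AddItem operation
  requestContext,                // ctx forwarded to the strategy
  OperationType.AddItem,
);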

To elevate our Strategy pattern to the next level, we can integrate it with an event-driven architecture, enhancing its flexibility and adaptability. Conceptually, this approach involves deploying handlers that filter and map incoming events, ensuring that each strategy receives only pertinent data in the required format. This enriched model allows for dynamic responses to a diverse range of events, supports seamless scaling, and promotes decoupling, making our systems more modular and easier to manage.

Step 1: Basic Event Processing Framework

  • EventProcessor: This class is responsible for managing a collection of handlers (Handler[]). It can process incoming events by invoking the appropriate handlers based on the event's data. The handleEvent method takes the event name and data as parameters, applying each handler to the data.
  • Handler (Interface): This interface requires any implementing class to define two methods, filter and map. The filter method determines whether a handler should process an event, and the map method transforms the event data.
  • ProcessedEvent: Represents the data structure for events that have been processed. It includes properties such as eventName and data, indicating the type of event and the processed information respectively.
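
As a rough sketch of how these three pieces could fit together (the exact signatures are assumptions based on the descriptions above, not production code):

interface ProcessedEvent {
  eventName: string;
  data: unknown;
}

interface Handler {
  // Decides whether this handler should process the event
  filter(event: ProcessedEvent): boolean;
  // Transforms the event data into the shape a strategy expects
  map(event: ProcessedEvent): ProcessedEvent;
}

class EventProcessor {
  constructor(private readonly handlers: Handler[]) {}

  handleEvent(eventName: string, data: unknown): ProcessedEvent[] {
    const event: ProcessedEvent = { eventName, data };

    // Apply only the handlers whose filter matches, then map the data
    return this.handlers
      .filter((handler) => handler.filter(event))
      .map((handler) => handler.map(event));
  }
}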

Step 2: Incorporating Advanced Processing and Strategies

  • EventStrategy (Abstract Class): Introduces a structured approach to handling specific types of events through an abstract class, which requires subclasses to implement an execute method for executing the strategy's specific logic.
  • Specific Strategies: These classes (AddToCartStrategy, DiscountStrategy, etc.) inherit from EventStrategy and each implements the execute method to handle specific operations related to events, such as adding an item to a shopping cart or applying discounts.
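
Building on the Step 1 sketch, the strategy hierarchy might look something like this (the execute signature and the method bodies are illustrative assumptions):

abstract class EventStrategy {
  // Each subclass supplies the logic for one kind of event
  abstract execute(event: ProcessedEvent): Promise<void>;
}

class AddToCartStrategy extends EventStrategy {
  async execute(event: ProcessedEvent): Promise<void> {
    // e.g. validate the item and append it to the shopping cart
    console.log('Adding to cart:', event.data);
  }
}

class DiscountStrategy extends EventStrategy {
  async execute(event: ProcessedEvent): Promise<void> {
    // e.g. resolve and apply the relevant discount
    console.log('Applying discount:', event.data);
  }
}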

Step 3: Advanced Processors and Service Integration

  • Basic and Advanced Event Processors: These classes represent variations of the EventProcessor, equipped with handlers specific to their complexity and purpose. The Advanced Event Processor may include more sophisticated handling and additional features.
  • FeatureToggleMixin: This mixin allows strategies to be dynamically enhanced based on feature flags. It can modify strategy behavior at runtime depending on whether certain features are enabled.
  • Service Classes: These classes (OrderService, PaymentService, NotificationService) utilize the Advanced Event Processor to handle specific types of business operations, such as processing orders, payments, and sending notifications.
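
One possible shape for the mixin and a service class, again building on the earlier sketches (the flag lookup and the wiring are hypothetical):

type Constructor<T = {}> = new (...args: any[]) => T;

// Hypothetical flag lookup; in practice this would call a feature-flag service
async function isFeatureEnabled(flag: string): Promise<boolean> {
  return process.env[flag] === 'true';
}

// Wraps any EventStrategy subclass so it only runs when its flag is on
function FeatureToggleMixin<TBase extends Constructor<EventStrategy>>(Base: TBase) {
  return class extends Base {
    async executeIfEnabled(event: ProcessedEvent, flag: string): Promise<void> {
      if (await isFeatureEnabled(flag)) {
        return this.execute(event);
      }
      // Feature disabled: skip this strategy entirely
    }
  };
}

// Usage: const ToggledAddToCart = FeatureToggleMixin(AddToCartStrategy);

class OrderService {
  constructor(private readonly processor: EventProcessor) {}

  processOrder(data: unknown): void {
    // Route the raw business event through the processor's handler chain
    this.processor.handleEvent('orderPlaced', data);
  }
}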

While this approach has its benefits, it does raise some concerns, particularly with the potential for complexity and logic to accumulate in the proxy. For a single service, this may not seem problematic, but as the proxy handles protocols for multiple services, the workload can increase significantly. Ultimately, although we aim for independently deployable services, having a shared proxy layer that requires edits from multiple teams could hinder the speed and efficiency of making and deploying changes.

One way to address this is to decompose our monolithic proxy into a modular monolith proxy.

Key Components:

  • Node.js Modular Monolith: Contains distinct modules, such as Orders and Recommendations, each handling a different aspect of the application.
  • Event Processor: Acts as a centralised event bus that receives events from the various modules. It manages the flow of data between the monolith and the lambda functions, reducing tight coupling and increasing the scalability of the system.
  • AWS Lambda Functions: Separate lambda functions for processing orders and recommendations, triggered by events from the EventSource.
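
A minimal sketch of that wiring, using Node's built-in EventEmitter as the event bus (the module boundaries and the lambda wrapper are illustrative assumptions):

import { EventEmitter } from 'node:events';

// Hypothetical wrapper around the AWS SDK's Lambda Invoke call
async function invokeOrderLambda(payload: unknown): Promise<void> {
  console.log('Invoking order lambda with', JSON.stringify(payload));
}

// Central event bus shared by the modules
const eventBus = new EventEmitter();

// Orders module: publishes a domain event instead of calling
// other modules or lambdas directly
function placeOrder(orderId: string): void {
  // ...local order handling inside the Orders module...
  eventBus.emit('orderPlaced', { orderId });
}

// Event Processor: forwards selected events to the lambda that owns them
eventBus.on('orderPlaced', (payload) => {
  void invokeOrderLambda(payload);
});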

Obviously, there will be many more considerations when implementing this approach, but if followed correctly, it allows the work to be broken into stages that can be delivered alongside other ongoing projects.

Takeaway

We need architectures that not only match the conceptual scope but are also flexible enough to evolve without becoming entangled. This approach prevents the heavyweight, monolithic structures that can drag a digital business down. To ensure your systems remain agile and aligned, ask yourself: How well is your software keeping pace with your business’s evolving needs?
