Evolution of System Design: From Micro-Functions to Micro-Services

Mohit Gupta
Feb 14

Since I started my career in software development, I have experienced various design principles and patterns. However, one principle stands out significantly, so much so that many new development methodologies, design patterns, and even team structures have evolved around it.

This is the ‘Single Responsibility Principle’, aka SRP. And why not? SRP opens up the potential for much more innovative and effective software development: the right code structuring, better readability and maintainability, better release management, and, most importantly, team accountability and agility.

*Refer here to read more about SRP and other Key Architecture Principles. This principle is about one unit of code owning one responsibility.

‘Unit of code’ is contextual, based on who the consumer is. Hence an application, module, package, class, or function could all be termed a unit of code, depending on ‘who the client is’.

Whatever the unit, the design evolution and effort were always aimed at enabling a more modular implementation that supports more maintainable and scalable systems.

Another important aspect was to enable independent development, which could encourage accountability and agility in teams.

We developed many products, starting with basic monolithic applications, moving to large, well-managed modular monoliths, and on to highly distributed applications.

Here is the journey of system design evolution, collating experience from the last two decades.

The key takeaway from the journey is an understanding of the basics: how system designs evolved organically to address various challenges, the motives behind each step, and the lessons learned along the way.

System Evolution: Smaller Size, Clear Intention

The journey begins with monolith application designs.

Although monolithic, the need for high-quality code (refer to Clean Code) and a clean structure was always intact.

Hence, the first and foremost focus of the journey was on defining smaller code structures (functions and classes) with clear intent. This helped make the code easy to read and maintain, and opened up the possibility of reuse.

I remember having many discussions about the size of methods and classes, and the right number of lines of code. Ultimately, I learned that

Small size is not about the number of lines, but about an intent that any logical mind can grasp at first sight.
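For example, a long calculation method can be split into small, intent-revealing units. A minimal sketch in Java (the invoice domain, names, and tax rate are purely illustrative):

```java
import java.math.BigDecimal;
import java.util.List;

// Hypothetical example: each step is small enough that its intent
// is readable at first sight, and each piece becomes reusable.
class InvoiceCalculator {

    private static final BigDecimal TAX_RATE = new BigDecimal("0.18"); // assumed rate

    BigDecimal total(List<BigDecimal> lineItems) {
        BigDecimal subtotal = subtotal(lineItems);
        return addTax(subtotal);
    }

    private BigDecimal subtotal(List<BigDecimal> lineItems) {
        // sum all line items, starting from zero
        return lineItems.stream().reduce(BigDecimal.ZERO, BigDecimal::add);
    }

    private BigDecimal addTax(BigDecimal amount) {
        return amount.add(amount.multiply(TAX_RATE));
    }
}
```

Each method carries one intent; none of them is ‘small’ because of a line-count rule.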

Learning

  • Keeping size ‘smaller than small’ is very useful. It helps keep code and intent clean, and opens up the possibility of reuse. Read more here: ‘Art of Clean Code’.
  • There is no single optimum number of lines of code. It should be based on the intent of the ‘unit of code’. As soon as we see another clear intent emerging from the code, we should separate it out.
  • Don’t break code up just to make it small. Blindly following size alone can spoil the structure by creating many unnecessary classes and interfaces. The right approach is to look for a separate, definable intent.

System Evolution: Code Structured in the Right Packages

Another structural improvement was driven by adopting the right package structure (and naming conventions), which conveys the intent of the code.

This may feel like an obvious choice, but many monolithic applications evolve over the years into a highly cross-referenced structure, with a codebase in no better shape than spaghetti.

I remember various monolithic, mammoth projects where we took on the challenge of refactoring legacy code into the right (package) structure. It was a much more tedious job than it sounds.

Since these projects had evolved over many years, much of the code was cross-referenced without any clear module or layer boundaries. Making a single code change needed either years of experience with the system or, failing that, a magic wand.

Monolith Spaghetti Structure
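By contrast, a cleanly layered package structure makes the intended boundaries visible. A hypothetical layout (module and package names are illustrative only):

```
com.example.shop
 ├── order                // business module: orders
 │    ├── api             // interfaces exposed to other modules
 │    ├── domain          // entities and business rules
 │    └── persistence     // data access, hidden behind the api
 ├── inventory            // business module: inventory
 │    ├── api
 │    ├── domain
 │    └── persistence
 └── common               // shared utilities only, no business logic
```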

Learning

  • The right structure helps to produce cleaner code by defining where any piece of code should go.
  • It helps enforce respect for module and layer boundaries, and even encourages the use of the right interfaces and interaction protocols, if defined early.
  • Define an effective structure at the beginning. It has immense value for overall system quality.
  • Refactoring is a way of developer life, but an early definition of the right structure has a major impact on code quality and structure. Hence it is an important architectural consideration.

Code structured into the right packages greatly improved developer efficiency in the first place, followed by many more improvements for deployment and beyond, which we shall discuss in the next sections.

System Evolution: Reusable Components and Utilities

The next evolution came in the form of structuring code into reusable components and utilities. This keeps reusable code in one place, for ease of reuse and to reduce rework (DRY).

Utility Classes

A group of generic reusable functions providing commonly used implementation.

Reusable, stateless, behavior only, no data: the simplest form of reusability.
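A minimal sketch of such a utility class in Java (names are illustrative): only static behavior, no instance state, and no way to instantiate it.

```java
// Hypothetical utility class: stateless, behavior only.
public final class StringUtils {

    private StringUtils() {
        // no instances: a utility class carries no state
    }

    public static boolean isBlank(String value) {
        return value == null || value.trim().isEmpty();
    }

    public static String capitalize(String value) {
        if (isBlank(value)) {
            return value;
        }
        return Character.toUpperCase(value.charAt(0)) + value.substring(1);
    }
}
```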

Components

The next level of grouping reusable implementation, but with the possibility of maintaining state as well, along with behavior.

So anyone can use the functionality just through the main component interface, which hides all the complexity.
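A minimal sketch of a component in Java (the rate-limiter domain and names are assumptions for illustration): callers see only the interface, while the component keeps its state and complexity inside.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// The public face of the component: behavior only.
interface RateLimiter {
    boolean tryAcquire(String clientId);
}

// The implementation holds state (per-client usage) behind the interface.
class FixedQuotaRateLimiter implements RateLimiter {

    private final int quota;
    private final Map<String, Integer> usage = new ConcurrentHashMap<>();

    FixedQuotaRateLimiter(int quota) {
        this.quota = quota;
    }

    @Override
    public boolean tryAcquire(String clientId) {
        // atomically increment this client's counter and check the quota
        int used = usage.merge(clientId, 1, Integer::sum);
        return used <= quota;
    }
}
```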

Components and utilities helped a lot in structuring code in a more readable and maintainable form. They also reduced rework. With clearer boundaries, they also seeded the direction of end-to-end ownership by autonomous teams.

System Evolution: Code Structuring in Separate Projects

A lot of improvement so far, but one challenge was still intact: the separation of code based on business features/concerns (let us call each one a module).

Utilities and components helped pull reusable boilerplate code out separately; however, business feature implementations were still left to packages alone for structuring.

I remember when one of our products’ codebases became so huge that it was practically impossible to manage. Packaging the whole codebase into one JAR/WAR caused issues with obvious challenges like build time, the size of binaries, etc. Searching for the right code location was a pain without years of experience.

We solved it by implementing a custom build to bundle module code based on the package structure. It worked, but it was tedious and complex work. Any change in structure or packages could break the whole build.

The better solution was to divide the codebase into separate projects for all utilities, components, and modules. The code was still coupled through runtime dependencies. However, it helped to have a better structure for development.
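In a Java/Maven world, such a split is typically expressed as a multi-module build. A minimal sketch of a parent POM (module names are hypothetical, not our actual product):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example.shop</groupId>
    <artifactId>shop-parent</artifactId>
    <version>1.0.0</version>
    <packaging>pom</packaging>

    <modules>
        <module>common-utils</module>     <!-- utility classes -->
        <module>core-components</module>  <!-- reusable, stateful components -->
        <module>order-module</module>     <!-- business feature: orders -->
        <module>inventory-module</module> <!-- business feature: inventory -->
    </modules>
</project>
```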

Learning

  • Separation of concerns works better with the separation of physical assets.
  • The human mind has a limited capacity to grasp context. Breaking the system into separate physical structures helps with understanding too.

System Evolution: Independently Deployable Projects with Individual Schemas

The challenge of tight coupling was still there.

We still needed to build and deploy the whole system as one unit. That meant the whole system had to be tested and deployed together, along with database changes. Any small change therefore needed a mammoth verification effort before it could go to production.

The key blocker for cross-dependency was the database. Hence, the next stage was to separate the database as well, along with the codebase, into separate projects.

We defined an individual schema for each module. Interactions across modules, for any data or operation, were strictly defined through interfaces (EJB, Spring, ...).

We also defined a dependency hierarchy of modules (like networking layers) to avoid cross-dependency loops, and introduced dedicated modules for accessing common data used by all the other modules.
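For illustration, such a cross-module contract might look like this minimal Java sketch (hypothetical names, whether wired via EJB or Spring): other modules depend only on this interface, never on the module’s schema.

```java
// The only thing other modules see of the inventory module.
// Its schema and implementation stay private behind this contract.
public interface InventoryService {

    // how many units of this SKU are currently available
    int availableStock(String sku);

    // reserve stock for an order; transaction handled implementation-side
    void reserve(String sku, int quantity);
}
```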

With the above separation, it became possible to build and deploy modules separately. It helped a lot in achieving independent development, deployment, and releasability.

Independence has a cost.

We hit that cost very soon, through the challenges of distributed systems.

  • Performance: As code moved into different modules, the many cross-module interactions through remote interfaces, servlets, or XML-based protocols caused performance issues.
  • Distributed Transaction Management: Cross-module transaction management was another challenge, solvable either by distributed transactions or by eventual-consistency patterns. Neither is a viable or easy option.
  • Failure Management: More challenges arose around failure management, when one service responds asynchronously with a failure later and the original caller needs to manage the whole rollback to sanitize state. The solution was to retain state, plus rollback logic, across the services.
  • Reporting and Cross-Referenced Data Use Cases: Reporting, or any query needing data from more than one service, was heavy on performance. The solution was either to de-normalize the structure or to maintain a reporting store into which all the required data is replicated.

All of these solutions were doable, but each came with its own cost and hence needed a cost-benefit trade-off.

A simple workaround in the EJB tech stack was to use local interfaces, i.e., call-by-reference in the Java world. This meant breaking the boundaries of separate deployment and of remote interaction protocols like SOAP and RMI, and communicating across modules via local references (instead of remote services).
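In EJB 3 terms, the same bean can expose both views, so co-deployed modules take the cheap local path. A minimal sketch of this workaround (bean and method names are illustrative):

```java
import javax.ejb.Local;
import javax.ejb.Remote;
import javax.ejb.Stateless;

// Remote view: cross-JVM calls, pass-by-value (serialization cost).
@Remote
interface PricingRemote {
    double priceFor(String sku);
}

// Local view: same-JVM calls, pass-by-reference (no remoting overhead).
@Local
interface PricingLocal {
    double priceFor(String sku);
}

@Stateless
public class PricingBean implements PricingRemote, PricingLocal {
    @Override
    public double priceFor(String sku) {
        return 9.99; // placeholder pricing logic
    }
}
```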

This solution helped to manage the first three of the above challenges while keeping an independent project structure.

The challenge of bulk cross-referenced data (for reporting, etc.) was solved by defining a de-normalized data schema and replicating the required data into it.

Learning

  • Over-de-normalization can be costly. It needs proper design to contain the challenges of distributed systems.
  • It is okay to be less than perfect if it makes life easy, at least as long as we don’t have the right solution to manage that perfection (of distributed systems).

System Evolution: Modular Monolith Worked Well

The key left-out use cases were independent deployment, scalability, and being technology agnostic.

Scalability was still tied to the whole application, whose parts were interdependent through direct references.

Technology independence was also not possible, as the whole system was tightly based on one technology ecosystem.

However, these were not immediate challenges for us, as the whole product was planned to be on the same tech stack, and the desired level of scalability could be achieved by deploying the coupled modules together on multiple nodes.

With a proper modular structure, that worked well, across many of our product implementations.

Learning

  • A modular monolith can work well to manage many use cases.
  • It needs proactive guide rails in place to ensure the sanity of the modular structure, and a defined process for releases and deployment.

System Evolution: Distributed Services Model

All worked well so far. However, who does not want more?

Hence, we explored the distributed services model. We implemented services on the SOA/web services stack a decade back, using Axis, UDDI, WSDL, and SOAP.

But due to the known challenges of distributed systems, we limited it to independent services that did not require data across modules, or to cases where the application interface needed to be exposed to the external world.

Managing the pitfalls of the distributed services model was still a challenge. Above all, managing failure scenarios in the absence of distributed transactions was tricky.

It required every service to retain state for rollback, wait for other service interactions to complete, and roll back in case of any failure. Keeping all this logic in the services cluttered both the services and the actual business flow.

We experimented further by using a workflow system as an orchestrator of the business flow, developing independent services with separate deployment, a registry, and so on.

Responsibility for inter-service communication was given to the workflow, which managed all the logic and complexity of cross-service interactions: fallback logic, eventual data consistency, and common services like logging of the service flow.

It was an interesting solution, as all the complexities of the distributed services world were contained in the workflow, outside the core service business logic.
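The core idea can be sketched as a saga-style orchestrator: execute steps in order, remember each completed step, and run compensations in reverse on failure. This is an illustrative sketch (not our actual workflow engine; names are assumptions):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

class Orchestrator {

    interface Step {
        void execute();     // call one service
        void compensate();  // undo it, e.g. release a reservation
    }

    void run(List<Step> steps) {
        Deque<Step> completed = new ArrayDeque<>();
        try {
            for (Step step : steps) {
                step.execute();
                completed.push(step); // remember for potential rollback
            }
        } catch (RuntimeException failure) {
            // roll back completed steps in reverse (LIFO) order
            while (!completed.isEmpty()) {
                completed.pop().compensate();
            }
            throw failure;
        }
    }
}
```

The services themselves stay free of rollback bookkeeping; the orchestrator owns it.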

It worked well, however, with many complexities. For example, debugging was a pain. Developers had to be very well trained in the system, and highly skilled as well, to maintain and understand this highly distributed system.

Learning

  • Using new technologies and design patterns is attractive. It is easy to fall for them.
  • However, they should not be used at the cost of making system maintenance complex, and hence creating pain for both the team and the product.
  • It is important to stabilize the product and grow the team’s understanding of the system first, and only then opt for more complex designs.

A Workable Solution for Many Cases

After evaluating all the models and their pros and cons, we used the modular monolith structure for most use cases, or sometimes a mixed pattern. It worked well, meeting the standard of a highly modular and elegant solution while avoiding the pitfalls of the distributed services model.

It was not a perfect (microservices-like) solution. However, with a mix of reusable components and various service-oriented constructs, it helped strike a balance of modularity, performance, and independent development to a great extent, while meeting the desired use cases of those years.

We used highly distributed patterns, like the orchestrator, only when the use cases demanded it: for example, a dynamic system where new services can be added on the fly. There, the orchestration layer (workflow) contained all the complexities of distributed service interactions.

Summary

The journey primarily followed the wish for a system that can support independent development and deployment, better scalability, and more maintainable code, while seeding the direction of end-to-end ownership and accountability in teams.

Here are the key markers of the journey:

  • Smaller functions and classes, with good naming conventions
  • Better packages, defining layers clearly
  • Reusable boilerplate code as utilities, components, and frameworks
  • Separation of modules using the right package structure and naming
  • Separate projects for modules, to help developers work on a focused area
  • Separation across projects, including the database, promoting independent development and deployment
  • And finally, fully independent services (web services, EJB, or whatever else)

Good system design is possible with any underlying technology, in any era. We experienced and experimented with the above journey across diverse technologies like EJB, Spring, web services, and more. However, technology can surely help, through the availability of reusable language constructs and frameworks.

Finally, we reached a very important ‘Principle of Independence’, which we can sense throughout this whole journey. A journey towards:

  • An independent codebase, owning its behavior
  • With an independent database, owning its state
  • An independent technology stack, which can evolve on its own
  • Independent packaging and shipping, to build and deploy independently
  • Supporting independent scalability
  • Independent ownership, to promote end-to-end accountability and an autonomous team structure (agility)

By the way, there is no such principle ‘of Independence’ :). However, still:

Attaining independence is the key driver of system design, as then every part can flourish in its own way (similar to real life, and humanity).

The key takeaway is that this is how many products and teams evolved.

The principles and needs are age-old: the wish for more maintainable code, reusability, scalability, and a more accountable and efficient team structure, while promoting independence of execution.

With the advancement of technology, patterns, and tool suites, supporting these requirements is becoming more efficient. One such good addition is the microservices pattern, which is not a silver bullet for every challenge but provides a nice, well-defined pattern and technology stack.

We shall discuss microservices in the next article.

If you have any suggestions, feel free to reach me on LinkedIn: Mohit Gupta
