Designing Layered Architecture (3/3)
Reusing business logic and exposing our application to a larger ecosystem
In this chapter we’re going to look at how we can isolate our business logic layer, and why it’s important to do so. Bear in mind that this is a topic that I’ve seen a lot of developers struggle to understand, which is why we’re going to follow the already established formula: design things with the easiest solution in mind, identify how it can be improved, and then iterate on it.
Discovering the bottleneck in our design
The last time we left off, our design looked like this:
In our previous session, we successfully linked our backing services to individual repositories, and we designed a single entity that is responsible for defining what our data structure should look like. Our system now contains isolated components and can be easily maintained.
However, the diagram above also has a bottleneck — we notice that our Controller is currently responsible for:
- Analysing a Request.
- Validating the Request data.
- Executing the business logic by aggregating data from multiple repositories.
- Generating a Response.
Because a single class is fulfilling multiple roles, we’re breaking the Single Responsibility Principle (SRP). As we look at solving this design problem, we need to remember that there are two flaws in our design, and that both stem from the fact that our Controller also handles business logic.
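To make the problem concrete, here is a minimal Python sketch of such a bloated Controller. All names (`ProductController`, the repositories and their methods) are illustrative, not taken from the series; the point is that all four responsibilities live in one method:

```python
import json

# A hypothetical controller carrying all four responsibilities at once:
# request analysis, validation, business logic and response generation.
class ProductController:
    def __init__(self, product_repository, pricing_repository):
        self.product_repository = product_repository
        self.pricing_repository = pricing_repository

    def get_product(self, request_body: str) -> str:
        data = json.loads(request_body)                # 1. analyse the Request
        if "id" not in data:                           # 2. validate the Request data
            return json.dumps({"error": "missing id"})
        # 3. execute the business logic by aggregating multiple repositories
        product = self.product_repository.find(data["id"])
        price = self.pricing_repository.price_for(data["id"])
        return json.dumps({"name": product, "price": price})  # 4. generate a Response
```

Any change to validation, business rules or output format forces a change to this one class, which is exactly the SRP breach described above.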
Problem A: Managing multiple response types
We can of course argue that our business logic needs to live somewhere, and that a Controller would be a pretty good place for it. While that may initially be satisfactory, as applications evolve, Controllers stop serving only HTML and instead start responding to different types of Requests in different formats:
The first flaw in our design is that our Controllers handle I/O, but also define the business flows that our application should follow. Ideally, Controllers would analyse a Request, store its data in a common data structure, execute the business logic through another component, and then simply serialise a Response into the appropriate format.
In the example above, if we look at the JSON chain, this means deserialising the JSON, having another component execute the business logic, and then serialising the data structure that we receive from said component. The same flow applies to XML and HTML.
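A thin JSON adaptor along these lines could be sketched in Python as follows. The `product_service` collaborator stands in for “another component” that executes the business logic; all names are illustrative:

```python
import json

# Hypothetical thin adaptor: deserialise, delegate, serialise -- nothing else.
class JsonProductController:
    def __init__(self, product_service):
        self.product_service = product_service

    def get_product(self, request_body: str) -> str:
        data = json.loads(request_body)                   # deserialise the JSON Request
        result = self.product_service.fetch(data["id"])   # delegate the business logic
        return json.dumps(result)                         # serialise the Response
```

An `XmlProductController` would reuse the exact same collaborator and differ only in its (de)serialisation steps.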
Problem B: Going beyond HTTP
The second flaw with our system is regarding the type of events that we support — we can handle XML, JSON, HTML and other HTTP-friendly formats quite well, but how would our application support CLI interactions, or logic that should only be applied when an event is pushed to a certain queue?
To simplify our CLI scenario, we might structure it a bit like in the diagram below:
It seems tempting to just create an adaptor and handle the logic of processing Requests and Responses in our CLI interface. Unfortunately, this only adds complexity and another system that we need to manage into our stack, and it does not prove to be an effective long-term solution.
What happens if we decide to add an authentication layer to our CLI application, such as OAuth 2 with JWT support? Depending on our implementation, we might need to add support for managing refresh tokens, unauthorised calls, downtimes and so on; all of these concerns are beyond the scope of a CLI adaptor.
To further complicate our previous example, we might want to expose some logic through the CLI that should not be available through HTTP, which is often the case when working with this type of scenario.
As we come to realise that making HTTP calls from the CLI is not the best approach when we have access to the application’s codebase, we decide to create a CLI interface for our framework instead, one that can execute internal code from within our system. This is definitely a step in the right direction, since now:
- Authentication can be handled through the CLI layer, and can bypass any HTTP layers.
- We can call internal application logic (i.e. our repositories) from our CLI, without exposing the functionality in a controller endpoint.
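Such a CLI adaptor might look like the Python sketch below, calling the repositories directly rather than going over HTTP. The names are hypothetical, and note that it re-implements the same aggregation the Controller performs, which is precisely the duplication discussed next:

```python
# Hypothetical CLI adaptor: parses terminal arguments instead of a Request,
# and calls internal application logic (the repositories) directly.
class ProductCli:
    def __init__(self, product_repository, pricing_repository):
        self.product_repository = product_repository
        self.pricing_repository = pricing_repository

    def run(self, argv: list[str]) -> str:
        product_id = int(argv[0])                           # analyse the CLI input
        product = self.product_repository.find(product_id)  # duplicated business logic
        price = self.pricing_repository.price_for(product_id)
        return f"{product}: {price}"                        # plain-text output
```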
So we decide to refactor our design to look a bit like this:
We now have two paths that we can use in order to access our application — we can interact with it via HTTP or via CLI. We’ve definitely solved the two issues that we had above, but unfortunately, we’ve also introduced two more:
- Our logic is now duplicated in the Controller and the CLI — changing it in one adaptor will require changing it in the other;
- If we want to respond to queue events, we would need to add an additional adaptor, and duplicate the logic there again;
As was the case with Problem A, the correct solution involves shifting the responsibility of handling the business logic to another component and simply having adaptors that handle the I/O for the various system events that we need to respond to: HTTP, Process Signal, etc.
Duplicated business logic Out, Services In
The final main actor participating in layered architecture is the Service, also known as the Business Logic Layer (BLL). As its name implies, its responsibility lies in defining the business flows of a system.
It’s important to note that Services should be defined to work with Entities, and that all the SOLID principles still apply to them, as you might expect.
The external API of Services focuses on what your system does and what makes it custom; as such, it should not reference tables, API calls or process forks. It defines, for instance, that Products can be manipulated, retrieved and aggregated, not that an API call needs to be made in order to fetch a particular entry from a third-party system. Luckily, we’ve already extracted that particular logic into Repositories, so we just need to pass the responsibility to a specific implementation and then work with the Entity objects that get returned.
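A minimal Python sketch of such a Service follows. The `Product` entity, the repository methods and the discount rule are all assumptions for illustration; the key property is that the Service depends only on a Repository abstraction and works purely with Entities:

```python
from dataclasses import dataclass

@dataclass
class Product:            # a stand-in for the Entity from the previous chapter
    id: int
    name: str
    price: float

# Hypothetical Service: defines business flows, knows nothing about
# tables, API calls, process forks or serialisation formats.
class ProductService:
    def __init__(self, product_repository):
        self.product_repository = product_repository

    def fetch(self, product_id: int) -> Product:
        return self.product_repository.find(product_id)

    def apply_discount(self, product_id: int, percent: float) -> Product:
        product = self.product_repository.find(product_id)
        product.price = round(product.price * (1 - percent / 100), 2)
        self.product_repository.save(product)
        return product
```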
Examples of poor external APIs for Services:
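As a hedged Python sketch (all signatures invented for illustration), a poor Service API leaks storage and transport details, so callers learn how the data is fetched rather than what the system does:

```python
# Hypothetical *poor* Service API: tables, third-party calls and process
# forks leak straight into the method signatures.
class ProductService:
    def select_from_products_table(self, sql_where: str): ...
    def call_partner_api_for_product(self, url: str, api_key: str): ...
    def fork_report_process(self, command: str): ...
```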
Good examples of external APIs for Services:
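By contrast, a good Service API (again, hypothetical names) expresses only what the system does with its Entities, leaving data-source details to the Repositories underneath:

```python
# Hypothetical *good* Service API: business intent only; whether the data
# comes from SQL, NoSQL or a third-party API is a Repository concern.
class ProductService:
    def fetch_product(self, product_id: int): ...
    def aggregate_products_by_category(self, category: str): ...
    def apply_discount(self, product_id: int, percent: float): ...
```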
If we look back to our diagram, it should now look a bit like this:
We can now see that:
- Our business logic is completely isolated.
- Our business logic is no longer duplicated.
- Our business logic can be called as a response to different types of events.
- Our business logic is well-structured and maintainable.
As an added benefit, since Services are just standard objects with a fixed set of dependencies, they can now also be very easily unit tested, mocked and injected into a DI container.
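The testing benefit can be shown with a minimal Python sketch: because the Service only depends on a Repository abstraction, a hand-rolled fake is enough, with no HTTP server, database or framework involved. All names here are illustrative:

```python
# A hand-rolled fake standing in for a real Repository implementation.
class FakeProductRepository:
    def __init__(self, products: dict):
        self._products = products

    def find(self, product_id):
        return self._products[product_id]

# The Service under test, depending only on the repository abstraction.
class ProductService:
    def __init__(self, product_repository):
        self.product_repository = product_repository

    def fetch(self, product_id):
        return self.product_repository.find(product_id)

def test_fetch_returns_the_stored_product():
    service = ProductService(FakeProductRepository({1: "Widget"}))
    assert service.fetch(1) == "Widget"
```

The same constructor injection is what lets a DI container assemble the Service in production.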
Though not always present, one more layer is worth noting: we want our data structures to be isomorphic, with a process through which their transformation can be easily accomplished. This is where the presentation layer comes in, handling the abstraction of a data source into a Response.
You will often see this layer when dealing with serialisation of data into multiple formats:
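A sketch of such presenters in Python might look like this. The function names and the XML shape are assumptions; the point is that one data structure goes in and a format-specific Response comes out, while the Service never learns which format was requested:

```python
import json

# Hypothetical presenters: the same data structure serialised two ways.
def present_json(data: dict) -> str:
    return json.dumps(data)

def present_xml(data: dict) -> str:
    fields = "".join(f"<{key}>{value}</{key}>" for key, value in data.items())
    return f"<product>{fields}</product>"
```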
The Big Picture
We’ve now established all of the actors of the layered architecture paradigm, and below is a quick overview of all of them and their responsibilities:
- Data Access Object / Entity: Transferable Data Structure
- Data Access Layer / Repository: Handles backing services
- Business Logic Layer / Service: Handles business logic
- Controller / CLI Adaptors: Handles I/O
- Presentation Layer: Provides data structures abstractions in different formats
One thing to note is that the layers always go deeper towards the backing services: Controller → Service → Repository → Entity. A Service is not allowed to interact with a Controller, and an Entity is not allowed to interact with a Service.
My initial remarks from part one still stand — you don’t need to start adjusting your MVC design towards a layered one unless you are affected by the changes that we’ve previously discussed, or if you want to secure your implementation through automated testing.
It’s always better to iterate as quickly as you can, bring in as much value as possible for your customers and identify the needs of your market; as we’ve seen, layered design can be applied in increments, later in the development cycle. Resist premature refactoring, design only as much as your team and your product need right now, and focus on improving when the time comes.
Once you do reach the point where the implementation needs to be improved, if you are targeting an enterprise environment, or if you want to implement hexagonal architecture, moving towards a layered design is a very solid approach: it’s a robust system that can be easily managed by large teams, with clear responsibilities, structure and multiple benefits.
Revisiting our initial design, it looked like this:
A lot has changed and we’ve adjusted the design in order to:
- Support new backing services (NoSQL, third-party API);
- Respect SOLID principles;
- Abstract our data structures (Entity);
- Support new ports and adapters, in line with hexagonal architecture principles;
- Support multiple response structures;
Our completed design looks like this:
If you would like to read more about layered architecture, have a look at some of the articles below:
We’ve covered a lot of material, and this concludes our series on Designing Layered Architecture. There are certainly other things to point out, but this material should provide a stable foundation for understanding its actors, how they behave, and when they are needed.
I hope that you’ve enjoyed this series — did you find the topic interesting, and if so, would you like to explore it further? What other topics would you be interested in reading about next? Leave your comments and opinions in the comment box below.
Finally, if you’ve enjoyed the content and want to support us, feel free to hit the heart icon below and share with your friends.