Why MVx architectures usually don’t work
Intro
Hi, my name is Vladimir, and I have a controversial topic to discuss. But first, let me explain where it all began.
I first encountered the concept of “architectural patterns”, and MVC in particular, back in 2012, and from that moment I fell in love with the idea of software architecture. I admired the people who wrote architectural frameworks. I spent days and weeks reading their sources and watching YouTube videos. But the deeper I got into it, the more I found myself in a state I’ll call the “beginner illusionist”: while you’re watching tricks, they are magic, but once you start performing them yourself, they become mediocre.
Over the years I’ve tried different patterns and frameworks in different languages and projects. I wrote some implementations myself and watched others in various projects, but nearly every time I ran into their imperfections. And each time it was painful. I kept thinking that next time I’d do everything right, but something went wrong anyway. That’s how I decided to look into this issue and made it my mission to find the very approach they mean when they say “do it right and it’ll be right”.
Did I find it? Maybe. At least I have something to show, but that is for another time.
Along the way I found something equally interesting: the reason why MVx always turns out badly. And that is what we’re going to talk about today.
Three issues with MVx
Let’s take a look at three issues with MVx architectures. The ‘x’ is just a wildcard, so it could stand for MVC, MVP, MVVM, etc. It doesn’t matter, I promise =)
The Remainder issue
The Remainder issue, or decomposition issue, is up first. Our chosen MVx architecture dictates how to split a feature into components in order to implement it. And in theory, every feature should split cleanly into these components.
But what if the feature is “smaller”? Then we end up with “flickering” components that appear and disappear over time, and with useless components that we have to create only because the rules demand them, even though a “smaller” feature doesn’t need them. In the end we write a bunch of boilerplate code that does nothing useful but still has to be compiled and tested.
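To make the boilerplate concrete, here is a hedged sketch of a deliberately tiny, hypothetical feature (an “About” screen that only shows a version string) pushed through a full MVP-style template. All names are invented for illustration:

```typescript
// Hypothetical "About" screen: all it does is show a version string,
// yet the architecture template still demands a full set of components.

class AboutModel {
  // "Domain layer" with nothing to model
  appVersion(): string {
    return "1.2.3";
  }
}

interface AboutView {
  // Interface that exists only to satisfy the template
  showVersion(version: string): void;
}

class AboutPresenter {
  constructor(private model: AboutModel, private view: AboutView) {}

  // Pure pass-through: the presenter adds no behavior of its own,
  // yet it must be written, wired, compiled, and tested.
  onViewReady(): void {
    this.view.showVersion(this.model.appVersion());
  }
}
```

Three units, one of which does all the work; the other two are the “remainder” the template forces us to write.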
And there is more. What if our feature is “bigger” than the feature the architecture was designed around? You probably already know the outcome: such features start to breed all sorts of delegates, factories, utils, helpers, and other “abstractions”.
And it gets worse: after a couple of such features, a thought creeps in: “isn’t it time to extend the architecture template with new components?” Our small and clear architecture grows bigger. What next? We apply the new template to the features that used to fit the old one precisely, but are now “small” for the expanded template. And we end up with a lot of “unnecessary” code that still has to be written, tested, and compiled.
And what do we do when we have to write the same code several times in a row? That’s right, we build a generator. Which now generates even more useless code.
Now the code takes much longer to write and compile; we need more tests to cover all these components; debugging a feature is harder, because we need to know how the components work together; and we spend a lot of time onboarding newcomers.
As the name suggests: the larger the divisor, the larger the remainder can be. We’re either left with indivisible parts of a feature that become overgrown with new abstractions, or we have to write unnecessary architectural components that just consume resources.
The Scalability issue
The next issue we face is the scalability issue.
Imagine that we have a feature and the Remainder issue doesn’t exist: we got lucky, split the feature exactly into architectural components, and implemented it without problems. And everything was good, until we were asked to add new functionality.
First of all, the new functionality must be split into architectural components. Keep in mind that there is still no Remainder issue, so we were able to split everything neatly across the defined layers, and now we just have to implement it all.
This is where it all starts: what is the intuitive approach to adding new functionality? Peek into your own code if your project has had something similar.
Most likely, the so-called data layer will simply be supplemented with new classes that can work with the new data sources.
Moving on to the domain layer, or logic layer, or whatever you call it. Here it gets more interesting: the new logic starts to intertwine with the existing logic. We take data from the old sources and mix it with data from the new ones, and the logic begins to branch, passing from the old domain into the new one and back. At least we expect it to come back, but it becomes harder and harder to track.
Next come all kinds of presentation and UI layers. Here, most likely, we just write new code in the existing components and add new methods to interfaces. Component implementations begin to bloat with logic, and files grow by more and more lines.
Does this look familiar?
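This kind of bloat might look like the following hedged sketch. Everything here (the ViewModel, the repositories, the promo rule) is hypothetical, just to show how each new feature lands in the same component:

```typescript
// Hypothetical ProfileViewModel after two rounds of "intuitive" scaling:
// every new piece of functionality lands as another method and another
// dependency in the same component, instead of a new cohesive unit.

interface UserRepository { name(): string; }
interface OrdersRepository { recent(): string[]; }
interface PromoRepository { firstOrderPromo(): string; }

class ProfileViewModel {
  constructor(
    private userRepo: UserRepository,
    private ordersRepo: OrdersRepository, // added for feature #2
    private promoRepo: PromoRepository,   // added for feature #3
  ) {}

  loadProfile(): string {
    return this.userRepo.name();
  }

  // Feature #2: orders bolted onto the same class
  loadOrders(): string[] {
    return this.ordersRepo.recent();
  }

  // Feature #3: promos, whose logic now intertwines with orders
  loadPromoBanner(): string | null {
    return this.ordersRepo.recent().length === 0
      ? this.promoRepo.firstOrderPromo()
      : null;
  }
}
```

Three features in, the class already mixes user, order, and promo concerns, and every further feature has an obvious place to land: right here.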
What happens next? There is a lot of code, it fails often, and it is difficult to test. What should we do? That’s right, it’s time to create new components!
And so we start pulling pieces of code out to increase testability, creating specialized components one by one. Yes, they are hidden behind interfaces, but the consumers of those interfaces expect a very specific implementation. And to keep track of this, we write a bunch of integration tests that try to pin down the structure and behavior of these dependencies.
Does this sound familiar? =)
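Here is a hedged sketch of such an extraction. The names are invented; the point is the implicit contract hiding behind an innocent-looking interface:

```typescript
// Hypothetical extraction: SessionStore was pulled out of a bloated
// component to make it testable. The interface looks replaceable, but the
// consumer silently relies on a detail of the original implementation:
// save() must make the token visible to load() synchronously.

interface SessionStore {
  save(token: string): void;
  load(): string | null;
}

class InMemorySessionStore implements SessionStore {
  private token: string | null = null;
  save(token: string): void { this.token = token; }
  load(): string | null { return this.token; }
}

class LoginFlow {
  constructor(private store: SessionStore) {}

  login(token: string): boolean {
    this.store.save(token);
    // Implicit contract: this works only because save() is synchronous.
    // A disk- or network-backed SessionStore would break this line, and
    // nothing in the interface warns the implementer about it.
    return this.store.load() === token;
  }
}
```

The interface promises substitutability; the consumer’s code quietly revokes that promise, and only integration tests stand guard over it.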
As a result, whenever changes are made, our beautiful feature begins to swell and break apart into plenty of new components that are highly coupled to the existing ones, even though their interfaces try to suggest that the coupling is loose and they can be replaced at any time.
I am sure that with each subsequent modification the problem will persist and the consequences will only get worse: more code, more components, larger components, more coupling, and more testing, because unit tests will no longer be reliable and more integration and end-to-end tests will have to be used.
Long story short: the Scalability issue is that when a feature is extended, it starts to bloat, because the intuitive approach to scaling is wrong.
The logic gaps issue
The last one is the Logic gaps issue. It is my favorite, because it lies on the surface, causes a lot of damage to our applications every day, and MVx architectures are not the only ones suffering from it.
Let’s imagine the execution path of our logic as a continuous line. As execution progresses, we dive deeper into the hierarchy of our components, performing useful actions related to the feature’s logic. Then we gradually exit these components, set up some system event listeners to continue the logic from there, and finally return control to the environment that launched us. From then on, we just wait for one of our listeners to be called so that our logic continues to run as planned.
But what if an event happens in the system that we do not expect? Then our logic will go in a completely different way. And there can be many such events. Not exactly uncountable, but clearly more than we can control. One is enough.
This is the moment I call the Gap: we give control to the system in the hope that the logic will continue at some point in the future exactly where we expect it to.
To make it clearer, look at some class that the View interacts with directly, such as a ViewModel or Controller (or indirectly, by sending events that they consume), or your Model (if that’s where you host the logic). Such a class is usually a set of entry points (methods) that run the logic, and it implies that these entry points will be executed in a certain order. But we have no mechanisms that could guarantee this order of execution. Except integration or end-to-end tests. Or, at worst, comments with terrible warnings that another method must be called before this one.
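Such a class of entry points might look like this hypothetical sketch (all names invented for illustration):

```typescript
// Hypothetical checkout ViewModel: its methods are entry points that the
// author expects to be called in a fixed order (selectItem, then pay),
// but nothing enforces it. The View, and ultimately the system, is free
// to call pay() first.

class CheckoutViewModel {
  private selectedItem: string | null = null;

  // Entry point #1
  selectItem(item: string): void {
    this.selectedItem = item;
  }

  // Entry point #2.
  // In a real codebase this is where the dreaded comment lives:
  // "selectItem() MUST be called before pay()". The comment is the
  // only "guarantee" of ordering.
  pay(): string {
    if (this.selectedItem === null) {
      throw new Error("pay() called before selectItem()");
    }
    return `paid for ${this.selectedItem}`;
  }
}
```

The intended sequence lives only in the author’s head; the type system sees two independent methods, and every Gap between them is an opportunity for the system to call them in the wrong order.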
Anyway, I am ready to argue that because of this one problem you, I, and the entire industry are losing a huge amount of developer time, testing resources, and money. Simply because, according to our plan, the logic should be executed in a certain order, but the architecture is designed to fragment it, putting us in a position where we can’t easily prove that order.
In short: the Gaps issue is that we cannot guarantee the execution order of our logic, because it was split into weakly related parts during implementation.
Epilogue
Three issues: remainder, scalability, and logic gaps. All implementations of MVx architectures that I have seen have suffered and are suffering from them. And non-MVx architectures usually solve only a subset of the three. For example, ELM- or Flux-like architectures, which are built around a state machine, do their best to solve the scalability and gaps issues, but suffer from the remainder issue when it comes to asynchronous operations (say hi to effects and similar abstractions).
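A hedged sketch of what that remainder looks like in an ELM/Flux-style loop (the message and effect names are invented; real libraries differ in details):

```typescript
// Minimal ELM/Flux-style loop. The pure reducer handles state transitions
// well, but asynchronous work cannot live inside it; it has to be smuggled
// out as an "effect" value, an extra abstraction that exists only because
// the pattern cannot express asynchrony directly. That leftover is this
// architecture's "remainder".

type Msg =
  | { kind: "loadClicked" }
  | { kind: "loaded"; data: string };

type Effect = { kind: "fetchData" } | null;

interface State {
  loading: boolean;
  data: string | null;
}

// Pure: returns the next state plus a description of the async work
// that someone else (an effect handler) must perform later.
function reduce(state: State, msg: Msg): [State, Effect] {
  switch (msg.kind) {
    case "loadClicked":
      return [{ loading: true, data: state.data }, { kind: "fetchData" }];
    case "loaded":
      return [{ loading: false, data: msg.data }, null];
  }
}
```

The reducer stays pure and testable, but the effect handler, the part that actually performs the fetch and feeds `loaded` back in, sits outside the pattern, and that is exactly where the boilerplate accumulates.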
And I want to draw your attention to the following: none of the MVx or other architectures solve the Gaps issue. And I can understand why: it is not obvious, and usually the very existence of a component such as the View leads to its appearance. Logic can’t help but break if it is designed to break every time the user needs to do something.
How did it happen? Why does an algorithm, when transferred from paper to code, become so fragmented, so unreliable? Can we do something about it?
Sure we can…
I have an idea that I want to share, but it will take time to write another post. For now, I’ll be more than happy if you share your thoughts on this topic in the comments.
Next part is here