For some time I’ve wanted to look at the different options for overriding dependencies defined across multiple Spring contexts, so I decided to use this post to dig a little into the subject. Let’s do some coding!
As background, I’ll be following this “base” Spring context configuration:
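The original snippet isn’t reproduced here, but a minimal sketch of such a base configuration, using the bean names that appear throughout the post (the `Foo`, `Bar` and `FooBar` domain classes are assumed, trivial stand-ins), might look like this:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Trivial domain types, assumed for the example
class Foo {}

interface Bar {}

class DefaultBar implements Bar {}

class FooBar {
    private final Foo foo;
    private final Bar bar;

    FooBar(Foo foo, Bar bar) {
        this.foo = foo;
        this.bar = bar;
    }

    Bar getBar() {
        return bar;
    }
}

@Configuration
public class BaseConfig {

    @Bean
    public Foo foo() {
        return new Foo();
    }

    @Bean
    public Bar bar() {
        return new DefaultBar();
    }

    @Bean
    public FooBar fooBar() {
        // dependencies are resolved inside the factory method; this detail
        // becomes important later on
        return new FooBar(foo(), bar());
    }
}
```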
And I want to override the “bar” bean with a different instance defined in a different Spring context file:
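A sketch of that overriding configuration (`OverriddenBar` being an assumed alternative implementation):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// An alternative Bar implementation, assumed for the example
class OverriddenBar implements Bar {}

@Configuration
public class OverrideBarConfig {

    @Bean
    public Bar bar() {
        // replaces the "bar" definition from BaseConfig
        return new OverriddenBar();
    }
}
```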
The objective is to have a proper FooBar bean injected with the correct Bar implementation. This simple JUnit test class will help us to check the correctness, or not, of our experiments:
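The check could be as simple as asserting on the type of the injected Bar. How the context is built varies with each experiment below, so the context field here is just a placeholder:

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.springframework.context.ApplicationContext;

public class FooBarOverrideTest {

    // built differently in each experiment below
    private ApplicationContext context;

    @Test
    public void fooBarIsInjectedWithTheOverriddenBar() {
        FooBar fooBar = context.getBean(FooBar.class);
        assertTrue(fooBar.getBar() instanceof OverriddenBar);
    }
}
```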
The easy approach: A single context
If we use a single Spring context then the process is actually pretty straightforward. The last bean definition takes precedence over the previous ones and the bean dependency resolution happens once Spring has built the whole dependency graph and selected the appropriate candidate for each bean.
Just adding OverrideBarConfig as the last configuration definition when creating the context means our test passes. So far, so good.
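Assuming Java-based configuration, that single-context setup could look like:

```java
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public class SingleContextExample {

    public static void main(String[] args) {
        // OverrideBarConfig is registered last, so its "bar" definition
        // takes precedence over the one in BaseConfig
        ApplicationContext context = new AnnotationConfigApplicationContext(
                BaseConfig.class, OverrideBarConfig.class);

        FooBar fooBar = context.getBean(FooBar.class);
        System.out.println(fooBar.getBar().getClass().getSimpleName());
    }
}
```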
However, what I really want to analyse is how this applies to hierarchical contexts. Here at EmpathyBroker we use parent-child contexts in many situations in order to provide a flexible way to define custom behaviours for each customer on our search platform.
We want to share some common infrastructure and sensible defaults, but we also have to be able to customize some steps in our pipeline to adapt the process to our customers’ needs. Using a single context could of course be a valid approach too, but it would mean reloading ALL our beans, or at least a subset of them, each time we instantiate a new context for a customer, regardless of whether any bean is overridden.
Using parent-child contexts
So let’s define our hierarchical Spring context and run the tests.
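A sketch of that hierarchical setup, with the parent holding the shared defaults and the child registering only the override:

```java
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public class ParentChildExample {

    public static void main(String[] args) {
        // the parent context holds the shared, default definitions
        AnnotationConfigApplicationContext parent =
                new AnnotationConfigApplicationContext(BaseConfig.class);

        // the child context only registers the overriding configuration
        AnnotationConfigApplicationContext child =
                new AnnotationConfigApplicationContext();
        child.setParent(parent);
        child.register(OverrideBarConfig.class);
        child.refresh();

        FooBar fooBar = child.getBean(FooBar.class);
        System.out.println(fooBar.getBar().getClass().getSimpleName());
    }
}
```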
But the test fails :(
What’s happening? The FooBar bean is created as soon as the parent context is loaded. Since it’s not redefined in the child context, Spring uses the bean defined in the parent context and it isn’t recreated. Even worse, if we used the prototype scope for the fooBar bean it would be re-instantiated, but the bar dependency would still be the parent one, because dependencies are resolved from child contexts up to parent ones, never the other way around. This means that while rebuilding the FooBar bean, Spring will never look for dependencies in the child context and therefore won’t use the overridden definition.
One possible solution to this problem is to redefine the fooBar bean in our child context:
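A sketch of that redefinition, with the remaining dependency (the foo bean) autowired from the parent context:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class OverrideBarConfig {

    @Autowired
    private Foo foo; // resolved from the parent context

    @Bean
    public Bar bar() {
        return new OverriddenBar();
    }

    @Bean
    public FooBar fooBar() {
        // the definition has to be repeated here so that the child context
        // builds a fresh instance wired with the overridden bar
        return new FooBar(foo, bar());
    }
}
```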
And, of course, this time the test passes. Taking a look at what we had to do, a couple of changes were needed in order to inject the correct bar:
- Override the fooBar bean definition itself.
- Inject every single dependency of fooBar into our Configuration class in order to build the new instance; in this case, the foo bean.
In this little example it isn’t really a big deal: just a few lines of code are needed to adapt our Configuration. But if we imagine this in a really big Configuration class with a very complex dependency graph, things get worse really fast. Think what would happen if fooBar and bar were themselves dependencies of a number of other beans, and so on.
Register new bean definitions at runtime
Thinking about a way to make this process more automatic, we could use a BeanFactoryPostProcessor. This is a hook Spring provides to allow custom modifications of the application context’s bean definitions, giving us, for example, a chance to change or add definitions at runtime.
Be aware that only bean definitions can be manipulated in this processor. Unintentionally creating bean instances in a BeanFactoryPostProcessor could lead to undesired behaviour: it forces bean instantiation too early in the Spring context loading process and will probably produce wrong results.
Essentially, the idea is to analyse the dependency graph of the parent context using the child-defined beans as the starting point. If some bean definition in the parent context depends on one or more beans defined in the child context, then we would export the definition to the child context and let Spring create a new fresh instance. This could be an initial implementation of our BeanFactoryPostProcessor:
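The original implementation isn’t reproduced here, but a sketch along those lines might look like the following. The `DependencyAnalyser` type is an assumption (as discussed next, it can be built on top of `ConfigurableBeanFactory.getDependenciesForBean`), and the cast of the parent bean factory is assumed to hold for the usual application-context setups:

```java
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;

public class ExportParentBeansFactoryProcessor implements BeanFactoryPostProcessor {

    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory childFactory)
            throws BeansException {
        // assumed to be a ConfigurableListableBeanFactory in our setups
        ConfigurableListableBeanFactory parentFactory =
                (ConfigurableListableBeanFactory) childFactory.getParentBeanFactory();
        if (parentFactory == null) {
            return;
        }

        // assumed helper: maps each bean name to every bean that depends on it,
        // directly or transitively
        DependencyAnalyser analyser = new DependencyAnalyser(parentFactory);
        BeanDefinitionRegistry childRegistry = (BeanDefinitionRegistry) childFactory;

        // for every bean the child (re)defines, copy into the child every parent
        // definition that depends on it, so Spring builds a fresh instance there
        for (String overridden : childFactory.getBeanDefinitionNames()) {
            for (String dependant : analyser.getDependantBeans(overridden)) {
                if (!childFactory.containsBeanDefinition(dependant)
                        && parentFactory.containsBeanDefinition(dependant)) {
                    childRegistry.registerBeanDefinition(
                            dependant, parentFactory.getBeanDefinition(dependant));
                }
            }
        }
    }
}
```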
The dependency analyser implementation is outside the scope of this post, although it isn’t difficult to build using the ConfigurableBeanFactory.getDependenciesForBean method; essentially, it maps each bean name to every dependent bean, whether the dependency is direct or transitive.
Now, we need to register our ExportParentBeansFactoryProcessor when creating the child context.
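A sketch of that registration; the post-processor runs before any singleton is instantiated, so the exported definitions are in place in time:

```java
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public class ChildContextWithProcessorExample {

    public static void main(String[] args) {
        AnnotationConfigApplicationContext parent =
                new AnnotationConfigApplicationContext(BaseConfig.class);

        AnnotationConfigApplicationContext child =
                new AnnotationConfigApplicationContext();
        child.setParent(parent);
        child.register(OverrideBarConfig.class);
        // must be added before refresh() so it can manipulate the definitions
        child.addBeanFactoryPostProcessor(new ExportParentBeansFactoryProcessor());
        child.refresh();
    }
}
```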
But the test fails again!
What happened this time? Well, the problem is related to the way we have defined the fooBar bean in our BaseConfig. Let’s take a look at it:
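Sketching what that definition looks like, the factory method resolves its dependencies by calling the sibling @Bean methods directly:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BaseConfig {

    @Bean
    public Foo foo() { return new Foo(); }

    @Bean
    public Bar bar() { return new DefaultBar(); }

    @Bean
    public FooBar fooBar() {
        // foo() and bar() are intercepted by Spring and resolved against the
        // context where BaseConfig is registered, i.e. the parent
        return new FooBar(foo(), bar());
    }
}
```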
The dependencies are resolved inside the factory method. With this kind of dependency resolution, Spring uses the context where the dependency was defined to resolve the bean. Therefore, Spring uses the bar bean defined in the parent context.
However, in Spring we can also specify our bean dependencies using method arguments:
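Rewriting the factory method so that its dependencies arrive as arguments could look like this:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BaseConfig {

    @Bean
    public Foo foo() { return new Foo(); }

    @Bean
    public Bar bar() { return new DefaultBar(); }

    @Bean
    public FooBar fooBar(Foo foo, Bar bar) {
        // foo and bar are resolved before this method is called, by the
        // context that owns this bean definition; once the definition is
        // exported, that is the child context
        return new FooBar(foo, bar);
    }
}
```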
That way, Spring’s dependency resolution happens before the factory method is called, and the beans are resolved using the context that contains the bean definition (in our case, the child context, because it holds the exported definition). Using this mechanism, the test passes!
We have seen that it seems feasible to analyse the whole dependency graph for a bean and automatically export the definitions of the dependent ones to the child context. The objective of the experiment was to reduce the amount of code we have to write when overriding beans in child contexts.
Sometimes we end up overriding lots of stuff just because we want to replace a single bean that is used across many others. Some thorough testing is probably still needed to check whether this approach has any undesired behaviour, but it seems to be a good starting point. Let me know your thoughts!