Floyd’s Nonsensical Stance on Dependency Injection

Floyd May
Jan 24, 2020

I’ve seen a discussion flare up and fade away over and over: whether a fully-fledged dependency injection (DI) container, like (we’re talking C# here) Autofac, Castle Windsor, or Ninject, is appropriate for an application, or whether it would be better to do what has become known as “Vanilla DI” — that is, a minimal DI mechanism built into the application itself, containing only the features the application actually needs. What I think is missing in this conversation, though, is a common understanding of what we need out of the system as a whole. And, well, also, I think we need some nonsense. Because that always helps.

First and foremost, let’s talk about test coverage. For a brand new, larval web application that only does the equivalent of “hello world” over HTTP, it’s not that complicated, regardless of what language or framework you’re using, to put that under automated test. As that larval web app grows and gains functionality, something very important starts to appear in this app’s code: system boundaries. A “system boundary,” as I define it, is where the code defers control to something outside of the code’s control. It is outside of the system that is the code itself, but is still required for the code to work correctly. That’s things like databases, web APIs, filesystems, and so on. Maybe even some of that cloud service bus fabric stuff, too, just for good measure (or because sometimes we’re stuck with nonsense. I did promise you some nonsense, after all, didn’t I?).

In order to put this app under automated test, we either need to instantiate and control one of these external systems ourselves, like starting a database server, or passing a folder path for file storage to the app when we start it, or we can place a service stub at one of these system boundaries and mock the service behind it. Sometimes service stubs are our only feasible option — unless you’ve got a lot of in-house expertise running ũberCloud HyperMessage BusFabricServer (or whatever) yourself. Given that we can intercept each of the system boundaries of our hypothetical web app, it should be possible to thoroughly cover this app’s code with automated tests using nothing but HTTP and the system boundaries for input and output.
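To make that concrete, here’s a minimal sketch of a service stub at one of those boundaries. The interface and type names (`IBankService`, `StubBankService`) are hypothetical, not from any particular library; the point is that the app only ever talks to the bank through the interface, so tests can swap in the stub:

```csharp
using System.Collections.Generic;

// Hypothetical boundary interface: the app talks to the bank only through
// this, never by calling the bank's HTTP API directly.
public interface IBankService
{
    decimal GetBalance(string accountId);
}

// A stub that stands in at the boundary during tests. The production
// implementation would hold credentials and make the real HTTP calls.
public sealed class StubBankService : IBankService
{
    private readonly Dictionary<string, decimal> _balances = new();

    // Test setup: decide what the "bank" will report.
    public void SetBalance(string accountId, decimal balance) =>
        _balances[accountId] = balance;

    public decimal GetBalance(string accountId) =>
        _balances.TryGetValue(accountId, out var balance) ? balance : 0m;
}
```

With this in place, a test drives the app over HTTP and uses the stub as its window into (and control over) the bank boundary.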

This is probably a terrible goal to shoot for. For a system of any appreciable utility, the effort spent on the setup of many of those automated tests will dwarf the value of the coverage those tests provide. Instead, it’s a whole lot smarter to isolate parts of the system and write fine-grained tests for each part. This is where I’ve seen dependency injection go horribly wrong again and again. I’ve seen teams put tons of effort into isolating and testing individual components, and trust that when the DI framework puts together the application at runtime, the sum of the parts will just work. All the while, they use the DI framework and mocking to hand-wave away the fact that they don’t have their system boundaries under control:

“We don’t need to test the database access code, we can just mock it in the tests.”

“We can’t test that part because it talks to the bank’s servers.”

The Religion of Dependency Injection

A DI container is supposed to help you do application composition; that is, it helps you put together all of the little pieces of your application. For instance, let’s say that you’ve got a web framework that uses controllers, those controllers defer to some sort of service layer, and the service layer deals with repositories and other services. In order to handle a given HTTP request, you might need to construct an object graph that looks like this:

An object graph. System boundaries in blue.

First off, let’s assume that this object graph exists because we actually need these various separations of concerns. Not every application needs all this, so please, don’t don the coconut headphones. The value of a DI container is that the construction of this object graph is automatic. Why would that matter? Well, first, FooRepository, BarRepository, and BankService are all at system boundaries; that is, they all deal with external systems. The repositories probably need a connection string (or similar) in order to be able to interact with a database, and BankService probably needs some sort of credentials to deal with the bank’s API. If it were Controller's responsibility to build this object graph, it would get polluted with wrangling connection strings and credentials for things that it doesn’t need, but its dependencies (or its dependencies’ dependencies) do. This isn’t great for separating concerns. Another benefit of a DI container is that if some other controller needed to use BarService, it, and its dependencies, would get built the exact same way. That way, if BarService needs a new dependency, or needs to drop a dependency that it no longer needs, that change can happen in one place, at BarService, instead of everywhere that uses BarService as well.
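For comparison, here’s a sketch of building that graph by hand in a single composition root. The type names come from the diagram; the exact edges and constructor parameters are assumptions, and the class bodies are stubbed because the composition is the point. Notice that all the connection-string and credential wrangling lives here, not in Controller:

```csharp
// Hypothetical types matching the object graph; bodies stubbed for brevity.
public sealed class FooRepository { public FooRepository(string conn) { } }
public sealed class BarRepository { public BarRepository(string conn) { } }
public sealed class BankService  { public BankService(string creds) { } }

public sealed class FooService { public FooService(FooRepository repo) { } }
public sealed class BarService { public BarService(BarRepository repo, BankService bank) { } }

public sealed class Controller
{
    public Controller(FooService foo, BarService bar) { }
}

// The composition root: the one place that knows about connection strings
// and credentials, so Controller never has to.
public static class CompositionRoot
{
    public static Controller CreateController(string connectionString, string bankCredentials)
    {
        var fooRepository = new FooRepository(connectionString);
        var barRepository = new BarRepository(connectionString);
        var bankService   = new BankService(bankCredentials);

        var fooService = new FooService(fooRepository);
        var barService = new BarService(barRepository, bankService);

        return new Controller(fooService, barService);
    }
}
```

A container automates exactly this wiring; the trade-off is whether the automation is worth the machinery.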

There’s another thing that DI containers do really well, and that’s managing object lifetimes. Let’s say our object graph is a bit more complex, like this:

FooService and BarService share the same instance of BazService

In this case, a DI container can ensure that both FooService and BarService, which both depend on BazService, get the exact same instance of BazService instead of their own individual copies. In some cases, that can be really important. Maybe there’s a cache inside of BazService to prevent querying the database multiple times for the same data, or there’s only one connection allowed to ũberCloud HyperMessage BusFabricServer at a time.
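In Vanilla DI terms, that sharing is nothing magical: construct BazService once and hand the same instance to both consumers. A sketch, with hypothetical constructor shapes:

```csharp
// BazService might hold a cache, or the single allowed connection.
public sealed class BazService { }

public sealed class FooService
{
    public BazService Baz { get; }
    public FooService(BazService baz) => Baz = baz;
}

public sealed class BarService
{
    public BazService Baz { get; }
    public BarService(BazService baz) => Baz = baz;
}

public static class CompositionRoot
{
    public static (FooService Foo, BarService Bar) Create()
    {
        var baz = new BazService();                          // one instance...
        return (new FooService(baz), new BarService(baz));   // ...shared by both
    }
}
```

In a container like Autofac, the equivalent is registering the type as a singleton (`builder.RegisterType<BazService>().SingleInstance()`); the container then does this bookkeeping for you across the whole graph.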

For applications that actually need this level of complexity, a dependency injection container can be very useful for eliminating large piles of tedious boilerplate code. But there’s this near-religious attitude that tends to creep into development teams:

“Good applications use dependency injection, and so I’m not writing good code unless I’m doing dependency injection.”

So then everything becomes a dependency that’s managed by the DI container. Avoiding the new operator becomes a “best practice.” The bamboo airport is erected. The next thing that I’ve seen — over and over — is I think the worst pathology of code constructed by the Church of Dependency Injection:
No effort is spent putting the DI configuration under test.

“Sacrilege!” they say. “That’s not our code, that’s the DI container’s responsibility. Why should we unit test the DI framework? They’ve already got their own tests!”

Because the way you use the DI container influences your system’s behavior. Yet The Church will not listen to reason. They insist: the world is flat.

“Heresy!” they cry. “Burn the witch!”

“A duck!” (Sorry. Got a little carried away.)

Science, Not Magic

Let’s think back to the larval web app. It should be possible to intercept the system boundaries and validate every bit of the system’s behavior. Every. Single. Part. That means that if the system does the wrong thing when two instances of some service get instantiated instead of shared, that is system behavior that needs to be validated. Let me put this another way:

If I can change your DI configuration and it doesn’t break tests, your test coverage is crap.

So what does this have to do with deciding between an off-the-shelf framework versus Vanilla DI? Well, magical thinking can creep in when dealing with dependency injection containers. Many developers see them as things that are beyond their reasoning, and so they don’t see their use of the DI container as something that they’re responsible for. How do you unit test magic? Going the Vanilla DI route can really force the issue of taking responsibility for the dependency injection concerns of your application. There’s no magical third-party library that’s doing it for you. The downside is that, well, now it’s your code, and if you built it wrong, it’s your job to fix it.

On the other hand, if you use an off-the-shelf DI container, there’s a pretty good chance that it does things right. That doesn’t absolve you of your responsibility, though. DI containers aren’t system boundaries. You should be able to intercept the system boundaries and cover every single bit of your application’s logic, which includes how you use the DI container.

So how do you choose? First, if you’re already using a DI container, make sure your usage of it is under test. Make sure that you’ve got a fair number of end-to-end tests that mock nothing but system boundaries. Those end-to-end tests should be able to validate that you’ve configured your DI correctly. If you’re not sure, make a change to your DI config that should make the system misbehave, and run your tests. Did they fail? If not, that’s a problem. Fix that.
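One small, container-agnostic way to pin down a piece of DI configuration in a test: resolve the same service twice and assert whether you got one shared instance or two fresh ones. This helper is a sketch of mine, not part of any container’s API:

```csharp
using System;

// Container-agnostic check: given any resolve function, report whether two
// resolutions yield the same instance (i.e. the service is shared).
public static class DiConfigChecks
{
    public static bool IsShared(Func<object> resolve) =>
        ReferenceEquals(resolve(), resolve());
}
```

With Autofac, for example, the resolver could be `() => container.Resolve<BazService>()`; a test asserting `IsShared` fails the moment someone drops `.SingleInstance()` from the registration, which is exactly the kind of config change that should break a test.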

Second, if you’re starting fresh, see how far you can get without a DI framework, and the only mocks you’re allowed to do are system boundaries. Give Vanilla DI a try. When your app reaches a level of complexity where it needs, and I mean truly needs, DI container features, then decide which would be less effort: the DI container, or just writing the feature you need right now.

And third, please, PLEASE, for the love of all that is good, don’t use more than one DI container technology at the same time. I don’t want to see your AutofacNinjectChimera monstrosity. This is the way of pain.

Finally, do a good job of putting your app under automated test. At its most basic level, every program just has inputs, outputs, and logic. Those inputs and outputs happen at system boundaries. Control the boundaries, and you can make sure your system is doing the right thing… until ũberCloud HyperMessage BusFabricServer has an outage… again.
