How to Avoid Being Dogmatic

Mario Bittencourt · Published in SSENSE-TECH · Oct 6, 2023

The software development business is intriguing and fast-paced, with an ever-evolving technological landscape that expects its members to follow the industry’s best practices. Sooner or later, as a developer, you will encounter concepts ranging from SOLID principles and hexagonal architecture to microservices, cloud computing, and serverless. There seems to be a divide among developers when it comes to following those practices, with some fervently doing so and others rejecting them altogether.

In this article, we’ll discuss the dangers of approaching these practices dogmatically and the paths you can take to help bring balance to the force.

The Beginning

As is often the case, things start modestly and evolve over time. You might read a blog, watch a video, or participate in a conference where a topic discussed captures your interest. You are sold on the idea and eager to start applying that new knowledge every chance you get.

Typically, this is where problems begin to appear. It is easy to focus on the promised positive aspects of a given technology, language, framework, etc., without taking context into account.

That new thing you just learned, that worked so well for others, may not fit your specific needs.

This can be considered a dogmatic approach: following something without questioning whether it actually fits. Conversely, those burned by an ill-advised adoption tend to go to the opposite end of the spectrum, rejecting the practice altogether because of that bad experience.

But why does this happen?

Confusing Process with Benefits

When approaching something new, you are going to encounter at least three intertwined aspects:

  • What is being proposed
  • Why adopt/use it
  • How to adopt/use it

Most sources emphasize these points to convince you to try that new thing. As developers, we are practitioners, so we often gravitate to the last aspect and start using what we just learned to see how it feels.

It gives us insights into how easy and practical it is to put that concept or tool to use. While that is perfectly fine (after all, on paper everything is possible), I have seen it lead to neglecting to dig deeper into the supposed benefits and, more importantly, into the drawbacks and limitations.

Let’s look at some examples and explore a simple strategy to help you navigate this conflicting reality.

Dependency Injection

Dependency Injection (DI) is a software design pattern that focuses on managing and providing the dependencies (external objects or services) that a class or component requires to perform its functions, rather than having the class or component create or manage those dependencies internally.

Figure 1. Injecting a dependency via the constructor.
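Since the figure is an image, here is a minimal TypeScript sketch of constructor injection; the PaymentGateway, StripeGateway, and PaymentService names are invented for illustration and are not from the article.

```typescript
// Hypothetical example: the class and interface names are illustrative only.
interface PaymentGateway {
  charge(amountInCents: number): Promise<void>;
}

class StripeGateway implements PaymentGateway {
  async charge(amountInCents: number): Promise<void> {
    // Call the external payment provider here.
  }
}

class PaymentService {
  // The dependency is provided from the outside instead of being created inside the class.
  constructor(private readonly gateway: PaymentGateway) {}

  async pay(amountInCents: number): Promise<void> {
    await this.gateway.charge(amountInCents);
  }
}

// The caller decides which concrete implementation gets injected.
const service = new PaymentService(new StripeGateway());
```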

Common benefits include helping with the separation of concerns, fostering reusability, and making maintenance and testing easier.

But when it comes to implementations that leverage DI, this is usually translated into always using a Dependency Injection Container (DIC). So, in many projects, you start seeing a DIC such as Inversify, along with a proliferation of interfaces for every dependency you need.

The biggest argument against the use of DI is that it adds overhead and provides no real benefits. But who’s correct?

First of all, we must acknowledge that in order to use DI you are not obliged to use a DIC. A DIC shines at automating the dependency injection, especially when you have a chain of dependencies.

Figure 2. Long dependency list.
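To make the figure concrete, here is a rough sketch (the class names are invented) of what wiring a chain of dependencies by hand looks like; this assembly work is what a DIC such as Inversify automates.

```typescript
// Hypothetical chain: each class depends on the next one down.
class DbConnection {}

class OrderRepository {
  constructor(private readonly connection: DbConnection) {}
}

class OrderService {
  constructor(private readonly repository: OrderRepository) {}
}

class OrderController {
  constructor(private readonly service: OrderService) {}
}

// Manual wiring: every level of the chain has to be built explicitly, in order.
const controller = new OrderController(
  new OrderService(new OrderRepository(new DbConnection()))
);
```

When the graph is this small, wiring it by hand is trivial; a DIC starts to pay off as the chain grows and the same dependencies are reused across many entry points.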

If your application is already split into smaller components, and the number and depth of dependencies are small, then adding a DIC may offer limited value. If your team is unfamiliar with the DIC, it also adds unnecessary complexity to development and consumes extra computing resources.

This means you can use DI to achieve its benefits, and only bring in a DIC if it truly provides value.

What about the practice of using interfaces with DI?

Interfaces

An interface serves as both a contract and a promise to anyone who uses an implementation: it will accept certain parameters and provide the expected result.

The benefits tied to its use include helping with the separation of concerns and allowing the implementation to change without affecting its clients. During testing, a concrete implementation can be replaced with a mock version without any changes to the code that depends on it.

Figure 3. Example of an interface and its usage.
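As a rough TypeScript sketch of the idea (the Notifier example is invented, not taken from the figure): the client depends only on the contract, so a test can hand it a fake implementation without touching the client code.

```typescript
// The contract: what is promised, independent of how it is implemented.
interface Notifier {
  notify(message: string): Promise<void>;
}

// Production implementation.
class EmailNotifier implements Notifier {
  async notify(message: string): Promise<void> {
    // Send an actual email here.
  }
}

// Test double: honours the same contract but only records the calls.
class FakeNotifier implements Notifier {
  public messages: string[] = [];

  async notify(message: string): Promise<void> {
    this.messages.push(message);
  }
}

// The client only knows about the contract.
class OrderConfirmation {
  constructor(private readonly notifier: Notifier) {}

  async confirm(orderId: string): Promise<void> {
    await this.notifier.notify(`Order ${orderId} confirmed`);
  }
}

// In production: new OrderConfirmation(new EmailNotifier())
// In a test: new OrderConfirmation(new FakeNotifier()), with no change to OrderConfirmation.
```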

A prevalent argument against using interfaces is that they provide no real value, with justifications ranging from “my language allows me to replace the concrete implementation during testing” and “if I ever need to change the implementation, I would have to change the client usage anyway” to “I will never have a different implementation”.

Unfortunately, this argument misses the point. It’s not about whether you will ever need to replace the implementation, but rather about the fact that you are thinking about the contract a given class or function will provide, independent of how it will be implemented.

This practice, similar to Test-Driven Development (TDD), puts you in a different position, where you can reason through the solution before building it entirely.

Function-as-a-Service

Function-as-a-Service (FaaS) is a cloud computing model that falls under the umbrella of serverless computing. In a FaaS architecture, we write and deploy units of code that perform specific tasks or operations. These units, known as functions, are executed in response to events or triggers, and the cloud provider manages the underlying infrastructure.

Key advantages include eliminating the need to manage the scalability and availability of the computing resources associated with executing the function. Additionally, FaaS follows the mantra that costs go to zero when the function is not in use.

Among various positions on the topic, let’s focus on a rather unusual one: the size of the function. Some could take the definition of function too literally and come up with the following:

Figure 4. Decomposing a function into smaller parts.
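A rough sketch of that literal reading, assuming a hypothetical order flow with three steps (the step names are invented): each step becomes its own deployable function.

```typescript
// Hypothetical split: each step is deployed and managed as a separate function.

// Function 1: validate the incoming order.
export const validateOrder = async (event: { order: unknown }) => {
  // ...validation logic...
  return { validatedOrder: event.order };
};

// Function 2: enrich the order, triggered once validation succeeds.
export const enrichOrder = async (event: { validatedOrder: unknown }) => {
  // ...call an external API to enrich the order...
  return { enrichedOrder: event.validatedOrder };
};

// Function 3: persist the result, triggered once enrichment succeeds.
export const persistOrder = async (event: { enrichedOrder: unknown }) => {
  // ...write to the database...
};
```

Each of these now carries its own deployment, concurrency, and security settings, plus the events or orchestration needed to chain them.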

Assuming the non-functional requirements are met, there shouldn’t be any issues, right? Not so fast! The problem could come from the overhead of having disjoint pieces of functionality, each with its own deployment and management (concurrency, security) settings, all to serve a potentially narrow interpretation of what a function is.

Having this breakdown should be driven by specific conditions: the API call can take a long time to respond, is prone to frequent failures, or needs to be throttled separately. If none of these apply, you can go with the more direct approach of having a single function perform all three parts.

You can still achieve reuse and single responsibility by organizing the code referenced by the function, independently of the approach taken. So there is no need to compromise in that area either.
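For contrast, here is a sketch of the more direct approach using the same hypothetical steps: one function, with each step kept as a small, reusable piece of code.

```typescript
// The same hypothetical steps, kept as separate functions (or modules) for
// reuse and single responsibility, but deployed behind a single FaaS handler.
async function validateOrder(order: unknown): Promise<unknown> {
  // ...validation logic...
  return order;
}

async function enrichOrder(order: unknown): Promise<unknown> {
  // ...call an external API to enrich the order...
  return order;
}

async function persistOrder(order: unknown): Promise<void> {
  // ...write to the database...
}

// Single entry point: one deployment, one set of concurrency/security settings.
export const handler = async (event: { order: unknown }): Promise<void> => {
  const validated = await validateOrder(event.order);
  const enriched = await enrichOrder(validated);
  await persistOrder(enriched);
};
```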

Being Pragmatic

As we have just seen, it is somewhat easy to blindly follow best practices and end up with a dogmatic approach.

While there is no silver bullet to avoid following a dogmatic path, a starting point is to always focus on the principles. Read the fine print, dig deeper, avoid a narrow view, and continuously question what is behind adopting a given architecture or methodology.

In the end, don't forget that education is key in a knowledge industry like ours. We touched on this when discussing how to scale the architecture practice [1][2][3] in your company.

If you foster a learning culture, it will guide you in answering questions such as “Am I preparing myself to reap the benefits of it?” or “Am I doing this without actually understanding why?”

With this type of analytical discourse, you will be well on your way to avoiding the pitfalls of dogmatism.

Cheers!

Editorial reviews by Catherine Heim & Gregory Belhumeur.

Want to work with us? Click here to see all open positions at SSENSE!
