Software Design, Separation Of Concerns & Unit Testing

Anthony Anyanwu
Published in Upstart Tech
Nov 9, 2020 · 11 min read

Introduction

Software is inherently complex (Booch et al., 2007). It’s easy to find ourselves lost in the weeds, with no end in sight, when trying to improve the speed and reliability of tests on an existing application. However, there are times when we need to take a step back and analyze the situation from an eagle-eyed perspective. Sometimes it is useful to ask whether there are fundamental issues that can be addressed at the architectural level of the software itself.

This article was initially prepared for a talk I gave at a Testing Guild event here at Upstart. Lately, there have been other great contributions from several Upstarters on improving the efficiency and reliability of our tests. This article attacks the problem by demonstrating how adjustments in the design of a web application can help with building software that is easier to maintain and test. In particular, adherence to the principles of separation of concerns and modular design can make software easier to test. Before diving into the main discussion, let us talk about complexity.

On complexity

Think of any complex system. As humans, we have learned how to navigate complexity, and we often do this by applying abstractions. This process typically involves breaking the complex system down into subsystems, each with a set of well-defined responsibilities. After defining the subsystems, we can then imagine the complex system as an interaction of these black-box subsystems.

For example, modern-day smartphones are relatively complex devices. Yet we can describe the phone as consisting of a screen display, camera, battery, processor, memory/storage, etc. Note that each of these components has its own responsibility. The responsibility of the battery is to store and deliver electricity. The responsibility of the screen display is to show content. The responsibility of the different memory devices is to store/retrieve information. Likewise, the responsibility of each of the processors is to provide compute resources. In short, each has a clear responsibility independent of the other components.

A phone technician can independently test that the battery works, the screen display works, the camera works, etc. Once each part is validated independently, it can be tested together to confirm there are no integration issues. Each of these individual components provides interfaces through which other components can interact with it. Each component only needs to connect to the exposed interfaces of other components.

Another example is the modern-day automobile. The brake system provides functionality independent of the engine, so we can test, fix, and replace the brake system independently of the engine. The engine only needs to know how to interface with the brake system.

Learning from real life

What characteristics of these systems make them relatively easy to understand, test, and maintain? They all have the following qualities:

  1. They are built in such a way that we can understand them as an interaction of component building blocks (let’s call these components subsystems)
  2. Each of these subsystems has a clear responsibility
  3. We can test any of these subsystems independently based on their responsibility. We do this through its input and output interfaces.
  4. Each subsystem only relies on interfaces of other subsystems to function. They do not care about how the job is done.

For example, the screen display doesn’t care whether the content is fetched from the internet or from the memory card; provided the content is passed to its interface, it will display it. Conversely, while trying to replace my phone battery, I would not suddenly discover that it can also take photographs.
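In Ruby terms, this reliance on interfaces can be sketched with duck typing. The class and method names below are illustrative, not from a real codebase:

```ruby
# A display that relies only on its source's interface (#fetch_content);
# it does not know or care how the content is obtained.
class Display
  def initialize(content_source)
    @content_source = content_source
  end

  def render
    "Showing: #{@content_source.fetch_content}"
  end
end

# Two very different sources satisfy the same interface.
class MemoryCardSource
  def fetch_content
    "cached photo"
  end
end

class NetworkSource
  def fetch_content
    "streamed video"
  end
end

Display.new(MemoryCardSource.new).render  # => "Showing: cached photo"
Display.new(NetworkSource.new).render     # => "Showing: streamed video"
```

Because Display touches only the `fetch_content` interface, either source can be swapped in without Display changing at all.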

In summary:

  1. Separation of concerns is a very powerful tool.
  2. Respecting the boundaries of responsibility is a very powerful tool.
  3. Relying only on the interfaces (not implementation) of other subsystems is a very powerful tool.

These key points allow a subsystem to be independently testable and help manage complexity.

Applying this to web development

For this discussion, we can categorize object classes required to build a web application as follows:

  1. Presentation classes: These classes are responsible for taking input from the user and sending a response back to the user in the appropriate format. Controllers fall into this category. Presentation classes are typically not expected to make changes to objects directly.
  2. Business/Domain classes: These classes represent the domain models. Each domain model is expected to contain relevant business operations and logic according to the real-world entity which it represents (models).
  3. Service Object Classes: These classes are responsible for coordinating logic that spans multiple concerns and business objects. Except perhaps for enforcing cross-cutting business rules, service objects typically do not do the tasks themselves; they merely delegate each task to the concerned object.
  4. Integration classes: These classes are responsible for interacting with systems external to the application. Examples of these external system interactions are database calls, sending emails, 3rd Party API calls, etc. Our data access objects (repositories) and email classes fall in this category.
  5. Support/utility classes: These just provide utility functions, e.g. string utilities, time zone functions, etc., which are used to support other classes.

Having defined the categories, let’s consider how to test each type of class with separation of concerns in mind. Just as the smartphone screen display should not concern itself with the process of storing electricity, controllers should not contain logic for retrieving or persisting data. SQL queries, for instance, should not live in the controller. Such SQL responsibility belongs in a data repository class (a data access object). Also, even though data repositories are often used to persist and load business objects, business logic is not their responsibility, so data repositories should not contain business logic. That is the responsibility of business model classes (or a service object, if the logic spans multiple objects).

Respecting the boundaries of responsibility turns out to be a very important factor in managing software complexity. For instance, moving all SQL queries to data repository classes means other areas of the code can depend only on the interfaces of these classes for anything SQL related. When applied to all integration classes, this means we can completely separate all external entity integration tests from the rest of our code. In other words, the rest of our tests can run completely offline. For instance, we can test all business logic without relying on any database.
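As a minimal sketch of what "business logic without a database" looks like, the cancellation rule from this article's order example can live on a plain Ruby domain object (the rule shown is illustrative) and be exercised entirely offline:

```ruby
# Plain Ruby domain model: no database, no SQL, just business rules.
class Order
  PENDING   = "pending"
  SHIPPED   = "shipped"
  DELIVERED = "delivered"

  attr_reader :status

  def initialize(status:)
    @status = status
  end

  # Business rule: shipped or delivered orders cannot be cancelled.
  def cancellable?
    ![SHIPPED, DELIVERED].include?(status)
  end
end

Order.new(status: Order::PENDING).cancellable?  # => true
Order.new(status: Order::SHIPPED).cancellable?  # => false
```

A test of this rule needs no database connection at all; persistence stays behind the repository's interface.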

Note that moving all integration logic to integration classes means both moving the logic there and writing tests for every exposed interface. Tests for integration classes are always integration tests, since integration is their responsibility. For example, tests for a data repository method confirm that the query returns the expected results.

When refactoring an existing application to improve the tests, moving all logic for interacting with external entities into relevant integration classes may be a good start.

Unit testing your components

Once our web application is organized into smaller subsystems, each component can be tested in isolation via unit tests. What does “respecting the boundaries of responsibility” mean with regards to testing a web application? I propose that the unit test for a class should only test what it knows about.

See the code snippet below for the method do_a_stuff(b). Notice that even though do_a_stuff needs b to support do_b_stuff, how do_b_stuff works is not the responsibility of class A. So a test for A should mock out do_b_stuff, since that behavior should be covered in the unit tests for class B.

class A
  def do_a_stuff(b)
    result = b.do_b_stuff
    ...
    new_result
  end
end

So to test a.do_a_stuff, we pass in a mock instance of B, specifically mocking out the method we are using. This is important since tests for class A should only test what A knows about and simply mock functionality that is not its responsibility. We only need to ensure the mocks exactly mimic the mocked interface (i.e. both the input and output behavior). To test a mobile phone screen display, we only need to provide the electric charge it requires. A phone technician can easily fulfill this by supplying a mock power source different from the phone battery. A similar concept applies here.
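The same idea can be shown without any test framework, using a hand-rolled fake that mimics only the interface A relies on (the body of do_a_stuff here is an illustrative stand-in for the elided logic above):

```ruby
class A
  def do_a_stuff(b)
    result = b.do_b_stuff
    # ... A's own logic, illustrated here as simple string processing ...
    "processed #{result}"
  end
end

# Stand-in honouring B's interface: same method name, same return shape.
# How the real B computes this value is covered by B's own unit tests.
class FakeB
  def do_b_stuff
    "b-result"
  end
end

A.new.do_a_stuff(FakeB.new)  # => "processed b-result"
```

The test exercises only A's own logic; nothing about B's implementation leaks into it.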

What happens if the behavior for do_b_stuff changes (e.g. the return value changes) without updating class A? With our approach, since the unit tests for class A were based on the old behavior of do_b_stuff, we have an incorrect situation where both tests pass independently. This is a problem that integration tests are expected to solve. First, integration tests would fail at the point class A invokes do_b_stuff, leading us to update class A and the corresponding unit tests. Now the correct functionality is restored at the unit test level.

Another approach to this problem is to write a contract test whenever we replace an interface with a fake. In our case, if the unit test for class A mocks out do_b_stuff, we include a test case in the unit test for class B which verifies the assumption. The idea is to ensure that all assumptions made in other tests about that interface are verified. The task of keeping track of assumptions and checking for a verification test case is then left to the test framework.
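RSpec's verifying doubles (e.g. instance_double) automate part of this by checking that stubbed methods actually exist on the real class with compatible signatures; the return-value assumption still needs an explicit contract test. The plain-Ruby sketch below (illustrative names, hand-rolled checker) shows the underlying idea:

```ruby
class B
  def do_b_stuff
    "b-result"
  end
end

class FakeB
  def do_b_stuff
    "b-result"
  end
end

# The contract: every faked method must exist on the real class with the
# same arity, and the canned return value must match what the real
# implementation produces for the scenario being faked.
def contract_holds?(real, fake, method_name)
  real.respond_to?(method_name) &&
    real.method(method_name).arity == fake.method(method_name).arity &&
    real.public_send(method_name) == fake.public_send(method_name)
end

contract_holds?(B.new, FakeB.new, :do_b_stuff)  # => true
```

If B's behavior drifts away from the fake, the contract check fails, pointing us at every test whose assumptions are now stale.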

Bringing it back to our web application

Wrapping up, let us put everything together to refactor a dummy Rails order-management application. Imagine a user wants to cancel an order, and we want to refactor the following code:

Code without separation of concerns

class OrderController
  def cancel_order
    order = Order.find_by(id: params[:order_id], user: current_user)
    if order.blank?
      @errors = ["Could not find order with id: #{params[:order_id]}"]
    else
      if [Order::SHIPPED, Order::DELIVERED].include?(order.status)
        @errors = ["Cannot cancel order #{order.description}"]
      else
        ActiveRecord::Base.transaction do
          history_record = OrderHistoryRecord.new(order)
          history_record.save
          OrderRepository.update(order, status: Order::CANCELED)
          ShippingService.cancel_shipping(order.shipping)
          InventoryService.update_stock(order.productInfo)
        end
        flash[:success] = "Order #{order.description} successfully cancelled"
        return redirect_to view_order_path
      end
    end
  end
end

Let us assume that we already have classes and unit tests for OrderRepository, ShippingService, and InventoryService. Notice that the controller code does a lot of things that are unrelated to presentation logic. First, it queries the database, then it applies some business logic and also manages a transaction.

Now, to achieve tests that do not hit the database, we could mock out every interface that cancel_order depends on: Order.find_by, OrderHistoryRecord.new, history_record.save, and so on. Besides being a pain to write and maintain, a test with that many mocks is too tightly coupled to the implementation details of cancel_order.

Given this problem, many engineers would simply opt to not mock anything, leading to a hybrid test (unit/integration test). Such a test might look like this:

The unit test (sort of)

...
describe "#cancel_order" do
  context "order exists" do
    let(:user) { FactoryBot.create(:user) }
    let(:order) { FactoryBot.create(:order, user: user) }

    before do
      allow(controller).to receive(:params).
        and_return({ order_id: order.id })
      allow(controller).to receive(:current_user).
        and_return(user)
    end

    it "cancels the order" do
      controller.cancel_order
      order.reload
      expect(order.status).to eq(Order::CANCELED)
    end

    it "creates order history record" do
      controller.cancel_order
      expect(OrderHistoryRecord.
        find_by(order_id: order.id, version_no: order.version_no)).to be_truthy
    end

    it "cancels shipping" do
      expect(ShippingService).to receive(:cancel_shipping).
        with(order.shipping)
      controller.cancel_order
    end

    it "updates inventory stock" do
      expect(InventoryService).to receive(:update_stock).
        with(order.productInfo)
      controller.cancel_order
    end

    it "redirects on success" do
      expect(controller).to receive(:redirect_to)
      controller.cancel_order
    end
  end

  context "order does not exist" do
    ...
  end

  context "order already canceled" do
    ...
  end

  context "order already shipped" do
    ...
  end
end
...

Looking at the above, it is clear that we are testing a lot of things unrelated to presentation. For example, if the business decides that cancellation should only be honored for unshipped orders placed within 24 hours, then the controller implementation would have to change. We would also have to update the unit tests to capture this scenario. Without appropriate layering and separation of concerns, we end up testing the entire system at once.

cancel_order with separation of concerns

The controller

class OrderController
  def cancel_order
    cancellation_report = OrderService.cancel_order(params[:order_id], current_user)
    if cancellation_report.successful?
      flash[:success] = "Order ##{params[:order_id]} successfully cancelled"
      return redirect_to view_order_path
    else
      @errors = cancellation_report.errors
    end
  end
end

The servicer

class OrderService
  def self.cancel_order(order, user)
    unless order.is_a?(Order)
      order = OrderRepository.find_by_id(order)
    end

    # The Order model contains all business logic related to orders.
    # order.cancel_order applies all business rules required to cancel the order.
    # On success, order.cancel_order updates the status to canceled.
    # order.cancel_order returns the list of failed validations
    # (or could raise an exception instead).
    errors = order.cancel_order(user)
    if errors.any?
      return CancellationReport.new(errors: errors, status: CancellationReport::ERROR)
    end

    ActiveRecord::Base.transaction do
      OrderRepository.update_cancelled(order)
      history_record = OrderHistoryRecord.new_from_order(order)
      OrderHistoryRecordRepository.create(history_record)
      ShippingService.cancel_shipping(order.shipping)
      InventoryService.update_stock(order.productInfo)
    end

    # cancel_order simply returns a report
    # (or raises an exception if it is unable to cancel the order).
    CancellationReport.new(errors: nil, status: CancellationReport::SUCCESS)
  end
end

The unit test

Now let us see how controller tests look with the changes we have made.

describe "#cancel_order" do
  let(:current_user) { double(:user) }
  let(:order_id) { SecureRandom.random_number }

  before(:each) do
    allow(controller).to receive(:params).
      and_return({ order_id: order_id })
    allow(controller).to receive(:current_user).
      and_return(current_user)
  end

  context "successful cancellation" do
    let(:cancellation_report) do
      CancellationReport.new(errors: nil, status: CancellationReport::SUCCESS)
    end

    it "cancels the order" do
      expect(OrderService).
        to receive(:cancel_order).
        with(order_id, current_user).once.
        and_return(cancellation_report)
      controller.cancel_order
    end

    it "has no error messages" do
      allow(OrderService).
        to receive(:cancel_order).
        and_return(cancellation_report)
      controller.cancel_order
      expect(controller.instance_variable_get(:@errors)).
        to be_nil
    end

    it "redirects on success" do
      allow(OrderService).
        to receive(:cancel_order).
        and_return(cancellation_report)
      expect(controller).to receive(:redirect_to)
      controller.cancel_order
    end
  end

  context "unsuccessful cancellation" do
    let(:cancellation_report) do
      errors = ["Error1", "Error2"]
      CancellationReport.new(errors: errors,
                             status: CancellationReport::ERROR)
    end

    it "attempts to cancel the order" do
      expect(OrderService).
        to receive(:cancel_order).
        with(order_id, current_user).once.
        and_return(cancellation_report)
      controller.cancel_order
    end

    it "has error messages" do
      allow(OrderService).
        to receive(:cancel_order).
        and_return(cancellation_report)
      controller.cancel_order
      expect(controller.instance_variable_get(:@errors)).
        to eq(cancellation_report.errors)
    end
  end
end

Now the test for the controller only focuses on invoking the interface for OrderService and the type of response the controller returns depending on the behavior of OrderService.cancel_order. The controller is no longer coupled to the different business scenarios for cancel_order.

Similarly, tests for OrderService.cancel_order should only exercise the interfaces it knows about and verify how the service responds to each scenario those interfaces can return. For example, they would not test the explicit business logic for canceling orders (that responsibility belongs to the Order class). They only need to verify that when an interface returns a given value, OrderService.cancel_order does the right thing. The business scenarios related to canceling an order should be captured in the unit tests for order.cancel_order(user), since that responsibility belongs to the Order domain class.
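As a framework-free sketch of this point (using a deliberately simplified OrderService that only delegates to the order and builds a report), the service test fakes the order's cancel_order interface and asserts only on the report the service returns:

```ruby
class CancellationReport
  SUCCESS = :success
  ERROR   = :error

  attr_reader :errors, :status

  def initialize(errors:, status:)
    @errors = errors
    @status = status
  end

  def successful?
    status == SUCCESS
  end
end

# Simplified service: it delegates the business decision to the order and
# only decides which report to build from the returned error list.
class OrderService
  def self.cancel_order(order, user)
    errors = order.cancel_order(user)
    if errors.any?
      CancellationReport.new(errors: errors, status: CancellationReport::ERROR)
    else
      CancellationReport.new(errors: nil, status: CancellationReport::SUCCESS)
    end
  end
end

# Fakes honouring the assumed interface: cancel_order returns a
# (possibly empty) list of validation errors.
class CancellableOrder
  def cancel_order(_user)
    []
  end
end

class ShippedOrder
  def cancel_order(_user)
    ["order already shipped"]
  end
end

OrderService.cancel_order(CancellableOrder.new, :user).successful?  # => true
OrderService.cancel_order(ShippedOrder.new, :user).successful?      # => false
```

The service test never encodes why an order is uncancellable; it only checks that each possible answer from the order's interface is turned into the right report.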

Closing Points

Building software is a complex business. By adopting a clearly defined separation of concerns strategy, code becomes easier to maintain and test.

Additional Resources

Booch, G. et al. (2007) Object-Oriented Analysis and Design with Applications (3rd Edition). Addison Wesley Longman Publishing Co., Inc., USA. pp. 5–28.

https://nalexn.github.io/separation-of-concerns/

https://testing.googleblog.com/2015/04/just-say-no-to-more-end-to-end-tests.html

https://vimeo.com/80533536
