TDD: Writing Testable Code

Eric Elliott
Published in JavaScript Scene · Jan 21, 2024

Writing testable code is a vital skill in software engineering. Let’s explore practical advice, strategies, and tactics for writing more testable code, unlocking the benefits of modularity, reusability, and high-quality software in your projects.

Embracing testability in your coding practice isn’t just about catching bugs; it’s about fostering a culture of quality and efficiency in your projects. A good quality process is an essential prerequisite to continuous delivery, which is the ability to ship code to production at any time, allowing you to iterate quickly and respond to user needs and changing expectations.

We’ll start by exploring the importance of separation of concerns — a principle that’s crucial in achieving testable code. This approach isn’t just a technical necessity; it’s a mindset that simplifies complexity and enhances the overall quality of your work. Whether you’re a seasoned developer or just starting out, this article aims to provide practical advice and strategies that can be immediately applied to your projects.

Overview of Testability in Code

Testability in code refers to how easily a software system can be tested. Highly testable code allows for more efficient and effective identification of defects, ensuring higher quality and reliability. Key characteristics of testable code include modularity, where the code is organized into discrete units; clarity, meaning the code is understandable and its purpose is clear; and independence, where units of code can be tested in isolation without reliance on external systems or states.

Understanding the Problem: Tight Coupling in Code

Tight coupling is often a subtle yet significant barrier to achieving highly testable and maintainable code. Coupling is the degree to which a change in one part of the code may impact or break the functionality in another part of the code. Tight coupling occurs when two or more parts of the code are highly dependent on each other, making it difficult to change one part without affecting the other. This can lead to a cascade of unintended consequences, where a small change in one part of the code causes a chain reaction of bugs in other parts of the code.

Causes and Forms of Tight Coupling

Coupling is often caused by dependencies between different parts of a codebase. Some typical forms of dependencies that contribute to tight coupling include:

  • Parent Class Dependencies: When a subclass depends heavily on the implementation of a parent class, changes to the parent class can significantly impact all subclasses.
  • Shared Mutable State: Reliance on state variables or objects that are shared and mutable can lead to unpredictable behavior and tight coupling between components that use this shared state. This one has several sneaky variants: Global variables, singletons, DOM state, etc. Anywhere that you have a shared mutable state, you have the potential for tight coupling.
  • Concrete Class Dependencies: Depending on concrete implementations rather than abstractions (like interfaces or abstract classes) makes it hard to swap out or modify those implementations without affecting dependent code. This is why checking whether something is an instance of a particular class is an anti-pattern in JavaScript, or any other language.
  • Event Chains: When a system’s components are tightly coupled through complex chains of events or callbacks, a change in one event can ripple through the system, leading to unintended consequences.
  • State Shape Dependencies: When a component relies on the shape of the state, it can break if the state shape changes. This is why I recommend using selector functions with Redux, or any other state solution where components share access to complex state trees. Selector functions derive state properties from the state, so if the state shape changes, you only have to update the selector functions, rather than every component that uses the state (see the selector sketch after this list).
  • Control Dependencies: When a component knows too much about the direct interface of another component, it can break if the other component changes its interface. This is why I recommend using a facade pattern to wrap external dependencies, such as network calls or database access, so that you can isolate the effects of those dependencies from the rest of the application.
  • Temporal Coupling: When a component relies on the order of operations, it can break if the order changes. Pure functions are a good way to avoid temporal coupling, because they don’t rely on shared mutable state, and they don’t have side effects. Given the same input, a pure function will always return the same output, regardless of what other functions are called before or after.
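For example, here’s a minimal selector sketch (the state shape and names are hypothetical) showing how components can depend on a function rather than on the structure of the state tree:

// A minimal sketch, assuming a hypothetical state shape like
// { user: { profile: { name } } }.
// Components call getUserName(state) instead of reaching into
// state.user.profile.name, so if the state shape changes, only this
// selector needs to be updated.
const getUserName = (state) => state?.user?.profile?.name ?? "Anonymous";

// Usage:
// getUserName({ user: { profile: { name: "Ada" } } }); // => "Ada"
// getUserName({}); // => "Anonymous"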

In order to write more testable code, it’s crucial to consciously reduce these forms of tight coupling in your applications. This involves embracing practices that promote loose coupling and modularity, such as pure functions, facades, selector functions, action creators, central dispatch (e.g. Redux stores or using a central event bus), dependency injection, interfaces, etc. “Program to an interface, not an implementation.” ~ Gang of Four, Design Patterns.
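To illustrate the facade and dependency injection ideas together, here is a hedged sketch (the module and function names are hypothetical, not a prescribed API) of a thin wrapper around the browser’s fetch:

// user-api.js: a minimal facade sketch around fetch (hypothetical names).
// Application code depends on loadUserProfile, not on fetch, URLs, or
// response parsing, so those details can change, or be stubbed in tests,
// in one place.
export const createUserApi = ({ fetchFn = fetch, baseUrl = "/api" } = {}) => ({
  async loadUserProfile(id) {
    const response = await fetchFn(`${baseUrl}/users/${id}`);
    if (!response.ok) throw new Error(`Failed to load user ${id}`);
    return response.json();
  },
});

Because the transport is injected, tests can pass a stub fetchFn and never touch the network.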

This approach not only makes testing easier, but also promotes a more sustainable and scalable codebase, capable of handling new requirements and technologies with minimal disruption. In other words, writing testable code and writing maintainable code are often the same thing.

Test First vs Test After

There are two main approaches to testing: Test First and Test After. Test First, also known as Test Driven Development (TDD), involves writing the tests before writing the code. The rationale behind this approach includes several key benefits:

  • Test the Test (Watch It Fail): In TDD, you first write a test for a specific piece of functionality and watch it fail. This failure confirms that when the test eventually passes, it does so because you properly implemented the tested requirement, and not because of a bug in the test that causes it to always pass (a common mistake). It ensures that the test is actually testing what it’s supposed to (see the sketch after this list).
  • Better Developer Experience: Testing first forces you to think about how the code will be used, even before delving into implementation details. This often leads to more thoughtfully designed APIs, as it prevents the leaking of implementation details into the API. The developer is encouraged to think from the user’s perspective, enhancing usability and clarity.
  • Clearer Requirements and Design: Writing tests first helps in understanding and solidifying requirements before implementation begins. It clarifies the purpose and expected behavior of the code, leading to a more focused and effective implementation.
  • Ensures Test Coverage: By writing tests first, you ensure that testing is not an afterthought but an integral part of the development process. This typically leads to higher test coverage and more reliable code.
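As a minimal sketch of that first step, here’s what a test-first test might look like, using the vitest and riteway style shown later in this article and a hypothetical formatPrice module that doesn’t exist yet, so the first run fails:

// format-price-test.js: written before format-price.js exists (hypothetical
// names). The first run fails, which is exactly the point.
import { describe, test } from "vitest";
import { assert } from "riteway/vitest";
import { formatPrice } from "./format-price";

describe("formatPrice", () => {
  test("dollar amounts", () => {
    assert({
      given: "an amount in cents",
      should: "return a formatted dollar string",
      actual: formatPrice(1999),
      expected: "$19.99",
    });
  });
});

Only after watching it fail do you write the simplest formatPrice implementation that makes it pass.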

In contrast, the Test After approach, where tests are written following the implementation, has a different set of implications:

  • Reduced Code Coverage: There’s a risk that some aspects of the implementation may be overlooked since the focus is on testing the existing code rather than defining its expected behavior upfront. Writing tests after the fact might lead to lower test coverage because it can be tempting to consider the job done once the code appears to work, rather than ensuring that all requirements are well tested.
  • Possibility of Biased Tests: When tests are written after the code, there’s a tendency to write tests that conform to the code’s current behavior, rather than independent calculations of the expected output. This could lead to false positives, because you placed too much faith in the implementation instead of independently calculating the expected output prior to implementation.
  • Reduced Emphasis on API Design: Since the code is already in place, the opportunity to shape the API based on how it’s going to be used (as opposed to how it’s going to be implemented) is reduced.

Separation of Concerns

Separation of concerns is a design principle for separating a computer program into distinct sections, where each section addresses a separate concern. In the context of testability, this principle is critical because it allows each part of the codebase to be tested independently. By dividing the code into sections such as business logic, user interface, and data access, developers can isolate problems more effectively and make changes without unintended consequences. This separation not only aids in testability but also enhances maintainability and scalability of the software.

I like to isolate code into three distinct buckets:

  • Business and state logic — the core logic of the application, and the part where most of the testing will occur.
  • User interface — the part of the application that interacts with the user, such as a web page or mobile app. We’ll focus on testing using JavaScript and React, and discuss concepts like display components vs container components. Note that I’ll use the terms “container” and “provider” interchangeably, because the container components wrap the display components to provide links to data, behavior, actions, network access, context, etc.
  • I/O & Effects — the part of the application that interacts with the database or other external data sources. We’ll talk about how to isolate these effects from the rest of the application, so that you can test other parts of the application without triggering side effects like network calls or database writes.

All three of these layers should be in modules that can be isolated, tested, understood, and debugged independently of each other, and independently of the larger application context. This creates a feature called locality, which means that the code is self-contained and doesn’t rely on external state or context. Locality is essential to managing complexity in large applications, because no human can hold the entire application in their head at once. By isolating the code into modules, we can focus on one part at a time, and understand how it works without having to understand the entire context of the whole application all at once.

State Management and Testability

The core component that we want to test is our application data management and logic. In the JavaScript + React ecosystem, there are several popular solutions to this problem, and any of them can work well, as long as they allow you to isolate and test the logic independent of the user interface and data access layers.

Architecture Breakdown in React Applications

Let’s take a brief look at some popular data management choices. This is not intended to be prescriptive or exhaustive, but rather to give you some specific examples of how React developers typically isolate data management from React components. These are all good alternatives to relying strictly on React’s built-in state management hooks, such as useState, useReducer, and useContext, to manage your application state. The downside to using those hooks directly within your React components is that it often makes the logic difficult to test in isolation, because the state is tied to the component.

Redux: A predictable state container that allows for centralized state management. Redux is often used alongside Redux-Saga, a middleware that handles side effects in Redux applications, enabling more efficient data fetching and state updates.

MobX: This library introduces reactive state management, treating state as observable data. It’s appreciated for its simplicity and scalability, allowing for more intuitive state management in larger applications.

Zustand: A newer and more minimalist state management solution. It’s straightforward and hooks-centric, offering an easy setup with less boilerplate, making it a popular choice for simpler applications.

Each of these tools offers unique advantages and can be chosen based on the specific needs of the application. For instance, Redux and Redux-Saga might be preferred in complex applications with multiple state dependencies, while MobX or Zustand could be more suitable for applications where simplicity and ease of use are paramount.

Any of them can contribute to the separation of concerns by isolating state management from UI logic, enhancing testability and maintainability.
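For example, here’s a minimal Zustand sketch (the store name and shape are hypothetical) showing how a store defined in its own module can be exercised in a test without rendering a single component:

// counter-store.js: a minimal Zustand sketch (hypothetical store).
import { create } from "zustand";

export const useCounterStore = create((set) => ({
  count: 0,
  increment: () => set((state) => ({ count: state.count + 1 })),
}));

// In a test, the store can be driven without rendering React at all:
// useCounterStore.getState().increment();
// useCounterStore.getState().count; // => 1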

Testing React State Hooks

For local component state that is not persisted or shared with other components, it does not always make sense to use centralized state management. For example, calendar widgets or other complex UI elements frequently need to maintain their own state that is independent of the core application data logic.

If you need to use React’s built-in state management hooks, you can still isolate the logic into a separate module, and test it independently of the React component.

Perhaps the easiest way is to use useReducer, and test your reducer function directly, and then import the reducer function into your display component for integration with the UI. For simple getter and setter state updates, testing the setter function may be overkill - you'd effectively be testing React's useState hook, which is probably not necessary. But if you have more complex state updates, such as multiple state properties that need to be updated together, or state properties that depend on other state properties, then it can be useful to test a state reducer function in isolation, similar to how you might test a Redux reducer function.

This can have additional benefits, such as colocating multiple state property updates together into a single action handler instead of independently updating a bunch of properties without logically grouping those updates together.

Likewise, if you have state properties that depend on other parts of the state, you can use selector functions to derive the state properties from the state, and then test the selector functions independently of the UI. This can be especially useful for complex state management, where you have multiple state properties that depend on each other, and you want to ensure that the state is updated correctly when any of the dependencies change.

Another alternative is to build a custom hook that encapsulates the state management logic, and then test the custom hook in isolation. This can be a good option if you have multiple components that need to share the same state management logic, and you want to avoid duplicating the logic in multiple places. There are tools for testing hooks and context in React Testing Library.
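For instance, a custom hook can be exercised with React Testing Library’s renderHook helper. The sketch below is hedged: it assumes a hypothetical useItems hook that wraps the reducer logic, and a jsdom test environment:

// use-items-test.jsx: a minimal sketch, assuming a hypothetical useItems hook.
import { describe, test } from "vitest";
import { assert } from "riteway/vitest";
import { renderHook, act } from "@testing-library/react";
import { useItems } from "./use-items";

describe("useItems", () => {
  test("adding an item", () => {
    const { result } = renderHook(() => useItems());

    act(() => {
      result.current.addItem({ id: "1", name: "Test Item" });
    });

    assert({
      given: "an added item",
      should: "include the item in state",
      actual: result.current.items,
      expected: [{ id: "1", name: "Test Item" }],
    });
  });
});

The rest of this section focuses on the reducer approach.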

reducer-test.js:

import { describe, test } from "vitest";
import { assert } from "riteway/vitest";
import { itemsReducer, addItem, setNewItem, getItemCount } from "./reducer";

describe("ListItems state", () => {
  test("Items Reducer", () => {
    assert({
      given: "no arguments",
      should: "return the initial state",
      actual: itemsReducer(),
      expected: { items: [], newItem: "" },
    });

    assert({
      given: "addItem action",
      should: "handle addItem action",
      actual: itemsReducer(undefined, addItem({ id: "1", name: "Test Item" })),
      expected: { items: [{ id: "1", name: "Test Item" }], newItem: "" },
    });

    assert({
      given: "setNewItem action",
      should: "handle setNewItem action",
      actual: itemsReducer(undefined, setNewItem("New Item")),
      expected: { items: [], newItem: "New Item" },
    });

    {
      const actions = [
        addItem({ id: 1, name: "Test Item 1" }),
        addItem({ id: 2, name: "Test Item 2" }),
        addItem({ id: 3, name: "Test Item 3" }),
      ];

      const state = actions.reduce(itemsReducer, undefined);

      assert({
        given: "multiple addItem actions",
        should: "report correct item count",
        actual: getItemCount(state),
        expected: 3,
      });
    }
  });
});

reducer.js:

import { createId } from "@paralleldrive/cuid2";

const slice = "list";

export const addItem = ({ id = createId(), name = "" } = {}) => {
  return {
    type: `${slice}/addItem`,
    payload: { id, name },
  };
};

export const setNewItem = (name = "") => {
  return {
    type: `${slice}/setNewItem`,
    payload: name,
  };
};

const initialState = {
  items: [],
  newItem: "",
};

export const itemsReducer = (state = initialState, { type, payload } = {}) => {
  switch (type) {
    case addItem().type:
      return {
        ...state,
        items: [...state.items, payload],
        newItem: "",
      };
    case setNewItem().type:
      return {
        ...state,
        newItem: payload,
      };
    default:
      return state;
  }
};

export const getItemCount = (state) => state.items.length;

React Component Usage:

import React, { useReducer } from "react";
import { itemsReducer, addItem, setNewItem, getItemCount } from "./reducer";

const ItemList = () => {
  // Initialize useReducer with the reducer's own initial state by calling
  // the reducer with no arguments.
  const [state, dispatch] = useReducer(itemsReducer, itemsReducer());
  const { items, newItem } = state;
  const itemCount = getItemCount(state);

  const handleItemChange = (e) => {
    dispatch(setNewItem(e.target.value));
  };

  const handleAddItem = () => {
    dispatch(addItem({ name: newItem }));
  };

  return (
    <div>
      <input
        type="text"
        value={newItem}
        onChange={handleItemChange}
        placeholder="Enter item name"
      />
      <button onClick={handleAddItem}>Add Item</button>
      <ul>
        {items.map((item) => (
          <li key={item.id}>{item.name}</li>
        ))}
      </ul>
      <div>Item Count: {itemCount}</div>
    </div>
  );
};

export default ItemList;

For more on testing React components, see Unit Testing React Components.

Strategies for Isolating I/O Operations

I/O operations can be isolated from your main application logic using an event bus or other form of central dispatch such as a Redux store. This allows you to test your application logic without triggering side effects like network calls or database writes. You can then test the I/O operations separately, using mocks or stubs to simulate the network or database, or using a tool like redux-saga to prevent tests from triggering side-effects. Remember to test error states in I/O operations.
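As a hedged sketch of what that can look like (the module and function names are hypothetical), an I/O module that accepts its transport as a parameter can be tested with a stub, including its error path, without ever touching the network:

// item-api.js: an isolated I/O operation behind a facade (hypothetical names).
export const createItemApi = ({ fetchFn = fetch } = {}) => ({
  async saveItem(item) {
    const response = await fetchFn("/api/items", {
      method: "POST",
      body: JSON.stringify(item),
    });
    if (!response.ok) throw new Error("Failed to save item");
    return response.json();
  },
});

// item-api-test.js: stub the transport to exercise the error path.
import { describe, test } from "vitest";
import { assert } from "riteway/vitest";
import { createItemApi } from "./item-api";

describe("saveItem", () => {
  test("network failure", async () => {
    const api = createItemApi({ fetchFn: async () => ({ ok: false }) });

    const error = await api.saveItem({ name: "Test" }).catch((e) => e.message);

    assert({
      given: "a failing network response",
      should: "reject with a descriptive error",
      actual: error,
      expected: "Failed to save item",
    });
  });
});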

Conclusion

Testability is not a standalone feature but a fundamental aspect of good code design. Adopting a Test First approach encourages developers to consider the end-user experience and system requirements right from the start, leading to more robust and maintainable code. It ensures that our code not only meets its functional requirements but is also resilient and adaptable to change.

Isolating different parts of the application — business logic, UI, and data access — enhances testability and clarity. This isolation enables us to manage complexity, making large applications more understandable and maintainable, while laying the groundwork for faster, more efficient continuous delivery.

Next Steps

The fastest way to level up your career is 1:1 mentorship. With that in mind, I cofounded a platform that pairs engineers and engineering leaders with senior mentors who will meet with you via video every week. Topics include JavaScript, TypeScript, React, TDD, AI Driven Development, and Engineering Leadership. Join today at DevAnywhere.io.

Want to meet with me 1:1 for an hour? Book here now.

Prefer to learn about topics like functional programming and JavaScript on your own? Check out EricElliottJS.com or purchase my book, Composing Software in ebook or print.

Eric Elliott is an Engineering Manager for Adobe Firefly, a tech product and platform advisor, author of “Composing Software”, creator of SudoLang (the AI programming language), creator of EricElliottJS.com and cofounder of DevAnywhere.io. He has contributed to software experiences for Adobe Systems, Zumba Fitness, The Wall Street Journal, ESPN, BBC, and top recording artists including Usher, Frank Ocean, Metallica, and many more.
