How Bad Things Happen … in the Data Access Layer (Or Why DTOs Are the Devil)

Doug Wilson
Published in Look, Fuckers
Mar 30, 2024 · 4 min read

The unintended but entirely predictable consequences of quick & dirty data design and structures, overly simplistic demo code, ceding control to ORMs, unexamined assumptions, and the lack of standards and discipline

Or “Look, fuckers. You people ain’t thinkin’ about this stuff.”

Everything is not fine. Your code's not great. Your customers aren't happy. This ship is sinking. And it's your fault. M'kay?

A man sits looking at a wrecked ship on the rocks
Photo by Walid Ahmad from Pexels

This is another in a series of articles about the sorry state of “modern” software development and how we got here. These articles will all be part of my Look, Fuckers publication here on Medium, which debuted with the general introduction How Bad Things Happen … in Software Design & Development.

Last time in “How Bad Things Happen … in the Data Layer”, we started with the foundation: data. Today, as we work our way from the back end to the front, we’ll take a look at how data is passed between the data and service layers. To accomplish this, teams often choose to create Data Transfer Objects (DTOs).

I consider DTOs part of the "pile" or "accretion layer" of unexamined software development assumptions that have gradually been accepted over the years as "best practices" without ever being seriously challenged.

These practices don’t hold up well under close inspection for the following reasons:

1. Time and Cost to Create

If we accept the underlying assumption that we don’t want to pass fully-implemented objects with public, private, and inherited members, e.g. attributes, methods, events, and nested objects, between local or remote methods or services, the quick and easy “solution” would seem to be to strip away everything but the necessary data, resulting in a DTO. But this approach can literally double the amount of time/effort and expense needed to analyze, design, build, test, and deploy the system’s objects in order to add this new, one-to-one set of objects for data transfer.
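To make the pattern concrete, here's a minimal sketch (all names hypothetical, in TypeScript for brevity) of a fully-implemented domain class, its stripped-down DTO twin, and the mapping code that now has to be written, tested, and maintained alongside both:

```typescript
// A fully-implemented domain class: data plus behavior.
class Customer {
  constructor(
    public id: number,
    public name: string,
    private creditLimit: number,
  ) {}

  // Business behavior that has no place on the wire.
  canOrder(amount: number): boolean {
    return amount <= this.creditLimit;
  }
}

// The corresponding DTO: the same shape, minus everything but the data.
interface CustomerDto {
  id: number;
  name: string;
}

// Mapping code that must now exist for every class/DTO pair.
function toDto(c: Customer): CustomerDto {
  return { id: c.id, name: c.name };
}
```

Multiply that class/DTO/mapper trio by every entity in the system and the doubling effect described above follows directly.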

If our system has hundreds of classes, e.g. Customer, Product, Service, Order, OrderItem, etc, etc, etc, this can easily mean 100+ fully-implemented classes -> 100+ DTOs. At 5 DTOs per two-week sprint, that's 20 sprints = 40 weeks; 40 weeks x 40 hours/week x $100/hour = $160,000+.

Now the DTOs exist and are being used, but creating them took three-quarters of a year and 1–2 full-time developers' annual salaries: time and money that could have been spent on delivering customer value rather than expensively working around an avoidable problem.

Thinking before coding is important.

2. Total Cost of Ownership (TCO)

The time and expense of creating software pales in comparison to the cost of maintaining it over its lifetime. Finding and fixing defects, changes needed as the system changes, and preventative maintenance driven by the environment (updates, vulnerabilities, etc) can cost 15% to 25% of the original creation cost per year.

For our DTO example, that's $160,000 x 15% per year (the low end of the range) x 5 years = $120,000 (or $240,000 over 10 years). Just for DTO maintenance.
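The whole back-of-the-envelope model, using the article's assumed figures (DTO count, sprint velocity, hourly rate, and maintenance percentage are all illustrative, not measured), fits in a few lines:

```typescript
// Illustrative DTO cost model using this article's assumed figures.
const dtoCount = 100;        // one DTO per fully-implemented class
const dtosPerSprint = 5;
const weeksPerSprint = 2;
const hoursPerWeek = 40;
const hourlyRate = 100;      // dollars

const sprints = dtoCount / dtosPerSprint;               // 20 sprints
const weeks = sprints * weeksPerSprint;                 // 40 weeks
const creationCost = weeks * hoursPerWeek * hourlyRate; // $160,000

const annualMaintenanceRate = 0.15; // low end of the 15–25% range
const fiveYearMaintenance =
  creationCost * annualMaintenanceRate * 5;             // $120,000
```

Change any assumption and the totals move, but the shape of the curve, creation cost plus a recurring maintenance tax, does not.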

3. Unrestricted Proliferation

Now things get messy. Maybe our initial assumption of a one-to-one set of objects for data transfer was too aggressive. Maybe some system objects don’t need to be passed around, just instantiated and used locally. That’s fair.

But once we open the door to DTOs in general, it becomes very difficult to keep the little buggers from multiplying out of control. The “logic” seems to be that only object data subset A is needed for transfer to service A, but object data subset B is needed for transfer to service B.

With multiple DTO consumers, e.g. services, and multiple teams (or even a single, not-perfectly-diligent team), a one-to-one set of data transfer objects becomes a two-to-one, three-to-one, or even larger set of purpose-built DTOs shadowing the fully-implemented system objects they stand in for, with corresponding creation and maintenance time and expense. Suddenly the tail is wagging the dog, and DTOs have become a time-consuming, expensive anchor, limiting rather than enabling agility and progress.
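The proliferation looks something like this sketch (names hypothetical): one domain class, and a purpose-built DTO plus mapper per consumer, each of which must be kept in sync whenever the class changes:

```typescript
// One domain class...
class Customer {
  constructor(
    public id: number,
    public name: string,
    public billingAddress: string,
    public shippingAddress: string,
  ) {}
}

// ...and a purpose-built DTO per consuming service,
// each duplicating a slightly different slice of it.
interface CustomerSummaryDto { id: number; name: string }
interface CustomerBillingDto { id: number; billingAddress: string }
interface CustomerShippingDto { name: string; shippingAddress: string }

// Three mappers to write, test, and keep in sync with Customer.
function toSummary(c: Customer): CustomerSummaryDto {
  return { id: c.id, name: c.name };
}
function toBilling(c: Customer): CustomerBillingDto {
  return { id: c.id, billingAddress: c.billingAddress };
}
function toShipping(c: Customer): CustomerShippingDto {
  return { name: c.name, shippingAddress: c.shippingAddress };
}
```

Add a field to Customer and you now have three places to decide whether it belongs, and three more tests to update. That is the multiplication the section describes.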

So, what’s the alternative? We’ll introduce a new approach to system design and development as we push on into the service layer in the next installment.

Follow me or my Look, Fuckers publication here on Medium to be notified of future articles.

And for more data DOs and DON’Ts, check out “How Bad Things Happen … in the Data Layer”.

About the Author

Doug Wilson is a mission-focused technologist, software development leader, and trusted advisor with 25+ years of proven innovation and problem solving experience, putting technology to work in the service of business.

Today, he advises select organizations on how to increase business agility while avoiding serious business and technology risks. He believes strongly that our systems should encourage, not penalize, us for learning and improving.

Learn more about the services he provides through his Cygnus Technology Services consulting organization, or schedule a free consultation to learn how to put his experience and unique point of view to work for you.
