One of the core principles of good programming is don’t solve the same problem twice. If someone has already invented the perfect bubblesort, you have no business rolling your own. If you’ve got a reasonable regular expression to validate email addresses, you don’t need to do it yourself. And so on.
This logic is easy to understand. Every time you reinvent a piece of functionality, there’s a risk that things will go sideways. You could introduce new bugs or stumble into unexpected shortcomings. At best, your code will suck up extra testing time. At worst, you’ll create problems that will hide in the seams and joints of your application, like bedbugs in the corners of an old bed frame.
So it’s easy to understand the allure of design patterns. If we’re going to solve the same problems over and over again, wouldn’t we be wise to use the canonical solutions, ones created by far smarter programmers and tested over the eons? Or, to put it another way, don’t we have the responsibility to use battle-tested patterns to save time and ensure the best possible final product?
This is how design patterns reel you in.
A brief history of design patterns
The idea of patterns — conceptual models that you can define and reuse — has deep roots, stretching back to real architecture (of buildings) and the work of Christopher Alexander. But design patterns as most programmers know them sprang into existence in 1994, when four coding geniuses wrote a book called Design Patterns: Elements of Reusable Object-Oriented Software.
Design Patterns set out 23 foundational patterns grouped into three categories: Creational, Structural, and Behavioral. Amazingly enough, when people talk about design patterns today — some 25 years later — they’re usually referring to one of the ancient patterns first codified in this book.
This sort of success is no accident. And there’s no denying that the original design patterns were written by sharper programmers than you or me. But design patterns aren’t a neutral part of software design, and using them has a price that’s often overlooked.
The cost of complexity
Design patterns are often sold to programmers with architectural analogies. Imagine you were building a new home. Would you want the tradespeople doing the work to reinvent domestic plumbing systems? Would you want the electrician to cook up his own approach to wiring fuses?
But building software systems is very different from building houses. For one thing, design patterns aren’t ingredients you can drop straight into your code, like a handy function from a class library. Instead, each pattern is a model that needs to be implemented. Most design patterns define an interaction that spans different objects, which means you need to make changes to several classes. The sheer weight of this extra code complicates your design. Patterns are especially dangerous for new developers, who never see a coding side-trip they don’t want to take.
Even when design patterns are at their best, they force you to trade simplicity for something else. Often, that “something else” is just a vague promise of good encapsulation and a warm fuzzy feeling.
Design patterns are opinionated. They embed themselves in your code, and they pull your classes in specific directions.
Even the simplest patterns have a cost and introduce complexity. Consider the humble Singleton pattern — a class that only allows one instance. Despite its conceptual simplicity, there are roughly a dozen different techniques for implementing the Singleton pattern, depending on whether you need thread safety, lazy loading, serializability, support for inheritance, or you just love enums.
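To see how even this "simple" pattern accumulates machinery, here is a minimal sketch of just one of those variants: a thread-safe, lazily initialized Singleton in Python using double-checked locking. The `Config` class is a hypothetical example, not from any particular library.

```python
import threading

class Config:
    """Hypothetical app-wide settings object, implemented as a Singleton
    with thread-safe lazy initialization (double-checked locking)."""
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        # First check: skip the lock entirely on the common path.
        if cls._instance is None:
            with cls._lock:
                # Second check: another thread may have created the
                # instance while we were waiting for the lock.
                if cls._instance is None:
                    cls._instance = super().__new__(cls)
        return cls._instance

a = Config()
b = Config()
assert a is b  # every "construction" yields the same object
```

Note how much of the code exists only to serve the pattern’s guarantees (the lock, the double check) rather than anything your application actually does — and this is still one of the simpler variants.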
It’s not that Singleton design is an advanced concept. It’s just impossible to design any single code ingredient to be perfectly generalized and perfectly suitable to every use case. And to this day, architects still debate whether the Singleton is a virtuous gold-plated pattern or an anti-pattern — something you should strive to avoid, because someday it will betray you.
Design patterns are all about increasing abstraction in your code. Patterns like Proxy, Bridge, Adapter, and Facade add layers in between objects. At first, this seems like programming paradise. What virtuous programmer doesn’t want less dependency between objects?
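The Adapter is the clearest illustration of one of these in-between layers. Here is a minimal sketch with hypothetical class names: a `LegacyLogger` we can’t change, and an adapter that translates it to the interface the rest of the app expects.

```python
class LegacyLogger:
    """An existing class with an interface we can't change (hypothetical)."""
    def __init__(self):
        self.lines = []

    def write_line(self, text):
        self.lines.append(text)

class LoggerAdapter:
    """The extra layer of indirection: adapts LegacyLogger's write_line()
    to the log(level, msg) interface the application code expects."""
    def __init__(self, legacy):
        self._legacy = legacy

    def log(self, level, msg):
        self._legacy.write_line(f"[{level}] {msg}")

legacy = LegacyLogger()
app_log = LoggerAdapter(legacy)
app_log.log("INFO", "started")
# legacy.lines is now ["[INFO] started"]
```

The adapter decouples the caller from the legacy interface — but it’s also one more class, one more indirection, and one more place a future maintainer has to look.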
We all know the rule: All problems in computer science can be solved by another level of indirection. But there’s a side effect, too. Every extra layer of indirection adds a new place where you can put a solution.
In other words, the more you abstract your design with patterns, the more places you open up for someone else to change the code. Future programmers are going to have trouble figuring out what part of the system to modify, and how they can extend the code without having their work collide with someone else’s changes.
The worst offender is the Mediator pattern, which aims to let two objects interact without knowing anything about each other. The result is either a holy nirvana of abstraction, or a way to seriously confuse responsibilities in your class model.
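To make the trade-off concrete, here is a minimal Mediator sketch (the classic chat-room illustration, with hypothetical names): users talk only to the room, never to each other.

```python
class ChatRoom:
    """Mediator: participants communicate through the room and never
    hold references to one another."""
    def __init__(self):
        self._users = []

    def join(self, user):
        self._users.append(user)
        user.room = self

    def broadcast(self, sender, message):
        for user in self._users:
            if user is not sender:
                user.receive(sender.name, message)

class User:
    def __init__(self, name):
        self.name = name
        self.room = None
        self.inbox = []

    def say(self, message):
        # The user knows only the room, not the other users.
        self.room.broadcast(self, message)

    def receive(self, sender_name, message):
        self.inbox.append((sender_name, message))

room = ChatRoom()
alice, bob = User("alice"), User("bob")
room.join(alice)
room.join(bob)
alice.say("hi")
# bob's inbox now holds ("alice", "hi"); alice never referenced bob
```

The decoupling is real — but so is the confusion when every interaction in your model is routed through one all-knowing middleman.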
There are two ways to wreck a car: 1) Tear it apart. 2) Call it a generalized road-limited transport container and start adding to it.
Mismatching and bad fits
It’s easy to rush into implementing patterns without understanding the context—in other words, how do these patterns fit into your chosen language, framework, and type of application?
The answer can be murky. Modern language features like generics change the way patterns are used. Dynamic languages from Lisp to Python make many patterns obsolete, according to no less a programmer than Peter Norvig. And functional programming languages exist in a parallel universe with completely different patterns.
These inconsistencies aren’t limited to language features. Some patterns don’t play nicely with certain types of infrastructure. For example, you don’t want chatty objects if you’re dealing with network protocols, and multithreaded code can break the standard implementations of most of the original 23 design patterns.
The antidote: Be simple
If design patterns are dangerous, what’s the solution? The answer is to take a simple, solemn pledge. It’s a sort of Hippocratic Oath of the programming world:
First, be simple.
If you’re deep in a thorny problem, in a fog of semicolons and class relationships, trying to untangle responsibilities and keep everything manageable, pause. Don’t let design patterns short-circuit your critical thinking. After all, having a pattern does not protect you from a bad design. There’s no guarantee that the problem you think you’re solving with a pattern is the problem you need to solve. And adding patterns that don’t address the right problems — or any problem at all — is a certain path to Software Maintenance Hell.
If you can guarantee nothing else about your program, promise to keep it simple.
Patterns are a design language
The real value of design patterns is not prescriptive (telling you what to do). It’s descriptive (telling others what you’ve done). Design patterns aren’t recipes. They’re a language.
Good things happen when you think of design patterns as a language that can help you talk about application design. You don’t need to start out trying to use patterns. Instead — with experience — you’ll begin to recognize the outlines of patterns crystallizing in your code. For example, if you code web services, you’re almost certainly using the Facade pattern, whether you recognize it or not. Once you recognize the emerging structure of your code, you can use the language of design patterns — concepts like factory, decorator, and facade — to formalize what you’ve done.
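To see why a web service is "already" a Facade, here is a minimal sketch with hypothetical subsystem names: one endpoint-shaped class hiding three subsystems its callers never see.

```python
# Hypothetical subsystems a web endpoint might coordinate.
class Inventory:
    def reserve(self, sku):
        return True  # stub: assume the item is in stock

class Payments:
    def charge(self, amount):
        return "txn-1"  # stub: returns a transaction id

class Shipping:
    def schedule(self, sku):
        return "tracking-1"  # stub: returns a tracking number

class OrderFacade:
    """The single, simple interface a web service exposes. Callers make
    one call; the facade coordinates the subsystems behind it."""
    def __init__(self):
        self.inventory = Inventory()
        self.payments = Payments()
        self.shipping = Shipping()

    def place_order(self, sku, amount):
        if not self.inventory.reserve(sku):
            raise RuntimeError("out of stock")
        txn = self.payments.charge(amount)
        tracking = self.shipping.schedule(sku)
        return {"transaction": txn, "tracking": tracking}

order = OrderFacade().place_order("ABC-1", 9.99)
# order == {"transaction": "txn-1", "tracking": "tracking-1"}
```

If you’ve ever written an endpoint like this, you were using the Facade pattern whether you named it or not — the pattern vocabulary just lets you say so.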
Design patterns can’t teach you software architecture. They aren’t meant as an excuse to write a lot of code or a way to avoid thinking deeply about design. But they can help you think about your designs at a higher level of abstraction. And that’s probably what the Gang of Four were hoping all along.