Anti-Open-Closed Is The New Black

Clayton Long
7 min read · Jan 31, 2020


The Open-Closed Principle was first presented by Bertrand Meyer in his 1988 book Object Oriented Software Construction. It effectively stated that a software module should be open for extension, but closed for modification. Meyer’s assertion was that consumers of software modules need those modules to be complete, or closed. However, software developers need modules to be open to add operations, change behavior, add data fields, etc.

Since its original presentation, The Open-Closed Principle has established itself as one of the core principles of object oriented design. This is evidenced by its place in the software engineering body of knowledge and by its many applications and advocates. It has also garnered some criticism and outright contempt. So, let’s take a closer look at The Open-Closed Principle and whether it is still relevant and worthy of practice.

The Original Definition of The Open-Closed Principle

The original definition of The Open-Closed Principle from B. Meyer’s Object Oriented Software Construction [1988] was as follows.

A module is said to be open if it is still available for extension. For example, it should be possible to expand its set of operations or add fields to its data structure.

A module is said to be closed if it is available for use by other modules. This assumes that the module has been given a well-defined, stable description (its interface in the sense of information hiding). At the implementation level, closure for a module also implies that you may compile it, perhaps store it in a library, and make it available for others (its clients) to use.

How can a software module seemingly be open and closed at the same time based on the above definition? Let’s remember, this was 1988, six years before the Gang of Four published Design Patterns: Elements of Reusable Object-Oriented Software. At that time, inheritance was the best-known means of extending object oriented software without changing it. And in 1988, that’s what Meyer proposed.

To get us out of the change or redo dilemma, inheritance will allow us to define a new module A' in terms of an existing module A by stating the differences only

It probably seems obvious to most software craftsmen of today that inheritance is not the only means of changing the behavior of software modules without modifying them. In many cases, it’s not even the best way. However, here is where the “what” must be separated from the “how.” Inheritance may not be the catch-all solution that allows object oriented software to change without modification. But I don’t think that means the principle of extending software without requiring modification is invalid.
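Meyer’s inheritance mechanism can be sketched in a few lines. Assuming a small, hypothetical Report module (the names are mine, not Meyer’s), the derived class defines the new module by stating only its differences from the existing one:

```python
class Report:
    """Existing module "A": stable, published, and in use by clients."""

    def render(self) -> str:
        return "\n".join(self.lines())

    def lines(self) -> list[str]:
        return ["header", "body"]


class FooterReport(Report):
    """New module "A'": defined in terms of A by stating the differences only."""

    def lines(self) -> list[str]:
        # Everything except the footer is inherited unchanged from Report.
        return super().lines() + ["footer"]
```

Report stays closed — its existing clients keep working unmodified — while remaining open, because FooterReport extends it without touching its source.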

Open-Closed Clarified For The Present

In 1996, Robert C. Martin (aka “Uncle Bob”) published an article titled The Open-Closed Principle in The C++ Report. In that article, Martin said the following about The Open-Closed Principle.

It says that you should design modules that never change. When requirements change, you extend the behavior of such modules by adding new code, not by changing old code that already works.

Martin argued that the way to achieve software modules that are both open for extension and closed for modification was through abstraction. He gave an example using an abstract Shape class, where Circle and Square were classes that extended Shape by defining functionality unique to their individual behaviors. He also illustrated how operations that accept a generic Shape type can change behavior based on the specific Shape extension implementation. This is not unlike the idea behind some object oriented design patterns. In other words, the secret sauce behind Open-Closed is depending on abstractions instead of implementations.
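Martin’s example was written in C++; here is a rough Python sketch of the same idea (the Shape, Circle, and Square names follow his article, but the rendering is my paraphrase, not his code):

```python
import math
from abc import ABC, abstractmethod


class Shape(ABC):
    """The abstraction clients depend on: a closed, stable contract."""

    @abstractmethod
    def area(self) -> float: ...


class Circle(Shape):
    def __init__(self, radius: float) -> None:
        self.radius = radius

    def area(self) -> float:
        return math.pi * self.radius ** 2


class Square(Shape):
    def __init__(self, side: float) -> None:
        self.side = side

    def area(self) -> float:
        return self.side ** 2


def total_area(shapes: list[Shape]) -> float:
    # This operation depends only on the abstraction. Adding a Triangle
    # later requires no change to this function — it is closed for
    # modification but open for extension.
    return sum(s.area() for s in shapes)
```

The dependency arrows point at Shape, not at Circle or Square; that inversion is what lets new shapes arrive as new code rather than as edits to old code.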

Open-Closed Dissenters

A blog post from 2013 by Jon Skeet was recently shared with me. I came to find out he’s somewhat of an influencer in some software engineering circles. He also notably has the highest reputation score on Stack Overflow.

Anyway, Skeet kind of pooh-poohed Martin’s 1996 article on The Open-Closed Principle, and I don’t think he was 100% correct in his analysis. In my opinion, criticizing the writing style and supposed lack of clarity of an article that has been well-regarded (and heavily referenced) for decades says more about the reader than it does about the article. That’s not to say Skeet isn’t a bright guy — look at his credentials and his other posts; he is. But what I am saying is that I don’t think the criticism he levied against Martin’s article was appropriate, nor did it contribute to his argument. And judging by some of the comments (in particular, the comment from Philip Schwarz), I’m not alone in that assessment.

Skeet then argued that he supported Craig Larman’s view of Protected Variation as an alternative. However, Larman argued that “OCP is essentially equivalent to the Protected Variation pattern.”

The Protected Variation Principle (PV)

Identify points of predicted variation and create a stable interface around them.

To be clear, I think Larman’s article was very good, but the entire article was an argument for PV. Based on Meyer’s original intent and on Martin’s clarification of Open-Closed, PV is only equivalent to Open-Closed in the context of interfaces. Meyer went further than that: in his original explanation, he stated that being “closed” refers to both the interface and the implementation.
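PV’s “stable interface around a predicted point of variation” can be sketched as follows (the TaxPolicy example and all of its names are hypothetical illustrations of mine, not from Larman’s article):

```python
from abc import ABC, abstractmethod


class TaxPolicy(ABC):
    """Stable interface wrapped around a predicted point of variation:
    tax rules are expected to differ by jurisdiction and to change."""

    @abstractmethod
    def tax(self, amount: float) -> float: ...


class FlatTax(TaxPolicy):
    def __init__(self, rate: float) -> None:
        self.rate = rate

    def tax(self, amount: float) -> float:
        return amount * self.rate


class Invoice:
    def __init__(self, subtotal: float, policy: TaxPolicy) -> None:
        self.subtotal = subtotal
        self.policy = policy  # the variation hides behind the interface

    def total(self) -> float:
        return self.subtotal + self.policy.tax(self.subtotal)
```

A new jurisdiction means a new TaxPolicy implementation; Invoice and its clients never see the change — which is exactly why Larman could call PV essentially equivalent to OCP.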

Another article that was shared with me was titled Say “No” to the Open-Closed pattern. That post seemingly replaced Open-Closed with a skewed interpretation of it and argued that “open for extension” violated YAGNI. That argument was similar to the one used by Mark Rogers in this conversation, where Ron Jeffries [Agile Manifesto, Extreme Programming — coined the acronym YAGNI] disputed the same interpretation of Open-Closed. It should be noted that Martin also disputed overly rigid interpretations of software engineering principles in general.

The Real Value of Open-Closed

I don’t dispute Skeet’s assertion that maintaining “closed” interfaces is important. And I also agree that software should be built around stable interfaces (PV). Indeed, if you break an interface then you break your client. But as I stated above, Open-Closed is more than that; it applies to the interface and the implementation.

Open-Closed is about building your software entities so that they shouldn’t have to change. It doesn’t mean that you never refactor, as Ron Jeffries so eloquently put it. And it doesn’t necessarily mean that existing software entities won’t ever have to change. But it does mean that you approach software design with an eye towards limiting the impact of future changes.

How do we do that?

1. Consider the external interface first.

This is priority #1. If something is exposed externally to a client, then you have a contract with whatever is using that interface, whether it’s well-defined or not. Make it well-defined, simple and resistant to breaking changes.
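One hedged illustration of “resistant to breaking changes” (my example with hypothetical names, not from any of the articles discussed): accept a parameter object, so that new optional fields extend the contract without breaking existing callers.

```python
from dataclasses import dataclass


@dataclass
class SearchRequest:
    query: str
    limit: int = 10   # added after v1; the default keeps old callers working
    offset: int = 0   # likewise


def search(req: SearchRequest) -> list[str]:
    # Toy implementation; the point is the call signature, which can
    # grow new optional fields without a breaking change.
    corpus = ["open", "closed", "principle"]
    hits = [word for word in corpus if req.query in word]
    return hits[req.offset : req.offset + req.limit]
```

Callers written against the original one-field contract, such as `search(SearchRequest(query="o"))`, continue to compile and behave the same as the contract grows.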

2. Consider internal interfaces next.

Internal interfaces are nearly as important as external interfaces. Interfaces between classes, modules, and layers within an application are internal contracts within your application. Sure, changing them within the context of a single program, service or application might not break external clients. But think of the impact on the internal software entities that depend on those interfaces, and the shrapnel a change can scatter through every context in which they are used.
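A minimal sketch of an internal contract between layers, assuming a hypothetical order service (none of these names come from the articles discussed):

```python
from typing import Protocol


class OrderStore(Protocol):
    """Internal contract between the service layer and the storage layer."""

    def save(self, order_id: str, total: float) -> None: ...
    def load(self, order_id: str) -> float: ...


class InMemoryStore:
    """One concrete store; a database-backed store could replace it
    without the service layer changing."""

    def __init__(self) -> None:
        self._orders: dict[str, float] = {}

    def save(self, order_id: str, total: float) -> None:
        self._orders[order_id] = total

    def load(self, order_id: str) -> float:
        return self._orders[order_id]


class OrderService:
    def __init__(self, store: OrderStore) -> None:
        self.store = store  # depends on the contract, not a concrete store

    def record(self, order_id: str, total: float) -> None:
        self.store.save(order_id, total)
```

Swapping the storage implementation never breaks OrderService, but changing the OrderStore contract itself ripples through every class on either side of it — which is why internal interfaces deserve nearly the same care as external ones.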

3. Finally, consider the impact of changes to implementations.

While I agree that implementation changes are in general less problematic than breaking interface changes, they should not be taken lightly. I also agree that if you duck behind a stable interface, there is less damage externally if you do change an implementation. But changes to an implementation do have a cost. If you’ve ever been on a project that spent seemingly all of its time refactoring due to bad design decisions then you probably understand that part of good design means insulating software entities from the need to change. That was the point Martin was making in his 1996 article.

Insulating software entities from change means having appropriately abstracted software entities that have a very clear and distinct purpose (i.e. a Single Responsibility). It doesn’t mean you won’t ever have to crack open a class and change the internals, but great care should be taken to limit the need for such changes. And if you find yourself frequently changing existing implementations, that should trigger a “bad code smell” alarm in your head.

Don’t Be Quick to Dismiss Open-Closed and Other Foundational Principles

It’s easy to dismiss things you don’t fully understand as being wrong or inadequate. But remember, many of these principles (Open-Closed, included) have been peer reviewed, they have been battle tested and they have stood the test of time. They were created and refined by people who are regarded as experts in the field of software engineering. And they represent a common understanding in that field.

Really understanding the value in something like Open-Closed is hard. Martin had the following to say about learning foundational software engineering principles in his post Getting a SOLID Start.

“There is no royal road to Geometry” Euclid once said to a King who wanted the short version. Don’t expect to skim through the papers, or thumb through the books, and come out with any real knowledge. If you want to learn these principles well enough to be able to apply them, then you have to study them. The books are full of coded examples of principles done right and wrong. Work through those examples, and follow the reasoning carefully. This is not easy, but it is rewarding.

In short, just reading blog posts won’t get you there, not even this blog post ;) But hopefully I’ve given you something to mentally gnaw on. And perhaps some of the references in this post will help improve your understanding.



Clayton Long

Cloud Automation Manager by profession, Programmer in my free time, and lover of anything that makes my life easier.