All 23 OOP software design patterns with examples in Python

Design patterns from the classic Gang of Four book explained

Niels Cautaerts
47 min read · Oct 1, 2023
A different Gang of Four. All images from Pixabay.com.

Introduction

Object oriented programming is probably the dominant paradigm for writing software, and has been for the last three decades. Most popular programming languages support at least some object oriented concepts. Since its inception, there has been fierce debate over whether OOP is the best thing since sliced bread or the biggest mistake made in human history. Additionally, few can agree on what constitutes good OOP.

However, nearly all OOP practitioners will agree that “design patterns” are important. In this subdomain, the “Gang of Four” (GoF) book (Design Patterns: Elements of Reusable Object-Oriented Software by Gamma, Helm, Johnson and Vlissides) might as well be the Old Testament. Like the Bible, the GoF book is pretty dry, boring and arcane, which means most people in the modern age never read it but have strong opinions about the subject matter regardless. Yet it seems that reading the book is a rite of passage to ascend into the ranks of elite OOP developers.

Python is a popular multi-paradigm language with a bias towards OOP. Yet Python programmers are probably the last people to read the GoF book. Many people who write Python don’t have a computer science background and don’t write serious software. To them, Python is primarily a tool for data analysis and scripting. In this realm, design patterns are useless.

However, some Python projects do eventually evolve into serious software. In this case, maintainability and good software architecture become more important, and design patterns can be helpful tools in your toolbox. Therefore, as someone who writes quite a bit of Python, I read the GoF book and present in this article the SparkNotes version, with an illustration of each pattern in Python. I hope you find it useful.

Note that I do not retain exactly the same ordering of the patterns as in the book. Also, I try to use typed Python everywhere, as there is limited value to explicitly defined interfaces (a core concept in the book) without types. For a more in-depth explanation, I’ve written another article about writing maintainable Python, which you can check out here:

https://medium.com/better-programming/7-rules-for-a-maintainable-python-code-base-6c7d0cdeed43

Prerequisites

To understand the rest of this article you need to understand a few OOP concepts:

  • class: template definition of an object. A class defines the data that will be stored in the object, as well as the behavior the object will exhibit through the implemented methods. A class is defined in code. In some languages classes no longer “exist” at runtime, but in Python you can treat them kind of like objects at runtime. In Python, a class is defined as follows:
class MyClass:
    ...
  • object: instance of a class. An object encapsulates state which may be altered at runtime by calling the object’s methods. An object only exists at runtime. Instantiating an object from a class is done as follows in Python:
my_object = MyClass()
  • state: data and references to other data that are stored in an object.
  • inheritance: a mechanism whereby a new class (child) inherits behavior from an existing class (parent). Multiple classes can inherit from the same parent, yielding a tree-like relationship between classes. Inheritance is typically recommended for modeling “is-a” relationships. We can inherit from MyClass in Python as follows:
class MySubClass(MyClass):
    ...
  • composition: a mechanism whereby complex objects are built up by combining multiple simpler objects through references. Basically whenever one object stores a reference to another object it can be considered composition. Composition is typically recommended for modeling “has-a” relationships. An example of composition in Python could be something like:
class MyCompositeClass:
    def __init__(self) -> None:
        self.contained_object = MyClass()
  • interface: a contract that defines which methods should be implemented on classes in order for them to adhere to the interface. An interface is a way to enable polymorphism. In Python, we can define an interface using typing.Protocol or abc.ABC .
class MyInterface(Protocol):
    def needed_method(self) -> str:
        ...
  • polymorphism: when functions or methods accept objects of different classes as if they are from the same class. To illustrate in Python:
class ConcreteImplementationA:
    def needed_method(self) -> str:
        return "concrete_a"


class ConcreteImplementationB:
    def needed_method(self) -> str:
        return "concrete_b"


def polymorphic_function(obj: MyInterface) -> None:
    print(obj.needed_method())
  • abstraction: writing code at a higher level while hiding implementation details. Polymorphism is a form of abstraction, because functions that code against interfaces don’t know anything about the underlying implementations.
  • dependency injection: passing one object into another object’s methods or constructor, as opposed to creating these objects internally (a sketch combining this with mocking follows after this list).
  • mocking: passing a minimal implementation that adheres to the correct interface into functions/methods for the purpose of testing.
  • encapsulation: the idea that related data and operations on that data should be grouped together in a class. It is also related to the idea that a class has public and private data and methods, which allows the class to hide internal implementation details. In Python we can do this as follows:
class MyClass2:
    def __init__(self) -> None:
        self._private = 1

    def get_private(self) -> int:
        return self._private
  • design pattern: a high level solution to a common problem in designing software.
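
To make dependency injection and mocking concrete, here is a minimal sketch (the Database, Service and FakeDatabase names are made up for illustration): the real dependency is injected through the constructor, and a fake that adheres to the same interface can be injected in a test.

from typing import Protocol


class Database(Protocol):
    def fetch(self, key: str) -> str:
        ...


class Service:
    # The database is injected instead of being created inside the class,
    # so any object that adheres to the Database interface will work.
    def __init__(self, db: Database) -> None:
        self._db = db

    def greet(self, key: str) -> str:
        return f"Hello {self._db.fetch(key)}"


class FakeDatabase:
    # A minimal mock for testing; it adheres to the Database interface.
    def fetch(self, key: str) -> str:
        return "test-user"


def test_greet() -> None:
    assert Service(FakeDatabase()).greet("any") == "Hello test-user"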

Creational patterns

Creational patterns deal with how objects get created.

Singleton

Implementing a class such that it can only ever be instantiated once. Instantiating the class again just returns a reference to the existing object.

Example implementation

Creating a Singleton in Python requires us to override the special __new__ method that defines how an object is created:

from __future__ import annotations

from typing import Optional


class Singleton:
    _instance: Optional[Singleton] = None

    def __new__(cls) -> Singleton:
        instance: Singleton
        if cls._instance is None:
            instance = object.__new__(cls)
            cls._instance = instance
        else:
            instance = cls._instance
        return instance

The intermediate instance variable serves to satisfy the static type checker: if we returned cls._instance directly, it would always be considered to have the type Optional[Singleton], which conflicts with the signature (we never return None).

We can now use and verify our singleton as follows:

>>> singleton1 = Singleton()
>>> singleton2 = Singleton()
>>> singleton1 is singleton2
True

Even though we call the constructor twice and bind each result to separate variables, the variables point to exactly the same instance.

Advantages and when to use

A singleton provides an alternative to a global variable, but with some added benefits:

  • Cleaner namespace. We only need the class itself in the namespace, not an additional variable pointing to an instance of the class.
  • If properly implemented it is not possible to create another instance of the same class and bind it to another variable; all variables will point to the same instance.
  • It should still be possible to change our mind later and allow more instances of the same class.
  • The singleton can control how and when clients access its data.
  • Because the creation of the object only happens when the constructor is first called, it can be created by passing it data that is only known at runtime.

Disadvantages

Singletons introduce globally shared, potentially mutable, state. If many objects reference the singleton, the application can become very tricky to debug, as from each object’s perspective it is unclear which other objects may depend on the singleton. It also makes singletons difficult to test in isolation, because by their very nature they become highly coupled to other classes. Classes that depend on the singleton also cannot be tested without it, i.e. by swapping in mocks through dependency injection. For all these same reasons, singletons can be tricky to deal with in multi-threaded applications.

Factory Method

The Factory Method pattern is a way to abstract the creation of different objects that adhere to the same interface. The Factory Method pattern should not be confused with a factory method. A factory method is simply a function or method that creates and returns an object.

Example implementation

We define a Creator interface that describes the creation flow of objects that adhere to the Product interface:

from __future__ import annotations

from typing import Protocol


class Creator(Protocol):
    def factory_method(self) -> Product:
        ...


class Product(Protocol):
    def product_method(self) -> int:
        ...

In this case, classes that adhere to the Creator interface must implement a factory method that returns an object that adheres to the Product interface. We can define a client function that uses the creator to make a product, which is then consumed:

def client_function(creator: Creator) -> int:
    product = creator.factory_method()
    return product.product_method()

The client function neither has to care about how product is created, nor about which concrete Product is produced; it is completely decoupled from implementation details.

We can now create multiple different concrete implementations of creators and products:

class ConcreteCreatorA:
    def factory_method(self) -> ConcreteProductA:
        return ConcreteProductA()


class ConcreteProductA:
    def product_method(self) -> int:
        return 0


class ConcreteCreatorB:
    def factory_method(self) -> ConcreteProductB:
        return ConcreteProductB()


class ConcreteProductB:
    def product_method(self) -> int:
        return 1

Here we use a one-to-one mapping, but one can also imagine a scenario where there are multiple creators for making the same product in different ways.

These can now be used in the client function. Putting it all together:

>>> creator_a = ConcreteCreatorA()
>>> creator_b = ConcreteCreatorB()
>>> client_function(creator_a)
0
>>> client_function(creator_b)
1

Advantages and when to use

  • If it is not a priori known which product should be created, the Factory Method pattern gives us flexibility (we can reuse the same code with different types of objects) and extensibility (we can easily create new products and creators).
  • The object creation process is encapsulated in separate objects which should make this behavior easier to modify.
  • We separate high level code that only cares about interfaces (the client function) from lower level implementation. This separation should make code easier to test, since we can easily create and insert mock objects that adhere to the same interface as Product and Creator.

Disadvantages

Like many other design patterns we will discuss below, the Factory Method pattern results in additional complexity and a proliferation of classes. This may make the code more difficult to navigate and understand.

Additional notes

  • In the example implementation above, the factory method only calls the constructor of the concrete product. Therefore the factory objects are not so useful, because we could just pass the different concrete products directly into the client function. This construction only becomes useful when the creation of the object requires some more complex logic.
  • The book mentions a number of variations on the Factory method. In our example above, the creator is an interface, which is supposed to be a purely abstract type. However, the creator may also be a concrete class. Variations in the type of product that is created can be built through subclassing and overriding some of the methods. If the creator is a concrete class, it may use the product itself instead of relying on a client function. Finally, the creator can implement a default factory method that maps onto a default product.
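
To illustrate that last variation, here is a minimal sketch (the class names are my own, not from the book): a concrete creator implements a default factory method and uses the product itself, and a subclass varies the product by overriding only the factory method.

class DefaultProduct:
    def product_method(self) -> int:
        return 0


class SpecialProduct:
    def product_method(self) -> int:
        return 42


class ConcreteCreator:
    # A concrete creator with a default factory method that also
    # consumes the product itself instead of relying on a client function.
    def factory_method(self) -> Product:
        return DefaultProduct()

    def operation(self) -> int:
        return self.factory_method().product_method()


class SpecialCreator(ConcreteCreator):
    # Subclasses override only the factory method to change the product.
    def factory_method(self) -> Product:
        return SpecialProduct()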

Abstract Factory

The abstract factory is a way to abstract entire families of objects and their creation process. It can be viewed as an additional layer of abstraction above, or a generalization of, the Factory Method pattern.

Example implementation

We define an interface for a factory object that can create a number of related products. These products are also abstracted:

from __future__ import annotations

from typing import Protocol


class AbstractFactory(Protocol):
    def create_product_a(self) -> AbstractProductA:
        ...

    def create_product_b(self) -> AbstractProductB:
        ...


class AbstractProductA(Protocol):
    def get_int(self) -> int:
        ...


class AbstractProductB(Protocol):
    def get_str(self) -> str:
        ...

We can then have a client function that uses the factory to create products, so that it can consume them:

def client_function(factory: AbstractFactory) -> None:
    product_a = factory.create_product_a()
    product_b = factory.create_product_b()
    print(product_b.get_str() * product_a.get_int())

Now we can create different implementations of the factory and the products:

class ConcreteFactory1:
    def create_product_a(self) -> ConcreteProductA1:
        return ConcreteProductA1()

    def create_product_b(self) -> ConcreteProductB1:
        return ConcreteProductB1()


class ConcreteProductA1:
    def get_int(self) -> int:
        return 1


class ConcreteProductB1:
    def get_str(self) -> str:
        return "b1"


class ConcreteFactory2:
    def create_product_a(self) -> ConcreteProductA2:
        return ConcreteProductA2()

    def create_product_b(self) -> ConcreteProductB2:
        return ConcreteProductB2()


class ConcreteProductA2:
    def get_int(self) -> int:
        return 2


class ConcreteProductB2:
    def get_str(self) -> str:
        return "b2"

We can then use our client function with the two concrete factories as follows:

>>> factory_1 = ConcreteFactory1()
>>> factory_2 = ConcreteFactory2()
>>> client_function(factory_1)
b1
>>> client_function(factory_2)
b2b2

Advantages and when to use

Most of the advantages mentioned for the Factory Method pattern apply here as well. An additional one is that it promotes consistency among products, i.e. it encourages an application to use objects from a single family together.

Disadvantages

Most of the same disadvantages mentioned for the Factory Method also apply. Additionally, by grouping products into a family and tying them together with a factory, it becomes difficult to modify this structure. For example, to add a new kind of product, the factory interface and every concrete factory must be modified.

Additional notes

  • The book notes that concrete factories are often implemented as singletons. In Python we can simply define factory methods on factory classes as a classmethod or a staticmethod, which means we could avoid instantiation altogether.
  • While the book notes that factory methods are the most common approaches for creating objects in the concrete factories, this need not be the case and the prototype pattern (see below) can be used instead. By using the prototype, we can avoid having to create a new factory for each new product family.
  • To avoid having to modify each family when adding a new product, one could define a single make factory method instead of creating a factory method for each product. The single factory method would then take an argument that indicates which product to make. The downside of this approach is that the return type of make becomes a Union or very generic type, which means that clients can not safely use specific operations defined on only specific products.
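
A minimal sketch of that last idea (the make method and its string argument are my own illustration, reusing the concrete products from above):

from typing import Union


class FactoryWithMake:
    def make(self, kind: str) -> Union[ConcreteProductA1, ConcreteProductB1]:
        # A single factory method selects the product based on an argument;
        # the price is the broad Union return type.
        if kind == "a":
            return ConcreteProductA1()
        if kind == "b":
            return ConcreteProductB1()
        raise ValueError(f"Unknown product kind: {kind}")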

Prototype

The prototype pattern creates new objects by cloning a prototype instance and potentially modifying it.

Example implementation

The prototype interface simply needs to implement a way to clone itself. We will also implement some operations on it to modify its internal state and for it to be useful to a client.

from __future__ import annotations

from typing import Protocol


class Prototype(Protocol):
    def clone(self) -> Prototype:
        ...

    def change(self, x: int) -> None:
        ...

    def get_result(self) -> int:
        ...

Suppose we have two concrete implementations of the prototype:

class ConcretePrototype1:
    def __init__(self, x: int) -> None:
        self._x = x

    def clone(self) -> ConcretePrototype1:
        return self.__class__(x=self._x)

    def change(self, x: int) -> None:
        self._x = x

    def get_result(self) -> int:
        return self._x + 1


class ConcretePrototype2:
    def __init__(self, x: int, y: int) -> None:
        self._x = x
        self._y = y

    def clone(self) -> ConcretePrototype2:
        return self.__class__(x=self._x, y=self._y)

    def change(self, x: int) -> None:
        self._x = x

    def get_result(self) -> int:
        return self._x + self._y

A client class could then use these prototypes in another operation like create :

class Client:
    def __init__(self, prototype: Prototype) -> None:
        self._prototype = prototype

    def create(self, x: int) -> Prototype:
        new_obj = self._prototype.clone()
        new_obj.change(x=x)
        return new_obj

We can now use our prototypes with our client as follows:

>>> client1 = Client(ConcretePrototype1(x=1))
>>> new_1 = client1.create(7)
>>> new_2 = client1.create(9)
>>> new_1.get_result()
8
>>> new_2.get_result()
10
>>> client2 = Client(ConcretePrototype2(x=1, y=2))
>>> new_3 = client2.create(3)
>>> new_3.get_result()
5
>>> new_4 = client2.create(4)
>>> new_4.get_result()
6

The client could also directly use the newly created objects.

Advantages and when to use

Prototypes can reduce the number of classes that need to be defined, especially compared to patterns like Factory Method or Abstract Factory. Instead of defining products and creators in a class hierarchy and using factory methods, different products can be defined using instances. A new product “class” can be created dynamically at runtime through composing different objects and registering it as a prototype. The prototype can be helpful if construction logic for objects is complicated and should not be duplicated across the code base.

Disadvantages

The main challenge with prototypes is that it is unclear whether a deep or shallow copy is (or should be) made. In the example above we only used integers for internal state, but if we are dealing with complex composite objects one should consider whether all data should be copied or only all references. In the case of shallow copies, a mutation in the state will mutate all objects deriving from the same prototype. Deep copies can be problematic when dealing with circular references.

Additional notes

  • The book notes that the prototype pattern may be less useful in dynamic languages like Python. In Python it is possible to directly store a reference to a class in a variable, so it can be used directly as a prototype instead of creating an instance of a class and cloning it. This is not possible in some other languages like C++.
  • In the example above, the client manages the prototype directly. However, the management of different prototypes could be delegated to a prototype manager, e.g. some global dictionary, where clients access specific prototypes.
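
A minimal sketch of such a prototype manager, using a plain dictionary as the registry (the registry and function names are my own illustration):

from typing import Dict

# A global registry mapping names to prototype instances.
prototype_registry: Dict[str, Prototype] = {
    "simple": ConcretePrototype1(x=1),
    "composite": ConcretePrototype2(x=1, y=2),
}


def create_from_registry(name: str, x: int) -> Prototype:
    # Clients look up a prototype by name, clone it and tweak the clone.
    new_obj = prototype_registry[name].clone()
    new_obj.change(x=x)
    return new_obj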

Builder

The builder pattern aims to separate the construction logic of a complex object from the object itself, so that the construction logic can be reused to configure an entirely different object.

Example implementation

Suppose we want to create a product that consists of multiple parts:

from typing import List


class Product:
    def __init__(self) -> None:
        self._parts: List[str] = []

    def add_part(self, part: str) -> None:
        self._parts.append(part)

    def get_parts(self) -> List[str]:
        return self._parts

We can create a builder class for this product as follows:

class ConcreteBuilder:
    def __init__(self) -> None:
        self.product = Product()

    def build_part_a(self) -> None:
        self.product.add_part("part A")

    def build_part_b(self) -> None:
        self.product.add_part("part B")

    def get_result(self) -> Product:
        return self.product

The responsibility of operating the builder is handed to a Director . To allow the director to handle multiple builders, we can also put a builder interface in between:

from typing import Protocol


class Builder(Protocol):
    def build_part_a(self) -> None:
        ...

    def build_part_b(self) -> None:
        ...

    def get_result(self) -> Product:
        ...


class Director:
    def __init__(self, builder: Builder) -> None:
        self._builder = builder

    def construct(self) -> None:
        self._builder.build_part_a()
        self._builder.build_part_b()

Finally, we can use our builder as follows:

>>> builder = ConcreteBuilder()
>>> director = Director(builder)
>>> director.construct()
>>> product = builder.get_result()
>>> product.get_parts()
['part A', 'part B']

Advantages and when to use

Builders are mainly useful for complex objects that can be configured in many different ways. A key benefit is the splitting of the instantiation process into multiple stages, which contrasts with most other creational patterns. Separating the creation logic from a product can also make the product implementation simpler.

Disadvantages

Mostly added complexity and additional classes. When factory methods are enough, builders are probably overkill.

Additional notes

  • In the example above, all state is directly stored and mutated in the product class itself. It is also common to store configurable state inside the builder and only instantiate the product when build or construct is called. In this way products can be made immutable while still being built iteratively (a sketch follows after these notes).
  • Often builder patterns do not abstract the builder class or implement a Director. These are only relevant if multiple products need to be built in a similar way and builders can be made to adhere to the same interface. The point of the director is to hide the construction logic from the client. The client only needs to know which builder is needed and the director is responsible for operating it.
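
A minimal sketch of that first variation (the frozen dataclass and the class names are my own illustration): the builder accumulates configuration and only instantiates an immutable product when get_result is called.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass(frozen=True)
class ImmutableProduct:
    parts: Tuple[str, ...]


class AccumulatingBuilder:
    def __init__(self) -> None:
        # State lives in the builder until the product is requested.
        self._parts: List[str] = []

    def build_part_a(self) -> None:
        self._parts.append("part A")

    def build_part_b(self) -> None:
        self._parts.append("part B")

    def get_result(self) -> ImmutableProduct:
        return ImmutableProduct(parts=tuple(self._parts))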

Structural patterns

Structural patterns are ways in which objects can be combined to create larger structures with additional functionality.

Adapter

An adapter is a pattern to make incompatible objects work together. The idea is to create a middle layer that converts the interface of one object to another. A common alternative name for adapter is wrapper.

Implementation example

Suppose we have a client function that expects an object that adheres to a particular target interface

from typing import Protocol


class Target(Protocol):
    def expected_function(self) -> int:
        ...


def client_function(obj: Target) -> str:
    integer = obj.expected_function()
    return f"The number was {integer + 1}"

Suppose we now want to use client_function for the following incompatible class:

class Adaptee:
    def unexpected_function(self) -> float:
        return 3.1415

We want to be able to use unexpected_function in the client_function without modifying anything. For this we can create an adapter as follows:

class Adapter:
    def __init__(self, incompatible: Adaptee) -> None:
        self._incompatible = incompatible

    def expected_function(self) -> int:
        return int(self._incompatible.unexpected_function())

Our Adapter is compatible with the Target . We can then use it as follows:

>>> incompatible = Adaptee()
>>> adapted = Adapter(incompatible)
>>> client_function(adapted)
The number was 4

Advantages and when to use

The adapter pattern is useful for integrating code that does not work well together, without modifying any existing code. This is an example of the open-closed principle: without modifying existing source code, existing modules can be extended.

In the simplest case, it can be used for aliasing methods, in more complex cases it can be used to entirely change the behavior of an object. Adapters can be useful when the objects that need to be adapted come from third-party libraries that can not be modified.

Disadvantages

  • There can be additional performance overhead due to transformation operations in the adapter.
  • It can complicate the code base.

Additional notes

In the example above, we use an object adapter, where adaption occurs through composition: we create an object that takes as argument the object we want to adapt. There are also class adapters that are implemented through multiple inheritance of both the target interface and the incompatible class. Class adapters can be more challenging to get right.
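
A minimal sketch of a class adapter for the same example (my own illustration): the adapter inherits from both the incompatible class and the target interface.

class ClassAdapter(Adaptee, Target):
    # Inherits unexpected_function from Adaptee and adds the method
    # required by the Target interface.
    def expected_function(self) -> int:
        return int(self.unexpected_function())

Since no adaptee instance needs to be wrapped, the client can use it directly: client_function(ClassAdapter()) also returns "The number was 4".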

Bridge

A bridge puts a layer between an interface (or abstract class) and an implementation (or concrete class), so that they can vary independently. In some ways it is like adding an interface above an interface, which means that the interface expected by the client can vary independently from the interface exposed by classes that do the heavy lifting.

Implementation example

Suppose we again have a client function that expects an interface:

from typing import Protocol


class Abstraction(Protocol):
    def operation(self) -> int:
        ...


def client_function(obj: Abstraction) -> str:
    integer = obj.operation()
    return f"The number was {integer + 1}"

Suppose we now have a number of implementation classes that adhere to a common but different interface

class Implementor(Protocol):
    def create_integer(self) -> int:
        ...


class ConcreteImplementorA:
    def create_integer(self) -> int:
        return 4


class ConcreteImplementorB:
    def create_integer(self) -> int:
        return 5

We can now create an implementation of the Abstraction interface expected by the client that creates a bridge to the Implementor :

class RefinedAbstraction:
    def __init__(self, implementor: Implementor) -> None:
        self._implementor = implementor

    def operation(self) -> int:
        return self._implementor.create_integer() + 5

We can then use this object with the client_function :

>>> impl_a = ConcreteImplementorA()
>>> concrete_a = RefinedAbstraction(impl_a)
>>> print(client_function(concrete_a))
The number was 10
>>> impl_b = ConcreteImplementorB()
>>> concrete_b = RefinedAbstraction(impl_b)
>>> print(client_function(concrete_b))
The number was 11

Advantages and when to use

Adding an interface above an interface decreases coupling, since the abstraction the client depends on can be changed independently from the implementation. We can independently create more classes that adhere to the Implementor interface as well as new classes that adhere to the Abstraction interface.

The bridge is very similar to the adapter, but it is typically introduced in a design phase, whereas an adapter is usually created after the fact.

Disadvantages

Due to the similarity with the adapter, a bridge has the same disadvantages of potentially decreased performance and increased complexity of the code base.

Additional notes

In the book the Abstraction is not a pure interface, but already contains some implementation code like a constructor with a link to the Implementor . This means the bridge is hard-coded at this level. Extension of this class then happens through inheritance.
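
A minimal sketch of that book-style variant (the class names are my own), using abc.ABC so the abstraction already holds the link to the implementor and is extended through inheritance:

from abc import ABC, abstractmethod


class AbstractionBase(ABC):
    # The bridge to the implementor is hard-coded at this level.
    def __init__(self, implementor: Implementor) -> None:
        self._implementor = implementor

    @abstractmethod
    def operation(self) -> int:
        ...


class DoubledAbstraction(AbstractionBase):
    # Extension happens through inheritance.
    def operation(self) -> int:
        return 2 * self._implementor.create_integer()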

Composite

The composite pattern is one way to treat a complex combination of objects as a single object. The composite requires that the objects relate to each other as nodes in a tree-like hierarchy. The pattern ensures that the client can deal with a tree of any shape.

Implementation example

Suppose we have a client function that expects an object with a particular interface Component:

from typing import Protocol


class Component(Protocol):
    def operation(self) -> int:
        ...


def client_function(obj: Component) -> str:
    integer = obj.operation()
    return f"The number was {integer}"

We can create a composite that models a hierarchic structure but still adheres to the Component interface as follows:

from typing import List


class Composite:
    def __init__(self) -> None:
        self._children: List[Component] = []

    def add(self, component: Component) -> None:
        self._children.append(component)

    def remove(self, component: Component) -> None:
        self._children.remove(component)

    def operation(self) -> int:
        return sum(child.operation() for child in self._children)


class LeafA:
    def operation(self) -> int:
        return 1


class LeafB:
    def operation(self) -> int:
        return 2

The Composite class represents the key component that allows us to build a tree structure, as instances can refer to other instances of itself (additional layers in the tree) or base instances that don’t have children (in this case LeafA and LeafB ). We can build up a composite tree and use it in the client function as follows:

>>> tree = Composite()
>>> branch1 = Composite()
>>> branch1.add(LeafA())
>>> branch1.add(LeafB())
>>> branch2 = Composite()
>>> branch2.add(LeafB())
>>> tree.add(branch1)
>>> tree.add(branch2)
>>> print(client_function(tree))
The number was 5

The operation method will be called recursively to add up the result from all the leaf nodes. The client does not have to care about the structure of the composite; the behavior is fully controlled by the structure of the tree.

Advantages and when to use

Composite is convenient when you need a uniform way to deal both with simple and complex objects. In the book, the examples are primarily focused on graphical applications: a picture might be composed of sub-pictures, which in turn may be composed of primitive shapes like lines and rectangles.

Disadvantages

  • It can be difficult to reason about the behavior of the code since the structure of the tree may be dynamic.
  • There may be a tendency for different leaf classes to proliferate, which can make the design too general. This means type checking will be less useful.

Additional notes

  • You can also implement references from child classes to their parent class to make traversing the tree easier (a sketch follows after these notes).
  • In the example above, add and remove are only part of the Composite class, not of the Component interface. For this example, only operation is relevant to the interface, so leaves and composites are identical. However, one could envision a scenario where it is desired to dynamically update the tree, and then leaves and composite nodes no longer have compatible interfaces. It is possible to include the tree updating operations in the Component interface, but then they also need to be implemented on leaves, which is meaningless. How to best deal with this dilemma depends on the specifics of the problem.
  • The recursive operation may become computationally expensive if the tree grows in size. If this is the case, one should look into caching some results.
  • Children don’t need to be stored in a list, one could implement something similar with a dictionary.
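
A minimal sketch of the parent-reference idea mentioned above (my own illustration), which makes traversing the tree upwards possible:

from typing import List, Optional


class NodeWithParent:
    def __init__(self) -> None:
        self.parent: Optional["NodeWithParent"] = None
        self._children: List["NodeWithParent"] = []

    def add(self, child: "NodeWithParent") -> None:
        # Store the back-reference when the child is attached.
        child.parent = self
        self._children.append(child)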

Decorator

A decorator adds additional functionality to an object dynamically. Just like an adapter, a decorator is sometimes called a wrapper. But unlike an adapter, a decorator should not change the interface of the object; it only gives it additional responsibilities.

Implementation example

Suppose we have a client that expects an object with a particular interface:

from typing import Protocol


class Component(Protocol):
    def operation(self) -> int:
        ...


def client_function(obj: Component) -> str:
    integer = obj.operation()
    return f"The number was {integer + 1}"

We can create an implementation of Component :

class ConcreteComponent:
    def operation(self) -> int:
        return 5

The decorator should also adhere to the Component interface, but must be able to reference an existing Component :

class Decorator:
    def __init__(self, component: Component) -> None:
        self._component = component

    def operation(self) -> int:
        print("Do some additional work")
        return self._component.operation()

We can now use the decorator as follows:

>>> component = ConcreteComponent()
>>> decorated = Decorator(component)
>>> print(client_function(component))
The number was 6
>>> print(client_function(decorated))
Do some additional work
The number was 6

Advantages and when to use

Decorators allow objects to be extended without subclassing, so they are very flexible. Specific extension functionality implemented in decorators can be reused and applied to many different objects. The book gives the example of adding a border to a window in a graphical application. Decorators are also used extensively in Python web frameworks like Flask, where decorators register functions as the handlers for web pages.

Disadvantages

Decorators can typically be stacked, which leads to a lot of nested functionality of small components. This can make the program difficult to understand and debug. There may also be a performance cost.

Additional notes

Python has special syntax to apply decorators. Functions, classes and methods can be decorated using the following scheme:

@Decorator
class ConcreteComponent:
    ...

The downside of this syntax is that the decorator is applied to all objects of the class, so it is not as dynamic as the decorator design pattern intends.
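
For completeness, a minimal sketch of the more common function-decorator idiom (my own example, not tied to any framework); the @ syntax is just shorthand for wrapping the function at definition time:

from functools import wraps
from typing import Callable


def announce(func: Callable[[], int]) -> Callable[[], int]:
    @wraps(func)
    def wrapper() -> int:
        print("Do some additional work")
        return func()

    return wrapper


@announce  # equivalent to: operation = announce(operation)
def operation() -> int:
    return 5


print(operation())  # prints "Do some additional work", then 5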

Facade

A facade aims to provide a simplified interface to a complex subsystem. It aims to hide the functionality of related classes or modules and expose only the essentials to clients outside the subsystem. In a sense, this is the extension of the idea of public vs. private functionality at the level of multiple classes and/or modules. Classes inside the subsystem may be more coupled than would be desirable if there is no facade.

Implementation example

Suppose we have a number of classes that together form a subsystem

class ClassA:
    def method1(self) -> None:
        ...

    def method2(self) -> None:
        ...


class ClassB:
    def method3(self) -> None:
        ...

    def method4(self) -> None:
        ...

A facade may then expose some or a combination of the underlying functionality

class Facade:
    def __init__(self) -> None:
        self._subsystemA = ClassA()
        self._subsystemB = ClassB()

    def operation(self) -> None:
        self._subsystemA.method1()
        self._subsystemA.method2()
        self._subsystemB.method3()
        self._subsystemB.method4()

All clients should then make use of the Facade , instead of crafting custom interactions with ClassA or ClassB .

Advantages and when to use

Especially in large software projects, a facade can be helpful to reduce complexity and increase abstraction. It also decouples clients from the components in the subsystem, allowing the subsystem to evolve independent from any clients. The facade is pretty loosely defined in terms of implementation; most software projects make use of some kind of facade to hide complexity of low level objects.

Disadvantages

If the abstractions are not properly thought out in advance, different clients may require different facades to the same subsystem. This can lead to additional complexity and a pile-up of different interfaces.

Additional considerations

The objects in the subsystem should not depend on or refer to the facade; the link is only one-way.

Flyweight

The flyweight pattern is a technique to reduce an application’s memory footprint by sharing as much information as possible. Instead of instantiating multiple objects with the same data, flyweight objects are shared and reused to provide the illusion of many objects.

Implementation example

The flyweight pattern relies on a FlyweightFactory to manage the Flyweight objects. A client should always request flyweight objects through this factory; the factory is then in charge of either returning an existing object or creating and storing a new one:

from typing import Dict


class Flyweight:
    def __init__(self, intrinsic_state: str) -> None:
        self._intrinsic_state = intrinsic_state

    def operation(self, external_state: str) -> str:
        return f"{self._intrinsic_state}{external_state}"


class FlyweightFactory:
    def __init__(self) -> None:
        self._flyweights: Dict[str, Flyweight] = {}

    def get_flyweight(self, key: str) -> Flyweight:
        if key not in self._flyweights:
            self._flyweights[key] = Flyweight(key)
        return self._flyweights[key]

The FlyweightFactory is then used to access flyweight objects:

>>> factory = FlyweightFactory()
>>> flyweight_1 = factory.get_flyweight("a")
>>> flyweight_2 = factory.get_flyweight("b")
>>> flyweight_3 = factory.get_flyweight("a") # will not create a new object
>>> print(flyweight_1.operation("1"))
a1
>>> print(flyweight_2.operation("2"))
b2
>>> print(flyweight_3.operation("3"))
a3

In a real application, the get_flyweight function might be called create_<object> instead, which would obscure the fact that a new object may or may not actually be created.

Advantages and when to use

The flyweight should really only be used if memory constraints are a concern and the application calls for the use of a huge number of objects. The book provides the example of a text editor, where each character can be modeled as an object. Instead of storing each character with all of its formatting individually inside the objects, the characters are flyweights and their formatting is stored as external data in State objects.

Disadvantages

  • limited applicability.
  • reduced encapsulation. In order to use flyweights effectively, state that is not shared between all virtual instances of an object must be factored out and stored elsewhere as external state. In the example from the book, fonts and styling for characters was moved to external state.
  • potential thread safety issues if multiple threads access the same underlying object simultaneously.
  • potentially increased computational inefficiency. External state needs to be supplied to the operations defined on the flyweight in order to modify the behavior; this may require operations to be recomputed multiple times.

Proxy

A proxy is an object that acts as an intermediary between a client and a real object. It pretends to be the real object and should have the same interface.

Implementation example

A proxy can be used as an intermediate for an object that may use a lot of resources (memory, cpu, …). The proxy object can then delay using these resources until a specific operation needs it. For example, consider an object that reads a file and stores its data:

from __future__ import annotations


class RealSubject:
    def __init__(self, data: str) -> None:
        self._data = data

    def operate_on_data(self) -> None:
        print(self._data)

    @classmethod
    def from_file(cls, filename: str) -> RealSubject:
        with open(filename, "r") as f:
            data = f.read()
        return cls(data)

The data is really only necessary when operate_on_data is called, but the way the class is defined means that we need the data up front. We can create a proxy with an identical interface that delays the creation of the RealSubject until it is strictly necessary:

from typing import Optional


class ProxyObject:
    def __init__(self, filename: str) -> None:
        self._filename = filename
        self._real_obj: Optional[RealSubject] = None

    def operate_on_data(self) -> None:
        real_obj = self._real_obj
        if real_obj is None:
            real_obj = RealSubject.from_file(self._filename)
            self._real_obj = real_obj
        real_obj.operate_on_data()

The proxy object can then be passed around as if it were the real object, but it acts as intermediary.

from typing import Protocol


class ExpectedObject(Protocol):
    def operate_on_data(self) -> None:
        ...


def client_function(obj: ExpectedObject) -> None:
    obj.operate_on_data()

Then it can be used as follows:

>>> proxy = ProxyObject("big_file.txt")
>>> client_function(proxy)
... # file contents printed

Advantages and when to use

A proxy is mainly used for three purposes:

  • delaying the creation of expensive objects or execution of expensive operations.
  • hiding the fact that an object may exist remotely.
  • adding additional logic before and after calls to an object, for example authorization logic.

Disadvantages

Identical to the decorator: decreased performance and increased code complexity.

Additional notes

The implementation of a proxy is very similar to a decorator. However, the reason for using either is different. The decorator aims to add additional responsibilities to an object. The proxy aims to modify the way an object is accessed.

Behavioral patterns

Behavioral patterns are concerned with the way objects interact and how responsibilities are distributed. Whereas structural patterns provide a somewhat static description of how objects are related, behavioral patterns describe the dynamics and ways of communication.

Chain of Responsibility

In the Chain of Responsibility pattern, multiple linked objects may handle a request. If the first object in the chain does not handle a request, it passes it along the chain and so on until it arrives at the object that will handle the request. In case there is a return value, it is propagated back up the chain.

Example implementation

A client function expects an object with an interface that can handle a request:

from __future__ import annotations

from typing import Optional, Protocol


class Handler(Protocol):
    def handle_request(self, request: str) -> Optional[str]:
        ...


def client_function(
    request: str,
    handler: Handler,
) -> None:
    print(handler.handle_request(request))

We can implement a concrete handler that shows the chain of responsibility model as follows:

class ConcreteHandler:
    def __init__(
        self,
        request_strategy: Strategy,
        successor: Optional[Handler] = None,
    ) -> None:
        self._strategy = request_strategy
        self._successor = successor

    def handle_request(self, request: str) -> Optional[str]:
        if self._strategy.matches(request):
            return self._strategy.operation()
        elif self._successor is not None:
            return self._successor.handle_request(request)
        else:
            return None


class Strategy:
    def __init__(
        self,
        matching_request: str,
        to_return: str,
    ) -> None:
        self._matching_request = matching_request
        self._to_return = to_return

    def matches(self, request: str) -> bool:
        return request == self._matching_request

    def operation(self) -> str:
        return self._to_return

The key idea here is that our handler can optionally store a reference to a successor, which is also a handler. In handle_request , the handler first tries to handle the request. If this fails, it checks if the handler has a successor, and if so tries to pass the request along. If we’ve reached the end of the chain and the request has not been handled we return None .

In this example we also use a form of the strategy pattern (see further) to change the behavior of the ConcreteHandler . The Strategy object stored in the handler determines how the handler treats the request. Alternatively, the Chain of Responsibility can also be implemented using inheritance, by defining the initialization and the logic of passing the request along the chain in a parent class and creating multiple different concrete handler classes (see the sketch after the usage example below).

We can use our example as follows:

>>> chainlink_1 = ConcreteHandler(Strategy("request_a", "foo"))
>>> chainlink_2 = ConcreteHandler(Strategy("request_b", "bar"), chainlink_1)
>>> chainlink_3 = ConcreteHandler(Strategy("request_c", "baz"), chainlink_2)
>>> client_function("request_a", chainlink_3)
foo
>>> client_function("request_d", chainlink_3)
None
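
As noted above, the chain can also be built with inheritance instead of an injected strategy. A minimal sketch (the class names are my own illustration):

class BaseHandler:
    def __init__(self, successor: Optional[Handler] = None) -> None:
        self._successor = successor

    def handle_request(self, request: str) -> Optional[str]:
        # The logic for passing the request along lives in the parent class.
        if self._successor is not None:
            return self._successor.handle_request(request)
        return None


class FooHandler(BaseHandler):
    def handle_request(self, request: str) -> Optional[str]:
        if request == "request_a":
            return "foo"
        return super().handle_request(request)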

Advantages and when to use

It is one way to reduce coupling between sender and receiver, since the sender does not know which receiver will handle the request. The chain can be built up and modified at runtime, giving a lot of flexibility. In some ways it can be seen as a very dynamic if-else or try-except stack.

Disadvantages

  • Possible performance issues if the chain is long or the intermediate operations are complex. This is a similar concern as with the linked list data structure.
  • It may be difficult to deduce which object will handle the request.
  • The chain is only as robust as the weakest link. If one object in the chain handles the request incorrectly, the rest of the chain will likely function incorrectly.

Additional notes

The chain of responsibility can be used in conjunction with the composite structural pattern, where a parent (if stored in the children) can be used as the successor.

Command

The command pattern encapsulates a request or action as an object. This allows one to parametrize the request, and add additional functionality to it (such as an undo operation).

Example implementation

The command interface could be as simple as:

from typing import Protocol


class Command(Protocol):
    def execute(self) -> None:
        ...

We want a command to communicate with and invoke methods on a receiver object:

class Receiver:
    def do_something(self, arg: str) -> None:
        print(f"Receiver received {arg}")

A concrete command would store a reference to the receiver and perform do_something when execute is called:

class ConcreteCommand:
    def __init__(
        self,
        receiver: Receiver,
        argument: str,
    ) -> None:
        self._receiver = receiver
        self._argument = argument

    def execute(self) -> None:
        self._receiver.do_something(self._argument)

Finally, we need an object to work with, potentially store, and execute the commands, the Invoker :

from typing import List


class Invoker:
    def __init__(self) -> None:
        self._commands: List[Command] = []

    def add_command(self, command: Command) -> None:
        self._commands.append(command)

    def execute_commands(self) -> None:
        for command in self._commands:
            command.execute()

Putting it all together:

>>> invoker = Invoker()
>>> receiver = Receiver()
>>> invoker.add_command(ConcreteCommand(receiver, "foo"))
>>> invoker.add_command(ConcreteCommand(receiver, "bar"))
>>> invoker.execute_commands()
Receiver received foo
Receiver received bar

The Invoker and Command could be expanded to include undo operations.

Advantages and when to use

  • Decoupling: it’s easier to extend the system by creating more concrete commands without modifying the Invoker .
  • Adding extra functionality by default to operations like logging, undo, scheduling, … which is difficult to do with a simple function call.

Disadvantages

A lot of additional complexity which may not be necessary, and a potential explosion in the number of concrete command classes.

Additional notes

In Python, one does not necessarily need to create a custom object to implement the command pattern. Since functions are also objects, you can also pass them around, then call them when appropriate. The book notes that the command pattern is the OOP equivalent of callback functions.
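
A minimal sketch of that function-based approach (my own illustration), using functools.partial to bind the arguments up front so the commands become zero-argument callables:

from functools import partial
from typing import Callable, List


def do_something(arg: str) -> None:
    print(f"Receiver received {arg}")


# The "commands" are plain callables queued for later execution.
commands: List[Callable[[], None]] = [
    partial(do_something, "foo"),
    partial(do_something, "bar"),
]

for command in commands:
    command()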

Interpreter

The interpreter pattern provides a way to define a grammar and evaluate expressions based on it. A grammar is essentially a language of which you invent the rules. Rules are represented by classes. A valid expression can then be represented by composing objects of these rules classes. Interpreter is design-pattern-ception when it comes to Python. Python, the programming language, is text which can be parsed as expressions, then subsequently evaluated by the Python interpreter. The interpreter and grammar are written in C. The interpreter pattern therefore gives you a way to create a new language using another language.

Example implementation

Suppose we want to create a language to represent basic arithmetic. First, we represent an expression with an interface that just has an interpret method. The method takes a context variable, in this case a dictionary (but it does not need to be), which represents some global state for the interpreter.

from typing import Any, Dict, Protocol


class Expression(Protocol):
    def interpret(self, context: Dict[Any, Any]) -> Any:
        ...

Second, we should have two types of expressions: terminal expressions and non-terminal expressions. The terminal expressions represent the atoms of the language, something which cannot be broken down further and can only be interpreted literally. Non-terminal expressions represent combinations of expressions. They both have the same interface, but we can define them explicitly to differentiate between the types of concrete classes.

class TerminalExpression(Expression):
    ...


class NonTerminalExpression(Expression):
    ...

In our example, our terminal expressions would be numbers (for now let’s just define integers):

class Integer(TerminalExpression):
    def __init__(self, value: int) -> None:
        self._value = value

    def interpret(self, context: Dict[Any, Any]) -> int:
        return self._value

Our non-terminal expressions would be arithmetic operations like:

class Add(NonTerminalExpression):
    def __init__(
        self,
        left: Expression,
        right: Expression,
    ) -> None:
        self._left = left
        self._right = right

    def interpret(
        self,
        context: Dict[Any, Any],
    ) -> Any:
        return self._left.interpret(context) + self._right.interpret(context)


class Multiply(NonTerminalExpression):
    def __init__(
        self,
        left: Expression,
        right: Expression,
    ) -> None:
        self._left = left
        self._right = right

    def interpret(
        self,
        context: Dict[Any, Any],
    ) -> Any:
        return self._left.interpret(context) * self._right.interpret(context)

We can then express integer arithmetic with addition and multiplication by combining these classes:

>>> # representing 3 * 4 + 2
>>> expression = Add(Multiply(Integer(3), Integer(4)), Integer(2))
>>> print(expression.interpret({})) # there is no global context
14

This is of course a silly example, as it is a very verbose way to express simple concepts. Additionally, if one wanted to represent the language as a string so that one could write code in it, a parser would also be necessary.

Advantages and when to use

The interpreter pattern makes most sense for creating a domain specific language, where high flexibility is needed on the types of operations that need to be represented. An example would be regular expressions, which provide a condensed way to match patterns in strings.

Disadvantages

Writing a useful interpreter can be quite complex. There will also be a performance penalty associated with parsing and evaluating a complex hierarchy of objects, compared to hard-coding the logic in the lower level language.

Additional notes

The interpreter may create a very large number of objects, so it may be useful to combine with the flyweight pattern, especially for keeping track of the terminal expressions. The interpreter pattern is effectively a specialized form of the composite pattern.

Iterator

The iterator is a way to access elements of a collection sequentially without exposing how the elements are represented or how the collection is implemented. Typically, we think of looping over the elements of a list, but an iterator is more abstract than that. The elements do not even have to exist in memory, but can be generated on the fly as we loop over the elements. This idea can be used to represent data streams, which are never ending iterators.

Example implementation

Python already has a lot of built-in magic to make it very easy to create custom iterators. I’ll describe some of that further in additional notes. Here we will take a more first principles approach, which is not recommended for a real application.

First we define the interface an iterator must adhere to (according to the book):

from typing import List, Protocol, TypeVar

T = TypeVar("T", covariant=True)


class Iterator(Protocol[T]):
    def first(self) -> None:
        ...

    def next(self) -> None:
        ...

    def is_done(self) -> bool:
        ...

    def current_item(self) -> T:
        ...

To use our iterator, we can define a client function as follows:

def print_all(iterator: Iterator) -> None:
    iterator.first()
    while not iterator.is_done():
        print(iterator.current_item())
        iterator.next()

We can then implement different ways to iterate over a collection, for example traversing a list in the forward and backwards directions:

class ForwardListIterator(Iterator[T]):
    def __init__(self, lst: List[T]) -> None:
        self._lst = lst
        self._index = 0

    def first(self) -> None:
        self._index = 0

    def next(self) -> None:
        self._index += 1

    def is_done(self) -> bool:
        return self._index == len(self._lst)

    def current_item(self) -> T:
        return self._lst[self._index]


class BackwardListIterator(Iterator[T]):
    def __init__(self, lst: List[T]) -> None:
        self._lst = lst
        self._index = len(lst) - 1

    def first(self) -> None:
        self._index = len(self._lst) - 1

    def next(self) -> None:
        self._index -= 1

    def is_done(self) -> bool:
        return self._index == -1

    def current_item(self) -> T:
        return self._lst[self._index]

We can then use the iterators as follows:

>>> lst = [1, 2, 3, 4]
>>> forwards = ForwardListIterator(lst)
>>> print_all(forwards)
1
2
3
4
>>> backwards = BackwardListIterator(lst)
>>> print_all(backwards)
4
3
2
1

Advantages and when to use

The main advantage of the iterator is separation of concerns. An aggregate does not need to be concerned about how its elements should be returned in sequence. By defining custom iterators, we can iterate over the same collection simultaneously using different traversal strategies as in the example above.

Disadvantages

The iterator makes the most sense for linear data structures and less so for e.g. tree-like data structures. Writing custom iterators as in the example above introduces unnecessary complexity and a potential performance hit.

Additional notes

Iterator is a protocol which already exists in the standard library. To implement a Python native iterator, only two “dunder” methods must be implemented: __next__ and __iter__ . The __next__ method is a concatenation of the next , is_done , and current_item methods from our example above: calling __next__ immediately returns the current item and changes the internal state of the iterator to go to the next item in the next iteration. If there are no more elements, a StopIteration exception is raised, which is used to break out of the loop.

We can implement our BackwardIterator more simply as:

from __future__ import annotations

from typing import Iterator, List


# T is the TypeVar defined earlier.
class BackwardIterator(Iterator[T]):
    def __init__(self, lst: List[T]) -> None:
        self._lst = lst
        self._index = len(lst) - 1

    def __iter__(self) -> BackwardIterator:
        return self

    def __next__(self) -> T:
        if self._index == -1:
            raise StopIteration
        item = self._lst[self._index]
        self._index -= 1
        return item

The advantage is that we can now simply use a Python for loop on the iterator:

>>> lst = [1, 2, 3, 4]
>>> iterator = BackwardIterator(lst)
>>> for i in iterator:
... print(i)
...
4
3
2
1

This is because when you use the Python for loop, Python will call the __iter__ method on the object you try to iterate over, and then continue to call __next__ until a StopIteration is raised. An iterator typically just returns itself in the __iter__ method, but there are also other objects, e.g. custom collections, that implement __iter__ to return a default iterator for iterating over the object. These objects are called Iterable. This is why you can use the for loop syntax directly on default Python collection objects like lists, dictionaries and sets. The details are wonderfully explained in this Real Python article.
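
A minimal sketch of such an Iterable (my own illustration): a custom collection whose __iter__ returns a generator, so it can be used directly in a for loop.

from typing import Iterator, List


class NumberCollection:
    def __init__(self, numbers: List[int]) -> None:
        self._numbers = numbers

    def __iter__(self) -> Iterator[int]:
        # A generator is itself an iterator, so this satisfies Iterable.
        yield from self._numbers


for n in NumberCollection([1, 2, 3]):
    print(n)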

Mediator

When many objects need to communicate with each other, the network of references can become very complex. A mediator is an object that serves as a central object to receive and pass messages between objects. Typically objects that act as mediators are also called “managers”, “directors” or “controllers”.

Example implementation

Suppose we have many objects that send messages to each other. These we call “colleagues”. Each object keeps track of the messages it receives. In order to send a message to all other colleagues but avoid storing a reference to them in every colleague, it is more convenient to pass through a mediator and store a single reference:

from __future__ import annotations

from typing import List


class Colleague:
    def __init__(self, mediator: Mediator) -> None:
        self._messages: List[str] = []
        self._mediator = mediator

    @property
    def messages(self) -> List[str]:
        return self._messages

    def send_message(self, message: str) -> None:
        self._mediator.send_message(message=message, sender=self)

    def receive_message(self, message: str) -> None:
        self._messages.append(message)

Colleagues don’t store references to other colleagues but pass on their message to the mediator. It is then the job of the mediator to call receive_message on the other colleagues. This Mediator can be implemented as follows:

class Mediator:
    def __init__(self) -> None:
        self._colleagues: List[Colleague] = []

    def send_message(self, message: str, sender: Colleague) -> None:
        for colleague in self._colleagues:
            if colleague is not sender:
                colleague.receive_message(message)

    def add_colleague(self, colleague: Colleague) -> None:
        self._colleagues.append(colleague)

The mediator stores a list of references to colleagues, which can be updated with add_colleague . When send_message is called, the mediator will go through the list of colleagues and call receive_message , except on the sender. To demonstrate our set-up:

>>> mediator = Mediator()
>>> colleague_1 = Colleague(mediator)
>>> colleague_2 = Colleague(mediator)
>>> colleague_3 = Colleague(mediator)
>>> mediator.add_colleague(colleague_1) # register colleagues with mediator
>>> mediator.add_colleague(colleague_2)
>>> mediator.add_colleague(colleague_3)
>>> colleague_1.send_message("Hello from 1")
>>> colleague_2.send_message("Hello from 2")
>>> print(colleague_1.messages)
['Hello from 2']
>>> print(colleague_2.messages)
['Hello from 1']
>>> print(colleague_3.messages)
['Hello from 1', 'Hello from 2']

In this example all the colleagues are instances of the same class, but this does not have to be the case.

Advantages and when to use

The mediator pattern is useful to convert many-to-many relationships, which are very hard to reason about, into one-to-many relationships, which are much easier to understand. If you have a complex system with a convoluted network of interactions between objects, it may be beneficial to centralize control. An additional advantage is that colleagues become decoupled from each other.

Disadvantages

The centralization of interaction protocols between objects into the mediator can make this class complex, monolithic, and hard to maintain.

Memento

A memento is an object that stores snapshots of (part of) another object’s state. The memento can then be used later to restore the object to a previous state. Mementos can be useful for undo/redo operations and tracking history.

Example implementation

Suppose we have an object for which we want to keep mementos of the state. The book calls this the Originator. In our example, the only state being stored in the originator is an integer:

from __future__ import annotations


class Originator:
    def __init__(self, state: int) -> None:
        self._state = state

    def set_state(self, state: int) -> None:
        self._state = state

    def get_memento(self) -> Memento:
        return Memento(self._state)

    def set_memento(self, memento: Memento) -> None:
        self._state = memento.get_state()

    def print_state(self) -> None:
        print(self._state)

The memento is a simple object that just implements a getter:

class Memento:
    def __init__(self, state: int) -> None:
        self._state = state

    def get_state(self) -> int:
        return self._state

The originator is not responsible for keeping track of mementos; this is the responsibility of the Caretaker. We can achieve this with a simple wrapper over a list:

from typing import List


class Caretaker:
    def __init__(self) -> None:
        self._mementos: List[Memento] = []

    def save_snapshot(self, snapshot: Memento) -> None:
        self._mementos.append(snapshot)

    def get_snapshot(self, index: int) -> Memento:
        return self._mementos[index]

The caretaker should not have to know anything about the implementation of Memento and should not have to look inside it. The book notes that ideally only Originator may know about the internals of Memento; however, this is very difficult to actually enforce in Python. We can use our example as follows:

>>> originator = Originator(3)
>>> caretaker = Caretaker()
>>> caretaker.save_snapshot(originator.get_memento())
>>> originator.set_state(4)
>>> caretaker.save_snapshot(originator.get_memento())
>>> originator.print_state()
4
>>> originator.set_memento(caretaker.get_snapshot(0))
>>> originator.print_state()
3

Advantages and when to use

The memento pattern enhances encapsulation. The originator can have a complex internal state and yet doesn’t need to expose it. Instead, a black box is sent out to store the necessary state elsewhere. By offloading the responsibility of tracking state history to other objects, the originator object can be kept as simple as possible.

Disadvantages

Concerns may surface around memory usage, especially if there is a lot of state to store. A possible mitigation strategy recommended by the book is to use mementos to store changes in state, instead of snapshots of state. This does mean that in order to return to an earlier point in history, one must traverse and apply all the changes until the desired point is reached. An additional concern is that complexity is offloaded from the originator to the caretaker.
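
As a rough sketch of that delta-based approach (the DeltaMemento class and restore function below are invented for illustration, not from the book), each memento records only the change applied to the integer state, and restoring a state means replaying the deltas:

from typing import List


class DeltaMemento:
    """Stores a change in state rather than a full snapshot."""

    def __init__(self, delta: int) -> None:
        self._delta = delta

    def get_delta(self) -> int:
        return self._delta


def restore(initial_state: int, deltas: List[DeltaMemento], upto: int) -> int:
    """Replay all recorded changes up to and including index `upto`."""
    state = initial_state
    for memento in deltas[: upto + 1]:
        state += memento.get_delta()
    return state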

Additional notes

  • The book recommends that the originator sees a “wide” interface to the memento, while the caretaker sees a “narrow” interface, i.e. it should not be able to access the internal state. As mentioned, it may be difficult to enforce this in Python, but you can try to approximate the rule with static type analysis using different Protocol definitions (see the sketch after this list).
  • In the example above we use a list to track the mementos. The caretaker can be implemented in any number of ways, and the best way depends on your application. For example, for undo/redo, a stack may be more appropriate.
  • There seems to be no clear guidance on how the caretaker and originator should relate to each other. In our example above they are independent and it is assumed something else manages the communication between them. One could also imagine that the caretaker stores a reference to the originator, and calls originator methods directly in order to create and store snapshots. Finally, a reference to a caretaker could also be stored in the originator, which would be equivalent to the originator managing its own history.
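
Here is a minimal sketch of how the wide/narrow split could be approximated with static typing; the NarrowMemento and WideMemento protocol names are my own, not from the book. The caretaker is annotated with the narrow protocol, so a type checker will flag any attempt to call get_state on a snapshot it holds:

from typing import List, Protocol


class NarrowMemento(Protocol):
    """What the caretaker sees: an opaque token with no accessible state."""
    ...


class WideMemento(Protocol):
    """What the originator sees: full access to the stored state."""
    def get_state(self) -> int:
        ...


class Caretaker:
    def __init__(self) -> None:
        self._mementos: List[NarrowMemento] = []

    def save_snapshot(self, snapshot: NarrowMemento) -> None:
        self._mementos.append(snapshot)

    def get_snapshot(self, index: int) -> NarrowMemento:
        return self._mementos[index]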

Observer

The observer pattern is also known as the pub-sub pattern. It is one way to organize the communication between objects whose state needs to remain in sync, i.e. the state in one object needs to change when the state in another object is changed.

Example implementation

There are two elements to the observer pattern: observers and subjects. Observers can subscribe to a subject. Whenever the state in the subject changes, all the subscribed observers are notified.

Suppose we have a subject class storing a counter. Whenever the counter is incremented, we would like to notify any observers. The subject could be implemented as follows:

from __future__ import annotations

from typing import Protocol


class Subject:
    def __init__(self) -> None:
        self._counter = 0
        self._observers: set[Observer] = set()

    def increment(self) -> None:
        self._counter += 1
        self.notify()

    def get_counter(self) -> int:
        return self._counter

    def subscribe(self, observer: Observer) -> None:
        self._observers.add(observer)

    def unsubscribe(self, observer: Observer) -> None:
        self._observers.remove(observer)

    def notify(self) -> None:
        for observer in self._observers:
            observer.update()


class Observer(Protocol):
    def update(self) -> None:
        ...

We can then implement different classes that adhere to the Observer interface and thus can subscribe to the subject. For example, an observer that has its own internal state that depends on the value of the subject’s counter:

class ConcreteObserver:
    possible_states = ("happy", "sad", "angry")

    def __init__(self, subject: Subject) -> None:
        self._subject = subject
        self._state = self.possible_states[0]

    def update(self) -> None:
        new_index = self._subject.get_counter() % 3
        self._state = self.possible_states[new_index]
        print(f"The observer's state changes to: {self._state}")

In this implementation, the observer stores a reference to the subject and, when notified of an update, pulls the state of the subject to update its own state.

We can now use our subject and observer as follows:

>>> subject = Subject()
>>> observer = ConcreteObserver(subject)
>>> subject.subscribe(observer)
>>> subject.increment()
The observer's state changes to: sad
>>> subject.increment()
The observer's state changes to: angry

Advantages and when to use

The observer pattern promotes loose coupling among observers and, to a lesser extent, between subject and observers.

Observers don’t know anything about other observers, but through the subject they can still stay in sync. The book uses as a motivating example a graphical application with multiple widgets that rely on the same data back-end. If the data is modified by interacting with one widget, all the other widgets should be updated accordingly.

The subject doesn’t care about the implementation of the observers, and just has to worry about its internal state and exposing the relevant bits. This means that we can easily extend the system in the future with additional observers.

Only the observers become somewhat tightly coupled to the subjects they observe.

Disadvantages

In our example above, we manually called increment and have a good overview of which observers are subscribed to the subject. However, an observer does not know how many other observers there are. Typically in real applications, it is the observers that modify the state of the subject. This update could be very costly depending on the type and number of other observers, but the updating observer is blind to this.

Additionally, the observers are only notified that there has been a change in the subject, but not what has changed. Depending on what the observer needs to do, it could be costly for the observer to figure out what the change was. This can be partially resolved by defining different “events” on the subject, and allowing observers to subscribe only to specific events.
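
A minimal sketch of this idea, where observers register for named events (the EventSubject class and the per-event bookkeeping are invented for this illustration):

from collections import defaultdict
from typing import Dict, Protocol, Set


class Observer(Protocol):
    def update(self) -> None:
        ...


class EventSubject:
    """Subject that lets observers subscribe to specific named events."""

    def __init__(self) -> None:
        self._observers: Dict[str, Set[Observer]] = defaultdict(set)

    def subscribe(self, event: str, observer: Observer) -> None:
        self._observers[event].add(observer)

    def notify(self, event: str) -> None:
        # Only observers registered for this particular event are updated.
        for observer in self._observers[event]:
            observer.update()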

Additional notes

There are many variations on the observer pattern that all have their advantages and disadvantages:

  • In our example above, notify is called inside the subject when the state is updated. However, one could also imagine that notify is called from elsewhere, e.g. from an observer that updates the subject state. The advantage of the latter is that multiple state change operations may happen before a notify is required, so it can be more performant. The downside is that it is easier to forget to call notify, so you can more easily end up in an unsynced situation.
  • The update method could be altered to take some arguments that indicate what has changed, which can reduce the amount of work observers have to do to figure out what changed in the subject. The book calls this the push model, in contrast to the pull model in the example above (see the sketch after this list). For example, in a scenario where observers are subscribed to multiple subjects, it is helpful to pass a reference to the subject on which notify was called into the update method. The downside of the push model is that it increases coupling, because the subject has to be aware of the needs of its observers.
  • If there are many interrelated objects and subjects, it may be helpful to combine the observer pattern with the mediator pattern.
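
A rough sketch of the push model, where update receives the subject that changed plus a hint about what changed (the changed_field argument and the class names are assumptions made for this sketch, not something prescribed by the book):

from __future__ import annotations

from typing import Protocol, Set


class PushObserver(Protocol):
    def update(self, subject: PushSubject, changed_field: str) -> None:
        ...


class PushSubject:
    def __init__(self) -> None:
        self._counter = 0
        self._observers: Set[PushObserver] = set()

    def subscribe(self, observer: PushObserver) -> None:
        self._observers.add(observer)

    def increment(self) -> None:
        self._counter += 1
        # Push: tell each observer which subject changed and what changed.
        for observer in self._observers:
            observer.update(self, "counter")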

State

The state pattern is a way to change the behavior of an object through its internal state. The state itself is represented by different classes that adhere to the same interface.

Example implementation

We have a Context class whose behavior changes with its internal state.

from __future__ import annotations


class Context:
    def __init__(self, state: State) -> None:
        self.state = state

    def set_state(self, state: State) -> None:
        self.state = state

    def request(self) -> None:
        self.state.handle()

In this case, State has a minimal interface, requiring only a handle method to be implemented.

from typing import Protocol


class State(Protocol):
    def handle(self) -> None:
        ...

We can then make two different states, for example representing “on” and “off”.

class StateOn:
    def handle(self) -> None:
        print("State is on")


class StateOff:
    def handle(self) -> None:
        print("State is off")

We can then use our context and states as follows:

>>> on = StateOn()
>>> off = StateOff()
>>> context = Context(on)
>>> context.request()
State is on
>>> context.set_state(off)
>>> context.request()
State is off

Advantages and when to use

The main advantage of using the state pattern is that one can avoid long and complex conditional logic. The branches of conditional logic are factored out into their own class. This should make it easier to extend the code with additional states.

The state pattern also encodes state into a type, which provides additional safety. Typically, all state of the context object is represented by a single variable, so reaching inconsistent states due to erroneous combinations of variables is not possible.

The state pattern is a very basic example of composition/polymorphism. Its application makes the most sense if there are lots of states.

Disadvantages

You get more classes which may bloat and complicate the codebase. While the code for a single state is now localized in one class and easier to understand, it may be harder to understand how the Context class will behave given the many State classes and the transitions between them.

Additional notes

  • The state pattern does not specify how state objects should be kept track of and who is responsible. If state objects don’t have internal state themselves, and all state is encoded in the type, then they can be reused through the flyweight pattern.
  • The state pattern does not specify how state transitions should be done and who is responsible. State transition logic can be implemented in Context, in an external object that holds a reference to Context (in the example above we just use the REPL), or in the state objects themselves. The book suggests that the latter may be preferable if the possible state transitions are known, although this does introduce dependencies between state classes (a sketch of this option follows below).
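
A minimal sketch of that last option, where the state objects decide the next state themselves; here handle is given a reference to the context, which is a deviation from the example above and my own choice:

from __future__ import annotations

from typing import Protocol


class State(Protocol):
    def handle(self, context: Context) -> None:
        ...


class Context:
    def __init__(self, state: State) -> None:
        self.state = state

    def request(self) -> None:
        self.state.handle(self)


class StateOn:
    def handle(self, context: Context) -> None:
        print("State is on")
        context.state = StateOff()  # the state object selects the next state


class StateOff:
    def handle(self, context: Context) -> None:
        print("State is off")
        context.state = StateOn()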

Strategy

The strategy pattern aims to encapsulate algorithms and/or behaviors and expose them through identical interfaces, so that clients can use them interchangeably. Similar to the state pattern, it allows us to change the behavior of an object based on the selected strategy.

Example implementation

The class structure is nearly identical to the state pattern. We have a Context class of which we want to modify behavior through different strategies:

from __future__ import annotations


class Context:
    def __init__(self, strategy: Strategy) -> None:
        self.strategy = strategy

    def run(self) -> None:
        self.strategy.run()

The Strategy interface requires the run method to be implemented:

from typing import Protocol


class Strategy(Protocol):
    def run(self) -> None:
        ...

Two different example strategies:

class StrategyA:
    def run(self) -> None:
        print("Context is using strategy A")


class StrategyB:
    def run(self) -> None:
        print("Context is using strategy B")

We can then use our context and strategy as follows:

>>> context_1 = Context(StrategyA())
>>> context_2 = Context(StrategyB())
>>> context_1.run()
Context is using strategy A
>>> context_2.run()
Context is using strategy B

Advantages and when to use

Similar to the state pattern, the strategy pattern avoids long conditional statements and replaces them with polymorphism. This also makes the code more extendable. If the strategies encapsulate a reusable algorithm, this can also promote code reuse, as multiple clients can use the same strategies. Just like the state pattern, the strategy can be swapped out at run time to give objects very dynamic behavior.

Disadvantages

Similar to the state pattern, we may get a proliferation of classes, which may complicate the code base. Because the strategies are spread over many separate classes, it may not be clear to clients which strategies are available.

Additional notes

  • The strategy may need data from the context in order to work. There are multiple options to achieve this. First, the context can pass this data into the methods of the strategy, which keeps the strategies maximally decoupled from the context. However, because all strategies must share the same interface, we may have to pass data to strategies that don’t need it, just to satisfy the needs of another strategy (see the sketch after this list). Second, the context can pass itself to the strategy methods. However, this requires that the strategies become aware of the context interface.
  • The strategy pattern is often used to illustrate composition versus inheritance. Instead of defining different behaviors in sub-classes (inheritance), the strategy pattern uses composition to factor out unique behavior.
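
A sketch of the first option, where the context passes its data into the strategy method; the value attribute and the two strategy classes are invented for this illustration:

from typing import Protocol


class Strategy(Protocol):
    def run(self, value: int) -> None:
        ...


class Context:
    def __init__(self, strategy: Strategy) -> None:
        self.strategy = strategy
        self._value = 42

    def run(self) -> None:
        # The context decides which of its data to hand to the strategy.
        self.strategy.run(self._value)


class DoublingStrategy:
    def run(self, value: int) -> None:
        print(f"Twice the context value is {2 * value}")


class IndifferentStrategy:
    def run(self, value: int) -> None:
        # Must accept `value` to satisfy the shared interface, even though
        # this particular strategy has no use for it.
        print("This strategy ignores the context data")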

Template method

The template method is a method in a parent or base class that calls other abstract methods that must be implemented in child classes.

Example implementation

We define an algorithm in an abstract class as a template method:

from abc import ABC, abstractmethod


class Abstract(ABC):
    def template_method(self, x: int) -> int:
        x = x**2 + x + 1
        y = self.operation_1(x)
        z = self.operation_2(y)
        return z

    @abstractmethod
    def operation_1(self, value: int) -> int:
        raise NotImplementedError

    @abstractmethod
    def operation_2(self, value: int) -> int:
        raise NotImplementedError

The template method relies on two operations which must be implemented in child classes, for example:

class Concrete(Abstract):
    def operation_1(self, value: int) -> int:
        return 5 * value

    def operation_2(self, value: int) -> int:
        return value + 2

We can then use our “algorithm” as follows:

>>> concrete = Concrete()
>>> concrete.template_method(2)
37

By implementing other concrete classes that inherit from Abstract, we can reuse the implementation of the algorithm and get different behavior. This example is silly, but if we have a very long algorithm that just needs to be repeated a few times with minor differences, it may make sense to factor out the commonality into a template method, and the differences into custom operations implemented in child classes.

Advantages and when to use

  • It is a convenient and easy way to factor out common behavior and eliminate duplication with minimal boilerplate.
  • If base classes do a lot of heavy lifting, it can be easy to implement new child classes with very little new code.
  • It fits well when there is a clear “is-a” relationship between classes.
  • In compiled languages, inheritance can be more performant than composition, because functions can be inlined at compile time.

Disadvantages

It comes with all the disadvantages of using inheritance for anything beyond pure interface inheritance:

  • It is more difficult to understand where behavior and state are defined, especially if (as is often the case) there are multiple layers of inheritance.
  • The template is inflexible. A change in the template requires a change in all child classes.
  • Child classes are fixed in a potentially non-existent hierarchical relationship.
  • Base classes tend to become bloated to accommodate diverging needs of child classes.

Additional notes

The book notes that instead of using abstract methods one can also implement “operation” methods on the base class, which can be overridden by child classes. The implementation on the base class then serves as a default.
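
A short sketch of that variant, using hook methods with default implementations instead of abstract methods (the class names here are invented):

class Base:
    def template_method(self, x: int) -> int:
        x = x**2 + x + 1
        y = self.operation_1(x)
        return self.operation_2(y)

    def operation_1(self, value: int) -> int:
        # Default implementation: a pass-through that child classes may override.
        return value

    def operation_2(self, value: int) -> int:
        return value


class OverridesOnlyOne(Base):
    # Only one hook is overridden; operation_1 keeps its default behavior.
    def operation_2(self, value: int) -> int:
        return value + 2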

Visitor

The visitor pattern describes a way of operating on elements of a potentially heterogeneous collection or structure of objects. It can be seen as a generalization of an Iterator.

Example implementation

Consider multiple classes of which we create multiple instances and store them in one collection. The one shared trait these objects must have is that they “accept” a visitor, i.e. they adhere to a very simple Element interface:

from __future__ import annotations

from typing import Protocol


class Element(Protocol):
    def accept(self, visitor: Visitor) -> None:
        ...


class ConcreteElementA:
    def accept(self, visitor: Visitor) -> None:
        visitor.visit_concrete_element_a(self)

    def operation_a(self) -> int:
        return 1


class ConcreteElementB:
    def accept(self, visitor: Visitor) -> None:
        visitor.visit_concrete_element_b(self)

    def operation_b(self) -> int:
        return 5

A Visitor is then an interface that declares a visit method for each concrete element class:

class Visitor(Protocol):
    def visit_concrete_element_a(self, element: ConcreteElementA) -> None:
        ...

    def visit_concrete_element_b(self, element: ConcreteElementB) -> None:
        ...

We can then implement concrete visitors that “do something” when visiting each element in the structure/collection. For example, a visitor that keeps a running sum of the values returned by operation_a and operation_b:

class ConcreteVisitor:
    def __init__(self) -> None:
        self._counter = 0

    def get_counter(self) -> int:
        return self._counter

    def visit_concrete_element_a(self, element: ConcreteElementA) -> None:
        self._counter += element.operation_a()

    def visit_concrete_element_b(self, element: ConcreteElementB) -> None:
        self._counter += element.operation_b()

We can now use our elements and visitor as in the following example:

>>> collection = [ConcreteElementA(), ConcreteElementA(), ConcreteElementB()]
>>> visitor = ConcreteVisitor()
>>> for element in collection:
...     element.accept(visitor)

>>> visitor.get_counter()
7

Advantages and when to use

The visitor pattern should mainly be used when a composite structure or collection of objects has elements with very different interfaces, yet you want to perform different operations on the whole structure. In this case, the visitor factors out thematically related operations into a single class — a concrete visitor — with the only drawback that all elements must implement the very minimal accept method. This keeps the element classes clean and is in line with the single responsibility principle. Adding a new visitor is easy; adding a new element, however, is hard (see disadvantages). So the visitor pattern should mainly be used when the structure and its composing elements are static, whereas the number of operations that need to be performed on the structure changes often.

Disadvantages

  • It can be tricky to implement a new element: if you have many different visitors, the Visitor interface needs to be extended, as well as all concrete visitor implementations.
  • Visitors and elements are tightly coupled, because the visitor needs to access internal state of the elements in order to do anything.

Additional notes

The visitor pattern is very flexible and there exist many variations:

  • In the example above, the visitor is responsible for accumulating state as it visits all the elements. It is also possible for the visitor to be stateless and for the accept method to return a result that is collected by the client (see the sketch after this list). However, this restricts the flexibility in implementing new visitors.
  • In the example above, the iteration is defined by the collection object (the list). However, it is also possible to put this responsibility on a custom iterator (if we need an alternative way of traversing the list, e.g. in reverse) or on the visitor itself. The last case is typically only relevant if the traversal depends on the result of a visit.
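
A sketch of the stateless variant (my own adaptation of the example above, not from the book), where accept and the visit methods return a value and the client accumulates the results:

from __future__ import annotations

from typing import Protocol


class ValueVisitor(Protocol):
    def visit_concrete_element_a(self, element: ConcreteElementA) -> int:
        ...

    def visit_concrete_element_b(self, element: ConcreteElementB) -> int:
        ...


class ConcreteElementA:
    def accept(self, visitor: ValueVisitor) -> int:
        return visitor.visit_concrete_element_a(self)

    def operation_a(self) -> int:
        return 1


class ConcreteElementB:
    def accept(self, visitor: ValueVisitor) -> int:
        return visitor.visit_concrete_element_b(self)

    def operation_b(self) -> int:
        return 5


class SumVisitor:
    """Stateless visitor: each visit returns a value instead of mutating state."""

    def visit_concrete_element_a(self, element: ConcreteElementA) -> int:
        return element.operation_a()

    def visit_concrete_element_b(self, element: ConcreteElementB) -> int:
        return element.operation_b()


collection = [ConcreteElementA(), ConcreteElementA(), ConcreteElementB()]
total = sum(element.accept(SumVisitor()) for element in collection)  # 7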

Conclusion

And now, after reading 12K words, you are obviously an OOP expert. Or not. The book notes in its own conclusion that it probably didn’t achieve much in terms of showing you how to architect software. It suggests the real value of studying design patterns lies in:

  • A shared vocabulary. Now if you hear builder, visitor, factory or strategy, you will have a vague idea of what is meant. You can use these terms to name your classes to indicate to colleagues (who also know design patterns) what the role and intent of a class is.
  • A better insight into existing systems. When you encounter a codebase, you may better appreciate certain design decisions and get a quicker grasp of how different components are related.
  • Quicker solutions to common design problems.
  • Tools for more effective refactoring. After you’ve built an iteration of software, you may realize what the system should actually look like and design patterns may be good targets to work towards.

Ultimately though, design patterns do not exist in isolation and are somewhat arbitrary constructs with an infinite number of variations, made from four basic concepts: classes, objects, composition and inheritance. All patterns can and should be combined into larger constructs. How to do that depends on the requirements of the application and the different tradeoffs you want to make in terms of performance, flexibility, readability, maintainability, and extensibility. There is no book that will teach you how to do that; it seems to be more of an art than a science.

The goal of this piece was to finally work my way through GoF and learn something from it. If I made grave errors, please correct me in the comments.

I work as a researcher, data scientist, and scientific software developer at VITO, mainly using Python. Previously, I also worked in enterprise as a data engineering consultant. Before that I was a researcher in applied physics and materials science. Check out my personal blog where I occasionally write about random things that interest me.
