Is OOP Relevant Today?

Does object-oriented programming still have a purpose?

Dick Dowdell
Nerd For Tech



Nearly every day, discussions emerge on Medium either criticizing or praising object-oriented programming: “Java is obsolete!” “Java is the way!” This article offers a pragmatic examination of OOP in 2024.

The term object-oriented programming was originally coined by Alan Kay. Kay was a member of the team at PARC that pioneered the graphical user interface that helped to make the modern Internet, PCs, tablets, and smartphones so useful — as well as some of the object-oriented languages with which we now implement those GUIs.

Once you cut through the emotional clutter surrounding OOP, what remains? Is OOP still an effective software development tool or is it just an obsolete programming fad? It is important for professionals to understand the answer!

SPOILER ALERT: The short answer is yes — we’re pro-OOP.

This Is What Alan Kay Said About OOP

“OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things.”

Let’s break that down into its constituent parts:

  1. “only messaging” — Means that objects communicate with one another only through messages. A message to an object is a request for the execution of a procedure, and will therefore change the state of one or more objects, return a value, or both. This highlights that objects should interact through clearly defined interfaces, using messages to request actions.
  2. “local retention and protection and hiding of state-process” — Refers to encapsulation and information hiding. Encapsulation is the bundling of data with the methods that operate on that data, or the restriction of direct access to some of an object’s components. Information hiding is a principle where the implementation details of a class are kept hidden from the users. The object only reveals operations that are safe and relevant to the user. This protects the object’s internal state and ensures that the object can be used without needing to understand the complexity inside.
  3. “extreme late-binding of all things” — Late binding means that the specific code to be invoked is determined at runtime instead of at build time. This allows for more flexible and dynamic behavior, where objects of different types can be used interchangeably if they offer the same external interface, even if they implement actions differently internally. Extreme late binding describes a system that maximizes this flexibility, allowing for a very dynamic and adaptable component model.

Kay’s description places emphasis on interaction between self-contained components through well-defined interfaces, keeping their internal processes and data private, and allowing for a high degree of flexibility and dynamism in how and when components can interact with each other. These principles can make software more modular, easier to maintain, and more flexible.
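
To make the three attributes concrete, here is a minimal TypeScript sketch. The names are illustrative, not drawn from any particular framework: one object communicates only through messages, hides its state, and can be swapped for any other object that understands the same messages.

```typescript
// "Only messaging": a message is an immutable request for an operation,
// not a call into another object's internals.
type Message =
  | { readonly kind: "deposit"; readonly amount: number }
  | { readonly kind: "balance" };

// "Local retention and protection and hiding of state-process": the balance
// is private, and the only way to affect it is through receive().
class Account {
  #balance = 0;

  receive(msg: Message): number {
    switch (msg.kind) {
      case "deposit":
        this.#balance += msg.amount;
        return this.#balance;
      case "balance":
        return this.#balance;
    }
  }
}

// "Extreme late-binding": the sender depends only on the message protocol,
// so any object that understands the same messages can stand in at runtime.
const target: { receive(msg: Message): number } = new Account();
console.log(target.receive({ kind: "deposit", amount: 100 })); // 100
console.log(target.receive({ kind: "balance" }));              // 100
```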

This Is What He Didn’t Say

One thing Kay did not mention was inheritance — a concept that has troubled many OOP programmers. His statement makes it clear that he doesn’t consider inheritance to be a hard requirement for object-oriented programming.

While subclassing enhances code reuse and polymorphism, it also comes with some downsides:

  1. Subclasses are tightly coupled to their parent classes.
  2. Inheritance hierarchies can make code difficult to understand and trace.
  3. Changes to a base class can easily break its subclasses.
  4. Overriding methods in subclasses can lead to confusion about which instance of the method is being called.
  5. Subclasses often rely on knowledge about the implementation details of their parent classes, which can break encapsulation.
  6. Changing a superclass may require extensive changes across many subclasses.
  7. Subclasses introduce additional states and behaviors that can make testing more complicated.

Though often stressed in OOP training, inheritance is not a core attribute of OOP — but rather a feature of some class-based object-oriented languages, like Simula, Smalltalk, and Java — and is too often used when composition or aggregation are more appropriate design choices.

Inappropriate or excessive use of inheritance can lead to unnecessarily complex designs that are less flexible and more difficult to understand and modify.
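
As a sketch of the alternative, consider a stack built by composition rather than by subclassing a collection class. The example is TypeScript, and the names are illustrative:

```typescript
// Composition: Stack *has* an array instead of *being* one, so callers can
// reach only the operations the stack chooses to expose.
class Stack<T> {
  private items: T[] = [];

  push(item: T): void {
    this.items.push(item);
  }

  pop(): T | undefined {
    return this.items.pop();
  }

  get size(): number {
    return this.items.length;
  }
}

const s = new Stack<number>();
s.push(1);
s.push(2);
console.log(s.pop(), s.size); // 2 1

// Had Stack extended Array instead, every Array method (splice, sort, index
// assignment, and so on) would leak into its public contract, coupling all
// callers to the parent class's implementation details.
```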

So far, reader responses have shown that class-based versus prototype-based OOP can be a contentious topic. See Appendix A, below, for a short discussion of the topic.

So, What Does OOP Mean for Us Today?

In the design, development, deployment, operation, and maintenance of software, complexity is a primary determinant of cost.

Software is inherently complex. The larger a system grows, or the more it has been changed, the more complex it tends to get.

Figure 1: Complexity = n(n-1)/2

As Figure 1 implies, application complexity increases dramatically as the application grows: among n components there are up to n(n-1)/2 potential connections, so the connections multiply far faster than the components themselves.
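
Plugging a few numbers into Figure 1’s formula shows how quickly connections outrun components:

```typescript
// Potential pairwise connections among n components: n(n-1)/2
const connections = (n: number): number => (n * (n - 1)) / 2;

console.log(connections(10));  // 45
console.log(connections(100)); // 4950: 10x the components, 110x the connections
```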

One of the most effective ways to manage software complexity is to use component models that:

  1. Make individual software components easier to understand and change.
  2. Insulate software components from changes to other components.
  3. Minimize the potential interference among teams working on different parts of the system.
  4. Simplify the delivery of new and updated software components.

Alan Kay and the team at PARC chose OOP for GUI development for the same reasons that OOP makes sense today for both concurrent and distributed application development. Composable microservices that meet Kay’s definition of OOP demonstrate the continuing value of his ideas.

Why Are Kay’s OOP Insights Important?

The fundamental problem is that building software is complicated. Responsive, networked, and distributed software that is affordable to create and maintain — and that works reliably — can be very complex stuff.

The problem becomes even more difficult when one or more teams of people must coordinate their efforts to ensure that all the parts of the resulting application work seamlessly together.

It would also be economically beneficial if the resulting applications were easy to test and to modify and deploy continuously — and it wouldn’t hurt if the applications were self-configuring, self-monitoring, fault tolerant, and able to scale horizontally in response to load.

When used to implement applications with composable component models, Kay’s three attributes of object-oriented programming give us tools to address those challenges.

“Only Messaging”

This principle is rooted in the concept that communication between different parts of a software system should be exclusively through the exchange of messages. This approach is a cornerstone of a number of programming paradigms, including object-oriented programming, the actor model, and microservices. The advantages of an “only messaging” approach include:

  1. Messaging allows for loose coupling between different parts of a system. Since components communicate through messages without needing to know each other’s inner workings, changes to one component do not directly impact others. This facilitates easier updates and maintenance.
  2. Systems designed around messaging can be scaled more easily. Components can be distributed across multiple servers or processes, and because they communicate through messages, the system can handle increased loads by adding more resources without significant changes to the architecture.
  3. Messaging inherently supports concurrency. Different parts of the system can process messages simultaneously, taking advantage of multi-core processors and distributed computing resources to improve performance.
  4. Because components communicate through well-defined messages, they can be implemented in different programming languages or technologies. This allows developers to choose the best tools for each component’s requirements.
  5. Messaging systems can be designed to be resilient to failures. If a component fails, messages can be retried or redirected to another instance of the component, ensuring that the system remains available. Because components are decoupled, a failure in one area is less likely to bring down the entire system.
  6. Messaging facilitates the integration of disparate systems. Different systems can communicate by exchanging messages, even if they are built with different technologies or run on different platforms.
  7. Messaging naturally supports both synchronous and asynchronous operations, where a sender can send a message and wait for a response — or either post a message or publish an event and not wait. This can lead to more efficient use of resources and improve the system’s overall responsiveness.
  8. In a messaging-based system, all data in messages is passed by value, not by reference. This makes messages effectively immutable and thread-safe and prevents senders and receivers from creating side effects through shared references.
  9. In a messaging-based system, messages flowing between components can be logged and monitored, providing visibility into the system’s behavior and performance. This can be invaluable for debugging, performance tuning, and understanding system interactions.
  10. Messaging systems support transactional messaging, allowing for complex operations involving multiple steps to be treated as a single, atomic transaction. This ensures data consistency and reliability, even in the face of partial failures.
  11. Components that communicate through messages can often be tested in isolation by simulating incoming messages and observing the responses. This simplifies the creation of unit and integration tests.

Adopting an “only messaging” approach can significantly enhance a system’s modularity, reliability, and scalability.
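
As a minimal illustration of the idea (not of any particular product), here is an in-process publish/subscribe bus in TypeScript. Each subscriber receives its own copy of the payload, so messages behave as values rather than shared references; the topic names are assumptions for the example.

```typescript
type Handler = (payload: unknown) => void;

class MessageBus {
  private handlers = new Map<string, Handler[]>();

  subscribe(topic: string, handler: Handler): void {
    const list = this.handlers.get(topic) ?? [];
    list.push(handler);
    this.handlers.set(topic, list);
  }

  publish(topic: string, payload: unknown): void {
    for (const handler of this.handlers.get(topic) ?? []) {
      // structuredClone (Node 17+, modern browsers) gives each subscriber
      // its own copy: pass-by-value, so no shared mutable state.
      handler(structuredClone(payload));
    }
  }
}

const bus = new MessageBus();
bus.subscribe("order.created", (p) => console.log("billing saw", p));
bus.subscribe("order.created", (p) => console.log("shipping saw", p));
bus.publish("order.created", { id: 42 });
```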

Figure 2: Messaging Strategies

Orchestrators can be the wiring that connects individual service objects by organizing messaging between them and by acting as circuit breakers to mitigate cascading error conditions. Orchestrators manage the failover, scaling, and self-configuring capabilities of service objects.

Figure 3: Message Orchestration

In the example above, messages M2 through M4 originate from Component 1 and are delivered to the target components (2, 3, and 4). The maps of component addresses are automatically shared among orchestrators.

When an orchestrator starts up, it builds a map of all service objects located within its own directories, and registers its own presence with all the other reachable orchestrators on the network — exchanging maps with them. Orchestrators are federated across a network and share state information with each other.

Figure 4: Message Routing

An orchestrator takes in messages addressed to a specific service class and version and directs them to the most performant instance of that specific service class.
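
A toy version of that routing rule might look like the following TypeScript. The latency metric, registry keys, and type names are assumptions for illustration; a real orchestrator would also handle failover, scaling, and map exchange as described above.

```typescript
// Illustrative types: a real orchestrator would discover these instances
// and receive performance metrics from them at runtime.
interface ServiceInstance {
  address: string;
  latencyMs: number; // e.g., a rolling average reported by the instance
  send(message: unknown): void;
}

class Orchestrator {
  // Keyed by service class and version, e.g. "inventory@2".
  private registry = new Map<string, ServiceInstance[]>();

  register(key: string, instance: ServiceInstance): void {
    const list = this.registry.get(key) ?? [];
    list.push(instance);
    this.registry.set(key, list);
  }

  // Route each message to the currently most performant instance.
  route(key: string, message: unknown): void {
    const instances = this.registry.get(key) ?? [];
    if (instances.length === 0) {
      throw new Error(`no reachable instance for ${key}`);
    }
    const best = instances.reduce((a, b) =>
      a.latencyMs <= b.latencyMs ? a : b
    );
    best.send(message);
  }
}
```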

“Local Retention and Protection and Hiding of State-Process”

These principles are fundamental to creating robust, maintainable, and secure software. Their many advantages, when applied to software development, include the following (a short sketch follows the list):

  1. Encapsulation keeps the internal state of an object hidden from the outside world, allowing access only through a well-defined interface. This hides the complexity of state management and protects the object’s integrity by preventing external entities from putting the object into an inconsistent state.
  2. Localizing state and its management logic within a component makes a system easier to understand and maintain. Changes to a component’s state management can be made with minimal impact on other parts of the system.
  3. Information hiding enhances modularity by allowing developers to design systems in which components are self-contained, with clear interfaces for interaction. This modularity supports the reuse of components across different parts of a system or in different projects.
  4. Protecting an object’s state from unauthorized access is a key aspect of software integrity. By hiding the state and exposing only a controlled interface, the system can ensure that only authorized actions are performed, reducing the risk of integrity vulnerabilities.
  5. Encapsulating state and its manipulation logic makes components easier to test and debug. Because state is managed locally within a component, developers can test the component’s behavior in isolation before integrating it into the larger system.
  6. With state and behavior closely managed and encapsulated, the system is easier to scale, either by enhancing individual components or by adding instances of components that adhere to the established interfaces. Local retention and management of state supports both vertical and horizontal scaling strategies.
  7. Protecting the internal component state, and controlling every state transition through a defined interface, makes the system less prone to errors and unintended side effects, leading to more robust applications.
  8. Separation of concerns divides a system into distinct features that overlap in functionality as little as possible. This separation makes complexity easier to manage, as developers can focus on one aspect of the system at a time.
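
Here is a minimal TypeScript sketch of points 1, 4, and 7: the state is private, every transition goes through the interface, and the invariant can never be violated from outside. The thermostat and its bounds are illustrative.

```typescript
class Thermostat {
  #target = 20; // degrees Celsius; invisible outside this class

  setTarget(celsius: number): void {
    // Every state transition is validated here, so the invariant holds.
    if (celsius < 5 || celsius > 30) {
      throw new RangeError("target must be between 5 and 30 degrees C");
    }
    this.#target = celsius;
  }

  get target(): number {
    return this.#target;
  }
}

const t = new Thermostat();
t.setTarget(22);    // allowed
// t.#target = 500; // rejected at compile time: the state is unreachable
```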

“Extreme Late-Binding of All Things”

This involves delaying decisions on specific code execution until the last possible moment, usually at runtime rather than at build time. Modern advances in computing have enabled us to push module binding later and later:

  1. Composable components are dynamically loaded at runtime to extend an application’s functionality. This is a form of linking that allows applications to be highly extensible and customizable.
  2. Languages that use JIT compilation, like Java and C#, compile code at runtime — and languages like JavaScript (to which TypeScript compiles) are interpreted or JIT-compiled by their engines. This allows a form of dynamic linking in which the runtime can optimize the executable for the specific hardware it is running on. Method calls are conceptually late-bound: unless optimized, the call selects the class and the class selects the code to be executed.
  3. With microservices, linking takes on a new form. Services communicate over the network using lightweight protocols, effectively linking distributed components at runtime.

Throughout its evolution, the goal of module linking has remained consistent: to enable the construction of complex software from smaller, more manageable pieces. However, the methods and technologies used have evolved to offer greater flexibility, efficiency, and ease of use, reflecting broader trends in software development methodologies and technologies.
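
A minimal TypeScript sketch of late binding: which implementation runs is selected by data at runtime rather than wired in at build time. The format names and registry are assumptions for illustration.

```typescript
interface Exporter {
  export(data: object): string;
}

// A registry of implementations, resolvable by name at runtime. New formats
// can be registered without recompiling the callers.
const exporters: Record<string, () => Exporter> = {
  json: () => ({ export: (d) => JSON.stringify(d) }),
  csv: () => ({
    export: (d) =>
      Object.entries(d).map(([k, v]) => `${k},${v}`).join("\n"),
  }),
};

// The binding happens here, at the last possible moment.
function exportAs(format: string, data: object): string {
  const factory = exporters[format];
  if (!factory) throw new Error(`unknown format: ${format}`);
  return factory().export(data);
}

console.log(exportAs("csv", { id: 7, status: "ok" })); // id,7 / status,ok
```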

There are many advantages to applying extreme late-binding in software development:

  1. Late-binding allows software to be more flexible by making it easy to change how parts of the system interact without recompiling or sometimes even restarting the system. This adaptability is crucial in environments where requirements change frequently while systems still need to be highly available.
  2. Systems designed with extreme late-binding can alter their behavior at runtime based on user interaction, configuration, or external data. This capability enables applications to offer highly dynamic features — where components can be added, removed, or updated on the fly.
  3. Late-binding facilitates easier integration with other systems or components, as the specifics of those integrations can be determined at runtime. This is particularly useful in scenarios involving third-party APIs or services where the details might not be known until runtime.
  4. By decoupling components and deferring decisions about their interaction to runtime, systems can achieve higher levels of modularity and reusability. Components designed to interact through late-bound interfaces can be easily reused in different contexts or applications.
  5. Late-binding supports rapid prototyping and iterative development by allowing developers to make changes and see their effects immediately without extensive recompilation. This can significantly speed up the development process and facilitate experimentation.
  6. Software systems designed with late-binding in mind can be easier to maintain and evolve over time. Since the bindings between components are resolved at runtime, updating or replacing components can be done with minimal impact on the rest of the system.
  7. Late-binding enables higher degrees of customization and extensibility, as new behaviors can be added or existing behaviors updated at runtime.

Wrapping Up

Is OOP still an effective software development tool or is it just an obsolete programming fad? The answer is that OOP is not obsolete. If anything is true, it is that OOP is even more important in today’s world of distributed computing where effective component and communications models are crucial.

Building responsive, networked, distributed software that is affordable to create and maintain — and that works effectively and reliably — can be complicated stuff. Alan Kay has proven, over and over again, that using object-oriented programming — in the way that he and the team at PARC envisioned — can help you do it better.

If you are interested in learning more about how, there’s some suggested reading below.

If you found this article useful, a clap would let us know that we’re on the right track.

Thanks!

Suggested Reading:

Appendix A — Class-Based vs Prototype-Based OOP

Object-Oriented Programming (OOP) is a paradigm that uses objects (and, in many languages, classes) to structure software applications. It models concepts from the problem domain through mechanisms such as encapsulation, polymorphism, and inheritance.

The main difference in how OOP is implemented comes down to whether a language uses a class-based or prototype-based approach. The key distinctions are:

Class-based OOP

In class-based OOP, the fundamental way to define objects and their behaviors is through classes. A class acts as a blueprint for creating objects (instances), specifying the initial state (fields or attributes) and the implementations of behavior (methods or functions) the instances can have.

Characteristics:

  1. Inheritance — is achieved through a class hierarchy. A subclass can inherit from a superclass, extending or overriding functionality.
  2. Encapsulation — is enforced, allowing private, protected, and public access levels.
  3. Polymorphism — allows objects of different classes to be treated as objects of a common superclass.

Example languages: Java, Smalltalk, C#, Python (Python supports both paradigms but is primarily class-based).
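
For illustration, here is what those three characteristics look like in a class-based style, written in TypeScript (which borrows Java-like class syntax); the shapes are illustrative:

```typescript
// The class is the blueprint; encapsulation via access modifiers.
abstract class Shape {
  constructor(protected name: string) {}
  abstract area(): number;
  describe(): string {
    return `${this.name}: ${this.area().toFixed(2)}`;
  }
}

// Inheritance: subclasses extend the superclass and supply behavior.
class Circle extends Shape {
  constructor(private radius: number) {
    super("circle");
  }
  area(): number {
    return Math.PI * this.radius ** 2;
  }
}

class Square extends Shape {
  constructor(private side: number) {
    super("square");
  }
  area(): number {
    return this.side ** 2;
  }
}

// Polymorphism: both are treated as Shape; the right area() runs at runtime.
const shapes: Shape[] = [new Circle(1), new Square(2)];
shapes.forEach((s) => console.log(s.describe()));
```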

Prototype-based OOP

Prototype-based programming is a style of OOP in which objects are created without defining a class for them. Instead, a prototype object is created. New objects can then be created by cloning existing objects, which serve as prototypes. This model allows for more dynamic and flexible object creation and manipulation.

Characteristics:

  1. Cloning — objects are typically created by copying a prototype object.
  2. Inheritance — is achieved by linking an object to another object (prototype chaining), rather than through a class hierarchy.
  3. Encapsulation and Polymorphism — while these concepts can still be applied, they are achieved differently compared to class-based OOP, often with less emphasis on access control.

Example languages: JavaScript, TypeScript, Lua.
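
For contrast, here is the same delegation idea in a prototype-based style, using JavaScript’s Object.create from TypeScript. No class is defined anywhere; the objects are illustrative:

```typescript
const animalPrototype = {
  name: "animal",
  speak(): string {
    return `${this.name} makes a sound`;
  },
};

// dog is created by linking to an existing object, not by instantiating a
// class; lookups it does not handle delegate up the prototype chain.
const dog: typeof animalPrototype = Object.create(animalPrototype);
dog.name = "dog";
dog.speak = function (this: typeof animalPrototype): string {
  return `${this.name} barks`;
};

console.log(dog.speak());                                    // "dog barks"
console.log(Object.getPrototypeOf(dog) === animalPrototype); // true
```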

Comparison:

  1. Structure and Syntax — class-based languages use classes and inheritance, leading to a more rigid structure. Prototype-based languages use objects and clones, which can lead to more flexibility and less boilerplate.
  2. Flexibility vs. Safety — prototype-based languages offer more flexibility in object creation and modification, which can be beneficial for certain dynamic features. Class-based languages, by providing a clear class structure and inheritance model, can offer more safety through encapsulation and access control.
  3. Performance — class-based languages may have a performance advantage due to the static nature of class definitions, allowing optimizations by compilers. Prototype-based languages might incur a performance hit due to the dynamic resolution of property access but excel in scenarios requiring dynamic behavior modifications.

In practice, the choice between class-based and prototype-based OOP depends on the requirements of the project, the language being used, and personal or team preference.

Some modern languages and frameworks attempt to blend the best features of both paradigms to provide developers with more tools to solve their specific problems.

When using a class-based language like Java, a useful balance can be struck between the two paradigms by avoiding the overuse of inheritance, relying on interfaces rather than superclasses, and making appropriate use of composition and aggregation in design.



Dick Dowdell
Nerd For Tech

A former US Army officer with a wonderful wife and family, I’m a software architect and engineer who has been building software systems for 50 years.