Notes from 99 Bottles of OOP

Link to book.

The net cost of caching can be calculated only by comparing the benefit of increases in speed to the cost of creating and maintaining the cache. If you require this speed increase, any cost is cheap. If you don’t, every cost is too much.

…humans are very bad at predicting, in advance, whether a program will be fast enough overall, and, if not, which parts of it will be too slow. Complicating code in order to solve performance problems, in advance of actual data about where those problems are, raises costs and very often pays nothing in return. These guesses are almost certain to be wrong, and merely serve to harm readability and impede change. Given this, the best programming strategy is to write the simplest code possible and measure its performance once you’re done. If the whole is not acceptably fast, profile the performance, and speed up the slowest parts. Increasing speed may require caching, but many problems can be fixed by substituting more efficient code in specific, narrow places. Your goal is to optimize for ease of understanding while maintaining performance that’s fast enough. Don’t sacrifice readability in advance of having solid performance data. The first solution to any problem should avoid caching, use immutable objects, and treat object creation as free. This results in speedy development of simple code, which leaves plenty of time to identify and correct the real performance problems.

If you’re in the middle of writing a test suite, better information is as close as the next test. Squeezing all duplication out at the end of every test is not necessary. It’s perfectly reasonable to tolerate a bit of duplication across several tests, hoping that coding up a number of slightly duplicative examples will reveal the correct abstraction. It’s better to tolerate duplication than to anticipate the wrong abstraction.

Think of the path to Shameless Green as running on a horizontal axis. Some changes propel you forward along this path and help you quickly reach green, while others are speculative and possibly distracting tangents in a vertical direction. You should complete the entire horizontal path before indulging in any vertical digressions.

Following the horizontal path means writing code to produce every kind of verse before diverging onto tangents to DRY out small bits of code that the verses have in common. The goal is to quickly maximize the number of whole examples before extracting abstractions from their parts.

Duplication is useful when it supplies independent, specific examples of a general concept that you don’t yet understand.

Fake It style TDD may initially seem awkward and tedious, but with practice it becomes both natural and speedy. Developing the habit of writing just enough code to pass the tests forces you to write better tests. It also provides an antidote for the hubris of thinking you know what’s right when you’re actually wrong.

Bottles now produces that correct output, and it’s tempting to walk away at this point. However, doing so transfers the burden of keeping this code running to some poor downstream programmer, one who has far less understanding of the problem than you do right now.

Tests are not the place for abstractions—they are the place for concretions. Abstractions belong in code. If you insist on reducing duplication by adding logic to your tests, this logic by necessity must mirror the logic in your code. This binds the tests to implementation details and makes them vulnerable to breaking every time you change the code.

Testing, done well, speeds development and lowers costs. Unfortunately it’s also true that flawed tests slow you down and cost you money. It is worth the effort, therefore, to get good at testing. TDD can prevent costly guesses, but only if you commit to writing code in small steps. Tests can make it safe and easy to refactor, but only if they are carefully de-coupled from the current code.

generally it’s best to clarify requirements, and then write the minimum necessary code.

"Open" is short for "Open/Closed," which in turn is short for "open for extension and closed for modification." The "O" in open supplies the "O" in the acronym "SOLID" (see sidebar). Code is open to a new requirement when you can meet that new requirement without changing existing code.

Note that safe refactoring relies upon tests.

Tests are the wall at your back. Successful refactorings lean on green. Therefore, you should never change tests during a refactoring. If your tests are flawed such that they interfere with refactoring, improve them first, and then refactor.

Flocking Rules

  1. Select the things that are most alike.
  2. Find the smallest difference between them.
  3. Make the simplest change that will remove that difference.

Changes to code can be subdivided into four distinct steps:

  1. parse the new code [execute the test without actually calling the new code]
  2. parse and execute it
  3. parse, execute and use its result
  4. delete [the original] unused code
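
A hypothetical illustration of those four steps in Ruby (the container method and the hard-coded string are invented for this sketch, not taken from the book); run the tests after each step:

    def container(number)                  # 1. parse: the new method exists; nothing calls it yet
      number == 1 ? "bottle" : "bottles"
    end

    number = 6
    container(number)                      # 2. parse and execute: call it, but ignore its result
    line = "#{container(number)} of beer"  # 3. parse, execute, and use its result
    puts line                              # => "bottles of beer"
    # 4. delete the original, now-unused hard-coded "bottles"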

DRYing out sameness has some value, but DRYing out difference has more.

Making a slew of simultaneous changes is not refactoring—it’s rehacktoring. It would be much better to make a series of tiny changes and run the tests after each. If the tests fail, you know the exact change that caused the failure, and can undo back to green and make a better change. If the tests pass, you know that the current code works, even if the refactoring is only partially complete. Formal refactoring confers two additional benefits. First, because no change breaks the tests, the code can be deployed to production at any intermediate point. This allows you to avoid accumulating a large set of changes and suffering through a painful merge. Next, code that runs properly even in the midst of a long refactoring increases the bus factor. This contributes to a higher likelihood of project success even if you, personally, were to meet an untimely end.

In general, change only one line at a time. Run the tests after every change. If you go red, undo and make a better change.

In his book Refactoring to Patterns, Joshua Kerievsky talks about "Gradual Cutover Refactoring," a strategy for keeping the code in a releasable state, gradually switching over a small number of pieces at a time. This type of refactoring can be done alongside other development work so as not to affect the release schedule.

The small amount of time lost to making incremental changes is more than recouped by avoiding lengthy and frustrating debugging sessions. This style of coding is not only fast, it’s also stress-free.

Let red be your guide. If you take a giant step and the tests begin to fail, undo and fall back to making smaller changes. There are plenty of hard problems in programming, but this isn’t one of them. Real refactoring is comfortingly predictable, and saves brainpower for peskier challenges.

Within any group of programmers, there’s often someone who’s good at naming things. If your group has such a person, you know who they are. Appoint them the "name guru," and leverage their strengths when you need a name.

Code is read many more times than it is written, so anything that increases understandability lowers costs. Next, and just as important, consistent code enables future refactorings.

This chapter explores what it means to model abstractions and rely on messages; it considers the consequences of mutation and the perils of premature performance optimization.

Programmers naturally assume that difference exists for a reason, but here there isn’t one. Superfluous difference raises the cost of reading code, and increases the difficulty of future refactorings.

Conditionals result in multiple execution paths through the code, which add complexity and make code hard to understand.

In an ideal world, each different concept would have its own unique, precise name, and there would be no ambiguity. Unfortunately, real world code often fails to meet this ideal.

Conditionals are the bane of OO.

now that you have a new requirement and are rearranging the code, you’d like to apply a full-blown OO mindset, and that mindset is deeply suspicious of conditionals. As an OO practitioner, when you see a conditional, the hairs on your neck should stand up. Its very presence ought to offend your sensibilities. You should feel entitled to send messages to objects, and look for a way to write code that allows you to do so.

there’s a big difference between a conditional that selects the correct object and one that supplies behavior. The first is acceptable and generally unavoidable. The second suggests that you are missing objects in your domain.
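
A rough Ruby sketch of the distinction, with illustrative names rather than the book’s exact code:

    # Stand-in classes so the sketch runs on its own.
    BottleNumber  = Struct.new(:number)
    BottleNumber0 = Class.new(BottleNumber)

    # Acceptable: a conditional that selects the correct object, ideally in one place.
    def bottle_number_for(number)
      if number == 0
        BottleNumber0.new(number)
      else
        BottleNumber.new(number)
      end
    end

    # Suspect: a conditional that supplies behavior, and so tends to be
    # duplicated wherever that behavior is needed.
    def container(number)
      if number == 1
        "bottle"
      else
        "bottles"
      end
    end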

The cure for Primitive Obsession is to create a new class to use in place of the primitive. For this operation, the refactoring recipe is Extract Class.
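
A sketch of what that might look like, loosely in the spirit of the book’s BottleNumber (the details here are illustrative, not the book’s code):

    class BottleNumber
      attr_reader :number

      def initialize(number)
        @number = number
      end

      # Knowledge that callers used to re-derive from the raw integer
      # now lives with the (former) primitive.
      def container
        number == 1 ? "bottle" : "bottles"
      end

      def quantity
        number == 0 ? "no more" : number.to_s
      end
    end

    BottleNumber.new(1).container  # => "bottle"
    BottleNumber.new(0).quantity   # => "no more"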

while you should continue to name methods after what they mean, classes can be named after what they are.

If you lean towards mutating the existing BottleNumber rather than making another, it’s possible that you are biased against creating new objects. This bias is often unexamined, and has its roots in the assumption that if you routinely create many new objects, your application will be too slow.

You may be familiar with Phil Karlton’s famous saying "There are only two hard things in Computer Science: cache invalidation and naming things."

Caching is easy. However, figuring out that a cache needs to be updated can be hard. The code to do so is often complicated and confusing. This additional code must be tested, and inevitably, when it turns out that the tests are insufficient, debugged. The extra code needed to manage a cache can be so difficult to write, hard to understand, and expensive to run that it offsets the original benefits.

Having a clump of data usually means you are missing a concept. When a clump gets sent as a set of parameters, the method that receives the clump can easily become polluted with clump management logic. If more than one method takes the same clump as input, some or all of this management logic will inevitably get duplicated in several places. Not only is it a pain to maintain this duplication, but over time the logic might accidentally diverge, introducing errors and confusing everyone involved.
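
For instance (the names below are invented for illustration), a clump of parameters that always travel together can be replaced by a single object that owns the related logic:

    # The clump: quantity and container always travel together, and every
    # method that receives them repeats the same formatting logic.
    def refrain(quantity, container)
      "#{quantity} #{container} of beer on the wall"
    end

    # The missing concept, extracted: the related logic now has one home.
    class Amount
      def initialize(quantity, container)
        @quantity  = quantity
        @container = container
      end

      def to_s
        "#{@quantity} #{@container}"
      end
    end

    puts "#{Amount.new(6, 'bottles')} of beer on the wall"  # => "6 bottles of beer on the wall"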

Skilled programmers are good at picking what best to do next. For many problems, they can immediately identify the code smell that will be most fruitful to resolve. They have excellent judgement. Their decision-making process looks intuitive and effortless, but also inimitable, which makes their actions simultaneously inspiring and disheartening. It’s as if they have a secret understanding of the underlying patterns of code, one not granted to mere mortals. Despite appearances, these programmers weren’t born with magical talents. Their powerful intuition isn’t innate—it’s the result of a lifetime of coding experiments. Their present-day actions are informed by a diverse body of knowledge gained piecemeal, over time. Their deep familiarity with many varieties of code entanglements allows them to unconsciously map appropriate solutions onto common problems, often without the necessity of first writing code. They know what’s right before they do it. Or at least, sometimes. They also know that they don’t know everything. This belief in their own fallibility lends them caution. Skilled programmers do what’s right when they intuit the truth, but otherwise they engage in careful, precise, reproducible, and reversible coding experiments. You are encouraged to do the same.

Polymorphism allows senders to depend on the message while remaining ignorant of the type, or class, of the receiver. Senders don’t care what receivers are; instead, they depend on what receivers do.
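
A minimal sketch with hypothetical classes; the sender depends only on the container message:

    class BottleNumber
      def container
        "bottles"
      end
    end

    class BottleNumber1
      def container
        "bottle"
      end
    end

    # The sender depends on the message, not the class: any object that
    # responds to #container will do.
    def lyric(bottle_number)
      "Take one down and pass it around, #{bottle_number.container} of beer on the wall."
    end

    lyric(BottleNumber.new)   # ... "bottles" ...
    lyric(BottleNumber1.new)  # ... "bottle" ...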

As previously stated, this recipe uses inheritance. Modern object-oriented programming is biased towards preferring composition over inheritance. However, this bias shouldn’t be taken to mean that the use of inheritance is banned. The current recipe calls for it, and for the problem at hand, inheritance supplies a straightforward, cost-effective solution.

When several classes play a common role, some code, somewhere, must know how to choose the right role-playing class for any specific contingency. This choosing very often involves a conditional, which should exist in one and only one place. Code like this is said to "manufacture" an instance of the right kind of object, and so is commonly referred to as a factory.
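
A sketch of such a factory, in the spirit of the book’s bottle-number example (the details may differ from the book’s actual code):

    class BottleNumber
      def initialize(number)
        @number = number
      end

      # The factory: the one and only place that chooses a concrete class.
      def self.for(number)
        case number
        when 0 then BottleNumber0
        when 1 then BottleNumber1
        else        BottleNumber
        end.new(number)
      end
    end

    class BottleNumber0 < BottleNumber; end
    class BottleNumber1 < BottleNumber; end

    BottleNumber.for(1)  # => an instance of BottleNumber1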

This willful ignorance of type is fundamental to object-oriented programming. It insulates code that calls a factory from changes of implementation within that factory. By refusing to be aware of the classes of the objects with which you interact, you grant others the freedom to alter your code’s behavior without editing its source. In the distant future, someone could amend the factory to return newly introduced players of the bottle number role, and your existing code would happily collaborate with these unanticipated objects.

Factories don’t know what to do: instead, they know how to choose who does. They consolidate the choosing and separate the chosen.

The cold, hard truth is that you can’t avoid conditionals. However, you can use polymorphism to create pluggable behavior, and confine conditionals to factories whose job is to select the right object. Applications so constructed are supremely amenable to change.

Objects that sometimes fail to respond to a message you plan to send, or occasionally return something you don’t expect, force you into a paranoid programming style. These untrustworthy objects require senders of messages to know too much. When your application has code that needs knowledge of the internals of other objects in order to correctly interact with them (as did successor above), changes to those other objects might break your code. If you have to check the type of an object in order to know what message to send, you’re forced into a conditional that lists every concrete class with which you’re willing to collaborate. Doing this dooms you to changing the conditional when you add a new class. Checking to see if an object responds to a message rather than checking that object’s type may reduce the size of this conditional, but it doesn’t ameliorate the problem.
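
An illustrative contrast, with hypothetical method names: the type check lists every acceptable class, and the respond_to? probe merely shrinks the conditional without restoring trust:

    # Paranoid: the sender must list every concrete class it will accept.
    def successor_via_type_check(bottle_number)
      if bottle_number.is_a?(BottleNumber0) || bottle_number.is_a?(BottleNumber)
        bottle_number.successor
      end
    end

    # A smaller conditional, but the same distrust of the collaborator.
    def successor_via_probe(bottle_number)
      bottle_number.successor if bottle_number.respond_to?(:successor)
    end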

You have been refactoring for many chapters, using passing tests, or green, as the wall at your back. Now that the current arrangement of code is open to the six-pack requirement, it’s finally time to switch from refactoring mode back into TDD mode.

Kent Beck describes this entire process beautifully, and sympathetically, when he says:

make the change easy (warning: this may be hard), then make the easy change
— Kent Beck via Twitter

Clever shortcuts are a false economy. Invest in code that tells the truth. Just write it down.

[Regarding dynamic invocation within a factory, e.g. "#{animal.capitalize}Catcher".constantize]. Given the list of objections, it’s logical to wonder if opening this factory could ever be worthwhile. Do the benefits of openness justify the cost of this additional complexity? The answer, as is true for most questions about object-oriented design, is that it depends. If you frequently create new bottle number classes, the cost of repeatedly changing the factory might very well exceed that of making it open. Conversely, if you never add new bottle number classes, the factory won’t ever change, so there’s no justification for complicating the code. Your goal is to minimize costs, and costs are determined by the situation at hand. There’s no hard and fast rule about what’s best. It just depends. A factory’s fundamental job is to manufacture the correct player of a role. Relative to this responsibility, its openness is a trivial concern that can be tweaked over time.
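
As an illustration of what openness buys and costs, here is a sketch using plain Ruby’s Object.const_get (the bracketed note above mentions Rails’ constantize); all class names are illustrative:

    class BottleNumber
      def initialize(number)
        @number = number
      end

      # Open version: adding a BottleNumber6 class later needs no edit here;
      # the cost is the indirection introduced by dynamic constant lookup.
      def self.for(number)
        candidate =
          begin
            Object.const_get("BottleNumber#{number}")
          rescue NameError
            BottleNumber
          end
        candidate.new(number)
      end
    end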

Strive for simplicity. Don’t abstract too soon. Focus on smells. Concentrate on difference. Take small steps. Follow the Flocking Rules. Refactor under green. Fix the easy problems first. Work horizontally. Seek stable landing points. Be disciplined. Don’t chase the shiny thing.

In addition, deal with new requirements by first refactoring existing code to be open to them, and then writing new code to meet them. Achieving openness is usually the more challenging task, but can be sought in absolute safety if you have tests that act as a wall at your back.

You may need better tests. If so, writing them will save you money.

The canons are practical rules that guide the programming process. Adhering to them will lower your stress, speed up your work, and improve your code. If you commit to nothing more than to follow them, your reading time will have been well spent.

However, this book is not just about process. It has a second, more abstract, goal—it aspires to infect you with a certain perspective about object-oriented programming. This book wants you to fall in love with polymorphism. When you write conditionals that supply behavior, and put those conditionals in classes whose names other classes know, your code depends upon concretions, and will break with every change. However, when you disperse behavior into polymorphic objects, you can use factories to isolate both the names of the classes and the conditionals that choose them. Factories instantiate role-playing objects, which you can then inject as dependencies. When you inject smart dependencies, and trust them to behave correctly, your code depends upon abstract roles rather than on concrete classes. This loosens the coupling between objects and makes code open to change.

The secret to programming happiness is to combine the canons with the infection, building applications from polymorphic, trustworthy objects, and changing them one step at a time.

Think of your code as a message in a bottle, written in haste for future readers. They’ll always know more than you know right now. Your job is not to be perfect, but to write a generous and sympathetic story. Tell them a story they can understand, and they’ll cherish you forever.

The code in this book is on GitHub. The simplest way to get the exercise is to clone the repository and check out the correct branch, as follows:

    git clone --depth=1 --branch=exercise https://github.com/sandimetz/99bottles.git