Feeling Anchored to Your Legacy Code?

Boris Cherkasky
Riskified Tech
3 min read · Jun 17, 2020

We all have that one huge class in our codebase: a big ball of mud. It probably handles loads of different concerns, is maintained by dozens of developers, and changes often. In most cases its tests aren't great, and in extreme cases it has no tests at all.

Did you ever stop and wonder how the hell we got there? How come no one ever stopped and refactored the hell out of this piece of code, in the dozens of times features were added to it over the last few years?

I assume you are probably a decent developer and, above all, a professional: you would never accept such code in your codebase if you were writing it from scratch today. So why does it still exist? And beyond that, why does it keep evolving?

With the help of Nobel prize winner Daniel Kahneman's research, I'll try to suggest a reason for that.

The Anchoring Bias

The anchoring bias, as explained on Wikipedia, is simple:

Anchoring is a cognitive bias where an individual depends too heavily on an initial piece of information offered (considered to be the “anchor”) when making decisions.

The best example of the anchoring bias is a study where two groups were asked at what age Mahatma Gandhi died. Both groups were given an anchor in the phrasing of the question:

  • The first group was asked whether he died before or after the age of 9;
  • The second group was asked whether he died before or after the age of 140.

Both anchors are obviously wrong, but the existence of the anchor was enough to tilt the answers. On average, the second group guessed older ages than the first one.

Projecting back to the world of software — in some cases, the information given during a task definition, or the way task requirements are phrased, will affect the decisions we make in regards to that task.

Anchoring in Software Development

I want to make a controversial observation:

When developers are asked to maintain a bad part of the codebase, they will allow themselves to develop to standards they wouldn't otherwise accept.

I think that when a task is anchored to the legacy code, our judgment gets clouded, and we allow our professional selves to be a little less professional. We permit ourselves to make “dirty decisions” and write different code than we would have written in a different part of the codebase. You’ll start hearing things like, “There are no unit tests for this class”, “It’s been like that for years, no need to add tests now”, “It’s legacy, no place for clean code here, quick and dirty is fine”.

Un-anchoring

Let's think of a few ways to stop this snowball effect of code getting progressively worse:

  1. Phrasing: Stop describing legacy tasks as bad and start describing the impact and value they deliver.
    A task to "add a new feature to our legacy service" can be described as "Deliver this-n-that impact by adding a new feature to service X".
    Bitterness is poison: it spreads like a virus. When the team stops treating the legacy code as leprous, and some (even small) resources are diverted to making it a bit better, it becomes a joint effort, and bitterness turns into pride.
  2. Surgical refactoring: When adding a new feature requires working with such messy code, start with a surgical refactor: refactor just enough code to make things a bit better and safely introduce the needed change. Usually that means adding a new class, introducing some abstraction, and extracting some code here and there. While at it, add tests and clean the code. Refactoring such small bits and pieces can result in a somewhat awkward design, but assuming the original is worse, you've at least helped a bit.
    The Gilded Rose code kata is good practice for those of you who are inexperienced with this kind of refactoring.
  3. Measure progress by code quality: In addition to defining the task as done when the feature is working in production, define it as done when the feature is working in production, a surgical refactor has been implemented, and test coverage has improved.
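To make the surgical-refactor idea concrete, here's a minimal sketch in Python. The legacy function, the names, and the discount rules are all hypothetical (not from any real codebase); the point is the shape of the move: extract just the tangled logic into a small testable class, and pin the old behavior with a characterization test before deleting anything.

```python
# Hypothetical legacy function: mixes order handling with discount rules.
def checkout(order):
    total = sum(item["price"] * item["qty"] for item in order["items"])
    if order.get("coupon") == "VIP":
        total = total * 0.8
    elif total > 100:
        total = total * 0.9
    return round(total, 2)


# Surgical refactor: extract only the pricing rules into a small,
# testable class. The rest of the legacy code stays untouched.
class DiscountPolicy:
    def apply(self, total, coupon=None):
        if coupon == "VIP":
            return total * 0.8
        if total > 100:
            return total * 0.9
        return total


def checkout_refactored(order):
    total = sum(item["price"] * item["qty"] for item in order["items"])
    return round(DiscountPolicy().apply(total, order.get("coupon")), 2)


# Characterization test: both paths must agree before the old code goes.
order = {"items": [{"price": 60.0, "qty": 2}], "coupon": None}
assert checkout(order) == checkout_refactored(order) == 108.0
```

The design may look a bit odd in isolation (a one-method class for a couple of `if`s), but that's the trade-off the article describes: a slightly weird seam now, in exchange for tested, extractable logic later.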



Published in Riskified Tech

Software Engineering, Research, Data, Architecture, Scaling and more, written by our very own engineers and data scientists.


Written by Boris Cherkasky

Software engineer, clean coder, scuba diver, and a big fan of a good laugh. @cherkaskyb on Twitter