Don’t Make Your Code Predict the Future

Alex Vanyo
Livefront
Mar 31, 2021

It’s sometimes nice to think about code in a vacuum. Perhaps that function you just wrote elegantly solves a problem, or some interface models the situation perfectly. You submit your modification, happy with the changes, introducing just enough complexity without causing any unintended side-effects.

Unfortunately, we live in the real world. Our code doesn’t run in a vacuum, as it’s usually part of some larger, messy, complex system. Requirements may change, and the problem you once solved may change with them. The old solution might need to be updated or replaced days, months, or even years after it was originally written.

It’s tempting to take this as a challenge, to create a solution that is ultra-generic, powerful enough to handle any change without modification. Unfortunately, this is ultimately a fool’s errand, one that I’ve been guilty of attempting myself. I can’t predict the future, and I doubt you can either. Therefore, it stands to reason that neither can our code.

Design, don’t predict

Superb APIs and libraries don’t attempt to solve for everything under the sun, past, present and future. Instead, they acknowledge the unknown, the problems of the future. Tests are written, encoding the expected behavior of the present, so that updates to accommodate changes don’t break previous functionality. Interfaces and components are developed with the expectation that they might require adjustments or replacements down the line, or be combined in previously unforeseen ways.

This is an art, not a science, so there are no hard and fast rules for what to do. However, I hope that with a few examples you might consider writing and reviewing code a bit differently, treating codebases as the living, ever-changing beasts that they are.

All of these examples will be in Kotlin, my primary day-to-day language as an Android developer. However, the principles should be broadly applicable to whichever language you find yourself working in, even if it’s one that didn’t exist when this article was written.

Will your default arguments stand the test of time?

Kotlin has the powerful feature of default arguments. When defining a function, parameters can have a default value, which allows callers to skip that argument when calling the function.

fun read(
    b: ByteArray,
    off: Int = 0,
    len: Int = b.size,
) { /* ... */ }

This is extremely useful for many APIs, and it is used heavily across the standard library. Common defaults don’t have to be specified, while still allowing them to be overridden if the situation demands it.
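For example, callers can lean on the defaults entirely, or override only the pieces they care about (a quick sketch using the read function above):

val buffer = ByteArray(1024)

// Uses the defaults: off = 0, len = buffer.size
read(buffer)

// Overrides only the arguments this call actually needs
read(buffer, off = 128, len = 256)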

However, this power comes with drawbacks, especially when overused. If you add a default argument to some function, you are making the decision at that point in time that the default argument is the right value for most cases.

A common situation for adding a new argument to a function might be to provide some extra information to make a decision. If this extra information only applies in one or two cases, it’s tantalizing to avoid changing all call sites by declaring a default argument.

However, what happens if somebody else adds another case in a pull request that’s being worked on simultaneously? This might not be too much of a concern for small projects and teams, but if you don’t catch the additional case in code review, adding a default argument effectively means making a decision about the behavior of a case you may not even be aware of.

If that sounds frightening for your new parameter, reconsider giving it a default in the first place. There are still situations where a default makes sense to handle the vast majority of cases, but also recognize the power of not providing a default. If the parameter must be specified explicitly, any call of the function that doesn’t specify it will result in a compiler error. This might cause an annoying build break, but that’s better than an unintentional decision slipping its way into production.
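As a sketch of what that looks like in practice (the function and parameter names here are hypothetical), imagine adding a new parameter without a default:

enum class Screen { Home, Settings, Checkout }

// The new screen parameter has no default: every call site must now
// explicitly decide what to pass, including any call added in a
// parallel pull request, which will fail to compile until someone
// makes that decision.
fun trackEvent(
    name: String,
    screen: Screen,
) { /* ... */ }

// trackEvent("purchase_completed")                  // no longer compiles
// trackEvent("purchase_completed", Screen.Checkout) // decision made explicitly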

How confident are you in your default code paths?

Another very useful feature of Kotlin is the sealed class. Sealed classes allow modeling restricted class hierarchies, and have a ton of uses for creating precisely controlled logic. As an example, we could define a simple sealed class Fruit:

sealed class Fruit {
    object Apple : Fruit()
    object Banana : Fruit()
    object Grape : Fruit()
    object Kiwi : Fruit()
    object Lemon : Fruit()
    object Orange : Fruit()
}

Now, suppose we wanted to determine if a Fruit might be artificially dyed to look better in the store:

fun Fruit.mightBeLegallyDyed(): Boolean = this is Fruit.Orange

Since oranges are the only fruit I’m aware of that the FDA has approved dyeing, this logic is correct right now.

Now, suppose that original this is Fruit.Orange check is buried deep in some logic, in a corner of the code that is rarely visited. Perhaps the code is handed off completely, or the person who originally wrote it leaves the project. If a new subclass is added (or a non-Orange subclass is removed), the statement this is Fruit.Orange will continue to do exactly what it says, even though it was written before the new subclass was added. It might still be correct, but it might not be. If it isn’t, hopefully the resulting logic is well-tested, and the erroneous behavior is noticed.

However, we can automatically surface that potential issue at compile time, instead of having to rely on testing. Even though it’s longer, I would prefer this logically equivalent definition instead:

fun Fruit.mightBeLegallyDyed(): Boolean = when (this) {
    Fruit.Apple,
    Fruit.Banana,
    Fruit.Grape,
    Fruit.Kiwi,
    Fruit.Lemon -> false
    Fruit.Orange -> true
}

This takes advantage of an exhaustive when expression, which needs no else branch since the compiler can verify that all possible cases are handled. It is longer and more verbose, but has the huge advantage that a missing case will result in a compiler error.

Like the first example with default arguments, any change to the sealed class’s set of subclasses will cause a compiler error. It defers the decision to the time when we actually have all of the information about what the correct behavior should be, automatically prompting the choice again.
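For example, suppose a hypothetical Mango subclass is added later. The exhaustive when immediately stops compiling, and whoever adds Mango has to decide which branch it belongs in (a sketch, with the unchanged subclasses elided):

sealed class Fruit {
    // ... existing subclasses ...
    object Mango : Fruit()
}

// mightBeLegallyDyed() no longer compiles until Mango is handled,
// with an error along the lines of "'when' expression must be exhaustive":
fun Fruit.mightBeLegallyDyed(): Boolean = when (this) {
    Fruit.Apple,
    Fruit.Banana,
    Fruit.Grape,
    Fruit.Kiwi,
    Fruit.Lemon,
    Fruit.Mango -> false
    Fruit.Orange -> true
}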

This is a big enough advantage that support might be added to the language directly, and you can already enforce exhaustive when statements with tools like detekt or cashapp/exhaustive.

Will your names ever conflict with somebody else’s?

The last example deals with multiple receivers, which spring up when using inner classes, extension functions with receivers, and the scope functions with, run and apply. All of these add to what this can refer to, in the sense that functions can be called on any receiver that is in scope. For example:

object Foo {
    fun foo() { /* ... */ }
}

object Bar {
    fun bar() { /* ... */ }
}

fun main() {
    with(Foo) {
        with(Bar) {
            foo()
            bar()
        }
    }
}

This syntax is convenient, and is one of the building blocks for a lot of DSLs that add methods to nested receivers.

However, this implicit usage of this is, as the name implies, implicit.

If any method foo(): T is added to Bar (it doesn’t even have to be foo(): Unit, since the return type is unused), the code will still compile, yet act completely differently. Even though the main function didn’t change, it will now call Bar.foo() instead of Foo.foo(), since Bar is the innermost receiver.
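To make that concrete, here is roughly what that addition looks like; main above is untouched, but its behavior changes:

object Bar {
    fun bar() { /* ... */ }

    // Newly added. Inside with(Bar) { ... }, Bar is the innermost
    // receiver, so the unqualified foo() call in main now resolves
    // here instead of to Foo.foo().
    fun foo() { /* ... */ }
}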

The above is a simple toy example, but by nature the cases where multiple receivers are in play tend to be more complex, perhaps to deal with generics. If you’re lucky, an added function might have a conflicting type signature, causing a compiler error. If you’re unlucky, the function or property might have a matching type signature and compile without issue. That might seem unlikely, but if you named something, chances are pretty good somebody else might name something in the same way.

If you’re even more unlucky, Bar is from an external library, and foo was added in an update: a simple version bump that doesn’t cause any compilation errors, yet changes behavior. In my case, the change was missed by tests but was large enough to be noticed quickly; more subtle issues could go unnoticed for a lot longer.

There are a couple of preventative measures that you can take here. You can avoid nesting with, run and apply blocks if they aren’t providing much direct benefit. If an extension function doesn’t depend on the class it is nested in, move it to the top level to make it clear that it doesn’t depend on the outer receiver. Finally, implicit this usages can be replaced with qualified usages instead, explicitly naming the receiver. Banning scope functions and implicit references is definitely overkill for most cases, but maybe a little extra verbosity is worth preventing nightmarish bugs, especially in vital code paths.
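As one sketch of that last option, labeling the outer block lets you spell out exactly which receiver you mean:

fun main() {
    with(Foo) outerFoo@{
        with(Bar) {
            // Qualified explicitly: this keeps resolving to Foo.foo(),
            // even if a foo() is later added to Bar.
            this@outerFoo.foo()
            bar()
        }
    }
}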

Unless machines completely replace humans at writing code, the code you write is going to be reviewed, maintained, updated and ultimately removed by someone else. As much as we might strive to write the perfect code that never needs to be touched again, reality rudely reminds us that it will. Unless of course, it’s the bit of code that you bet will be temporary — that ends up sticking around forever.

The perpetual changing of code is one of the reasons why we review code, stick to patterns and write tests in the first place. When you’re writing code, you don’t have to pretend that it’s the only “pure” part of the process. There are probably a lot of simple things you can do to make your job, your team’s job, and ultimately everyone’s job easier. At the very least, it’s a lot less work than trying to predict the future.

Alex reads and writes code that definitely doesn’t predict the future at Livefront.
