The Double-Edged Sword of Faster Feedback

Tobbe Gyllebring
4 min read · Jan 28, 2017


Most would agree that, everything else being equal, shorter feedback cycles are better than longer ones. One could even go so far as to argue that most Agile practices are, at their heart, feedback-loop-shortening devices. Shorter iterations are generally better than longer ones: they yield more information with less noise because there are fewer confounding factors. Feedback is good, quick feedback is better.

Pushing towards ever faster feedback cycles, we instrument our solutions, collect metrics and try to speed up the hypothesis-to-outcome process by almost any means thinkable. We argue that doing so yields better outcomes, reduces anxiety and frees us to move without fear (and breakage).

I hold the above to be correct, but not entirely true. What I have observed is that while quicker feedback has the potential for all the above goodness, often something insidious happens. It happens gradually over time, often without anyone really noticing until much later, and at that point it’s argued that the thing lost probably didn’t have value anyhow.

What happens is that deliberate thought and the construction of robust theories get replaced by tossing things at the proverbial wall to see what sticks. Thoughtful experimentation gradually gives way to rapid, nearly random guessing of decaying quality. The ineffectiveness of the new strategy is not seen as a problem with the approach but as validation that even quicker feedback mechanisms are required to start making progress again.

The very mechanisms that were supposed to help us reach greatness become crutches, and even when we succeed we do so through chance rather than skill. Left without strong mental models we become dependent on our tools, replicating our success becomes a new random walk, and superstition sets in.

Unable to anticipate outcomes, we quickly conclude the world has moved into the complex domain. It’s not a failure of our method or mental machinery, we say; it’s simply that the world has become so different that everything is unknowable and probing and responding is our only option.

I guess an example would be useful right about now.

Let’s take a few steps back. I grew up in an age where compile time, even for small projects, was a real concern. Compiling and running any non-trivial piece of code took long enough that one tried to avoid it. You could often gauge the proficiency of a coder at that time by their ability to “run the code in their heads”. The best ones still had habits that meant running their code under real conditions and exercising it in different fixtures to check correctness. But running the code in your head was how we worked, how you sped up debugging sessions and how you explored unknown corners of the code base.

Over time megahertz became gigahertz, tools became more sophisticated, and it became increasingly viable to stop doing the hard and error-prone work of mentally running the code and instead switch to “set a breakpoint and inspect” style debugging.

The rise of unit-testing tools and the availability of interactive environments and REPLs naturally pushed this even further; simply “testing it” by creating a short snippet to probe for behavior at any time became the norm. Productivity for many kept climbing at each step. The cycle time of real “validated” learning in many instances crept down to nearly nothing.

Today, with continuous test runners and blazing-fast machines, we get feedback about our code often as quickly as we can type it and sip our coffee. Everything is wonderful.

Until, that is, it’s not.

What I’m observing is a steady increase in scatter-shot programming. When something odd happens or an unexpected failure occurs, what used to be a reflective process of figuring out how the code could end up in such an odd state has become a game of rapidly throwing guesses at the machine in order to make the red ball go away. The outcome of this guessing game is not itself integrated into a bigger picture; the cycle is too rapid for that.

When probed with “why do you think that will fix the issue?”, the increasingly common answer is “… I don’t know, but it’s so cheap to test, so why not just try it”.

Sloppy thinking. Running an experiment requires you to have a theory. This is not science. It’s a farce.

I don’t want to go back, but I think we must become aware of the double-edged sword of getting too reliant on our magic tools. The way to grow our capabilities and drastically increase our effectiveness is not by building more elaborate guessing games, but by using the advances made to more robustly run true experiments, backed by our best theories of what will happen, and how, and why. The unexpected should cause us to pause, update our mental maps, find the holes in our theories and refine our pattern-matching and hypothesis-building process.
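To make the contrast concrete, here is a minimal sketch in Python (the `parse_order` function and its behavior are invented purely for illustration) of what a theory-backed probe can look like: each test writes down its prediction before the code runs, so an unexpected failure points at a specific hole in the mental model rather than prompting another blind tweak.

```python
import unittest


# Hypothetical unit under test, used only to illustrate the posture.
def parse_order(line: str) -> dict:
    sku, qty = line.split(",")
    return {"sku": sku.strip(), "qty": int(qty)}


class ParseOrderTheory(unittest.TestCase):
    """Each test states its prediction up front, so a failure is a hole
    in the theory to investigate, not a prompt for another random tweak."""

    def test_whitespace_around_fields_is_ignored(self):
        # Theory: parsing trims surrounding whitespace, so padded input
        # yields the same result as clean input.
        self.assertEqual(parse_order("  AB-1 , 3"), {"sku": "AB-1", "qty": 3})

    def test_missing_quantity_fails_loudly(self):
        # Theory: a line without a quantity cannot be parsed and should
        # raise rather than silently default to anything.
        with self.assertRaises(ValueError):
            parse_order("AB-1")


if __name__ == "__main__":
    unittest.main()
```

The framework matters less than the posture: the comments carry the predictions, so a red test is information about the theory rather than just a red ball to make go away.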

From the intricate inner workings of our platforms to how our products perform in the market, feedback should be put back into its proper place — as the raw material for deep learning about how to place better bets.

The more precise you dare to be with your predictions about the likely outcome of any single probe, the more you can learn from and leverage the increasingly fine-grained feedback available. Let’s not squander that opportunity by sliding back towards superstition.

Don’t accept “let’s just test it” as the normal justification for experimentation, and use “I’m not sure, let’s run it and see” as a true opportunity to learn and develop your understanding rather than as a quick sampling menu soon forgotten.

Rapid feedback is one of the great enablers of our time. Let’s keep it that way by not succumbing to lazy over-reliance on it.
