The Importance of Working Software
As we software developers gain experience creating software, we learn where the strategic points of the process (the “high ground”) lie.
This article shines a light on one of the strategic points so that you can deliver more customer value in less time and preserve trust in your software brand.
Time Efficiency is Everything
In software development, our equipment costs are fixed and insignificant. Our recurring costs are developer salaries, but they are often also insignificant compared to the value of the software we’re creating. What’s left as a significant cost is time itself.
The time it takes to discover customer value, and to write software that provides it, is our critical resource. The good news is that time is free. The bad news is that its rate of flow is fixed and we can’t increase it on demand. Our best recourse for getting more use out of our critical resource is to identify wasted time and minimize it.
Expectation Gaps Waste Time
The software we’re making today is incredibly complex. In order to create software, developers must simultaneously construct an equally complex model of it in their head.
Whenever developers are asked to fix a problem or add a new feature, they consult their mental model of the software before diving in to make a change. If the model matches the software well, this process is time efficient. If there’s a divergence, the developer must spend time asking the question, “Why isn’t this working (as I expect it to)?”
What percentage of your software development time is spent asking this question? Any time spent asking this question is taken away from the more valuable question, “How can we discover more value for the customer?”
The potential for divergence increases when other programmers are expected to work on the software; since they haven’t had a chance to build up a mental model, they can’t immediately make any meaningful changes. Building the model can be very time consuming and is often underestimated by managers who can see the code but aren’t as aware of the invisible mental models that go along with it. You can’t hand over 100 files of source code from one developer to another and expect the second developer to just read them and start working.
The potential for divergence is even greater when the next developer isn’t on the same team. Expectation gaps can be mitigated somewhat by inter-team communication. External developers don’t have as many opportunities to communicate with the original developer.
How can we minimize the expectation gap?
We’re Not Building Software, We’re Discovering It
The key to minimizing the expectation gap comes from the realization that we’re not building software as much as we’re discovering it. Think about the last software project you worked on; it may have taken you months to write it originally, but if you had to start from scratch and someone told you which letters to type, how long would it take you to write it again? Probably no more than a day. Typing code doesn’t actually take that long. It’s figuring out what code to type that takes the time.
Once you see software development as a discovery process, you realize the basic dance is that of conducting experiments. Every time we add a new feature or fix a bug, we’re conducting an experiment. Write the code, maybe compile it. Did it work? When you ask that question you’re looking at experimental results. What’s crucial when conducting experiments? Having control.
I find it helpful to think of rock climbing as an analogy. When climbing a rock wall, careful climbers take time to hammer metal spikes into the wall as they go. After hammering a spike, the climber tries a new route forward (he conducts an experiment). If the experiment fails, he falls backward but only as far as his last spike. The spike limits how much time he wastes in his route discovery process.
Automated Tests = Climbing Spikes
What are the equivalent of climbing spikes in the software world? The best analogue we have is sets of automated tests.
Software is considered to be working when:
1. There’s no expectation gap: it’s functioning as the customer expects
2. The automated tests are a good approximation of customer value
3. All of the automated tests pass
Points #1 and #2 are difficult to measure, so point #3 is our go-to indicator. It’s not always an accurate indicator of working software but it’s usually good enough. It’s also easy to improve incrementally; if all of the tests pass but the software doesn’t meet customer expectations, then more tests can be added (within cost-effective constraints).
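Closing a reported expectation gap with a new test can be as small as this sketch in Python (the discount function and codes are hypothetical, invented for illustration):

```python
import unittest

# Hypothetical checkout routine: apply a discount code to an order total.
def discounted_total(total, code):
    if code == "SAVE10":
        return round(total * 0.90, 2)
    # Unknown codes leave the total unchanged rather than crashing.
    return total

# Run with: python -m unittest <this_file>
class DiscountTest(unittest.TestCase):
    def test_save10_applies_ten_percent(self):
        self.assertEqual(discounted_total(100.00, "SAVE10"), 90.00)

    # Added after a customer reported that a mistyped code broke checkout;
    # the suite now approximates customer value a little better.
    def test_unknown_code_leaves_total_unchanged(self):
        self.assertEqual(discounted_total(100.00, "TYPO"), 100.00)
```

Each new test like this narrows the gap between “all tests pass” and “the software meets customer expectations.”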
Seize The High Ground And Keep It
These concepts are familiar to most developers. How we apply them — the conditions we choose to tolerate and those we don’t — makes all the difference in driving down waste.
Many software developers I’ve met, and the industry as a whole, have figured out how to make software and are now honing their understanding of the value/waste distinction developed at Toyota Motor Corporation (the insight that spawned lean manufacturing and, later, agile software development).
This article may be an introduction or a review depending on where you are in the learning process.
Every Release Should be Working
Never publish a version of your software that isn’t working 100 percent. Just don’t do it.
This is probably the hardest guideline for developers because we’re all drawn by the siren call of more features. Adding more features is fun; it feels good, and we can do it with the freedom of not having to look at the big picture. Making sure every significant feature is represented by automated tests, and that all the tests pass, is a more subtle emotional reward that comes with experience. It feels more like the first time you notice, “Hey, not all varieties of cabernet sauvignon taste the same and I like this one!” rather than, “Woo hoo! I’m buzzed on alcohol!”
Making sure a release is working is not cowboy-style “look what I made it do!” It’s a careful process of crossing t’s and dotting i’s.
Why is this so important? Because if you ship a nonworking version of your software, you are handing the world an expectation gap that wastes the time of each user multiplied by the number of users who try it. Will people tolerate this waste and figure out how to get value from your software, anyway? Sometimes they will if your software is crazy-cool or new, but every time you do this you decrease the trust in your brand.
Not long ago I was excited to use an open source library in my project. It seemed like a perfect fit. There was extensive documentation, the code looked good, and there appeared to be users. Excited, I spent three whole days, or 24 working hours, trying to get a simple use-case to work. I submitted bug reports, I re-read the documentation, I scanned the existing bug reports, and I even read the source code.
Ultimately I realized that despite appearances, the software didn’t work. I had fallen into an expectation gap and lost valuable time. I switched to a different library that promised less but functioned exactly as I expected, and I never looked back.
A chart of the value delivered by software to a customer should be at least roughly linear over time: the more time you spend writing the software, the more value you should be delivering to the customer.
If your code isn’t working, you may still have that chart in your mind as you’re writing, and it may be providing some kind of value to you, but the actual value delivered to the customer is zero.
If you want to make a bleeding edge version of your unstable and incomplete software available, you can get away with that as long as there is also a working version available. Make sure that the master branch is always 100 percent working, no exceptions, and you can make as many unstable branches available as you want. This works only if your stable version is sufficiently current for use. If it’s too far behind your bleeding edge release, and people are forced to battle the expectation gap to get the features they need, then it doesn’t count.
Does this feel like an unattainable goal? Are there always just a couple more features you need to complete before you can focus on stability? If so, you’re not alone. The feeling is pervasive in the software industry. Microsoft project managers are famous for declaring loudly to their developers, “Shipping is a feature!” to overcome this common tendency.
The solution is to deliver less, dramatically less, than you habitually do and begin working on the next release.
Working Releases Encourage Bug Reports
Bug reports (and feature requests) are precious feedback from your customers. They can be instrumental in the quest to discover customer value. It takes effort to write a bug report and your users will put in the effort only if they perceive that they’re getting sufficient value from the software to justify it.
Imagine your software is a car. If it drives smoothly and brings its users to new places, they will be motivated to let you know when a tail light goes out or to request improved fuel efficiency. The fact that the car mostly works makes it easy to isolate a defect or feature request and to describe it simply. If, on the other hand, the car can’t leave the driveway, there’s no incentive for users to write you a bug report. They’ll assume it’s obvious that the car isn’t working and not take the time to let you know.
Make Sure Software Is Working Before Handing It Off To Another Developer
The best way to minimize the time a second developer spends constructing a mental model is to hand off software that is working and has an automated test suite. Remember, to do useful work, the second developer will need to conduct experiments against it, and therefore will need a control. If all of the tests are passing, he has that control and can perform an isolated experiment on 10 percent of the code without needing to construct a mental model for the other 90 percent.
This is a huge win! Most developers receiving software from someone else spend most of their time a) constructing a mental model of it, and b) discovering why it’s not working as expected. If you give them working software with a solid automated test suite, they can immediately start spending their time building features to deliver more customer value.
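To make this concrete, here is a toy hand-off in Python (all names are hypothetical): a green run of the suite is the control, and the second developer can experiment on `parse_price` without first building a mental model of `format_receipt`.

```python
import unittest

# The 10% the second developer is changing: parse "$12.50" into cents.
def parse_price(text):
    return int(round(float(text.lstrip("$")) * 100))

# The other 90%: untouched, but still covered by the suite, so any
# breakage caused by the experiment shows up as a precise test failure.
def format_receipt(items):
    total = sum(parse_price(p) for p in items)
    return f"Total: ${total / 100:.2f}"

# Run with: python -m unittest <this_file>
class HandoffSuite(unittest.TestCase):
    def test_parse_price(self):
        self.assertEqual(parse_price("$12.50"), 1250)

    def test_format_receipt(self):
        self.assertEqual(format_receipt(["$1.00", "$2.50"]), "Total: $3.50")
```

If the experiment on `parse_price` fails, the developer falls back only as far as the last green run, exactly like the climber falling to his last spike.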
In the worst case, when a professional developer inherits software that’s not working and has no automated test suite, they will often rebuild the entire project from scratch. I’ve seen this time and again, and it can actually be the smartest thing to do because (counter-intuitively) it takes less time than grokking the non-working software. Talk about waste!
There are many different priorities to satisfy when writing code. In open source projects in particular, the motivation for development may be internally driven, including the joy of coding.
When developers learn to orient their attention externally on making working code a priority for customers, a synergistic feedback loop is created. The customers are able to use the software and, if necessary, report requests for improvements. The developer gets this precious feedback, which can increase the value of the software, which attracts more customers.
The synergistic feedback loop is what makes the difference between projects that rocket forward in a short period of time (Docker, Node.js, etc.) and those that plod along for years without much evolution.
Originally shared by David Braun, TopTal.
Originally published at rainmaker-labs.com on January 29, 2016.