Into the Light — Embracing Good Software Development Practices

Grant Gadomski
Walk Before you Sprint
Jan 4, 2020 · 9 min read

When we first start writing code, many of us will find ourselves falling into one of two camps. One focuses on the code itself, considering clean, intelligent, modern code to be the ultimate goal, with working software as the happy byproduct. The other focuses on the software we’re making & its place in the world, considering code merely a tool to solve issues & execute on visions.

As a young gung-ho developer I very much exemplified the latter. I got into software to solve the world’s problems, and code seemed like the best medium for me to do so. Throughout my first couple internships my focus was entirely on delivering functionality to users, through whatever means necessary. I used to bristle at the thought of automated tests & code reviews, thinking that they only slowed down the flow of features & usable software.

But with time comes experience, and with experience comes a better understanding of the importance of clean, well tested, robust code. Here’s what caused the need for good software development practices to click with me.

Writing Clean Code — Buildups & Boomerangs

In the waterfall model of software development, we tended to see the software’s release as the finish line. Agile methods like Scrum have shifted this to shorter ~2-week finish lines, but the underlying sentiment is the same. Either way we tend to think that when a piece of software is released to the world, it’s “done”. Like a factory assembling a car, once the car is put together & rolls off the line we’ll never have to touch that car again. It’s the owner’s problem now.

In reality building a feature is less like building a car, and more like adopting a child. The code written will need years of frequent maintenance to fix bugs, adapt to changing business needs, add new capabilities, and even deal with changes in other parts of the system. So long as you’re still on the project you can never fully escape code you’ve written, and even after you leave you’ll probably still get phone calls from developers looking to update it. Your work will boomerang back to you for years.

So when laying your fresh code upon the system, think about how it would feel to revisit it 2, 5, 10, maybe even 20 years from now. Is the logic & styling clean enough for someone new to understand it, or is it a tangled mess that would require hours of detailed tracing & cursing? Are all functional paths & possibilities explicit, or is it littered with dependency landmines that an unsuspecting developer is bound to hit?

It’s estimated that maintenance accounts for 75% of the total cost of ownership for software, which means an awful lot of developers’ time (including yours) is spent working with already-written code. Make sure the code you write is at least bearable to come back to.
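To make the difference concrete, here’s a minimal sketch of the same logic written two ways. The function names, tax rate, and business rule are all hypothetical, invented purely for illustration:

```python
# A cryptic version: single-letter names and a magic number force the
# next maintainer to trace the logic just to learn what it does.
def f(x, y):
    return x * y * 0.0825

# A cleaner equivalent: the names and named constant carry the intent,
# so someone revisiting this years later understands it at a glance.
SALES_TAX_RATE = 0.0825  # hypothetical rate, for illustration only

def sales_tax(unit_price, quantity):
    """Return the sales tax owed on a line item."""
    return unit_price * quantity * SALES_TAX_RATE

# Both compute the same value; only one explains itself.
print(f(10.0, 3) == sales_tax(10.0, 3))
```

The behavior is identical either way; the second version simply pays the maintenance tax up front, in naming, instead of charging it to every future reader.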

Another side effect of non-ideal code is the buildup of tech debt. Note that tech debt isn’t exclusively the product of poor development. Instead it’s the result of trade-offs that every software maker deals with constantly. Even the most technically lauded software companies like Google & Amazon have tech debt.

What makes places like Google & Amazon different is how they deal with this tech debt. Patching holes, redoing code, and even re-architecting systems is a continuous practice at these places, and ~10–20% of time is allocated to do such things. Their tech debt remains manageable as a result.

So why is tech debt such a bad thing? When I was first introduced to the concept, I’ll admit that it didn’t bother me too much. To me it sounded like the worry of nitpicking developers who needed to have all their code look perfect, regardless of how much time cleanup efforts took away from feature development.

But then I saw first-hand the effects that tech debt buildup has on a system. It starts small:

  • Changes become a little more complex to fit into a particularly messy part of the code
  • Previously simple changes now take twice the time they once did
  • Unexpected dependencies lead to pop-up bugs on deploy night
  • CRUD operations start taking a bit longer due to everything filtering through one master controller
  • Developers waste time on low-impact & unused functionality

Over time these unaddressed issues put a chokehold on your system, making any changes go from tricky to downright impossible. And don’t you dare try to write a new feature and tie it into this rat’s nest. Over time estimates lengthen, strange errors crop up constantly, performance starts to suffer across the board, and god help the poor sucker who joins & needs months to become somewhat productive in this codebase. It gets bad enough that the only three options are to live with a buggy & unchangeable system, pause all feature development to untangle the mountain of tech debt, or scrap the whole project and start over.

So to ensure tech debt is continuously tackled instead of just talked about, schedule time to address it. Maybe each developer takes Friday afternoon to clean up a rough patch of code they saw earlier in the week. Better yet, include tech debt cleanup in feature estimation and work to bake a “leave the code better than you found it” mentality into the team. If stakeholders push back, remind them that a little bit of extra cleanup effort now saves them an almost unbelievable amount of pain later on. Continually cleaning the system means it can remain stable and adaptable to changing needs for years. Neglecting this cleanup for the sake of more features leads to a buggy mess that will be set in stone as the business’s needs change around it.

Automated Testing — It’s All About Feedback

It wasn’t until my third internship that I started writing automated tests. Like many other best practices, I wasn’t completely sold on it at first. If I can click through and see the code work on my local environment, why do I need to write automated tests to confirm what I see with my bare eyes?

It wasn’t until I read the Gene Kim books The DevOps Handbook & The Unicorn Project that automated testing & the need for fast feedback really clicked for me.

Fast feedback relates to the 2nd Ideal in The Unicorn Project: “Focus, Flow, and Joy”. If you’re a developer, think back to some of your first programming projects. You would open a clean editor window, punch out ~10 lines of code, and click “run” (or compile if you started with a more hardcore language). Immediately red text would appear in your console, saying something like “syntax error: line 5”. You would look at line 5, notice a silly mistake (“I forgot a semicolon!”), fix it, and run it again. Rinse & repeat until FizzBuzz is working like a dream.

Programming was fun partially because feedback was fast. Mistakes were quickly & easily noticed, and once you learned what to look for, the fix was relatively straightforward.
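That early feedback loop is worth remembering. Here’s one sketch of the kind of FizzBuzz those run-fix-run cycles eventually produce (one of many equally valid versions):

```python
def fizzbuzz(n):
    """Return the FizzBuzz labels for the numbers 1 through n."""
    labels = []
    for i in range(1, n + 1):
        if i % 15 == 0:          # divisible by both 3 and 5
            labels.append("FizzBuzz")
        elif i % 3 == 0:
            labels.append("Fizz")
        elif i % 5 == 0:
            labels.append("Buzz")
        else:
            labels.append(str(i))
    return labels

print(fizzbuzz(15))
```

Run it, read the red text when something’s wrong, fix the silly mistake, run it again: the whole loop takes seconds.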

In professional projects we don’t always have that fast feedback. Applications are complex, sprawling, and take time to fully “run” all scenarios via manual testing (by clicking through the application, testing all command line options, etc.). Worse, new code often relies on certain preconditions (a case making it to a certain step, a premium user existing in the database, etc.). Setting up and checking your new code manually takes a substantial amount of time, and running into & correcting issues means you have to start the whole process all over again. By solely relying on manual testing, there’s a good chance your project is both wasting valuable developer time and remaining open to manual testing mistakes & misses.

Adding to these headaches is the risk of undetected issues due to forgotten or unexpected dependencies. The developer may not realize that the change they made to one piece of code has downstream effects, or they may have forgotten to test how an older feature reacts to new changes. If manual testing isn’t fully regressive, the developer may not get feedback about the larger-picture impact of their change. These issues can fly under the radar until the new code makes it to production, leading to a real d’oh! moment for everyone.

A suite of fast-running automated tests that cover the full application can significantly improve the speed & level of feedback provided to the developer. With a full automated test run (preferably in one click), within one trip to the coffee maker the developer can learn:

  1. If their change worked
  2. If their change broke anything else in a large, complex application

Of course this doesn’t completely replace the need for smoke or manual testing, but the speed with which feedback is provided for new code significantly decreases development time, and it can provide a needed sanity check for unexpected dependencies & downstream effects. The result is a more stable system in which developers can confidently make changes and quickly respond to results from their new code.
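In practice, a fast automated test can be as small as this sketch. The business rule and function name (`apply_discount`, premium users getting 10% off) are hypothetical, invented here to stand in for the premium-user precondition mentioned above:

```python
# Hypothetical business rule under test: premium users get 10% off.
def apply_discount(price, is_premium):
    """Return the price after any membership discount."""
    return round(price * 0.9, 2) if is_premium else price

# Each test encodes one expectation. The whole suite runs in
# milliseconds — no clicking through the app, no manually creating
# a premium user in the database first.
def test_premium_user_gets_discount():
    assert apply_discount(100.0, is_premium=True) == 90.0

def test_regular_user_pays_full_price():
    assert apply_discount(100.0, is_premium=False) == 100.0

if __name__ == "__main__":
    test_premium_user_gets_discount()
    test_regular_user_pays_full_price()
    print("all tests passed")
```

Multiply this by a few hundred tests and you get that one-click, one-coffee-trip answer to both questions above.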

Code Reviews — Fixed vs. Learner Mindset & Psychological Safety

After a few days of coding your tuchus off, you run your unit tests, give your new feature a quick click-through, and open a pull request, confident that your handiwork will be in production in no time. After a couple celebratory cups of coffee you return to see a barrage of must-address comments from the code review, requesting everything from code-style fixes to back-to-the-whiteboard redesigns.

For some this can be awfully demoralizing. You feel like you’ll never produce a clean piece of code, and are bound to make mistakes daily. You long for the day when you’re truly a “senior” developer, whose pull requests are consistently devoid of comments besides “This for-loop brought tears to my eyes, you’re clearly an amazing developer and also attractive and muscular and rich”.

In truth, no matter how far your coding skills have advanced there’s still room for improvement (also skills don’t correlate to muscularity). The best developers see feedback as an opportunity to learn something new about their craft & ensure the team’s codebase remains as clean as possible. Instead of becoming defensive & digging in on inapplicable or outdated practices, these developers are grateful for their peer’s comments, assess what was proposed, and if useful make the change & include it in their development toolkit for later. This is a prime example of something called a “Learner’s Mindset”.

The learner’s mindset stands in direct contrast with a fixed mindset. A fixed mindset assumes that we have the skills we have, and that’s it. You’re either a good developer or you’re not. You should focus on the things you’re good at, and give up on the things you’re bad at. Oftentimes there’s a direct tie between this person’s abilities and their self-image. When others critique a piece of their work, it’s seen as a critique of them as a person, leading to defensiveness & loss of confidence.

The learner’s mindset rejects this approach, instead seeing ourselves as fluid humans whose skills are a reflection of deliberate practice & learning from experience. With enough work we can become good at whatever we set our minds to. Mistakes are seen as an opportunity to learn & adjust, not a blow to our self worth.

For this learner’s mindset to exist, both the team & surrounding organization must bake Psychological Safety into their baseline culture.

Psychological safety is the belief that you won’t be punished for genuine mistakes. When people feel psychologically safe in their environment, they’re more likely to think creatively, propose new ideas, and try new things, developing new skills along the way. A Google study (Project Aristotle) found psychological safety to be one of the main drivers of a team’s success.

This is especially critical in software development, where impostor syndrome runs rampant. When the company’s culture makes it clear that mistakes are acceptable learning experiences, and values genuine, sometimes messy growth over quiet stagnation, the wall of comments from each code review seems less like a threat to job safety and more like a chance to learn from a peer.

Conclusion

Just being told “these practices are good, now follow them” doesn’t always work for people with a propensity for stubbornness like myself. Sometimes it takes a specific argument or just good ol’ experience to understand. This is how I came around on the importance of good software development practices, and I hope you’re on your way to seeing the light as well.
