The Importance of Codebase Transparency (and the Weather)

On leading software development in the 21st century

Jason McInerney · Sema Software
Jan 30, 2019 · 5 min read


But first, a preliminary note about rain

Predicting the weather has always been valuable, but for most of history people have been pretty bad at it. This was not for lack of heuristics. Societies have always had weather lore — some parts of it (red sky at night) more accurate than others (groundhogs) — and an experienced sailor could develop a gut feeling for when a storm was approaching. But today’s detailed forecasts would seem like outright magic to them. A modern meteorologist would either be worshipped as a weather deity or burned at the stake.

What changed?

The 19th-century pioneers of meteorology didn’t have polar satellites or weather balloons — they were still trying to figure out light bulbs and the typewriter. The technologies that made forecasting possible were much simpler. The first was a set of standardized metrics for temperature and pressure, developed during the scientific revolution of the previous two centuries. The second was a faster-than-weather means of communication: the electric telegraph, which came into practical use in the 1840s.

These tools rendered the map newly transparent: early forecasters could now see conditions all across Europe and predict the coming weather from patterns hundreds of kilometers upwind. All that was actually needed was measurement and communication.

The nondevelopment of software development

Both software and the hardware it runs on have evolved at breakneck speed since the birth of the personal computer in the 1970s. The process of software development has not kept pace — for engineers or for managers. Today’s technical managers are responsible for codebases too huge for any one person to understand the ins and outs of every feature, let alone monitor the weekly deluge of changes, patches, and extensions. Without special tools, the codebase is opaque to decision-makers.

The result is that upper-level dev management remains more art than science. If you’re leading a sizable development team, you probably rely on intuition and experience for many types of decisions. You get a sense for which coders are better at what tasks, which features are likely to break with a major update, which modules have trouble interfacing with each other. But you don’t have metrics for most of this, or ways to detect shifts in real time. And even if you’re very skilled, it’s not unusual to end up scrambling to make ad-hoc fixes after a new release as users encounter small unforeseen bugs.

This state of the industry is okay, sort of. With talent and a little luck, you can get by. People got by back when weather forecasting meant asking Grandpa how his knee was feeling. But it’s 2019 — we have flying killer robots and autonomous pizza delivery. We can do better.

Measurement and Communication

So how do you achieve codebase transparency? How can you keep track of the strengths and potential weaknesses of your entire codebase at any given time?

Some degree of codebase transparency is possible through good communication alone — the idea being that for every in-use code file in the codebase, there is someone, somewhere in the company, who understands it well enough to give an accurate assessment. But this approach is limited by employee turnover, by the need for perspectives on interfaces and overall structure that no single file’s owner can provide, and by long communication chains of non-standardized assessments.

What’s really needed here (as in the case of meteorology) is a solid set of code quality metrics. This is easier said than done for a whole host of reasons¹, but academics and industry organizations have made significant progress on the problem². Reasonable metrics have been developed for characteristics like understandability, extendibility, flexibility, and reusability. The formulas used to compute these metrics haven’t been standardized across the industry, but even a consistent set of metrics within your own company lets you look across your entire codebase at once and measure every module by the same ruler.
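To make the idea concrete, here is a minimal sketch of the weighted-sum approach that models like QMOOD take: measure a handful of design properties per module, normalize them against the codebase as a whole, and combine them into attribute scores. The property choices, weights, and module names below are illustrative assumptions for this post, not the coefficients from the papers cited in the footnotes.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class ModuleMeasures:
    name: str
    coupling: float    # e.g. count of outgoing dependencies
    cohesion: float    # e.g. an LCOM-style score in [0, 1]
    size: float        # e.g. lines of code
    complexity: float  # e.g. mean cyclomatic complexity

# Illustrative weights only (QMOOD's published coefficients differ):
# a negative weight means the property hurts the attribute.
WEIGHTS = {
    "understandability": {"coupling": -0.33, "cohesion": 0.33,
                          "size": -0.33, "complexity": -0.33},
    "extendibility":     {"coupling": -0.50, "cohesion": 0.25,
                          "size": -0.25, "complexity": -0.25},
}

PROPERTIES = ("coupling", "cohesion", "size", "complexity")

def scores(modules):
    """Score every module on every attribute, normalizing each raw
    measure by its codebase-wide mean so all modules share one ruler."""
    means = {p: mean(getattr(m, p) for m in modules) for p in PROPERTIES}
    results = {}
    for m in modules:
        norm = {p: getattr(m, p) / means[p] if means[p] else 0.0
                for p in PROPERTIES}
        results[m.name] = {
            attr: round(sum(w * norm[p] for p, w in props.items()), 2)
            for attr, props in WEIGHTS.items()
        }
    return results

if __name__ == "__main__":
    codebase = [
        ModuleMeasures("billing", coupling=12, cohesion=0.4,
                       size=2400, complexity=9.1),
        ModuleMeasures("auth", coupling=5, cohesion=0.8,
                       size=900, complexity=4.2),
    ]
    for name, attrs in scores(codebase).items():
        print(name, attrs)
```

The particular weights matter less than the consistency: once every module is scored by the same formula, outliers and trends across the whole codebase become visible at a glance.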

Metrics won’t be a panacea, or a replacement for good communication and sound judgment. But they can give you important information in areas where many development leaders are functionally blind. Just as importantly, they allow your team to see and discuss code quality at every stage of the process, rather than incurring technical debt and paying up later. These capabilities are helpful day-to-day, and they can be absolute game-changers for the long-term trajectory of a software company.

Every discipline begins with intuition and empirical testing, whether medicine, meteorology, or software engineering. These are the right foundations. But at some point you have to move forward, build on those foundations, and develop standards and metrics that give greater insight into every stage of your process. It’s time for software development to undergo this rite of passage.

This article was originally published by Jason McInerney on the Sema blog. Read part two of this post here.

¹ Putting numbers to code quality is notoriously difficult. Part of the problem is that you’re trying to quantify abstract principles such as simplicity and modularity. But the practical aspects are just as problematic: unlike performance testing, where you measure how the code runs today, here you’re concerned not just with how the code does work, but with how it will work across all future updates, changes, and extensions, including the ones you haven’t thought of yet. You ideally want to account for how it might interface with not-yet-written features. You also have to account for how the code will interface with the humans who have to read and understand it in order to make those changes and add new functionality. This is a messy bunch of problems, and it’s no surprise that there’s no consensus on a specific formula.

² Of particular note: Bansiya and Davis’s 2002 paper introducing the Quality Model for Object-Oriented Design (QMOOD), and the Consortium for IT Software Quality’s 2013 paper on quality metrics.
