Published in Geek Culture

Shaping Young Minds with Agile Thinking

From Wikimedia.

On Monday and Wednesday nights, you can find me teaching a class on “software engineering” at the CS master’s program of Saint Louis University. Yes, it’s a real class and yes, it’s accredited for a real degree. I’m sure you’re questioning SLU’s judgement, but you may also undervalue how good I am at selling myself. Heck, I once sold myself Windows 8, hook, line, and stinker.

In this class, my focus is not on algorithms or data or line-by-line code, but instead on the process of engineering, which I would argue is easily the more challenging study. I have interviewed scores of programmers out of master’s programs from “top ranked programs”: Stanford, Carnegie-Mellon, University of Illinois — even my prestigious alma mater, at whose nourishing teat I suckled for four academically challenging years: Truman State University.

I find a vast majority of the time that graduates from these programs are completely ill-prepared to actually build software, which is made all the more frustrating because of how much smarter than me these people are. Don’t get me wrong: the stuff they DO know (algorithms, structures, formal methods) can be extremely valuable — it’s just that they can’t get that stuff down on paper. These fresh graduates are the equivalent of an 8-liter, 645-horsepower V10 Viper engine put in the body of a 1992 Ford Taurus. Wheels spinning, tires smoking, loud as cluster-thunder but sitting stock-still on the starting line: they simply can’t get the power down.

Well, with my two — nope, now three weeks’ experience teaching in an academic setting, I’m going to single-handedly change all of that. I’m going to teach them about over-steer and under-steer, electronic speed control, and apexes. And the first study is, strangely enough, a derivation of Agile.

It’s been very interesting.

From Wikimedia.

As I’ve stated before, most everything cool you stumble upon in software was probably discovered 50 years ago by some neck-beard, and the rule applies here too.

In 1981, Boehm (no, not the garbage collection guy) used real world data to map “cost of errors” to “project phase”:

From Software Engineering Economics.

You may have seen this before. This curve is infamous (poor Boehm) because, as it turns out, what he was actually showing was the Ford Taurus effect: Boehm was measuring the failures of the specific process software engineers were using (Waterfall) rather than some inherent property of software engineering.

This historical digression is all well and good (I find it fascinating) but I admit it is not necessarily helpful to today’s students simply because none of them have prior experience with Waterfall. What’s more, in 2022, none of them are likely to come upon it once they join the fray. The software world has moved on, and there is a 99–100% chance that their future employer will claim to use “Agile”. These students will be forced to use this “Agile Methodology” whether they like it or not. Unfortunately, this means that they will be ambivalent about it. They will never appreciate what it’s doing for them and furthermore, they will never take full advantage of it. I know this because this is what happened to me, and it’s a real shame that it took me years to understand why methodology actually matters.

I need to prove to these students that this matters not in the general but in the specific. I need to make a mental model for them — something simple first and correct second. Something to help them work in concert with Agile and not against it. And to build this model, we need some more graphs.

The Boehm Model was erroneously presented as a first principle, so we need a new first principle. This one will be even firster.

How do errors scale with lines of code (LoC)? Of course we know that zero lines of code have zero errors; that data point is easy. This leads me to propose the following:

I might even argue for an exponential relationship, but regardless — let’s all agree that it is increasing. This is self-evident to anyone that has written a line of code. Furthermore, let’s take a step of faith and say that as software projects age, they generally grow. We can restate using “new errors” rather than total errors to get something like:
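To make the shape of this concrete, here’s a toy model. Every number in it is invented for illustration — the point is only the shape of the relationship: a project that keeps growing keeps producing new errors, sprint after sprint.

```python
# Toy model: new errors arrive in proportion to new lines of code.
# The injection rate below is an invented constant, not measured data.

ERRORS_PER_KLOC = 15  # assumed defect injection rate per 1000 new lines


def new_errors(new_loc: int) -> float:
    """Errors introduced by writing `new_loc` fresh lines of code."""
    return ERRORS_PER_KLOC * new_loc / 1000


# A project that grows every sprint never stops producing new errors.
sprint_growth = [2000, 1500, 1800, 2200]  # new LoC added each sprint
per_sprint = [new_errors(loc) for loc in sprint_growth]

print(per_sprint)       # errors introduced each sprint
print(sum(per_sprint))  # 112.5 errors over four sprints
```

However you tune the constant (or make the relationship exponential), the stream of new errors never dries up as long as the codebase grows.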

Here is the first truth of Agile development: the only constant is error. This is the reality of building software in the modern world, though I concede that there are some types of edge case software (of the space shuttle or other physical systems variety) that must reach the point of negligible new errors. These are edge cases that our simple-first mental model does not handle.

Now I’m going to propose a goal. Instead of the Boehm model of increasing cost, what we actually want is the following:

That is, if we ever want to ship and operate a product, we need a constant cost of fixing errors (or decreasing — but let’s not kid ourselves). Otherwise, a software project will become untenable to maintain. How do we get here? Let’s restate the question with a simple, visual model:

This looks trivial. And it would be, if there were some sort of constant way to address errors. Unfortunately, “errors” are by their nature unknown, so saying anything sure about their resolution is… difficult. We all have a Jira ticket number seared into our brains for the Bug That Must Not Be Named. Your skin crawls at the mention of that number. It took the lives of friends and colleagues — it almost took your own. You escaped with only your life and a slightly better understanding of character encoding. This class of bug took 100x the effort of the tenable class.

Furthermore, although our goal is “constant time error fixing”, ideally we also need to find a way to make that constant as low as possible.
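A little invented arithmetic shows why the constant matters so much. If the cost of fixing each error grows with the age of the project (the Boehm/Waterfall curve), total maintenance cost blows up quadratically; hold it constant and cost stays linear in the project’s lifetime:

```python
# Toy comparison (all numbers invented): total cost of fixing a
# steady stream of errors when per-error cost is constant vs. growing.

SPRINTS = 20
ERRORS_PER_SPRINT = 10

# Constant cost: every error costs 1 unit, no matter when it's fixed.
constant_total = sum(ERRORS_PER_SPRINT * 1 for _ in range(SPRINTS))

# Boehm-style growth: cost per error rises with project age.
growing_total = sum(ERRORS_PER_SPRINT * (sprint + 1)
                    for sprint in range(SPRINTS))

print(constant_total)  # 200  -- linear in project length
print(growing_total)   # 2100 -- quadratic blowup
```

Same number of errors in both cases; the only difference is when-you-fix-it pricing, and it’s a 10x difference after just twenty sprints.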

Agile proposes a novel approach to thinking about this equation. Agile proposes that the cost of addressing error is proportional to the speed of the feedback loop.

This is our second truth: Short feedback loops lower the cost of errors.

Now we can see why Waterfall had ever increasing costs! The feedback loop grew longer and longer because of the process, and thus, the cost of errors skyrocketed. Let’s plug this into the equation to get a full mental model.
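Here’s the whole thing plugged together as a sketch, with invented numbers again: cost per error is proportional to the feedback loop length at the moment the error is found. Waterfall’s loop stretches as the project marches through phases; Agile’s short sprints hold it roughly constant.

```python
# Full toy model (all constants invented): the cost of an error is
# proportional to the length of the feedback loop when it's found.

ERRORS_PER_SPRINT = 10
COST_PER_LOOP_DAY = 1  # assumed: one error costs ~ loop length in days


def total_cost(loop_lengths):
    """Sum the cost of each period's errors, where cost per error
    is proportional to that period's feedback loop length."""
    return sum(ERRORS_PER_SPRINT * COST_PER_LOOP_DAY * loop
               for loop in loop_lengths)


# Waterfall: the loop grows as the project moves through phases.
waterfall_loops = [1, 5, 20, 60, 120]  # days from mistake to feedback
# Agile: short sprints keep the loop roughly constant.
agile_loops = [2, 2, 2, 2, 2]

print(total_cost(waterfall_loops))  # 2060
print(total_cost(agile_loops))      # 100
```

Same error stream, wildly different bills — and the only knob we turned was the feedback loop.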

Now we have a complete image in our heads of one of the most powerful results of Agile — a visual expression of what Agile can actually do in practice. This gives meaning and motivation to the practices of experienced engineers that are never clearly explained to the grasshopper.

Ah, so this is why we do short sprints. This is why we’re delivering builds so freaking often. This is why CI/CD is so important. This is why fast unit tests are important. This is why git mastery is so important. This is why your IDE hotkeys matter. This is why touch typing is important. This is why I need to care about how I sit in this dumb chair and where exactly the monitor is positioned and how often I touch my mouse.

You see, properly appreciated, the Agile feedback loop affects things all the way down.

A final egg

You may have noticed something here. This is like the Easter egg at the end of a Marvel movie (but not like the stinker at the end of The Eternals — can we please expunge that one from the record?). It vaguely hints at a sequel.

There’s an unmentioned intersection point here. What’s that doing there? It needs to remain unmentioned for now because, as it turns out, Agile doesn’t have much to say about when to stop working on your feedback loop. How fast is fast enough? How do I approach feedback on my feedback loop? These are questions that Agile doesn’t have the answers to — not directly anyway. For that, we need a combination of hands-on experience and a look at another practical philosophy. I’ll let you know when I’ve mastered teaching that part.
