“The Problem With Software”
I’ve always been interested in finding valuable new software engineering books, and in my article “Top 5 Contemporary Software Engineering Books” I put together a list of my current favourites. “The Problem With Software: Why Smart Engineers Write Bad Code” by Adam Barr (2018) would probably have made it onto that list, had I read it earlier. Here’s a review, along with stories from my personal experience that relate to some of the book’s contents.
Barr presents a long history of computer science and technology and critically discusses the fact that there’s little common agreement on standards and sound software engineering (SE) approaches. He states:
Unlike in other engineering disciplines, having a degree in software engineering does not guarantee that you understand a known corpus of programming tools and techniques, because such a thing does not exist.
The book doesn’t contain a lot of concrete advice such as “If you structure your methods like this, you will improve your software design and write less bad code”. Instead, the author focuses on some of the root causes and explains why the state of the industry is the way it is today. (Spoiler: Among other things, the GOTO statement and the programming language C are responsible for a lot of bad things that happened in software.)
The first couple of chapters read a bit long-winded; to me, the second half was more interesting and thought-provoking. That is probably just because, despite some early exposure to BASIC in my Commodore 64 and Amiga 500 days as a kid, I could relate more to the recent developments from my professional experience. Even the first chapters, however, cite insightful bits of SE research:
“There is a unique maintenance aspect called ‘knowledge recovery’ or ‘program understanding.’ It becomes a major cost component as software ages (assume 50% of both enhancements and defect fixing).” Half your future maintenance costs will be spent relearning the details of your program that you will have forgotten in the meantime!
“…Hence most programmers cannot effectively test their own programs because they cannot bring themselves to form the necessary mental attitude: the attitude of wanting to expose errors.”
Overall, I really enjoyed the balanced, thoughtful writing style and Barr’s vast knowledge of both industry and research. He examines the core question “Is software development really hard, or are software developers not good at it?” from various angles. I also like that he doesn’t claim to know all the answers, which in his case is certainly a form of understatement and humility. (I’ll come back to that at the end of the article.) The author highlights some of his personal challenges, caused by the lack of standard SE knowledge and reflected in statements such as:
I could mentor people on how to navigate the waters of corporate life, but that was generic advice that they could get from anybody. Like others, my guidance was vague: “Well, in this one case I remember this sort of thing worked OK, so why not try that?”
The impact of context in SE projects and a phenomenon he mentions called the “Gell-Mann amnesia effect” become even clearer when Barr says:
If you told members of one Microsoft team about the engineering experience of another team, they would immediately be able to identify — because of their knowledge of the internals of Microsoft — the ways in which that other team was different from their team, and therefore dismiss the guidance as not relevant. Meanwhile, they would happily slurp up guidance on Scrum, even if it was completely inapplicable to their team, because they weren’t aware of the details of the environment in which it had been successful.
In the chapter “Design Thinking” he covers topics such as the benefits of design patterns and explains in which situations they can be useful, for example when you are likely to modify or extend code in the future (since many patterns focus on future extensibility). Their application can be pointless, however, if future changes to a module are unlikely, in which case they would only introduce complexity. Moreover, the chapter contains references to amusing terms from noted people like Joel Spolsky (e.g., “Architecture Astronaut”, somebody seeing broader abstractions and patterns everywhere) and other witty observations:
Although grounded software architects, at this moment in time, are considered better than oxygen-deprived ones, the fact that architects need continual immersion in their team’s current project is another sign that there is not enough accepted knowledge and vocabulary around software engineering.
The author emphasises that “reasonable advice” from books such as “Clean Code” and “The Pragmatic Programmer” doesn’t present “specific approaches”. Software design is unique in the sense that developed systems are hardly comparable to one another. Since we often deal with completely new domains and business problems, the quality of design outcomes varies strongly even among very experienced people. Barr claims that “when you strip away the nonsense from software design, you are left with design patterns and not much else”.
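To make the extensibility trade-off concrete, here’s a minimal sketch (my own hypothetical example, not from the book) of the Strategy pattern: supporting a new discount rule means adding a class rather than editing existing checkout logic. If no new rules ever arrive, though, the extra indirection is pure overhead, and a plain function would do.

```python
from abc import ABC, abstractmethod

# Hypothetical discount example: each pricing rule is a strategy class.
class DiscountStrategy(ABC):
    @abstractmethod
    def apply(self, price: float) -> float: ...

class NoDiscount(DiscountStrategy):
    def apply(self, price: float) -> float:
        return price

class PercentageDiscount(DiscountStrategy):
    def __init__(self, percent: float):
        self.percent = percent

    def apply(self, price: float) -> float:
        return price * (1 - self.percent / 100)

# The checkout code never changes when new strategies are added.
def checkout(price: float, strategy: DiscountStrategy) -> float:
    return strategy.apply(price)

print(checkout(100.0, NoDiscount()))            # 100.0
print(checkout(100.0, PercentageDiscount(20)))  # 80.0
```

Whether this pays off depends entirely on context, which is exactly Barr’s point: without knowing how likely future variants are, the pattern is neither good nor bad advice.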
Somewhat related to context, in the chapter “Your Favorite Language” he mentions that the main issue with our multitude of programming languages and their opinionated designs is that there’s very little guidance on when one language is superior to another for solving a given problem. Similarly, in the chapter “Agile”, he argues that we have little knowledge of when exactly agile methodologies and practices such as Scrum are really valuable. The chapter concludes with:
While Agile may make easy problems a bit easier, it doesn’t help with the hard problems. It’s appealing to programmers, but to make software engineering more of an engineering discipline, something else is needed.
During a couple of years I spent in research and teaching at university, after reading relevant papers and “Making Software” (one of the rare books that attempt to bring SE research to a larger audience), I was surprised to learn how much SE research actually exists. Often, though, it left me somewhat dissatisfied with the conclusions, since I secretly expected universal answers like “No, TDD is not useful” or “Yes, design patterns are valuable”: arguments I hoped to use in my next discussions with colleagues to dismantle their religious convictions and anecdotal evidence. Of course, given the constraints of the experimental designs, the answers were frequently more like “Yes, under these conditions and in that specific context, it can make sense…”. Moreover, I often felt that the study designs didn’t capture software development realities very well (which is one of the main challenges in evidence-based SE).
I particularly liked the final chapter, “The Future”, and the author’s suggestions on what we can actually do to improve the situation. Naturally, a lot of Barr’s ideas have to do with education: how we teach SE at university and how students are prepared for industry jobs. The chapter is inspirational, and thinking back on my own academic experience, the gap between academia and industry has always bothered me. Barr cites Weinberg, who writes:
Software projects done at universities generally don’t have to be maintainable, usable, or testable by another person.
In an attempt to close this gap a bit, a colleague of mine and I came up with a new course concept for our students called “Coding Dojo”. The main idea was to set up a multi-day intensive seminar to teach participants what we regarded as important for later work in industry and what they otherwise wouldn’t learn in the computer science curriculum. So we brainstormed, came up with ideas, printed a flyer to attract students, and built an interactive system for teaching topics like code readability, code smells, and refactoring. We spent our entire Easter holidays designing this real-time system that could present interactive exercises with automatic evaluation, trying to incorporate everything we knew about the topic back then, including material from books such as “Code Complete” and “Refactoring”.
I’m still proud of the concept, and the course was a success (at least in terms of feedback and popularity), but as far as I know it has remained a one-time effort to this day. Furthermore, seminars like “Coding Dojo” wouldn’t help with the problem that students usually don’t have to work with “larger pieces of software”, which according to Barr would expose them early to programs that are “built from connections across API boundaries” and thereby confront them with important development activities such as reading, understanding, and debugging a significantly large system.
There’s lots to be improved in CS and SE curricula, and as Barr argues, specialisation might help, since the entire field has become too wide and complicated to gain a thorough understanding of everything. So rather than making topics like compilers, graphics, and advanced data structures (topics with well-researched fundamentals) mandatory, undergraduate students could choose to concentrate on their preferred subjects. This would help shape their profile for future employers and give the degree more credibility. Barr writes:
Currently, software engineers coming out of college are viewed as fungible; it is expected that any programmer, if found competent by whatever hiring procedure is used, can go work on any part of a program. As software becomes more and more complicated, however, it makes more sense for people to specialise in different areas.
Finally, my favourite piece of advice in the book, which I’ve also heard elsewhere in the past, is “The humble improve”: if you stay curious and keep learning with the attitude of “Despite having years of experience, I don’t claim to be an expert in this topic, and there’s always more to know”, you will keep getting better. Even though that advice may sound straightforward, I’ve been repeatedly stunned by how highly candidates applying for an SE position rate their own skill levels (we ask candidates to fill in a “self-assessment sheet” before a technical interview). My impression is that it’s usually a sign of seniority when people with substantial experience don’t give themselves the highest ratings in particular areas, indicating that they stay intellectually humble and might in fact be the ones who best improve their skills.