
Patching Bad Design With More Bad Design Has Become the Software Standard
It never crosses anyone’s mind to return to what worked before.
Introduction
The last 20 years have seen a lot of disturbing trends in software development. Developers have to take instruction from people who know nothing about software; inane methodologies are regarded with reverence; shoddy work to meet unreasonable deadlines is the new normal. The concentration-enabling workplace has given way to high noise levels and constant interruptions.
But for every new problem there is a solution, and it makes things worse. Let me give a few examples.
Software Testing
In the past, software development and testing were the work of separate groups. With good testers this worked well. In classical “black-box” testing, the QA groups worked from specifications and without knowledge of the code, testing for behavioral conformance with the design.
But things went bad; developers came to resent being confronted with their mistakes, so the two groups merged. And the perennial disdain of developers for documents got in the way; few developers can write worth a damn, even fewer can write clearly and thoroughly, and in the end hardly anyone read the documents.
So along came test-driven development, an absurd idea even as conceived and of little use in practice, resting as it does on the crazy notion that developers should have primary responsibility, if not sole responsibility, for testing their own work. So they bring the same blind spots to writing tests that they had in writing the code.
But since writing some tests yields better results than writing no tests, people think the problem is solved. It isn’t. Unit testing is a lot of work and often overlooks all but the most ordinary cases. And since the tests are supposed to be written before coding (why?), they become obsolete within days, if not hours, of the start of coding.
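To make the mechanics concrete, here is a minimal test-first sketch in Python (pytest assumed; parse_qty and its cases are invented for illustration). Under TDD the tests are written before the function exists, and the function is then written to make them pass:

```python
import pytest  # hypothetical choice of runner; any xUnit-style tool works

# Written FIRST, against a parse_qty() that does not exist yet.
def test_parse_qty_plain():
    assert parse_qty("3") == 3

def test_parse_qty_padded():
    assert parse_qty(" 12 ") == 12

# Written SECOND, with just enough code to make the tests pass.
def parse_qty(text: str) -> int:
    """Parse a quantity such as '3' or ' 12 '."""
    return int(text.strip())
```

Note what the sketch also shows: nothing here probes “12x”, the empty string, or a negative quantity. The developer who missed those cases in the code misses them again in the tests.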
And TDD is aflame with fanaticism: people who think that unit tests are more important than products, that documents are as obsolete as horse-drawn carriages, and that only TDD is “true” programming.
But it never crosses anyone’s mind to return to what worked before.
Concentration and Productivity
In the great days of software, the late 1980s and early 1990s, the primary imperative of management was to enable developers to work without interruption. We had single-occupancy offices, one meeting per week, and were instructed never to interrupt anyone busy working without good reason, e.g. the building being on fire.
Success at writing good software is directly linked to being able to maintain unbroken concentration, a condition we called Flow.
We wrote better code, and a lot more of it, with fewer bugs. But Flow is a fragile condition: easily broken and, once broken, usually gone for the day. A meeting destroys it.
But managers coming from the business world resented developers’ expectation of having their own offices and not being mired in meetings, so recognition of the value of Flow disappeared. Productivity plummeted; code quality suffered; schedules slipped.
So then came more layers of procedure: methodologies that inevitably meant more and more meetings, which, being interruptions, just made things worse. Younger developers have likely never experienced this condition, and, having grown up with channel surfing, games, and nothing to encourage extended concentration, may very well be incapable of ever attaining it.
So instead of getting in the zone and writing solid code, they have morning standups, sprints and their ceremonial meetings, layer upon layer of procedure and distraction. It all sounds so very cool, with its new nomenclature (burndowns, retrospectives), but it adds nothing to productivity, and for people who remember actually enjoying writing code, the thrill is gone.
But it never crosses anyone’s mind to return to what worked before.
Package Versions
I’m learning Docker and containers. One of the earliest parts of the introduction is the explanation that different developers may be working with different frameworks or packages, and when their work moves from their own machines to others’ machines or to staging servers, things break: the new environment is running a different version of a package, and the code calling it no longer works.
This should not happen. Not ever.
- New releases should not break old code; they should be backward-compatible with older versions (see the sketch after this list). If that is not possible, then the new and incompatible package should get a new name. People who write breaking code should be in another line of work.
- Everyone working on a project, and every server environment, should be required to run the latest versions of all packages.
It’s hard to imagine anything simpler than this.
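A minimal sketch of the first rule, using a hypothetical library function fetch() (the names and version numbers are invented): new capability arrives behind a defaulted parameter, so every existing call site keeps working, and a genuinely breaking change ships under a new name.

```python
import urllib.request

# v1.0 shipped: fetch(url) -> bytes.
# v1.1 adds a timeout WITHOUT breaking v1.0 callers, because the new
# parameter defaults to the old behavior.
def fetch(url: str, timeout: float = 30.0) -> bytes:
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read()

# A change that would break callers (a different return type) goes out
# under a NEW name; fetch() keeps its old contract forever.
def fetch_with_headers(url: str, timeout: float = 30.0):
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read(), dict(resp.headers.items())
```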
But no. Instead the new Dumb Idea is to ship a copy of every package inside the container. This is not really objectionable in itself; packages are not terabytes in size, and time not spent tracking down new bugs is time saved. But a project using many containers should not be carrying three or four versions of the same package.
In the Component Object Model (COM), interfaces were identified by unique identifiers called GUIDs, and any new version with a breaking change got a new GUID. This was easy and it worked perfectly, but COM has been supplanted by .NET, and now we have too many “stacks” and too many people who want to scent-mark their work by writing something new, inattentive to such considerations as breaking changes.
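The convention translates to a few lines in any language. Below is a rough Python rendering of the idea (the GUIDs and parser interfaces are invented for illustration, not real COM): a published GUID is frozen forever, a breaking revision gets a fresh GUID, and clients asking for the old one are never broken.

```python
import uuid

# Published contracts are frozen; a GUID never changes meaning.
IID_IPARSER  = uuid.UUID("11111111-2222-3333-4444-555555555555")  # v1, frozen
IID_IPARSER2 = uuid.UUID("66666666-7777-8888-9999-aaaaaaaaaaaa")  # breaking revision

class ParserV1:
    def parse(self, text: str) -> int:        # contract: a single integer
        return int(text)

class ParserV2:
    def parse(self, text: str) -> list[int]:  # BREAKING: returns a list now
        return [int(t) for t in text.split()]

_REGISTRY = {IID_IPARSER: ParserV1, IID_IPARSER2: ParserV2}

def query_interface(iid: uuid.UUID):
    # COM-style lookup: old clients pass the old GUID and keep working,
    # no matter how many newer revisions exist.
    return _REGISTRY[iid]()
```

The lookup mirrors COM’s QueryInterface in spirit: the caller names the contract it was built against and gets exactly that.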
The container workaround, by contrast, is a lot of extra work; Docker and Kubernetes are fledgling technologies that require vast amounts of extra preparation, mostly because of the absurdity of incompatible package versions.
But it never crosses anyone’s mind to return to what worked before.