Agile: The Pope at the Dark Ages of Software Development

Marcelo Sousa
Published in The Startup
Aug 26, 2019

Photo by Paul Gilmore on Unsplash

Agile, the software development methodology turned into a religion, is built on the fallacy that the complexity inherent in the fuzzy collective intelligence of a group of people, a.k.a. thin air, can be broken up into clearly defined simple tasks.

It is a brilliant business strategy, as it shifts the error rate from the client side into the process itself. We have fewer unhappy clients because they are now part of that collective intelligence. What happens if the whole thing blows up? Well, the process can’t be blamed; remember that the client is never wrong. So the blame is shifted onto some stakeholders, and those are usually the software developers. Their way of coping with this social phenomenon has been to focus on speed of execution and on the creation of a myth: the 10x engineer.

However, humans (as opposed to machines) are not built for speed, and rushing a whole evolutionary process has unpredictable results. What we observe nowadays is that technical and intellectual debt shows up the way exponentials do: suddenly and overwhelmingly. This is happening not only with individual software components but also across the entire software development industry, because these components are all connected in some way. The community brands these as scalability problems, but they are just more evidence that Agile is ultimately a business methodology, even though many treat it as a technical one.

The quality of Agile-built software is pretty low, as shown by the incredible growth of the industry that tackles the problems associated with software quality. Quality usually comes down to two main aspects: maintainability and reliability.

Maintainability concerns the evolution of the software. It questions whether the software is being built in a way that makes it easy to identify:

  1. Faulty components in the process of bug fixing;
  2. Connections between components so that new features can be added on top of them.

Both aspects rely upon a basic task in human understanding: search. The problem is that widely used programming languages were not designed with search in mind. Thus, the low-level tooling available to developers is not well suited to this fundamental task.
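
To make this concrete, here is a minimal Python sketch (the toy code base and the function names are made up for illustration) contrasting the plain-text search most low-level tooling gives us with the structural search that the maintainability questions above actually demand:

import ast
import textwrap

# A toy code base as plain text; the function names are hypothetical.
SOURCE = textwrap.dedent("""
    def charge(account, amount):
        # charge the account
        return account - amount

    def refund(account, amount):
        return charge(account, -amount)
""")

# 1. What most tooling offers: plain-text search.
#    It cannot tell a definition from a call site or a comment.
text_hits = [i + 1 for i, line in enumerate(SOURCE.splitlines())
             if "charge" in line]
print("lines mentioning 'charge':", text_hits)

# 2. What bug fixing and feature work actually need: structural search.
#    Walking the syntax tree separates the definition from its call sites.
tree = ast.parse(SOURCE)
defs = [n.lineno for n in ast.walk(tree)
        if isinstance(n, ast.FunctionDef) and n.name == "charge"]
calls = [n.lineno for n in ast.walk(tree)
         if isinstance(n, ast.Call)
         and isinstance(n.func, ast.Name) and n.func.id == "charge"]
print("definition of 'charge':", defs, "call sites:", calls)

Even this tiny example shows the gap: the language happens to expose its own syntax tree, but for most day-to-day questions about a code base, developers are still effectively grepping strings.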

Reliability concerns the execution aspects of the software, the most important usually being resource performance, reproducibility and security. It questions whether the software is:

  1. Cheap to execute — at the end of the day, the memory-runtime trade-off is ultimately decided by those who pay the bills. Software developers tend to be very confused about this and tie their own market value to their creative ability to solve this optimisation problem. This is a risky bet considering that the cost of hardware will keep decreasing and there is no real momentum among developers to tackle climate change — the servers will keep burning.
  2. Consistent over executions — this property is so tricky that I believe we are starting to give up on it (see the small sketch after this list). This might actually be a good thing, since we are dabbling in treating machines as humans and vice versa in so many ways, e.g. widespread processing of streams of events, reactive systems, probabilistic programming under the umbrella of machine learning, etc. This will continue to present new challenges for software development, as human knowledge of the execution will continue to decrease.
  3. Trustworthy — we are going through another shift in the code vs infrastructure vs data valuation that defines what it means for a piece of software to be secure. It is common sense that software is not just the code that is executed but also the entire ecosystem around it: the infrastructure where it runs and the data that it consumes and generates. In the early decades of software development, the patent war era, most of the value was associated with the code itself, provided you had some machine to execute it on. It was also a time of adoption where the provider of the machine and the provider of the code were one and the same. Security and trustworthiness were almost reduced to the previous considerations of resource performance and reproducibility. We then saw the hardware explosion in two stages: machines continuously shrinking into every room of every building and every piece of clothing, and the creation of behemoth machines filling entire floors, which became economically feasible after the dot-com bubble burst. At this stage, code was still quite valuable because you had more infrastructure and needed specialised code for it — you need train manufacturers once you build a railway. However, a hidden dependency was created: a software developer working on such specialised infrastructure became hooked on the infrastructure itself. With the natural increase in the complexity of the infrastructure, this dependency grew stronger until it was accepted as the norm. Security concerns shifted from the code to the infrastructure, as the most important thing was for it to be available all the time for whatever code it had to run. Whatever happens inside the train is not a concern of the railway company as long as the movement of the other trains is not impacted. Then a magical thing happened: machines and their accompanying software passed the threshold of trust at which an average human believes that putting all of their information into them will result in a better quality of life. Somehow we started to believe that by printing our personal information on the train ticket and showing it to the validation machine upon boarding, we would get a better seat. For a while this was true: the consumer became a sort of angel investor — the early adopter of a technology did get the benefits of its products, could influence its development and dramatically improve their own productivity. This fundamentally shifted the valuation towards the data about the consumers. The complex situation we are facing is this phenomenon at a global scale, combined with the fact that code and its infrastructure are not real! In fact, the only thing of real value in the triad code-infrastructure-data is the data. And not any kind of data, but data about real events, people, locations, etc. Since virtually every software component contains some personal data about its users (usage data is personal), we are now living in a time where it is safer for a service to crash than to leak information.
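
As a small illustration of the second point, here is a minimal Python sketch — a contrived race condition, not taken from any real system — where nothing in the program text tells you what a particular execution will print:

import threading

# Two threads increment a shared counter without synchronisation.
# The read and the write are separate steps, so updates can be lost
# and the final value can differ from one execution to the next.
counter = 0

def work(iterations):
    global counter
    for _ in range(iterations):
        current = counter        # read the shared state
        counter = current + 1    # write it back; another thread may have run in between

threads = [threading.Thread(target=work, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Deterministically we would expect 200000; in practice the result varies.
print("expected 200000, got", counter)

Scaled up to streams of events, distributed infrastructure and machine-learned components, this is exactly the loss of knowledge about the execution described above.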

Business practices encouraged this optimisation for speed of execution preached by Agile evangelists. This is quite short-sighted and does not consider the global ecosystem where these components now live. Usually, by the time a company faces either scalability problems or trustworthiness problems, it is already perceived as a success — founders and especially investors could not care less about these problems until they prevent them from achieving their business metrics. The sensible thing to do then is to go for a “successful exit” and/or start over. Unfortunately, we do not yet know how to recycle code, infrastructure and their data. As for our own data, the only way to recycle it is through time and new experiences, so in the end this increased speed of execution might turn into a slowdown. Because unreal things have real consequences, we are starting to see the nefarious effects of this policy of iteration on the world.

It is clear that the Agile way of developing software will become irrelevant because it is incompatible with data-driven systems working at scale, which are becoming increasingly important. Working on improving these processes is like trying to fix a broken bridge when the flood is about to hit. We are still very much in the Dark Ages of software development, where Agile is the ruling Pope.

Marcelo Sousa

CEO & co-founder @ Explore.dev. Building tools for humans and machines! More info at https://explore.dev/