The Trick For Writing Better Software Lies In The Technique
Why we need several techniques to write software efficiently
In the beginning, everything is new. People learn how to use ifs for conditions, build classes and functions, print something to the terminal… It seems very easy to produce software without all those theoretical patterns, practices, and techniques that so many people talk about. It looks like there's no reason to learn any of them.
The problem is that the code somebody writes to do something simple is not the same code they write to do something complex. When coding against complex requirements and circumstances (high stability demands, a shared codebase, big teams), the solution to a task becomes difficult. A simple problem can be solved with an easy solution, but a complex problem most of the time can only be solved with a difficult solution.
A simple problem can be solved with an easy solution, but a complex problem most of the time can only be solved with a difficult solution
Unfortunately, it's very hard for someone who has never experienced some of these complex problems to understand why some of these techniques exist.
David Dunning explains it very well in his own words:
… the skills you need to produce a right answer are exactly the skills you need to recognize what a right answer is
The Dunning-Kruger effect, named after the researchers who identified it, is not restricted to software development. Most complex areas of many industries carry years of experiments that uncover techniques for tackling specific problems arising from complex requirements. It's entirely reasonable to be unaware of most of them; nobody was born knowing things, and some pieces of knowledge are genuinely hard to acquire. Sometimes we need to rely on abstractions, and sometimes the abstractions can haunt us.
Software is inherently complex; humans did not evolve to build it efficiently. As Douglas Crockford likes to say, it is the most complex thing that humans make.
The complexity of software and the complexity of the teams that work on it tend to increase over time. If we always use the simple solution even when the requirements evolve and become complex, we will eventually create code that is so complex it becomes unmanageable. At that point, the "simple" solution becomes the "naive" solution, which is the same as the "wrong" one.
If we always use the simple solution even when the requirements evolve and become complex, we will eventually create code that is so complex it becomes unmanageable
In an interesting talk called "Why Writing Correct Software Is Hard … and why math (alone) won’t help us", Ron Pressler shows how hard it is to objectively verify the correctness of a system:
… we will never find a generally useful category of programs for which verification is always affordable, but we can make the following obvious observation: all programs we care about are created by people, and, if you like, we can consider humanity to be one large Turing machine that generates all of those machines, and it is very likely — in fact, we know it to be the case — that there are patterns in those machines that humanity generates, and if we study them we might be able to get some affordable verification.
Now, we can’t analyze humanity as a program; the only viable method is the empirical study of the programs humans write.
The TL;DR is: we can't analyze mankind as a program, and there's no grand logical answer for proving the correctness of a non-trivial piece of software. He proposes using empiricism to analyze the programs humans write, just as physics does: gathering evidence through the scientific method to establish small pieces of the truth at a time.
We can also look at this from a slightly different point of view, while still leveraging the analogy of humanity as a big Turing machine: if humanity is like a Turing machine that generates all of those machines, then we can also find patterns and potential solutions to common human problems in software development by studying the builders of those machines (software teams). We might be capable of finding patterns that will help pieces of that machine (the members of those teams) produce better software, so that the final programs they generate are more likely to be correct.
If humanity is like a Turing machine that generates all of those machines, then we are also capable of finding patterns that will help the developers of that machine to produce a better output
The difference is that it's not easy to verify those patterns objectively. For many of them, especially the ones that rely on the specific mindset of the humans involved, the scientific method can't help much, because we can't easily confine the experiment to a controlled environment to obtain an objective, reproducible result, although that is possible for certain attributes.
Computer science differs so basically from the other sciences that it has to be viewed as a new species among the sciences …
If we can't use science effectively and we can't objectively determine if an answer is right or wrong then there's no apparent way that we can find a solution and make progress, but just because there isn't an apparent way to find a solution, that doesn't mean that there is no solution
It comes down to the same problem as the "simple" code and the "naive" code. The difference now is that we are not talking about the code that is the output generated by humanity; we are talking about humanity itself: teams and humans.
Let's define human factor complexity as the relationship between the number of members on a team, the complexity of the business requirements they are working on, and the distribution of expertise among them across different topics. High human factor complexity represents more complex and broad requirements, while low human factor complexity represents less complex and narrow requirements.
Human factor complexity is a theoretical measurement of the complexity of a team given the context they are embedded in
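The article defines human factor complexity only qualitatively. Purely as an illustration, here is a toy way to combine the three factors into one rough score; the function name, weights, and scales are all hypothetical, not something the article prescribes:

```python
# Illustrative sketch only: every number and weight here is a made-up
# assumption, chosen just to show how the three factors could combine.

def human_factor_complexity(team_size, requirement_complexity, expertise_spread):
    """Toy score combining the three factors from the definition.

    team_size: number of members on the team
    requirement_complexity: 1 (simple, narrow) .. 10 (complex, broad)
    expertise_spread: 0.0 (everyone knows the same things) .. 1.0 (highly specialized)
    """
    # More members, harder requirements, and more specialization all push
    # coordination cost up; multiplying keeps small teams with simple
    # requirements near the bottom of the scale.
    return team_size * requirement_complexity * (1 + expertise_spread)

# A two-person team with simple, narrow requirements scores low.
low = human_factor_complexity(team_size=2, requirement_complexity=2, expertise_spread=0.1)

# A large, specialized team with broad requirements scores far higher.
high = human_factor_complexity(team_size=12, requirement_complexity=8, expertise_spread=0.8)
```

The exact formula doesn't matter; the point is that the score grows much faster than any single factor, which matches the article's claim that complexity compounds as teams and requirements grow together.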
Let's assume, for example, that somebody is going to build a website for a bakery. The requirements are just to show some common products and a contact phone number; no heavy server-side implementation. In this case, it is reasonable to conclude that the team needs only two members: a designer to create the look of the site and a developer to translate it to the browser. The human factor complexity is low because there are only a couple of members on the team, the requirements are simple, and the owner of the bakery doesn't need to hire the best developers on the market.
Producing the right answer is easy.
Now let's assume, instead, that somebody is going to build a platform that allows every bakery in the country to register its own products dynamically through a server-side CMS. In this case, even if they can start with two members, the complexity of the project means the team will eventually need to grow; they will need a bigger team that shares the development of all the pieces of the codebase to ensure the stability of the platform. The human factor complexity is high because there are many members on the team, the requirements are complex, and the owner of the platform will demand highly skilled individuals who can deliver all of that.
Producing the right answer is hard.
When working with humans and complex requirements, someone can either use the easy solution for a simple problem (building the website for the bakery) or the difficult solution for a complex problem (building the platform). What can't be done is to use the easy solution for the complex problem, or to believe that the easy solution is the right one, because it is not.
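A minimal sketch can make the contrast concrete. All the names and data below are hypothetical, and the article doesn't prescribe any implementation; this just shows how the same "display the products" requirement forces structure at the platform level that the single-bakery site never needed:

```python
# Hypothetical sketch of the same requirement at two complexity levels.

# Easy solution for the simple problem: one bakery, a hardcoded list is enough.
PRODUCTS = ["Sourdough loaf", "Croissant", "Baguette"]

def render_simple_page():
    items = "".join(f"<li>{name}</li>" for name in PRODUCTS)
    return f"<ul>{items}</ul>"

# The complex problem forces concerns the simple one never had:
# many bakeries sharing one platform (multi-tenancy), a shared data
# model, and validation of dynamically registered products.
from dataclasses import dataclass

@dataclass
class Product:
    bakery_id: str
    name: str
    price_cents: int

    def validate(self):
        # Dynamically registered data can't be trusted the way a
        # hardcoded list can.
        if not self.name or self.price_cents < 0:
            raise ValueError("invalid product")

def render_bakery_page(catalog, bakery_id):
    # Each bakery must only ever see its own products.
    own = [p for p in catalog if p.bakery_id == bakery_id]
    items = "".join(f"<li>{p.name}</li>" for p in own)
    return f"<ul>{items}</ul>"
```

Neither version is "better" in the abstract: the first is exactly right for the bakery website, and applying it to the platform (or the second to the bakery) is precisely the mismatch the article warns about.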
For the simple problem there aren't many techniques, because the solution is easy; for the complex problem we need many techniques to tackle it, because the solution is hard
One of the benefits of not being able to objectively determine the effectiveness of a human-based technique is that it opens a door for each team to experiment on themselves and find out whether those practices work in their specific context.
The best way to increase the chances of finding answers when dealing with humans is to adopt a more innovative mindset: thinking outside the box for alternative solutions, without strictly limiting yourself to what can be scientifically proven. Like entrepreneurs who jump into the complexity of the world, trying to build new things without clear evidence that they will work. They try and they fail, but they learn from their mistakes, and eventually they find something new that becomes the solution to a very specific problem, with the potential to change everything.
What we need are not better programmers in the classical sense. What we need are better innovators who can use different techniques to examine day-to-day issues and come up with solutions for problems where traditional means fail to do so.
As Ron Pressler said at the end of his talk at 45:34:
Complexity is essential, it cannot be tamed and there’s no one big answer, the best we can do, in society as in computing is to apply lots of ad-hoc techniques and, of course, try to learn about the particular nature of problems as much as possible.
If software is complex, the trick to doing it better is to learn about the particular nature of everything that surrounds it, including people.
We might find out that "coding" is the simplest of all of our problems.