Typical programmer’s mistakes
Software projects going wrong can’t always be blamed on bad management — I’ll try to identify some typical programmer errors, made not only by juniors but often by seniors as well.
I don’t intend to analyze bad programming practices; rather, I’ll talk about some mistakes related to soft skills.
Before starting, just a quick manifesto on the programmer’s roles. A programmer isn’t only a technician but generally a software engineer, considering that he/she:
- operates in several stacks of software: interface, business logic, database, integrations …
- may have different responsibilities: devops, tester, interface designer, etc.
- traverses the various phases of a project: pre-study, solution design, implementation, support, etc.
The programmer is also a manager because:
- a boss isn’t always present;
- the boss shouldn’t perform micromanagement;
- there are methodologies in which the boss role doesn’t exist (e.g. Scrum) and everyone is “acting as the boss”.
On the other hand, a programmer is a scientist who proposes, tests, and accepts or discards hypotheses, researches, makes decisions based on facts and shows skepticism about unfounded assertions.
Finally, the programmer is a consultant, using experience to recommend solutions, and a social being who must respect the client and the project’s environment. Without that, he/she’s just a “code monkey”.
I started the article speaking of guilt. However, we shouldn’t blame those who make mistakes, but rather those who lack the humility to recognize and fix them. Good programmers are humble to continuously improve, recognize shortcomings and never believe to know everything.
I’ll do it in two hours tops.
Sometimes programmers fail to estimate out of pride: they promise to deliver something quickly when, in reality, that would happen only under ideal conditions (which are rare, as there are always uncertainties). Estimating is difficult per se (Scrum, with its relative rather than absolute estimates, can help), and recognizing that is the first step to alleviating it.
Admittedly, programmers are often pressured into promising impossible deadlines. However, when these aren’t met in the end, it’s the managers/company who lose (with the project impacted and the team exhausted). If the end result is the same anyway, wouldn’t it have been better to be realistic from the start? Would you prefer a happy illusion or the solid realism of honest deadlines? What about the danger of creating false expectations? Programmers should try, progressively, to instill these ideas into managers.
I have nothing to do with it.
Some programmers get so embedded in the coding that they lose track of the features — whether because they’re doing something very complex, they’re enjoying it a lot, or they just don’t care. Perhaps no one explained the business they work for, but sometimes they don’t seek to know it either. However, programmers are, above all, software engineers, hired to solve business problems, not simply to code. Programmers should aim to know the “whys” behind the features.
Delegation of fault
Programming, however rewarding, is just a tool at the service of the project/product and therefore the company/customer. Ultimately, it’s done for the purposes of the user (especially in information systems), so our actions should envision users directly or indirectly.
He should have the browser updated.
They are not very intelligent.
Didn’t he see the button!?
Blaming the user, besides doing nothing to solve the problem, excuses the real culprits, who failed to know the target audience. It also devalues the profession, since users are the reason information systems and software engineers exist.
That code isn’t mine.
On the other hand, there are cases in which a programmer resists changing something, claiming ignorance of a subject to evade a task. After some time, he/she acquires the so-called “tunnel vision” of the code and/or the business. What programmers must realize is that, at the end of the day, all code belongs to everyone and everyone belongs to the company. Everyone should strive for the good of the project, so there shouldn’t be “reserved areas” in the code. The good of one is the good of all, and personal agendas shouldn’t interfere with the project’s success. Dismissing guilt or responsibility adds no value; usually, what’s sought isn’t a culprit but a solution.
Fear of refactoring
Given time pressure or the fear of breaking something, many programmers choose to change “as little as possible”. When a change is needed, they end up “hammering” something in, copy-pasting code, and resorting to other bad practices. The codebase becomes “spaghetti” that no one else can maintain.
As in similar situations, there must be a balance. I don’t advocate refactoring everything all the time, but good code welcomes change, whether desired or imposed, so it’s better to accept change than to deny it. Certainly, there are projects with a well-defined end date, but even these enter a maintenance phase and have new functionality added from time to time.
To avoid creating the fear of change, one must reduce the risk associated with it:
- Avoid mixing a refactoring with a feature; it’s fine to “boy scout” the code but try to isolate it into separate commits;
- Good code speaks for itself: it’s simple, structured and elegant enough to be prepared to be modified — it doesn’t depend on comments to be changed;
- There should be good test coverage to verify that everything keeps working after each change; if there are no tests for the code to be refactored, start by writing them (as in TDD);
- Consider the setup of test and staging environments;
- It should be easy to go back if something negative happens (e.g. reverting code easily, performing simple and quick deploys);
- All programmers should try to know all the code (pair programming and code reviews are decisive here);
- Programmers should be familiar with the project architecture (and the system where it belongs) before changing its code (wiki pages with videos and diagrams can help).
The Joel Test covers most of these measures.
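To make the testing point concrete, here is a minimal sketch of pinning down current behavior with characterization tests before refactoring. The function `normalize_name` and its behavior are hypothetical examples, not from any real project:

```python
def normalize_name(raw):
    # Legacy code we want to refactor:
    # strips whitespace, collapses spaces, and title-cases each word.
    return " ".join(part.capitalize() for part in raw.strip().split())

# Characterization tests: capture what the code does TODAY,
# so that after the refactoring we can verify nothing changed.
def test_normalize_name():
    assert normalize_name("  ada lovelace ") == "Ada Lovelace"
    assert normalize_name("GRACE HOPPER") == "Grace Hopper"
    assert normalize_name("") == ""

test_normalize_name()
```

Only once these tests are green do you rewrite the function’s internals; the tests, committed separately from the refactoring itself, are the safety net that removes the fear of change.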
Being technology driven
Let’s convert everything to Swift.
It’s common for the programmer to suffer from “technology enthusiasm syndrome”, whether because of his/her comfort zone or some random whim. As a consequence, he/she ends up preferring or even forcing the use of certain technologies, even if it means changing a whole project. It’s okay to be an enthusiast; in fact, it may even be important. However, technologies aren’t toys — one should make an impartial analysis of the various technologies before choosing one. Here are some false arguments for making a decision:
I’ve used this in many projects.
He already has a lot of experience; let’s do it.
I’m really curious to try it.
I feel it’s the ideal solution.
I read on a blog and it’s just awesome!
This is what Google uses.
Everybody uses this now!
There are no universal solutions and the technological arena is constantly changing, so it’s probably always necessary to question previous decisions.
A good programmer, even an enthusiastic one, knows how to slow down when he/she loves a certain technology or discovers a bleeding-edge framework. He/she knows that business comes before technology and that the fewer technologies a project has, the better. A bad programmer fills a project with dozens of libraries and frameworks, making it very sensitive to changes, with numerous dependencies and potential points of failure.
Dismissing paper drawings
Drawing on the PC is much better.
It’s easy to fall into the trap of thinking that everything has been solved in your head. This applies for example to an algorithm, a GUI, or a microservice architecture.
Who doesn’t remember high school, where a simple drawing helped to solve a math or chemistry problem? Being able to visualize something complex in images is decisive for the solution and to the way to get there.
In a world full of technicians and politicians all having different levels of understanding, a graphic representation was often the only way to make a point; a single plummeting graphic usually aroused ten times the reaction inspired by volumes of spreadsheets.
Dan Brown, Digital Fortress (1998)
By drawings I mean simple low-fidelity prototypes (wireframes), flowcharts, and other sketches. I don’t mean using Visio or Photoshop, but rather paper/whiteboard and pen/pencil, for example, to draw:
- Flowcharts and activity diagrams to express algorithms, interactions and other flows;
- Low-fidelity non-functional prototypes (mockups or throwaway prototypes) to illustrate graphical user interfaces (GUIs);
- Class diagrams to specify architectural concepts.
Drawings encourage discussion and iteration on the problem by putting it into a (visual) language, much like talking a problem through with someone. This allows ideas without a future to be abandoned early and interesting ones to be reinforced. Having a cohesive mental model of the solution lets you be much more objective when putting it into practice.
On the other hand, drawings promote communication: it’s much better to present an idea to someone in a graphical rather than a textual form. It’s also easier for two or more people to construct a similar mental model of the solution instead of each having their own.
As a result, there are cost savings. A drawing that took 20 minutes can save programmers and the company much more time by avoiding unnecessary refactorings, and the code is probably of better quality because you had a clearer idea of the problem early on.
No pair programming
The pair programming technique is valuable and should be applied more. It implies that two programmers work on the same computer (two mice/keyboards/monitors; one PC). Typical managers argue:
Why have two people in the same place when they could both be programming alone?
The point is that a programmer is not a machine; he/she makes mistakes, gets distracted, and has neither linear productivity nor linear motivation. Having someone (the navigator) next to you (the driver) allows you to:
- Improve focus;
- Collaborate and make fewer errors (reducing lengthy PR discussions);
- Share more knowledge of the code and the systems, and therefore achieve more coder redundancy;
- Get blocked less often on problems that others know how to solve.
It’s not the purpose of this article to explain this technique so I recommend you read more about it. The main argument is that the programmers’ bottleneck isn’t the lines they can write per minute.
Accepting without questioning
Because we always did it this way.
This is perhaps the most sensitive topic, as it deals with comfort zones, and it’s never trivial to question managers’ decisions. However, when you’re assigned a task, you shouldn’t just execute it. You should first ask some basic questions, like “does it make sense?”, “is that really what you want?”, and “why is it like this?”. Only after the task passes this initial validity test should you move forward.
It’s even worse when someone says not only what to do, but how to do it. Programmers should, using logical arguments, discuss whether that’s the best way to solve the problem, suggest better ways of acting, and question the methodologies and technologies in use.
Acting like a computer
Programmers aren’t computers. Since we deal with machines so often, we come to believe we are one, especially in terms of memory. Perhaps you often say:
I can’t forget this.
… but stress and workload accumulate, and some things are left behind.
Some tasks are so tiny that they don’t deserve the overhead of existing in the issue tracking system (like Jira or Pivotal Tracker): for example, details someone suggests in the hallway, or improvements we remember out of nowhere (but can’t code at the moment).
For this, simple lists (to-do lists, checklists, etc.) can be very productive. A notebook or post-its suffice, but if you prefer an electronic version, you can use Google Keep or Trello (or a similar utility); both support lists and are sleek and collaborative.
There’s also a beneficial psychological effect: the urge to complete a list is always present (probably linked to collecting). We’re motivated to see the list get checked off as we complete it.
In short, what not to do:
- Have a culture that punishes refactoring and constant improvement;
- Estimate based on guesswork or excessive optimism: it’s better for programmers and managers to be realistic;
- Be driven only by technology: information systems must meet and serve the needs of businesses and users, not the other way around — people and companies should not have to adapt to information systems;
- Lack the initiative to know the business, the customer and the users (and, worse, blame the latter);
- Devalue seemingly rudimentary techniques such as paper prototyping, flowcharts and basic to do lists;
- Accept tasks, technologies and methodologies without ever questioning them.