Software debugging Part 3: First-principles approach

Sambhu Surya Mohan
3 min read · Mar 29, 2018


This is a technique I use for cases where the code was working fine up to a point. I also use it when writing code with libraries whose documentation is minimal. The first principle, as Wikipedia puts it, is: "A first principle is a basic, foundational, self-evident proposition or assumption that cannot be deduced from any other proposition or assumption." In my words: a first principle is the point at which you are sure that everything is correct.

Whenever I make a mistake and am not able to find out why the code broke, I start from the point where the code was working well and fine, and I walk through the modifications from there to the point where everything went wrong. Version control systems always help me in this. I commit my code even for the smallest changes, so when something comes up that was not there before, I move back through my commits and start from the last good one. I keep track of the stable commit, but I can always check out previously working code and see where the error started.
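Git can even automate this walk with `git bisect`, which binary-searches the commits between a known-good and a known-bad point. A minimal sketch, assuming a hypothetical test script `test_feature.py` that exits 0 when the code behaves and non-zero when it does not:

```python
"""Bisect predicate: exit 0 if the feature works, non-zero otherwise.

Run it across the suspect range with (the commit id is a placeholder):

    git bisect start
    git bisect bad HEAD
    git bisect good <last-known-good-commit>
    git bisect run python test_feature.py

git then binary-searches the history and reports the first bad commit.
"""
import sys

def feature_under_test(x):
    # Stand-in for the real function whose behaviour regressed.
    return x * 2

if __name__ == "__main__":
    # Exit code 0 tells `git bisect run` this commit is good;
    # any other code (except the special 125) marks it bad.
    sys.exit(0 if feature_under_test(21) == 42 else 1)
```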

Sometimes I take out sub-molecular statements and try to identify what went wrong. Atomic statements are usually correct; if not, we have to start with those. So, for an error in a big codebase, I make a small working sample similar to the sub-molecular statement that is causing the problem. If the sample works, I build up from it and replace the original problematic code with the new version.
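As an illustration, here is a minimal sketch of lifting a suspect statement out into a tiny, self-contained sample (the data and names are hypothetical):

```python
# Suspect statement lifted out of a big program, reproduced with
# hand-made data small enough that the expected result is obvious.
records = [{"name": "a", "score": 40},
           {"name": "b", "score": 75},
           {"name": "c", "score": 90}]

# The isolated sub-molecular statement under test: keep the names of
# records whose score is at least 50.
passed = [r["name"] for r in records if r["score"] >= 50]

# If this assertion holds, the statement itself is sound and the bug
# lies elsewhere; if it fails, we have found our starting point.
assert passed == ["b", "c"], passed
print("isolated statement works; safe to build up from here")
```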

Extra points: I sometimes mix mathematical induction into debugging. Instead of working with the whole dataset your code is being tested on, test it on one data point, then on a small batch, and see whether those work fine. If something is wrong, it is much faster to debug and fix at this scale; if it is correct, then by induction the code should work on similar, larger sets of data. The time and effort it takes to make the code batch-capable is very small compared to debugging the code against the whole dataset. And as a good standard, the code should always work with different dataset sizes, so mostly no change is needed in the code anyway.
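A minimal sketch of that scale-up, using a hypothetical `clean` step from a data pipeline:

```python
def clean(values):
    """Strip whitespace and drop empty strings; a hypothetical pipeline step."""
    return [v.strip() for v in values if v.strip()]

# Base case: a single data point, checked by hand.
assert clean(["  hello "]) == ["hello"]

# Inductive step: a small batch covering the edge cases we can think of.
batch = ["a", " b ", "", "   ", "c\n"]
assert clean(batch) == ["a", "b", "c"]

# If both pass, induction says the same code handles the full dataset,
# and any failure would have been cheap to diagnose at this size.
print("single item and small batch behave; scale up with confidence")
```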

First principles in code writing

I use first principles when coding as well. This is not purely debugging information, but I will go through it briefly. Sometimes the library we use does not have the documentation we need, and in those cases I use first principles extensively to write my code. My first principle here starts from the fact that the compiler is never wrong: if the syntax and the semantics are correct, the code should work. Most people get the syntax correct without any problem and then forget about the semantics. A simple example: in an assignment, the variable and the value should be of the same type (keeping aside casting operations). People often puzzle over a failed compilation of code that is syntactically correct; this usually happens when a new library is involved and they are brute-forcing their way through it, trying every object that fits the description even though they may know the concept.

Instead, we can verify what the input and output of a function are, and the type of the variable being assigned, before writing the first line of code. We can use that information to cut down the candidate objects and build from there. It is like assembling a structure from building blocks: if we know which blocks fit exactly with which others, and we know the shape of the building, then it is easy to build.
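In Python this verification can be done directly, with no documentation at all. A minimal sketch using the standard `inspect` module, with `json.loads` standing in for an under-documented library function:

```python
import inspect
import json  # stands in for the under-documented library

# Step 1: ask the function itself what it accepts.
print(inspect.signature(json.loads))
# e.g. (s, *, cls=None, object_hook=None, parse_float=None, ...)

# Step 2: feed it a tiny known input and inspect what comes back.
result = json.loads('{"a": 1}')
print(type(result))  # <class 'dict'>

# Step 3: only now write the real code, since the types on both sides
# of the assignment are confirmed to match.
config: dict = json.loads('{"a": 1}')
```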

I have a lot more to say about this topic, but maybe I will do that some time later. I hope this was useful, and I look forward to your comments on the whole series.

My older articles in the series: Part 1, Part 2
