I’ve long been a little annoyed by the term “computer science”. It’s the standard term for the college program that teaches programming, but it’s kind of a lie in my opinion, because most of programming is not science in the formal sense.
There are many terms that I am comfortable applying to programming:
- Above all, programming is engineering. You are taking an established set of procedures, guidelines and disciplines, and applying them to new problems. It’s often a rather ill-disciplined branch of engineering (I certainly know engineers who sneer at it as a result), but done well I think the term applies, so I tend to prefer “software engineering” to “computer science”.
- It’s worth thinking of it also as a craft. This is the positive side of that rather weak discipline: since each problem is a little different, programming generally doesn’t quite reduce to rules, but taking the care of a serious craftsman then becomes really important. It’s not just a matter of knowing the bits and pieces — there’s a lot of practice involved, in mastering how to put them together well in a host of different configurations. (I sometimes say that I learned to program in the best medieval fashion, apprenticing to my father in his craft back when I was 14.)
- I’m even comfortable describing programming as an art. Certainly this isn’t true for everyone, and it’s really an art that only speaks to other programmers, but when I look at code it totally engages the aesthetic side of my brain. When I use words like “pretty”, “elegant” or “ugly” to describe a block of code, I’m not speaking metaphorically: it’s as true for me as when I am looking at a painting or sculpture.
But “science”? Yes, people often lazily use that word as a synonym for “technology”, but IMO that’s misleading. Science isn’t about technology. Science is a rather specific technique for exploring the world and coming to an understanding of it.
There are a number of variations in how people describe the scientific method (I’m mainly following Popper here — you can adjust the following based on other versions), but roughly speaking, it goes like this:
- Start with a question or problem that you are trying to understand.
- Gather data about that question.
- Based on the data, formulate hypotheses about what might be going on.
- Come up with falsifiable experiments that either support or disprove your hypothesis.
- Do those experiments.
- Analyze the results, and based on those decide whether you are off-base or whether it is worth following this line of thought.
That really doesn’t sound much like programming, does it? Well, mostly not, but there’s one big exception:
Debugging, done right, is exactly science.
This is a nuance that most senior programmers know intuitively, but junior ones sometimes struggle with, because many educational programs pay little or no attention to the process of debugging. Which is a pity, because it is one of the most common activities that all programmers perform, and worth understanding.
When confronted with a bug, folks sometimes just flail at it — they try changing things willy-nilly, they make guesses, sometimes it just starts working and they declare victory despite not really understanding what was wrong or why it is now right.
Instead, I commend the scientific method to you when you are confronted by a bug. It requires a slightly different lens in this case, but hopefully you’ll be able to see the resemblance to the above:
- To begin with, you have a problem you need to solve: the bug in question.
- Start by gathering data. So much data. All the data. This is where junior engineers often stumble, because it feels like you surely must be wasting time on all this data gathering, but it’s often necessary to do quite a lot. Look at the errors. Talk to users about what they are seeing. Add print statements. (Yes, they’re old-fashioned, but there’s still often nothing quite so useful as a bunch of print statements showing you exactly what the program is actually doing.)
- As you gather the data, keep looking in it for patterns. (When necessary, pull in other programmers to act as rubber-ducks in your search for patterns.) Start trying to come up with hypotheses about what might be going on, but keep comparing them with the data to see if they fit. If not, keep gathering data. How long does this take? There is no set answer — I’ve had bugs where I spotted the pattern in five minutes, and others where it took five months. But there’s nothing for it but to keep gathering new-and-different data, and seeing what you can see in it.
- Once you have a hypothesis, figure out how to test it. Yes, you can just dive in and fix it, but ideally be more formal about it. The best approach, if you have the luxury of it, is to start by writing a failing regression test that illustrates the problem, and which, if your hypothesis is correct, will pass once you have fixed the bug.
- Then, with a nicely falsifiable experiment in hand, you fix the bug. If the test starts to pass — great! You have good evidence not only that you have fixed the bug, but that you understand what you just did and why it works. And if not, you’ve disproved your hypothesis, so it’s time to step back and come up with another one.
That latter bit is critical, and is worth underscoring. When something was failing, and is now working, and you don’t know why, mixed emotions are appropriate: it’s great that you don’t have a bug staring at you, but deeply uncomfortable that you can’t be sure that it won’t come back. What the scientific approach to debugging gives you is comprehension of what the problem was, which helps you avoid it and related bugs in the future.
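As a minimal sketch of that test-first approach, here is roughly what it might look like in Python. Everything here is invented for illustration: `page_count` stands in for some hypothetical helper that had an off-by-one bug, and the hypothesis under test is that a trailing partial page was being dropped.

```python
# A minimal sketch of the "failing regression test first" pattern.
# page_count is a hypothetical helper: it reports how many pages are
# needed to display a given number of items.

def page_count(total_items, page_size):
    # Fixed version. The hypothetical buggy original was:
    #   return total_items // page_size
    # which silently dropped the trailing partial page.
    return (total_items + page_size - 1) // page_size

def test_partial_page_is_counted():
    # Written *before* the fix, this test failed against the buggy
    # version (which returned 2), supporting the hypothesis.
    assert page_count(25, 10) == 3

def test_exact_multiple_unchanged():
    # Guard against overcorrecting: exact multiples must stay the same.
    assert page_count(20, 10) == 2

test_partial_page_is_counted()
test_exact_multiple_unchanged()
```

The point of writing the test first is that it is the falsifiable experiment: if it passes after the fix, the hypothesis is supported; if it still fails, the hypothesis is wrong, and it’s time to step back and gather more data rather than declare victory.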
Finally, there’s a common misconception about the scientific method that also totally applies to debugging: “proof” is rarely absolute. The more tests you have, the more confidence you can have that you know what you’re doing, but don’t fall into the hubris of believing that you completely understand all the possibilities. That’s occasionally possible in programming (a little more so with modern, more-rigorous FP approaches), but usually the goal should be to advance your understanding, and gain more confidence in it. Don’t let that go to your head, though — the next time something goes wrong, and you wind up with data that contradicts your understanding, you should start by believing that data, and doing more experiments to determine the reality of the situation. Don’t ever assume that you know better than the empirical data: what is actually happening when the program runs.
If you haven’t done it before, I recommend trying this approach out the next time you hit a bug. It can feel a little slow and laborious, but that formality can often save a lot of time in the long run, because it often leads to better bug fixes, and a better understanding of your code.