Image by geralt on Pixabay

Why programming is hard

Jos Visser
11 min read · Feb 26, 2019


In the office chill-zone otherwise known as the weeks around Christmas and New Year I was determined to crank out a bunch of high-quality code to solve a Problem. I understood the Problem and I had an Idea how to fix it. It was a good Idea, modeled after a best-of-breed solution that has been around for ages. Not entirely trivial, but definitely not rocket science. Nothing that hasn’t been done before.

I was going to use version 3 of a programming language that I have been using for over five years now. I did not have a lot of experience with version 3, but I had been using version 2 extensively and version 3 is not that different. On top of the programming language I was going to use a Library for making server-to-server remote procedure calls. Everyone at my employer uses the Library all the time so I expected few problems. In short: no roadblocks anywhere in sight; no trouble on the horizon. The perfect way to spend the quiet days of the Yule and the first few days of the new year.

Fast forward two weeks. I got it working alright but it’s just not fast enough (and we already have the fastest computers and networks that money can buy). And in my subsequent drive to make the program faster I kept running into one roadblock after another. So this Thing that is not rocket science, written with programming languages and tools that are relatively well understood and supported, is still not finished. And that’s not because I am a moron: I have been programming for over three decades and have designed and implemented high-performance web and network services at a well-known search engine company for more than a decade.

The issue is just this: Programming is hard!

But why?

Let’s start at the beginning: A computer is an idiot box. It can execute only the simplest of instructions natively, like: Add 42 to this number and then store it in memory location one million two hundred forty-three thousand two hundred and six. Or: Compare the number on line twelve in your internal notebook with the number on line fourteen, and if they are the same please continue execution with the instruction at the current instruction memory location plus thirty-two. If you want to get a feel for what that is like, try a game like the Human Resource Machine :-) (Highly recommended even if you can already code. If you consider yourself a bit of a hacker, try ExaPunks.)
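To see what this looks like from the machine’s side, here is a quick peek (in Python, which conveniently ships a disassembler) at the simple instructions hiding behind a one-line calculation:

```python
import dis

def add_42(x):
    return x + 42

# Ask Python to show the simple instructions it generates for us:
# fetch x, fetch the constant 42, add them, hand back the result.
# (The exact instruction names vary a little between Python versions.)
dis.dis(add_42)
```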

Unfortunately not many real world problems can be solved directly by the instructions that the idiot box understands. Even higher level programming languages that seem to support more complicated instructions (such as calculating the square root of something, or converting a line of text in all upper case to all lower case) do not have helpful instructions like SolveMyProblemPlease. Because of this limitation every real world problem needs to be decomposed and then decomposed again until we have written down the entire process in instructions so simple that even the idiot box can understand them.
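To make the decomposition concrete, here is a sketch of “convert upper case to lower case” spelled out in idiot-box-sized steps; for simplicity it only handles unaccented English letters:

```python
def to_lower(text):
    # No SolveMyProblemPlease instruction exists, so we spell it out:
    # walk the characters one by one, turn each into its number,
    # and shift the ones that fall in the 'A'..'Z' range.
    result = []
    for ch in text:
        code = ord(ch)                    # character -> number
        if ord('A') <= code <= ord('Z'):  # is it an uppercase letter?
            code += ord('a') - ord('A')   # hop over to the lowercase range
        result.append(chr(code))          # number -> character
    return ''.join(result)

print(to_lower("HELLO, World!"))  # hello, world!
```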

This might seem tedious (it sometimes is), but why is it hard?

One of the reasons that this is hard is that while decomposing the problem into myriads of simple instructions it matters how many of these simple instructions we generate. Modern computers can execute billions of instructions per second, but if we need hundreds of billions of instructions to achieve a particular goal it still takes time. For years we could hope that buying a faster computer could solve our problem, but alas these days seem to be at an end, just at a time when newer applications (like artificial intelligence and machine learning) require more and more computing power than ever. That means that a smarter translation of the problem into fewer instructions is often better if not outright required. And that smarter translation often requires non-trivial insights from the world of mathematics.
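A tiny example of such a mathematical insight, sketched in Python: adding up the numbers from 1 to n costs n additions the obvious way, but only a handful of instructions using the formula Gauss reputedly found as a schoolboy:

```python
def sum_slow(n):
    # The obvious decomposition: one addition per number, n in total.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_fast(n):
    # Gauss's insight: the same answer in a handful of instructions,
    # no matter how large n gets.
    return n * (n + 1) // 2

assert sum_slow(1_000_000) == sum_fast(1_000_000)  # same answer, wildly different cost
```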

Another reason that programming is hard is that we usually don’t understand the original problem very well. Even people who are experts at a particular task cannot describe the process by which they solve it very succinctly. Especially not if they have to cater for every possible exceptional situation, even the very rare ones that occur only once in a lifetime: your program will have to deal with them or fail hopelessly. It’s no wonder Murphy’s law is so popular among software engineers: it beats us over the head all the time!

People improvise when faced with an exceptional or non-standard situation. Unfortunately computers cannot improvise, even if their sad mechanical existences depended on it! People recognize incorrect input when they see it; computers don’t. Instead they just process it, which, by the way, gave us the saying “garbage in, garbage out”. “You asked me to add 4 to this person’s name? Sure thing!” Or: “You want to take the square root of a social security number? Not sure what it means, but here you go!”
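Python, to its credit, refuses some of these requests outright, but plenty of garbage still sails through without a murmur. A contrived sketch (the values are made up for illustration):

```python
import math

ssn = 123456789          # a social security number, stored as a plain number
print(math.sqrt(ssn))    # "Not sure what it means, but here you go!"

quantity = "4"           # garbage in: a digit that arrived as text
print(quantity * 4)      # garbage out: '4444' instead of 16
```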

Even when we think we understand a problem, we don’t. Sometimes this is because the world is very complicated (or at least very diverse). This leads to incorrect assumptions such as “everybody knows their date of birth”, “people live in a place that has a street address”, “the year 2100 is a leap year”, or “a filename does not contain spaces”. The first of these incorrect assumptions befuddles administrative computer systems in the entire western world; the second is not even true in the US and blocks some citizens from voting, the third is simply not true, and the fourth crashes well-meaning backup programs all over the world on a daily basis.
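The leap year rule makes a nice miniature of this: the “obvious” version (divisible by four) is wrong, and the full rule is easy to get subtly wrong. A sketch:

```python
def is_leap_year(year):
    # The full Gregorian rule, not the "divisible by four" shortcut:
    # every 4th year is a leap year, except every 100th, except every 400th.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap_year(2000))  # True  (divisible by 400)
print(is_leap_year(2100))  # False (divisible by 100 but not by 400)
```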

Not completely understanding the entire conceptual space in which your program lives makes it impossible to write a correct program. This leads to all sorts of problems, both during software development and thereafter. A number of Dutch researchers wrote eloquently about this in their book “The Digital Cage” (Dutch link) where they showcased a number of intricate problems people run into because government computer programs are pretty much always incapable of dealing with the exceptional (but in no way crazy) cases that reality invariably produces.

To add insult to injury: Even if you completely understand the problem and have a deep understanding of the ways to solve it, it is still very hard to code up the solution correctly. A lot of this boils down to computers being the idiot box that I alluded to earlier. Programming means getting every detail exactly right. The idiot box does exactly what you told it to do, not what you meant for it to do. Mistakes are inevitably made and sometimes they are fatal. I have been involved in a world-wide outage of a well-known Internet service that ultimately hinged on one missing character in the source code (for the connoisseurs: In C++ a pointer to a boolean is also a boolean :-). Or what about the $125 million Mars orbiter probe that was lost because one team of engineers used imperial units and another one used metric? Of course these examples are dramatic, but problems like this plague software all over the world all the time!
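The exact C++ incident is hard to reproduce in a few lines, but here is a Python analogue of the same one-character class of bug (the function name is hypothetical): a function object, much like a pointer, always counts as “true”:

```python
def server_is_healthy():
    # Imagine an expensive check against the real server here.
    return False

# The bug: one missing pair of parentheses. The function object itself
# is always "true", so this branch runs even though the check would fail.
if server_is_healthy:          # meant: if server_is_healthy():
    print("All good, routing traffic!")
```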

Another fundamental difficulty when programming is that you are often locked in to the choices you made in the past. When solving a problem you make all sorts of choices big and small. More often than not you will have to live with these choices for a very long time, even if they turned out to be suboptimal.

A great example of this is the Y2K problem: For reasons best left undiscussed many programmers decided to store the year of a date as two digits (e.g. 66 for 1966). Then when the year 2000 came in sight the question suddenly arose whether “12” meant “2012” or “1912”? Cue a massively complicated reanalysis and rewrite of many programs and a scare that kept planes on the ground during the fateful new year’s eve. Y2K might seem like a massive one-time event, but programmers make choices like this all the time! It can be as big as Y2K or something as simple as deciding to name your system function to create a file “creat” (and confuse generations of programmers ever since).
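One widely used Y2K-era patch was the “pivot window”: guess the century from the size of the two-digit year. A sketch (the pivot value of 50 is an assumption; every shop picked its own):

```python
def expand_year(yy, pivot=50):
    # Guess the century from the two digits we stored. Anything below
    # the pivot is assumed to mean 20xx, the rest 19xx. This postpones
    # the ambiguity; it does not remove it.
    return 2000 + yy if yy < pivot else 1900 + yy

print(expand_year(12))  # 2012 -- but a 1912 birth year now comes out wrong
print(expand_year(66))  # 1966
```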

Fun side note: The next Y2K is happening in 2038, when the internal clocks of many computers and programs run out. This is based on a choice made in the 1970s to store time as a signed 32-bit count of seconds since January 1, 1970; that counter overflows on January 19, 2038.
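You can watch the counter run out from any Python prompt:

```python
from datetime import datetime, timezone

# Many systems count time as a signed 32-bit number of seconds since
# January 1, 1970. The largest value that fits is 2**31 - 1.
last_second = 2**31 - 1
print(datetime.fromtimestamp(last_second, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00 -- one second later, the counter wraps around.
```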

By the way if you thought the Y2K problem was bad, wait until the Y10K :-)

In programming the past haunts us like nothing else. We are constantly faced with problems great and small that have their origins in choices that were made in the past and that we cannot get away from unless we are willing to revisit and fundamentally address them. In that sense programming is very much like constantly rebuilding a century old house: Imagine the hassle you will be going through when adding doors and windows, cabling it for Internet, adding a bowling alley in the basement, building a hot tub on the second floor, or adding a third floor. My wife and I are currently trying to replace a refrigerator in our apartment and guess what: They don’t make 27-inch fridges any more so we either have to break out a wall or buy a smaller one and figure out how to fill the remaining space. Things like that happen in programming all the time.

Unfortunately it is often not feasible to fundamentally address a suboptimal design choice from the past. There are many reasons for that: Most of the time the task is simply too gargantuan: Combing through millions of lines of source code to find the things we need to change and then test and deploy them often makes no economic or practical sense. There are other reasons too: We might have lost the source code of the programs that we would need to change, or these programs are running in places where we cannot easily replace them (for instance in millions of cars that cannot easily be recalled to get a software upgrade). So instead of addressing a problem fundamentally we more often than not add complexity to work around design flaws. This complexity then becomes a millstone around the necks of future programmers. A good example of this problem is how we deal with pieces of text.

Way back when, in the dim (but near) past, the (American) computer industry figured out that they could represent all characters in US English with an alphabet of 128 characters. So they set forth to create a table with 128 entries where every letter, digit, or special character was represented. The letter “A” got entry 65. The digit “0” got 48, the space character got entry 32, et cetera.
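That table is still with us; in Python you can look entries up in both directions:

```python
# Every character is just an entry number in the table...
print(ord('A'))  # 65
print(ord('0'))  # 48
print(ord(' '))  # 32

# ...and every entry number is just a character.
print(chr(65))   # 'A'
```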

Then the western Europeans came along and it turned out that their languages contained letters and characters that were not present in the table! For instance they have é, ß, ç, ü, and other strange characters. After some head scratching the table got extended to 256 positions and all of the characters in western European languages got an entry in the table too. All was well for a (short) while, but then it turned out that if we wanted to include letters from Icelandic and Polish then even the 256-character table was not big enough! Instead of making the table even bigger (which would be inconvenient for technical reasons that are related to how computers store data) they decided to create separate tables for different groups of languages. So now if we come across a piece of text we also need to know which character table was used to encode the characters. Without that knowledge we cannot perform some of the most basic text manipulations such as printing, sorting, or turning letters from upper case into lower case (or vice versa).
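A sketch of the resulting mess: one and the same byte turns into three different characters, depending on which table you assume was used:

```python
mystery = bytes([0xA3])              # one byte on disk -- but which table?
print(mystery.decode("latin-1"))     # '£'  with the Western European table
print(mystery.decode("iso-8859-2"))  # 'Ł'  with the Central European table
print(mystery.decode("iso-8859-5"))  # 'Ѓ'  with the Cyrillic table
```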

As you can imagine the problem became progressively harder when we wanted to include Asian languages such as Korean, Chinese, and Japanese. Eventually the multiple-table solution lost out and we created a table with 65,536 entries to contain the letters and often-used characters of these Asian languages. When that was not big enough we went all-out and created yet another table with almost 4 billion different entries in order to contain all the characters of all the languages on this planet and probably some from alien civilizations as well! And to round it off we then created a super smart way to store pieces of text in memory that undid some of the inefficiencies that came with storing text using the larger tables. As you can imagine all of this made processing text more and more complicated, and hence it is not surprising that many systems in use today cannot reliably print the name of the city of Zürich.
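That “super smart way” is what we now call UTF-8, and the Zürich problem falls straight out of it: encode the name with the new scheme, decode it with one of the old 256-entry tables, and you get the famous garble:

```python
city = "Zürich"
utf8_bytes = city.encode("utf-8")    # the 'ü' becomes two bytes: 0xC3 0xBC
print(len(city), len(utf8_bytes))    # 6 characters, 7 bytes

# Decode those bytes with an old 256-entry table and you get the garble:
print(utf8_bytes.decode("latin-1"))  # Zürich
```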

Similar problems appear all over the map in modern programming languages and computer systems. A design choice, once made, is more often than not set in stone for all of eternity, leaving programmers to deal with the resulting complexity and confusion, making even a simple program almost impossible to write correctly.

The more technically astute might wonder whether all the aforementioned problems could not be solved by using newer and better programming languages. Unfortunately (a somewhat overused word in this article) the answer to this question is a resounding “No”.

First of all, some of these design choices (like storing a year in a two-digit field in the database) are independent of the programming language you are using. As parts of the system get (re-)written in newer programming languages we are still beholden to choices made in the past, because the new programs need to exchange data with existing older programs that don’t know any better. Another problem is that the designers of new programming languages don’t get everything right on Day One, which leads to changes in the language in order to fix any design mistakes. But alas, in order not to annoy existing users of the language the fixes are either complicated hacks on top of the existing system or a complete additional module (with lots of duplicate functionality). The first solution creates complexity and the second one creates confusion. One of the newest programming languages around (“Go”) is already working on its version two and therefore has to deal with the problem of backward compatibility.
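Python’s 2-to-3 transition (which sounds suspiciously like the language in this article’s opening anecdote) offers a concrete example of a fix shipped as a bolt-on module:

```python
# Python 2 made 1/2 equal 0 (integer division). The designers considered
# that a mistake, but could not change it without breaking every existing
# program. The fix shipped as an opt-in import bolted onto the language:
from __future__ import division  # the fix on Python 2; a no-op on Python 3

print(1 / 2)  # 0.5 once the "future" has been imported
```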

That said, choosing the right programming language certainly helps, though it creates problems too. For one, your choice of programming language is itself a choice that locks you in for all time to come. If you later find out that your language choice was suboptimal (for instance because it is easy to develop with but slow as molasses) you are stuck with it.

I could go on and on. The number of different factors that conspire to make the life of a software engineer more difficult than it theoretically needs to be could easily fill a weighty textbook, which will obviously never be written because we have enough trouble as it is getting people to study computer science :-)

However, before we all reach for the Prozac and my younger readers decide to apply for a liberal arts degree instead:

Q: Is there no good news to report at all?
A: Yes there is!

The fact that programming is hard makes it fun and rewarding! When programming you are pitting your intellect against the complexity of an entire universe, with nothing more than math and an idiot box to help you solve the problem. There is an incredible satisfaction in writing a beautiful piece of code that just works, that neatly dances around the pitfalls, and that elegantly solves the problem. And, not unimportantly, the fact that it is hard means that the people who do it well can bask in the comfort of generous compensation that allows one to pursue simple but fun pursuits (like the law) in their spare time :-)

If only there were no deadlines…
