The Myth of ‘Serious’ Code

Elliott Hauser
6 min read · Feb 27, 2015


If Kim Jong Un can code with binoculars in one hand, so can you.

I see too many beginning coders apologizing for ‘just’ learning Ruby or Python or JavaScript, as if they were somehow cheating the system. It’s as if there’s some ‘Serious’ coding that they’re not doing. The idea that some coding languages are more Serious than others is a myth that needs to be debunked.

On the Spectrum

Coding languages are, of course, different from each other in myriad ways.

There’s a spectrum of usability from technical, low-level machine language to the high-level languages most people use. The low usability of machine language does not make it more ‘serious’ or advanced than high-level code; it’s a consequence of the choices language designers made to accomplish specific goals.

Let’s zoom in. Here’s an example of a program in machine language:

“If this listing seems pretty inscrutable, then you have some idea why hardly anyone programs in machine language anymore.” -from Talking with Computers by Thomas Dean

This code directly manipulates digital sensors and motors once it’s interpreted as electronic signals by a digital processor. The human-readable comments after the # sign are ignored by the computer; they were put there by the programmer so that other humans reading the code could understand what the heck is going on. Thank God we don’t have to write all our programs like this.

A punchcard containing machine-readable code for an early IBM computer.

Programs must eventually become machine language if they’re going to control computers. But instead of writing machine language directly in binary, hexadecimal, or decimal code (as early computer scientists like Alan Turing and friends had to do!), we have invented programming languages to help us translate our ideas into machine instructions more quickly and easily. There are a great many ways of producing the machine language you saw above, so there are a great many possible programming languages, and they all fall somewhere along a spectrum of usability.
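If you’re curious what that translation looks like in practice, here’s a minimal sketch in Python. It doesn’t show machine language itself, but the built-in dis module will print the intermediate bytecode instructions Python generates from an ordinary function; the add_tax function below is just a made-up example:

    import dis

    def add_tax(price):
        """Return a price with 8% sales tax added."""
        return price * 1.08

    # Print the lower-level bytecode instructions the Python interpreter
    # produced from the one-line function above. Bytecode isn't machine
    # language, but it makes the layers of translation concrete.
    dis.dis(add_tax)

Run it and you’ll see a single readable line turn into several terse instructions. Doing that translation for us is exactly the job programming languages exist to do.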

Viewed like this, the choice of language is simply choosing how you’d like your computer to help you generate your machine language. What factors are relevant to this choice?

Design

All of the programming languages that exist today were designed by people to serve a certain purpose. This is crucial to understand, because their design goals define the experience users will have when programming with them.

Let’s contrast two programming languages, C and Ruby, by their goals.

  • C’s Goals: “be close to machine language, but portable across many machine architectures.”
  • Ruby’s Goals: “beauty, simplicity, and developer happiness.”

These divergent design goals make the experience of using these languages drastically different, though eventually each generates machine language. Because C’s language constructs are closer to what the machine must actually do, its machine code is more efficient for the computer to execute, making it faster for the machine. Here’s a visualization of programming language speed for a simple text processing task as the size of the text to be processed increases:

Inquiring minds can click the image for more information about the test conditions, source code, etc.
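Measuring that kind of speed is easy to do yourself. Here’s a small sketch, not the benchmark pictured above, that times a trivial text-processing task in Python as the input grows; count_words and the sample text are invented for illustration:

    import timeit

    def count_words(text):
        """A simple text-processing task: count whitespace-separated words."""
        return len(text.split())

    # Time the task at a few input sizes, loosely mirroring the shape of
    # a benchmark like the one pictured above.
    for size in (1_000, 10_000, 100_000):
        sample = "the quick brown fox " * size
        seconds = timeit.timeit(lambda: count_words(sample), number=100)
        print(f"{size * 4:>9} words: {seconds:.4f}s for 100 runs")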

Because execution time is easily measurable (and people love optimizing things they can measure), ‘fast’ languages like C tend to sit at the top of people’s mental hierarchy as the most ‘serious’ of programming languages.

This is the myth. And it’s closely tied to an imprecise notion of what ‘speed’ really means.

What is Speed?

Though simple by today’s standards, serious magic was required to get fast 3-D graphics to work on the relatively resource-constrained computers of the 1990s. The low-level machine speed of C was an integral part of this magic.

C and other low[er] level languages were de rigueur when computer hardware was slow and systems were resource-constrained. Doom, Quake, and others of the first generation of 3D games, for instance, pushed the limits of what was possible by using the machine-speed of this language.

Today, though, computing power is cheap and abundant relative to most needs we have of our computers. The real constrained resource in software development is developer time and attention. Languages like Ruby allow programmers to do more with less code, making them faster for the developer. Increased developer-speed means it’s faster to find out whether anyone wants what you’re making, easier to add the features users request, and easier to finish your project with fewer developers.

Expressiveness is a rough measure of developer-speed. This is a graph of languages ranked by how much code is in an average commit. It’s not perfect, but it shows a clear gradient, with C on the far right.
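To make that expressiveness concrete, here’s a small sketch of the do-more-with-less-code idea in Python; the function name and file name are invented for illustration. An equivalent C program would have to manage memory, hashing, and string handling by hand:

    from collections import Counter

    def top_words(path, n=10):
        """Return the n most common words in a text file."""
        with open(path, encoding="utf-8") as f:
            words = f.read().lower().split()
        return Counter(words).most_common(n)

    # Example usage, assuming a local file named article.txt exists:
    # print(top_words("article.txt"))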

That same economy makes such languages easier to learn (for more on choosing a first language to learn, see this post). Beginners tend to start with them, which is another reason some people place them at the bottom of a hierarchy of seriousness. Which approach is more ‘serious’?

“There will never be a universally-adopted Esperanto for programming.”

Diversity

This is a false choice, though, because the answer varies depending on your needs. Scientists, statisticians, and data analysts may need the machine-speed of a language like C to make their work possible. Most web developers need the developer-speed of a language like Ruby or Python to meet their business or client goals quickly. The ‘best’ language is different in each case.

This diversity of needs, then, makes a strong argument for the importance of a diversity of programming languages with diverse design goals. There will never be a universally-adopted Esperanto for programming. Even machine language isn’t a potential universal language, since each machine architecture requires a slightly different machine language. And that’s a good thing, since we all have such divergent needs of our software.

Take Ownership

New coders reading this post: I hope it helps you take ownership of the code you’ve learned and the code that you write. You don’t need to know C to be a ‘serious’ coder any more than a C coder needs to know machine language. We’ve invented programming languages precisely to remove this burden from ourselves so that we can get back to more easily making the things we want to exist.

If you learn C, do it because you want to use it for things it’s good at like programming an Arduino, programming an embedded microcontroller, or implementing a fast machine learning algorithm, not because you think it’s more ‘serious’ than the language you’re already learning. Let the projects you want to do drive the language you learn.

They’re all just easier-to-speak dialects of machine language, anyway.

“Let the projects you want to do drive the language you learn.”

Serious Play

All of the most serious coders I know started their coding careers doing non-serious things (and many never lost the habit). Many coded simple games. Others coded research experiments or visualizations. In every case, they were motivated to make something that they were genuinely interested in. This gave their coding a quality of play: it felt like fun.

High repetition with relevant feedback is key to mastering any skill. Whatever your chosen language or project, make sure you spend lots of time working with it and get feedback from a guide or mentor. The whole purpose of coding languages is to help anyone build or accomplish cool things more easily. Use whatever language helps you accomplish this most quickly and most completely.

Looking for a first language to learn? My recommendation is Python, and I wrote a post explaining why I think it does all the things new coders need from a first language. You can find other stuff I’ve written here.
