It’s best to learn Computer Science, not just how to Program

Jean Lescure
11 min read · May 15, 2018

Hobbyists looking to light up some LEDs, code wizards at tech startups, even Minecrafters wanting to create “a cool game mod” have come up to me asking what programming language they should learn next; I usually give the following advice:

Invest your time in learning Computer Science, not programming

The TL;DR list of why this is the better course of action:

  1. You will save time
  2. Your projects will save money
  3. Your collaborations will be more effective


If you want more details, read on…

1. You will save time

Many people who program in JavaScript don’t know the following:

// This
for (let i = 0; i < n; i++) { ... }
// Can run slower than this
for (let i = n; --i >= 0;) { ... }

Those two loops can be used to achieve the same results (though the second visits the indices in reverse order), the main difference being that the first one increases the variable i (via simple addition) and the second one subtracts from i (via simple subtraction). Yet, the second one can run significantly faster, especially on simpler interpreters.

This happens in most programming languages by the way.
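If you want to verify this on your own machine, here is a minimal Node.js benchmark sketch (the function names are mine). Be aware that modern JIT compilers such as V8 often optimize both forms to nearly identical machine code, so the gap may be small or absent on recent engines:

```javascript
// Sum the first n integers, counting up vs. counting down.
function countUp(n) {
  let sum = 0;
  for (let i = 0; i < n; i++) { sum += i; }
  return sum;
}

function countDown(n) {
  let sum = 0;
  for (let i = n; --i >= 0;) { sum += i; }
  return sum;
}

// Both return the same result; only the loop direction differs.
const n = 10_000_000;
console.time('count up');
countUp(n);
console.timeEnd('count up');
console.time('count down');
countDown(n);
console.timeEnd('count down');
```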

When I’ve met programmers who know this fact and I ask them why the second loop runs faster than the first, they’ll usually say they found some comparison charts online (and we all know how crucial comparison charts are to decision making), or that they read the fact somewhere and it seemed logical due to reasons they came up with at the moment; reasons which, to be fair, they usually explain to me in detail and which do indeed sound very reasonable.


A computer scientist might just tell you that at CPU level, the first loop runs 4 instructions on every iteration:

      move register, 0
L1:   compare register, n
      jump-if-greater-or-equal L2
      -- loop body ...
      increment register
      jump L1
L2:   -- ...

and the second runs just 2:

      move register, n
L1:   -- loop body ...
      decrement-and-jump-if-negative register, L2
      jump L1
L2:   -- ...

(to be fair, they might not recall the proper assembly syntax as I wrote it here, but they would indeed be able to recall the general gist of how many instructions the CPU runs for common programming operations)

This means that the second loop spends half as much time on loop bookkeeping, so in a tight loop it can run close to twice as fast. This might not seem like a lot, but consider the following hypothetical case:

You write your first program, and within this first program

  • If you use the first loop, it takes an average of 2 milliseconds to run
  • If you use the second loop, it takes an average of 1 millisecond to run

It’s very common for these types of quick operations to run hundreds of millions of times while the program is being used. Well, let’s say your program ran the operation containing the loop exactly one hundred million times.

In the case of the first loop this would mean:

100,000,000 runs × 2 ms = 200,000,000 ms = 200,000 seconds

or 2 days 7 hours 33 minutes

In the case of the second loop this would mean:

100,000,000 runs × 1 ms = 100,000,000 ms = 100,000 seconds

or 1 day 3 hours 46 minutes

Would you rather spend 2 days of your lifetime waiting for a result, or just 1?
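The arithmetic behind those figures is easy to check; a quick sketch (the helper name is mine):

```javascript
// Convert a total running time in milliseconds to days/hours/minutes.
function human(ms) {
  const s = Math.floor(ms / 1000);
  const days = Math.floor(s / 86400);
  const hours = Math.floor((s % 86400) / 3600);
  const minutes = Math.floor((s % 3600) / 60);
  return `${days}d ${hours}h ${minutes}m`;
}

const runs = 100_000_000;
console.log(human(runs * 2)); // first loop, 2 ms per run → 2d 7h 33m
console.log(human(runs * 1)); // second loop, 1 ms per run → 1d 3h 46m
```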


And this is only one operation. An everyday piece of software will have hundreds if not thousands of these types of loops.

Now, in this example I’m only accounting for the amount of time a computer takes to run code, yet programming one loop or the other takes roughly the same amount of time.

But let me give you a real world example of the pains programmers go through when they don’t apply proper Computer Science guidelines.

I recently read a Medium article by Jonathan Fulton (VP Engineering at @StoryblocksCo) titled These four “clean code” tips will dramatically improve your engineering team’s productivity.

In it he details how his team ran into several hurdles while coding their flagship platform for an online video marketplace now known as VideoBlocks:

Writing new features and even minor bug fixes required a couple of Tums at best and entire bottles of Pepto-Bismol and Scotch far too often. Our WTFs per minute were sky-high.

Then he goes on to detail the 4 keys to boosting their productivity, as in, the 4 things they “discovered” when digging themselves out of the hole they were burying themselves in, which includes this as number one:

1. “If it isn’t tested, it’s broken”
Write lots of tests, especially unit tests, or you’ll regret it.

Now, I’m not here to criticize, much less try to assume, the level of Computer Science studies any of the team members at @StoryblocksCo may have; I highly value and respect their work and achievements. But I can say the following with 100% confidence:

Test Driven Development (TDD for short) is not a new concept, and if you have a Computer Science background, you should know better than to write the first line of code for an important project without considering a TDD strategy.

Sure, your MVP (start-up slang for a minimum viable product, i.e. a functional prototype) can live without TDD for a while. To be honest, I normally skip TDD on my projects up until I’m sure I’m writing code that will be used by others.

With this said, not considering how or when TDD should be required within a project’s timeline is reckless.

I stated that TDD is not new. It was an understatement.

In the early 1960s, IBM was tasked with running a project for NASA. They used a technique equivalent to TDD:

Project Mercury ran with very short (half-day) iterations that were time boxed. The development team conducted a technical review of all changes, and, interestingly, applied the Extreme Programming practice of test-first development, planning and writing tests before each micro-increment.

They also practiced top-down development with stubs.

DevelopSense published a 2008 interview with Gerald (Jerry) Weinberg (well known in the Computer Science field for his books on the subject, dating back to the early 1960s) in which Jerry detailed how he first discovered TDD:

We didn’t call those things by those names back then, but if you look at my first book (Computer Programming Fundamentals, Leeds & Weinberg, first edition 1961 — MB) and many others since, you’ll see that was always the way we thought was the only logical way to do things. I learned it from Bernie Dimsdale, who learned it from von Neumann.

When I started in computing, I had nobody to teach me programming, so I read the manuals and taught myself. I thought I was pretty good, then I ran into Bernie (in 1957), who showed me how the really smart people did things. My ego was a bit shocked at first, but then I figured out that if von Neumann did things this way, I should.

My point of bringing up these cases is that by 1957 TDD was already “how the really smart people did things”, and as such there are many other concepts that programmers tend to “re-discover” over and over again.

(Source: xkcd comic)

In all honesty, you can easily avoid many pitfalls when programming a solution if you know your Computer Science theory well enough, and thus save not only your gastro-intestinal integrity, but invaluable amounts of time as well.

2. Your projects will save money

Did you know that, on average, 20% of customers generate 80% of the revenue for a business?

“What does that have to do with Computer Science?” you may ask.

When you tell a programmer to optimize their apps/programs/websites, they’ll usually start by optimizing their code. And sure, optimized code will make requests run faster and in turn give a user what they want ever so slightly quicker after each optimization.

But riddle me this: would you consider using a shopping platform on which you can search, find, and buy the things you want in under 1 second, if said platform takes 10 minutes to load its initial screen in your country?

Even though it’s a bit of an exaggerated example, this type of scenario is all too common, and no amount of code optimization can fix this sort of situation.

In the mid-1990s, Sir Timothy John Berners-Lee, the English engineer and computer scientist best known as the inventor of the World Wide Web, foresaw the congestion that was soon to become very familiar to Internet users, and challenged his colleagues at MIT to find better ways to deliver content. Out of that challenge, MIT’s Tom Leighton and Daniel Lewin founded Akamai Technologies in 1998. Said congestion didn’t arise from badly written code.

Akamai focuses on solving all the Computer Science problems that programmers tend to overlook. In my shopping app example above, one solution Akamai might apply is to simply place all the product images the app uses on a server closer to your country.

It’s simple: if the bits that represent the images you need are stored physically closer to you, the signals carrying them will take less time to get to you.
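A back-of-the-envelope sketch of why distance dominates: light inside optical fiber covers roughly 200 km per millisecond, which puts a hard lower bound on any network round trip no matter how optimized the code is (the distances below are illustrative guesses of mine):

```javascript
// Physical lower bound on round-trip time; real networks add routing,
// queuing, and protocol overhead on top of this.
const FIBER_KM_PER_MS = 200; // light travels ~200 km/ms inside fiber

function minRoundTripMs(distanceKm) {
  return (2 * distanceKm) / FIBER_KM_PER_MS;
}

console.log(minRoundTripMs(9000)); // server an ocean away: at least 90 ms
console.log(minRoundTripMs(500));  // nearby edge server: at least 5 ms
```

A page that makes dozens of sequential round trips multiplies that gap, which is exactly the problem a CDN like Akamai attacks by moving the data, not rewriting the code.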


When I’m hired to consult for large-scale companies that have speed related issues, I’ll usually ask where in the world their 20% of high-revenue generating customers are and where their files are hosted. Then I’ll ask how big their files are. How many cores does the main server have? Etc.

In short, there are many things to consider that may be affecting a computer-based solution that have nothing to do with code.

Many times, solving those things is much cheaper than having someone sit down for hours and days on end, coding, even if that person is you. Especially if that person is you, actually. Your time is worth more than any other expenditure in a project.

So far this might still sound like I’m focusing on time. Let me expand a little more on the money-saving side of learning Computer Science.

Arduinos have become popular amongst the maker community for a reason. Before they showed up on the market, if you wanted to create prototypes based on micro-controllers (with as many possibilities as offered by an average Arduino board) you’d have to spend hundreds if not thousands of dollars on all the required programming and testing components, not to mention the effort.

With this in mind, I sometimes still cringe when I see a hobbyist prototype a project that uses a few buttons and LEDs with an Arduino, and, when it comes time to assemble the final piece, keep using the Arduino as the driver for the completed project.

I cringe because this means that for the next project they’ll buy another Arduino.

An Arduino costs $20 on average. What if I told you that most Arduino-based projects only use about 10% of its capabilities? And, furthermore, that the cost of the components needed to reproduce just the features these projects use is proportionally small: around 10% of the board, or $2?

This means that these hobbyists could be spending $40 across 10 projects (including the one and only Arduino they buy to prototype with, plus roughly $2 of components per finished build) rather than $200 for 10 distinct Arduinos.
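A quick sketch of that comparison, using the article’s average prices and the 10% figure as an assumption:

```javascript
// Cost of 10 projects: reuse one prototyping board vs. buy a board each time.
const ARDUINO_PRICE = 20;   // average board price, USD
const PROJECTS = 10;
// Assumption: a finished build uses ~10% of the board's capability, so the
// bare components cost roughly 10% of the board price, i.e. $2 per project.
const COMPONENT_COST = ARDUINO_PRICE * 0.10;

const reuseOneBoard = ARDUINO_PRICE + PROJECTS * COMPONENT_COST;
const boardPerProject = PROJECTS * ARDUINO_PRICE;

console.log(`reuse one board: $${reuseOneBoard}`);     // $40
console.log(`board per project: $${boardPerProject}`); // $200
```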

Computer Science is not just bits and bytes, it also deals with how to scale costs effectively depending on your software and hardware needs.

The best example I can give you of how Computer Scientists think about the scale of computer-based solutions is this amazingly simple yet profound explanation of what a nanosecond is, by Rear Admiral Grace Hopper:
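Her demonstration boils down to one number that is easy to verify: in one nanosecond a signal can travel at most about 30 centimeters, the length of the piece of wire she handed out to her audiences.

```javascript
// How far light travels in one nanosecond (Hopper's "nanosecond" wire).
const SPEED_OF_LIGHT_M_PER_S = 299_792_458;
const metersPerNanosecond = SPEED_OF_LIGHT_M_PER_S * 1e-9;

console.log(`${(metersPerNanosecond * 100).toFixed(1)} cm per nanosecond`);
// prints "30.0 cm per nanosecond" (Hopper's wire was 11.8 inches)
```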

3. Your collaborations will be more effective

I related a little too much to the following comic as I was doing the research for this final point:

(Source: xkcd comic)

When I join an existing programming project where I’m the only person with a Computer Science background, it’s not uncommon to hear or read a comment directed at me akin to “I thought you’d be able to fix this quickly, aren’t you the Computer Science guy?”. In those cases my Computer Science background only ensures I’ll know what’s wrong at a glance; no one can escape the elbow grease needed to either fix things or scrap the whole thing and start again.

I tend to spend a considerable amount of time documenting, in excruciating detail, the whys and hows of my code on most projects I work on.

I’ve noticed that many of my programmer peers don’t write detailed documentation, and I feel I’ve pinpointed a very reasonable cause for this: testing and research exhaustion.

I tend to sit, think, and document things long before I write a single line of code. I’ve noticed that colleagues with Computer Science backgrounds do the same.

It’s hard for programmers to do this because what they specialize in is “knowing code”. And thus, they write code first. Then they test, and when things begin failing, they research. Then they write some more code, test again, etc.

This vicious loop continues until they finally achieve their goal, by which point they are so sick and tired of the task at hand that they move on without so much as leaving a comment with the links they used to solve the issue.

I know this, because I was a “programmer only” for 6 years before I decided to go through Harvard’s Intensive Introduction to Computer Science Online Course. This was 8 years ago as of writing this article, and I still wish I had found that CS program long before I did.

And it’s not just about good practices. It’s exhausting to read and hear casual coders defending their preferred programming language to seemingly religious extremes.

I’m (not) sorry, but even though I develop mainly in JavaScript, if I need to do language processing I’ll use Scala, C++ to develop graphics-intensive tools, Python (yuck) or Ruby to do Big Data analysis, and so on and so forth.

Programming languages are more than just syntax. More than just the different ways in which you write a loop. More than the libraries developed with them.

Programming languages differ in how they are optimized to solve different sets of problems, and no one language can tackle all things you throw at it more efficiently than every other language.

(Source: xkcd comic)

Before I learned Computer Science I was a hardcore PHP fanatic (for over 4 years). Afterwards I became enlightened, and things quickly escalated to the point where I have delivered production-ready solutions in 16 programming languages (and counting).

Collaboration is a thousand-fold easier when most (if not all) parties involved agree on the big picture. Computer Science gives you that big picture and lets you focus on actually delivering value.


I’m no Tony Robbins. I do not have the key to success in life (yet).

Still, it’s funny to me how, after some of my lectures and workshops, aside from the common programming-related questions, people have come up to me and asked how I manage so many projects at once, how I earn so much, how I have time to travel, and even how I manage to work from home (for real).

In my case, studying Computer Science helped me save countless amounts of time, generate more money, and communicate more effectively in what I spend one third of my day on, every day: programming.

If this article has compelled you to try out some Computer Science, be sure to check out these 3 resources to help you get started:

Happy coding!



Jean Lescure

Fullstack development, content creation, audiovisual production — The whole enchilada (͡° ͜ʖ ͡°)