I recently took over management of an engineering group focused on providing platform tools and services to support a large international TV platform.
When I previously managed a portfolio of client applications, the choice of programming language didn’t come up very much; we used Swift for iOS, Java for Android, JavaScript for the web, and so on.
On the back end, if you are lucky, you work with a bunch of polyglot, happy-go-lucky engineers who are as pragmatic as they are conscientious. But in reality, you’re going to run into personalities and opinions that differ, and I’ve found that there is a big split in software development when it comes to language preference.
In a recent fiery debate among engineering leads, the following statement was made:
Go is less powerful than C++ because it does not have support for generics.
At the same time the argument was made:
Anything I would want to do in C++ I can do in Go more easily.
Oh dang, the gloves were off. So, down the rabbit hole I went.
If you haven’t encountered the Blub Paradox yet, it’s worth a read. While Graham references COBOL, Lisp, and assembly, the same principles apply when we start discussing Go vs C++ or Node.js vs Java.
Why Does this Debate Matter?
When the complaint came up that Go didn’t have generics I was skeptical that this was going to be a real issue with Go. That kind of complaint didn’t pass the smell test because I’ve built quite a few systems without ever encountering a need for generics. But if you google the topic you get some interesting article titles:
What is a programming language like Go but with generics?
Go should have generics
Why Go is not good
Why Go’s design is a disservice to intelligent programmers
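For readers who haven’t hit the issue themselves, this is the kind of code those complaints are about. Without generics, a reusable container in Go typically stores interface{} values and callers assert the concrete type back out; a C++ template would instead check the element type at compile time. A minimal sketch (the Stack type here is my own illustration, not from any of the articles above):

```go
package main

import "fmt"

// Stack is a reusable container without generics: it holds
// interface{} values, so it accepts any type, but callers must
// assert the concrete type back out when they read.
type Stack struct {
	items []interface{}
}

func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

// Pop returns the top item, or nil when the stack is empty.
func (s *Stack) Pop() interface{} {
	if len(s.items) == 0 {
		return nil
	}
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v
}

func main() {
	s := &Stack{}
	s.Push(1)
	s.Push(2)
	n := s.Pop().(int) // type assertion: a runtime check, not a compile-time one
	fmt.Println(n + s.Pop().(int)) // 3
}
```

Those .(int) assertions are exactly what the generics camp objects to: they trade a compile-time guarantee for a runtime check. In my experience the trade has rarely been painful.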
I began to see this as a fundamental misunderstanding between generations of programmers.
A colleague shared this interesting article with the group, explaining the dilemma with Go, but it ended up providing a much greater insight.
research!rsc: The Generic Dilemma
The bulk of the article is about the tradeoffs of generics as implemented in various languages.
In one case you pay in compile time, in another you lose features, and in another you get slower performance at runtime. I disagree with some of his analysis, but there is a very insightful question that prompted me to write this post in the first place:
do you want slow programmers, slow compilers and bloated binaries, or slow execution times?
To me the answer to this question frames the future of the industry and a break from the past.
When it comes to language choice I believe fast programmers will win out and none of the rest will matter.
Memory is Cheap
The debate above assumes scarcity in compiler speed, binary size, and execution time.
There was once a day when memory was expensive. Uncle Bob will tell you all about it here: https://youtu.be/7Zlp9rKHGD4?t=26m57s
That’s done and gone, as Martin puts it:
We are filthy rich with the stuff. Memory is pouring out of every orifice of our bodies.
I’m glad Bob Martin sees this, but I honestly believe there are tens of thousands of software engineers from the generation before mine who still write software subconsciously mindful of small memory optimizations, in everything from variable naming to the choice between types of iterators, and those habits push them to write more intricate and complex code. They feel the need for these additional language features because fine-grained control makes them feel empowered. What is even worse is that they chastise and belittle newer developers for writing perfectly good code that skips these nuances, all because of an increasingly obsolete notion of scarcity.
Once you recognize that memory is cheap and plentiful, you can focus more on writing code that is easy to read and easy to write, rather than complex code that solves intricate problems that no longer matter.
This doesn’t mean we are out of the woods and all programming is safe because hardware has no limits; that simply is not true, and never will be. Just ask anyone doing RxJava how many requests per second their API can handle without being careful about their thread pool. There are limits, but a whole category of issues has stopped being worth the effort of solving. A whole way of thinking about complex solutions has lost its virtue and become irrelevant.
Computation speed cannot solve your problems
Later in the same talk Martin discusses trends in CPU performance, pointing out that clock speeds flattened out in the early 2000s. We’re seeing better performance per watt as dies shrink a little, but almost all of the actual increases in computational power come from adding more transistors and more cores.
You’ll also note the growth of cloud computing around this time, along with the adoption of containerized deployment via things like Docker and services offering abstract units of computing like Heroku’s Dynos, because scale is now about horizontal scaling more than vertical scaling.
What this means is that computational speed has already hit a ceiling for a single thread. You can’t scale a system by making it calculate faster; you can only scale a system by making it calculate concurrently. If that is true, then the future of programming is definitely not making code that executes faster, it is making code that is resilient to being run in a horizontally scaled environment.
Runtime speed matters, but not like it did before we had cheap cloud computing and multi-core systems.
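To make that concrete, here is a toy sketch in Go of scaling by concurrency rather than clock speed: the work is fanned out across goroutines and the partial results are merged. The parallelSum function is purely illustrative, my own construction:

```go
package main

import (
	"fmt"
	"sync"
)

// parallelSum splits items into roughly equal chunks, sums each
// chunk in its own goroutine, and merges the partial sums.
func parallelSum(items []int, workers int) int {
	ch := make(chan int)
	var wg sync.WaitGroup
	chunk := (len(items) + workers - 1) / workers
	for w := 0; w < workers; w++ {
		lo := w * chunk
		if lo >= len(items) {
			break
		}
		hi := lo + chunk
		if hi > len(items) {
			hi = len(items)
		}
		wg.Add(1)
		go func(part []int) {
			defer wg.Done()
			sum := 0
			for _, v := range part {
				sum += v
			}
			ch <- sum // report this worker's partial sum
		}(items[lo:hi])
	}
	// Close the channel once every worker has reported.
	go func() { wg.Wait(); close(ch) }()
	total := 0
	for s := range ch {
		total += s
	}
	return total
}

func main() {
	nums := make([]int, 100)
	for i := range nums {
		nums[i] = i + 1
	}
	fmt.Println(parallelSum(nums, 4)) // 5050
}
```

On a multi-core machine the workers genuinely run in parallel, and the same divide-and-merge shape is what horizontal scaling does across machines instead of cores.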
Compilers are fast and getting faster
Even compile time is becoming insignificant (despite the CPU clock speed dilemma). Go, for example, has trivially fast compilation by design: https://youtu.be/uQgWP7zM6mU?t=14m47s
Compiling is less important
The comment thread and topics discussed in the article The Day Performance Didn’t Matter Any More [sic] really bring the point home.
When we talk about programming languages, the “power” of the language used to be a valid discussion. How powerful or weak a language was could be benchmarked in its various implementations and those benchmarks could show 200–5000x differences in performance. Certain choices were viable and others completely unacceptable for a given use case. Today those differences are small, more like 50x or 100x in the worst of cases, but in most cases more like 1.5x or 2x.
An exception to this pattern is local systems where real computational latency is the bottleneck, which today is pretty much exclusively mathematics on large numbers at obscene precision. Everywhere else in our industry, I/O latency will be the bound; and in cases like graphics cards, image processing sensors, and driverless vehicle AI, the massive amount of processing that must be completed nearly instantly is already done through parallelization on massively multi-core hardware such as GPUs or MIPS processors.
So what does all this mean…
As computing costs continue to go down, execution speed matters far less than programming speed.
I propose that we can no longer evaluate programming languages based on their innate “power” as expressed through their implementation because the ground has shifted. Because of this we have to evaluate their appropriateness through a new lens.
Thinkers not Engineers
When I write code I increasingly see myself as describing an application in a language that is clearly readable by 3 groups:
- the machine that will execute the program
- a programmer who I will never meet but who will have to read, extend, and improve my code
- the future version of myself, having forgotten all the details
As we continue to improve language abstraction and transpilation, “the machine” is becoming something able to speak any language we come up with.
If this is true, then the real language we need in the future is one that is easy to express our programs in as we write them and that is easy to read and understand when entering a project.
This has absolutely nothing to do with feature X or feature Y (e.g. generics); it has everything to do with simplicity and conventions.
If we have the opportunity to optimize for “Fast Programmers” then we are going to end up optimizing our language choices for what is fastest to develop in. I think this would eventually favor languages with fewer features, or, in practice, the use of a language relying on the smallest subset of its features, because programming should be less about deep knowledge of a specific language and more about design and communication. The languages should handle most of the optimization for you and let you focus on essential complexity and avoid accidental complexity.
This type of thinking edges on heresy in certain circles, but I firmly believe that as software development becomes more and more abstract, there will be less and less advantage for people who are practiced in computation as a “science” and more and more advantage for people who are practiced in design thinking and communication, because the bulk of the engineering challenges will have been solved, through abstraction, by disciplines other than software development. The real land grab will be for those who can communicate accurately and efficiently with other developers, with themselves, and with machines.
Computer scientists will go the way of farmers
We will always need “real” engineers, but there will be increasingly fewer people focused on raw computer science as a proportion of the active population of software developers. The niche will become smaller, but more important, because more people will rest on its achievements as they are freed up to focus on more abstract challenges.
This evolution perfectly echoes the transformation of the agriculture industry in the 19th and 20th centuries.
In 1840 roughly 70% of the US population worked in the agriculture industry. Today it’s less than 2%.
It’s fairly straightforward. Food was planted and picked by hand with the help of animals to pull plows. Tractors were invented and evolved to be so efficient that today a single family of farmers could manage hundreds of square miles of farmland with a fleet of combines that can actually be controlled remotely with GPS guidance and automation.
Agriculture was abstracted away and the most effective means of production are by those skilled in scaling a business, most of which involves management and communication skills, not purely physical labor.
The same will be true for software developers: the moniker of “scientist” will be reserved for a smaller crowd, and the rest of us will happily and successfully no longer account for garbage collection, stack overflows, null pointer exceptions, thread pools, or any other nonsense as we communicate the description of our software through code.
That certainly doesn’t mean there will be fewer languages in use, or even a semblance of unity around programming paradigms. The list keeps growing every year, and recently at what feels like an even faster pace: https://en.wikipedia.org/wiki/List_of_programming_languages
Each of us will have our favorites, and there will still be religious wars about syntax, but it will be less and less a technical argument and more and more an aesthetic one. Language choice will perhaps feel more like the difference between a standing desk and a traditional desk, a 15 in. laptop and a 13 in. variant, or a trackpad and a mouse. Clearly everyone should sit down and use spaces instead of tabs, but we are going to enter an era in software development (not computer science) where it is going to be more about productivity, comfort, and pleasantries than quantifiable scientific evaluation.
do you want fast programs or fast programmers?
My answer will be: fast programmers.
About the Author
Full Stack Developer, Former Entrepreneur, Designer and currently Director of Product Management for Red Bull in Santa Monica