More Programming Languages, Please!

In defense of a world where we keep designing and using new programming languages.

Erik Engheim
Mar 6 · 17 min read

I wrote my first line of code around 1990. The book that really got me into programming was called “Kids and the Amiga,” which I bought in 1992.

That means I had already been exposed to programming when Python was released in 1991, and certainly when Ruby and JavaScript were released in 1995. Java came out in 1996. C# appeared in 2002.

Why mention these years? Because I remember being around when a lot of this happened. I remember our discussions and our thoughts, and that is always useful when looking at the software industry today, because the same arguments and thinking tend to repeat over and over again every few years.

The book that really taught me programming back in 1992. I still have it on my bookshelf.

There has always been reluctance and outright hostility towards new programming languages. However, I don’t think I have seen it articulated quite as forcefully as it was recently by Kelly Curtis in a comment to one of my stories on Go programming.

It probably goes without saying that Kelly and I likely disagree on almost everything, but I must commend Kelly for setting the stage for what I think is a very interesting debate to be had.

Why Not Just Replace the Compilers?

In one of my posts, I tried to articulate how the evolution of the field of computer science has allowed us to develop new languages with better capabilities, but Kelly is not buying it:

I still throw it back at you and argue that it is still NOT a problem with language, only with compilers.

I have seen this same argument made countless times over many years. I disagree with it as much today as I did 20 years ago.

The semantics of a language have significant impact on what sort of compilers you can build. You cannot simply attach any kind of compiler to the back of any kind of language. Language design matters profoundly.

PyPy is a Just-in-Time (JIT) compiler for Python. It has been a 14-year effort to make Python run fast, and I don’t think one can say mission accomplished yet. In the meantime, newer languages like Julia have eclipsed these efforts.

For example, ahead-of-time compilation of Python code is exceptionally hard. Likewise, creating a Just-in-Time compiler for Python has been a major undertaking. Contrast this with, for example, Julia, where a whole language with a JIT was made with a fraction of the resources spent making a JIT for Python. Yet significantly better performance was achieved.
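To make this concrete, here is a small sketch of my own (not taken from PyPy or any real compiler) of the kind of dynamism an ahead-of-time compiler has to cope with: the meaning of a call site can change at any moment, so the compiler cannot specialize it without runtime checks, which is exactly the machinery a JIT provides.

```python
# A sketch of why ahead-of-time compilation of Python is hard.
# A compiler would like to reduce compute(x) to a single multiplication,
# but the global name `double` can be rebound at runtime, changing the
# meaning of every existing call site.

def double(x):
    return 2 * x

def compute(x):
    return double(x)

print(compute(21))   # prints 42

# Rebind the name at runtime; compute() now does something else entirely.
def double(x):
    return str(x) + str(x)

print(compute(21))   # prints 2121
```

A static compiler must either assume such rebinding never happens, which is unsound, or check for it on every call, which is slow. This is why runtime tracing and guards, the approach PyPy takes, are such a large undertaking for Python.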

The semantics of C/C++ make it hard to turn them into something akin to C# or Java running in a managed VM environment. A language allowing pointer arithmetic is going to cause a lot of problems in such a context.

The design and semantics of a language set upper bounds for how far you can take compiler optimization.

But it is not merely a question of the performance a compiler can deliver, but also of how expressive the language is. How type-safe it is. How good it is at expressing concurrency, etc.

In short, upgrading technology by simply replacing the compiler is no easy feat. Often, the shortest path is a new language tailored for new compiler techniques and optimizations.

When people began writing Fortran in 1957, it had to be stored on punch cards.

Reading Old Code

What determines your ability to read old code? To Kelly, new languages are a threat to our ability to access a rich legacy of code:

Fortran sent people to the moon, so the best scientists can program extremely complicated algorithms in any language. Can we still read all that beautiful code that sent humans to the moon? No we cannot, because nobody today can read Fortran.

I am not sure if this is meant to be ironic or not, but this is certainly not true. People are still learning and writing Fortran code. It is still a relatively important language in scientific computing. People who work in scientific fields still update Fortran code. For example, a lot of weather simulators are written in Fortran.

Furthermore, Fortran is not nearly as arcane as people seem to think. Here is an example of a function in modern Fortran (2003):

function sort( array )
    real, dimension(:) :: array
    real, dimension(size(array)) :: sort

    real :: temp
    integer :: i, j
    integer, dimension(1) :: pos

    !
    ! Retrieve the lowest elements one by one
    !
    sort = array
    do i = 1,size(sort)
        pos = minloc( sort(i:) )
        j = i + pos(1) - 1
        temp = sort(j)
        sort(j) = sort(i)
        sort(i) = temp
    enddo
end function

In fact, this is not all that different from Julia, a modern dynamic language which I happen to be a fan of and which I frequently write articles about. I don’t actually know Fortran at all. I have never learned it. But I can attempt a rough translation to Julia right here, based purely on guesswork.

function sort(array :: Array{Real})
    #
    # Retrieve the lowest elements one by one
    #
    sort = copy(array)
    for i = 1:length(sort)
        pos = minloc( sort[i:end] )
        j = i + pos[1] - 1
        temp = sort[j]
        sort[j] = sort[i]
        sort[i] = temp
    end
    return sort
end

Here are some important aspects to notice. Despite Fortran arriving in the 1950s, the syntactic elements of modern Fortran are not all that different from a brand new language such as Julia. Many keywords and symbols are used in similar fashion. Notice how the :: symbol is used to separate type and variable name in both languages. Both use keywords like function and end.
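For what it is worth, the same selection-sort idea translates just as directly into Python. Here is my own sketch; note how little changes beyond surface syntax:

```python
def selection_sort(array):
    """Return a sorted copy by repeatedly pulling out the lowest remaining element."""
    result = list(array)   # work on a copy, as the Fortran version does
    for i in range(len(result)):
        # index of the smallest element in the unsorted tail result[i:]
        j = min(range(i, len(result)), key=lambda k: result[k])
        result[i], result[j] = result[j], result[i]
    return result

print(selection_sort([3.0, 1.0, 2.0]))   # prints [1.0, 2.0, 3.0]
```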

Somebody programming Basic or Lua will recognize a lot of this syntax as well. Here is some old Algol code:

WHILE smaller <= larger DO
    WHILE array[smaller] < pivot AND smaller < last DO
        smaller +:= 1
    OD;
    WHILE array[larger] > pivot AND larger > first DO
        larger -:= 1
    OD;
    IF smaller < larger THEN
        swap(array, smaller, larger);
        smaller +:= 1;
        larger -:= 1
    ELSE
        smaller +:= 1
    FI
OD;

Algol came out in 1958. Yet can anyone claim the code above is impossible to read for a modern programmer? Is this knowledge forever lost in time? Of course not. Modern languages derive from older ones.

New languages are not a fashion statement. People who create new programming languages are careful to borrow as many syntactic elements as possible from past languages people are already familiar with, in order to make the transition to a new language as easy as possible.

New languages tend to deviate from old ones where there is a real value and benefit to be had from doing so.

I’ve seen objections that I am using relatively modern Fortran. But even if you go back to 1990, you will find very similar-looking code. This is a Fortran 90 function:

FUNCTION Area(x,y,z)
    IMPLICIT NONE
    REAL :: Area ! function type
    REAL, INTENT( IN ) :: x, y, z
    REAL :: theta, height
    theta = ACOS((x**2+y**2-z**2)/(2.0*x*y))
    height = x*SIN(theta); Area = 0.5*y*height
END FUNCTION Area

You can find a description of the code here. While some of it looks odd, you may recognize it if you are familiar with old-school K&R C syntax. Here is an attempt at writing the same code in K&R-style C:

float Area(x, y, z)
float x, y, z;
{
    float theta, height;
    theta = acos((x*x+y*y-z*z)/(2.0*x*y));
    height = x*sin(theta);
    return 0.5*y*height;
}

Yes, one could still argue this is not the original Fortran, but I don’t think that is relevant. If we are arguing over whether a new programming language such as Julia should be released in 2018, then it would be unfair to consider the alternative to be Fortran anno 1957. A fair comparison is Fortran as it looks in 2018. This applies to any language. If considering releasing a new JVM language like Kotlin, it has to be compared to what Java looks like at that time, not to what Java 1.0 looked like.

Algorithm Knowledge

We must also ask ourselves what value there is in reading arbitrary old code. Are we missing out on something important if we cannot read old Apollo source code? Kelly seems to think so:

Those algorithms haven’t changed but we are forced to relearn basic techniques at all times.

But how do we best learn algorithms? How do we best explain them to each other? When I studied algorithms and data structures, I remember some of my fellow students being angry that our teacher pretty much never wrote any code. He mostly drew diagrams and pictures.

Honestly, that is the best way most of the time. Your understanding of algorithms should not be tied to a particular programming language. In fact, programming languages are hardly the best way to express algorithms.

When I worked on my master’s thesis, I struggled with many algorithms for planning the movements of computer-controlled airplanes in a computer game. Initially, I wrote none of those algorithms in regular code. Everything was written in pseudo code, with heavy use of math symbols. That is why you can still read decades-old papers on algorithms.

This is how a computer scientist will usually describe an algorithm in a book. This isn’t executable program code. It is inspired by Algol syntax, but is really a mix of programming language syntax, mathematical notation and English prose.

The explanations are based on fairly simple pseudo code coupled with judicious use of well-known mathematical symbols and expressions. All of this is accompanied by plenty of text explaining the pseudo code being shown. Frequently you will see diagrams. That is how you learn stuff. Not by poring over thousands of lines of real code. Real code tends to be exceptionally verbose and hard to follow. Reading reams of code is a poor way of learning important data structures and algorithms, in my humble opinion. You learn those best by reading books and papers.

Almost any important algorithm implemented in some programming language has already been documented or explained in textual and mathematical form first.
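To illustrate, a textbook typically states binary search in half a page of pseudo code plus a diagram, and translating that description into any living language is mechanical. Here is my own translation into Python:

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2        # midpoint of the remaining range
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1            # discard the lower half
        else:
            hi = mid - 1            # discard the upper half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))   # prints 3
```

The idea lives in the textual description, not in any particular syntax; the code is just one rendering of it.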

Donald Knuth, author of “The Art of Computer Programming,” describes all his algorithms and data structures using a made-up assembly language. The imaginary CPU and its assembly code are described in the book, so it remains useful regardless of whatever language is popular two decades after its publication.

Ideas expressed in programming languages should not be viewed as immortal and canonical. If you have something profound to say, you should express it in a paper. Most of the time, programming is an overly verbose expression of that same idea. Programming languages will come and go. It is futile to fight that trend.

Expressing your brilliant idea only in JavaScript because that happens to be hot today and expecting that to stand the test of time would be hopelessly naive.

Are We Constantly Reinventing The Wheel?

New languages often add features that may have existed a long time ago in another language. Kelly is exasperated that Go is only now adding generics:

Take the Go generics coming out.. here we go again: C++ headers, Java generics, C# generics. ANOTHER whole batch of professionals are forced to relearn and AGAIN, stick with purely basic techniques.

Of course, this isn’t anything new. Almost everything we do today in Java, C# and Go has at some point existed in older languages like Smalltalk, LISP, Standard ML, OCaml and Haskell. Yet I don’t see C# and Java developers asking why we didn’t all just stick with Smalltalk (1972) or Haskell (1987) all these years instead of inventing C# and Java. I suspect the reason is that people are always upset about new languages that happen to be different from what they have spent years learning. That their own favorite language once required older developers to spend considerable effort getting retrained matters less to them.

I am name dropping a lot of languages here. This might help: Quick Overview of Esoteric Languages.

What Exactly is the Job of Programmer?

I remember learning about all sorts of sorting algorithms and data structures. I have certainly worked on complicated software, e.g. doing things like computing synthetic seismic logs. Yet I cannot honestly say that I have ever used much of any advanced algorithm or data structure I have learned.

I have never implemented a sorting algorithm from scratch other than perhaps some toy version for educational purposes. I don’t think I have ever implemented something like a heap. I have certainly never implemented a tree balancing algorithm, even though I learned several.

How can anybody ever hope to do anything complicated with algorithms when we are forced to relearn how to write Hello World every year? It is the definition of insanity.

Most of us developers simply don’t do the stuff that could be called computer science. There is a reason why MIT switched from teaching Scheme to teaching Python.

One of the truly classic computer science books, which teaches all the fundamentals through the Scheme programming language. Despite the mind-bending way this book teaches programming, it was abandoned because normal programmers don’t craft engineering marvels. They glue together lots of pieces of functionality. They are more hackers than engineers.

The professors Sussman and Abelson began realizing that very few developers actually build whole systems in a structured fashion from the bottom up. We programmers are more like gluers. Most of our job is really about stitching together a vast library of existing components and technologies.

Algorithms are secondary to the philosophies and practices of managing enormous programs. Software engineering and design patterns tend to matter more. How do you make sure millions of lines of code stay manageable? The challenge isn’t novel algorithms. The challenge is the size of the code base.

Music, Code, Mathematics and Computer Science

Let’s go back even further, 1400, they first start learning how to notate music in Europe, and over the course of 600 years, the syntax evolves but still is able to communicate old ideas with the same clarity of modern ideas. Compare this to code; I CAN read and master the techniques that have been around for hundreds of years in Music: in software, maybe I can go back 5 years… snorting laugh

This, I believe, is the wrong analogy. The central ideas of programming are not really communicated through code. They are communicated through texts with pseudo code and heavy doses of math. Mathematics and mathematical notation are still at the heart of programming. Programming began as a subdiscipline of mathematics.

Not to mention that the field of computer science is infinitely larger than the field of music. Comparing music to programming is like comparing musical notation to the whole field of physics. It does not do it justice.

Computer Science folks think they are the only people in the world to understand abstraction, encapsulation, logic and complex symbol based notation. Truly they are the laughing stock in the historic preservation of ideas.

I think this is a fundamental misconception of computer science. A computer is about as relevant to a computer scientist as a calculator is to a mathematician. Yes, it is a useful tool, but a computer scientist mainly deals in abstractions and may not be writing very much code at all. Algorithms are, as I have repeatedly pointed out, better expressed in mathematics, pseudo code and diagrams.

Old computer science papers are still readable, regardless of what your favored programming language is today.

Do New Languages Hinder or Evolve the Field?

Kelly, like many others I have read in the past, believes all these new languages are roadblocks:

I suggest you really dig deep and think of the damage having over 150 languages to learn has on to evolution of technique. It is stupidly at its finest and comes from the professionals.

I could not disagree more. My own 30 years of programming experience tells a very different story. The way I can express myself in code today would not have been possible with the Amiga Basic I began writing code in.

For years I thought C++ was the language I would stick to forever. Today I realize just how much it held me back in every possible way. Time that could have been spent expressing new ideas and concepts got wasted instead on hunting down static initialization and deinitialization problems. Getting copy constructors right. Figuring out memory leaks. Trying to figure out what caused memory corruption and undefined behavior. Try having a complex constructor, through multiple levels of indirection, call a virtual method, and see how much fun that is to debug.

Insisting we stick with old languages because new languages require learning something new is like suggesting that you just need to learn how to pedal faster on your tricycle instead of switching to a motorbike. Sure, it requires new skills to operate a motorbike, but it pays off. No matter how skilled you get at operating a tricycle, you will never surpass somebody driving a motorbike in terms of speed.

The Logic of the Hoarder

Hoarding can become a compulsion. I certainly know that. I have had problems letting go of things I have acquired. Sometimes it has taken years for me to acknowledge that I will never use this thing and it is just taking up space and causing problems.

Software is no different. Not all software is an asset.

Software is only 50 years old, and we’ve lost billions of lines of code and techniques due to new languages.

We have not lost it, because it was never all that valuable to keep.

I also think it is a profound misconception to put the lack of understanding of past software at the feet of “new languages.” I have spent considerable time with software that was written in a language that everybody at a company knows. Yet code written in this language is impossible to comprehend because it has been poorly documented and structured. Bad code transcends language. Likewise good, clean and easily readable code can exist in any language.

Software and code are not there to be hoarded. As time progresses, we develop better and cleaner methods of expressing the same ideas. Old code can often hold us back. For example, banking transactions in the US tend to be much slower than in Thailand. How can that be? Isn’t the US a richer and more modern country?

It’s because the US banking infrastructure was built a long time ago with old technologies operating in batch mode. In contrast, Thailand built this stuff up more recently. And they seem to have had very few problems recreating the functionality of all this old code in a better and cheaper way.

While the Ford Model T is an unforgettable classic that has written itself into the history books, there are better ways to drive today. Similarly, there are better ways to program today than with Fortran.

Keeping this old software around is about as valuable as oiling and servicing a 100-year-old Ford Model T. There are better ways to drive today. Just like there are better ways to write software.

Resistance to Progress

What is holding software progress back? According to Kelly, we are held back by this plurality of programming languages:

Because of this, tribalism in software is the most toxic case of arrogance, elitism, and human stupid we have seen in human history. Don’t buy it? I get it, you’re part of the problem.

I think a lot of the frustrations are mutual. I don’t know how often I have tried to advocate a newer and better solution and met a stone wall. One of my earliest memories from college was when I argued with my professor about the use of Microsoft Foundation Classes (MFC).

It was IMHO one of the worst-designed GUI toolkits ever. It severely diminished your productivity. At the time I had discovered the Qt GUI toolkit, which was a newfangled thing back then and was a breath of fresh air. Anyone with an open mind could see it was light years ahead of MFC.

My professor insisted MFC was the solid industry choice and that we should not waste time on newfangled fads like Qt. It is highly ironic, then, that I pretty much never spent any of my C++ career after college writing MFC code, but predominantly wrote Qt code. Qt pretty much took over the whole C++ GUI space. The “fad” became the industry standard, so to speak.

Qt Creator, used to develop C++ applications using the Qt GUI toolkit.

OK, it is a white lie to say I didn’t use MFC. For my first two years, I wrote software built on top of MFC. But everybody at the company agreed that MFC was such a horrible technology that we put a layer on top of it and tried to forget all about it. It still used every opportunity it had to sabotage us, until C# came around and everybody was happy to kill MFC.

In another job I began porting our Objective-C application to Swift. Yet again I was met by people who regarded this as a waste of time. I only ported the most important parts. Then the very same people who had opposed porting the app began working on it.

Most ironically, I got loud complaints about the Objective-C code that was still around. The same people who had once opposed a switch to Swift absolutely loved the language and hated dealing with Objective-C. Thus I had to rewrite the rest of the app. In this case, real progress had been stalled by an old technology, because new developers simply could not get up to speed with old tech as quickly as they could with a modern language like Swift.

Old, crusty languages come with a cost. Sometimes a rewrite saves you a lot of time and effort down the road. My rewrite of Objective-C code to Swift was a good example. In doing so, I discovered plenty of bugs in the original code, because Swift has a much stricter type system that caught errors the Objective-C compiler had let pass.

What is a Language?

Finally I would like to ask: what is really a language? Learning the syntax and semantics of a programming language is often something that can be done within a few days. Certainly with simple ones like Go.

What really takes time is learning the tool chain, the libraries, the idioms. This can take years to really master. A language can also change so profoundly between versions that upgrading feels more like switching to another language.

I spent about 15 years with C++. Yet I am more capable of reading Java code than modern C++, despite spending very little of my career programming Java. You can probably learn 3 new languages in the time it takes for you to learn the Boost C++ library.

When switching jobs, it is not the language that takes time to master but the whole ecosystem of libraries the new workplace uses. What exactly is a language? For natural languages, we consider the words to be part of the language. A language is not merely its syntax. I would argue that in programming, the libraries we use in many ways create de facto new languages.

Programming with an entirely new, fashionable JavaScript framework is essentially like switching to a new language. The complexity of dealing with the new syntax of another language is completely overrated, IMHO. That is the simplest thing for a developer to deal with.

The challenge is always to learn the frameworks, and that challenge can often be equally large or larger within the same language. Switching from, say, MFC programming to Qt programming in C++ probably requires a similar effort as switching from C++ to Go.

Libraries in one language are often modeled on libraries from other languages. I was recently looking through popular libraries in Go. Quite a few of them remarked that the library was modeled on, or derived from, a similar library in Python.

Thus you can ask yourself what would require more effort: switching between two major frameworks in Python, or switching from Python to Go while keeping, say, a web framework that is almost identical to the one you used in Python?

One of the reasons I could get up to speed quickly on Go when I first tried it was because so much of the standard library looked like libraries I was used to from both C and Python.

Again, new languages are not fashion statements. Well-proven ideas and approaches tend to get recycled. There is an interest from language designers to make existing coders comfortable and feel familiar with their new environment.

Domain Specific Languages (DSLs)

We could pose this question of what a language is in another way: how does using a different framework differ from using a domain specific language (DSL) to do the same thing?

Many languages are flexible enough to express what essentially amounts to mini languages within the language itself. LISP, Julia and Ruby are all quite good at this.

I ask: how is creating a DSL fundamentally more problematic than adding another framework? Both are, in the most abstract sense, ways of creating a new language. Both add a new vocabulary. If new languages are by definition evil, then there should be some intrinsic and obvious advantage to using frameworks over DSLs.

In the LISP tradition, creating DSLs is very much how you solve problems. You identify a problem domain and then you create a language, or more specifically a DSL tailored to that domain, which can express the ideas and constraints of that domain elegantly. This gives a more compact and succinct expression of a solution.
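To sketch what an internal DSL can look like, here is a toy example of my own in Python (not taken from any LISP text): a tiny vocabulary for physical quantities, built from nothing but operator overloading.

```python
class Unit:
    """A toy internal DSL for physical quantities: write 100.0 * m / (10.0 * s)."""

    def __init__(self, value, dims):
        self.value = value
        self.dims = dims                      # maps unit name -> exponent

    def __rmul__(self, scalar):               # scalar * unit
        return Unit(scalar * self.value, dict(self.dims))

    def _combine(self, other, sign):
        # Merge exponent maps, dropping dimensions that cancel out.
        dims = dict(self.dims)
        for d, e in other.dims.items():
            dims[d] = dims.get(d, 0) + sign * e
        return {d: e for d, e in dims.items() if e != 0}

    def __mul__(self, other):                 # unit * unit
        return Unit(self.value * other.value, self._combine(other, +1))

    def __truediv__(self, other):             # unit / unit
        return Unit(self.value / other.value, self._combine(other, -1))

    def __repr__(self):
        terms = " ".join(d if e == 1 else f"{d}^{e}" for d, e in sorted(self.dims.items()))
        return f"{self.value} {terms}"

m = Unit(1.0, {"m": 1})   # meters
s = Unit(1.0, {"s": 1})   # seconds

speed = 100.0 * m / (10.0 * s)
print(speed)              # prints 10.0 m s^-1
```

The point is not the physics; it is that a handful of method definitions give you a compact problem-domain vocabulary, which is exactly the kind of thing a framework would otherwise provide.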

It is not really different from how natural language works. New fields such as physics, chemistry and psychology develop their own terminology and jargon to express their important concepts. Without these new sub languages, the field would be hamstrung.

Star Gazers
