Can programming be liberated from the von Neumann style?

Tomasz Jaskuła
Luteceo — software chemistry
9 min read · Jan 10, 2018
Universal machine: John von Neumann and his high-speed computer, circa 1952.

It’s been 40 years since John Backus’ 1977 Turing Award lecture, “Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs”. In it, he described a function-level programming language (not to be confused with functional programming) known as FP that he was working on.

This was a visionary article, well ahead of its time. The point is that he pinpointed a fundamental problem in how we think about programming, one that is still valid 40 years later. The argument is not about imperative vs. functional programming; it is about how people see programming as explicit control flow over statements, without realizing that this is not the only approach to programming.

Backus’ paper was controversial at the time, but it was certainly a very good step in the right direction.

Criticism of conventional programming languages

The first part of the paper sets the tone for the pages that follow:

Conventional programming languages are growing ever more enormous, but not stronger. Inherent defects at the most basic level cause them to be both fat and weak: their primitive word-at-a-time style of programming inherited from their common ancestor — the von Neumann computer, their close coupling of semantics to state transitions, their division of programming into a world of expressions and a world of statements, their inability to effectively use powerful combining forms for building new programs from existing ones, and their lack of useful mathematical properties for reasoning about programs.

40 years have passed since this was written, but do you think it’s no longer relevant? I think most of it still applies to your preferred mainstream programming language. I hear you saying it is no longer true because most mainstream languages are not imperative but Object Oriented. Different paradigms!

But when I read through Backus’ article, he highlights problems that still apply to mainstream languages: they are getting ever more enormous but not stronger, they are fat and weak, and they still have primitive statements and a close coupling of semantics to state transitions.

Sure, “word-at-a-time” is no longer literally true, but how often do we write code based on arrays, not only to express a collection of elements but to encode other data structures such as trees, binary search trees and graphs? In mainstream languages most algorithms use arrays as the basic data structure, and you are left implementing error-prone logic and index arithmetic. This is “word-at-a-time” processing even if you don’t call it that.

Modifying mutable variables, and using assignments and control structures such as if-then-else, loops, break, continue and return, is the most common informal way to understand imperative programs: as instruction sequences for a von Neumann computer. There is a strong correspondence between mutable variables and memory cells, variable dereferences and load instructions, variable assignments and store instructions, control structures and jumps. But how does this “word-at-a-time” way of conceptualizing programs scale up?
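To make that concrete, here is a minimal sketch in Scala (the names are mine) of the same binary tree encoded first as an array with hand-rolled index arithmetic, then as an explicit data type:

```scala
// 1. "Word-at-a-time" style: a complete binary tree packed into an array.
//    The children of node i live at indices 2*i + 1 and 2*i + 2, and every
//    traversal is index arithmetic that the compiler cannot check for us.
val packed = Array(8, 4, 7, 1, 3)
def leftChild(i: Int): Int  = 2 * i + 1   // an off-by-one here silently corrupts the tree
def rightChild(i: Int): Int = 2 * i + 2

// 2. The same structure made explicit as an algebraic data type: the shape
//    of the tree is part of the type, and there are no indices to get wrong.
sealed trait Tree
case class Leaf(value: Int) extends Tree
case class Node(value: Int, left: Tree, right: Tree) extends Tree
```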

We solve problems using abstractions and models, but how can we express useful abstractions if we have to awkwardly encode our types into basic structures with flawed control-flow semantics?

So many questions; but in the end you may say that imperative languages belong to the past, so it’s not your problem anymore. Moreover, most of the software industry is not interested in solving algorithmic problems at all. Mainstream languages are mostly used to encode business logic and to construct models that solve real business problems with Object Oriented programming. Let’s model the real world, they say! Bear with me: most of the time people use objects (classes) as mutable data wrappers, combined with the same control structures used in the von Neumann programming style.

Backus also highlights:

…their lack of useful mathematical properties for reasoning about programs.

40 years later, Object Oriented languages have become the most popular choice in the software development industry, but I don’t think we have any mathematical model to validate the correctness of OOP. We have Turing machines for imperative programming and the lambda calculus for functional programming, but nothing for OOP. The question, then, is: how do we ensure the correctness of an Object Oriented program? In C#, for example, you could use “Code Contracts”, an implementation of Bertrand Meyer’s “design by contract” idea, but I have never seen them used in the field. It just doesn’t work: you can use invariants in very simple cases, but when things get more complicated, when you have to depend on invariants from another class, you end up with convoluted logic just to implement the contracts. And checking pre- and postconditions in many real-world scenarios is so complicated that it rules out many use cases.
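For illustration, here is what the design-by-contract idea looks like as a minimal sketch in Scala (my own toy example, not the C# Code Contracts API), using `require` for preconditions and `ensuring` for a postcondition:

```scala
// A toy "design by contract" function: runtime-checked pre/postconditions.
def withdraw(balance: BigDecimal, amount: BigDecimal): BigDecimal = {
  require(amount > 0, "amount must be positive")     // precondition
  require(amount <= balance, "insufficient funds")   // precondition
  (balance - amount)
    .ensuring(_ >= 0, "balance must stay non-negative") // postcondition
}
```

Even in this toy form you can see the limitation: the contracts are runtime assertions bolted onto the code, not properties a compiler can reason about, and they get unwieldy as soon as they span several collaborating classes.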

By the way, even though we have a mathematical model for functional programming (the lambda calculus), people writing business software in FP don’t need to provide mathematical proofs that their programs are correct. That is not the point. (Haskell, for example, is compiled to a typed intermediate lambda calculus, GHC’s Core, and type-checking that small core language catches whole classes of errors; this is why Haskellers like to say “if it compiles, it works”.) The simple fact that your programming language is backed by a mathematical model lets you think about business problems in different ways, and gives you building blocks with mathematical properties that allow you to write better software, with richer expressiveness and much better abstractions.
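To give one small taste of what those mathematical properties buy you: laws such as map fusion hold for every pure function, so programs can be rewritten algebraically (a sketch in Scala):

```scala
// Map fusion: mapping f and then g is the same as mapping their composition,
// so two traversals of the list can always be rewritten as one.
val xs = List(1, 2, 3)
val f = (n: Int) => n + 1
val g = (n: Int) => n * 2

assert(xs.map(f).map(g) == xs.map(f.andThen(g)))   // holds for any pure f and g
```

He continues: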

The second world of conventional programming languages is the world of statements. The primary statement in that world is the assignment statement itself. All the other statements of the language exist in order to make it possible to perform a computation that must be based on this primitive construct: the assignment statement.

This is, for Backus, one of the weakest elements of the von Neumann programming style, and the one he seems most heated about. His vision of a better model is essentially one without variables, with a coherent algebra, precise definitions and applicative state transition properties.
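To see what “without variables” means in practice, compare an assignment-driven loop with the same computation written as a single expression (a sketch in Scala):

```scala
val xs = List(1, 2, 3, 4)

// The world of statements: computation advances by assigning to a mutable cell.
var total = 0
for (x <- xs) total = total + x   // each iteration is a store instruction in disguise

// The world of expressions: the same sum with no assignment statement anywhere.
val total2 = xs.foldLeft(0)(_ + _)
```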

In the second part of the article, Backus proposes a solution to the von Neumann programming problem.

Functional programming as the solution

Backus describes his vision of functional programming, which seems rather modern even though 40 years have passed. The syntax of the FP language he proposes is quite different from modern functional programming languages, but the ideas are still fresh, and it was certainly the way to go at the time. It is based on function composition and higher-order functions, and a program is evaluated through reduction. He describes a short program, the inner product of two vectors, then compares it to the von Neumann style, highlighting the following properties, which I’ll try to explain (with a modern sketch of the program after the list):

a) It operates only on its arguments. There are no hidden states or complex transition rules…

This is the core of functional programming. This is why it is so powerful and easy to work with.

b) It is hierarchical, being built from three simpler functions (+, x, Trans)

This is not 100% clear to me, but I think what he calls “hierarchy” we would today call “composition”: the program is built by combining simpler functions.

c) It is static and nonrepetitive, in the sense that its structure is helpful in understanding it without mentally executing it.

This is also a core idea of functional programming. We care about what the program means, not how it is implemented. There is no cognitive load from mentally executing a program step by step in order to understand what it does. How many times have you mentally stepped through a program in your preferred popular language just to understand it?

d) It operates on whole conceptual units, not words; it has three steps; no step is repeated.

Higher-level abstractions: the program operates on whole conceptual units (entire vectors), not on one word at a time.

e) It incorporates no data; it is completely general; it works for any pair of conformable vectors.

I’m not sure about this one, but I read it as being about genericity: the program embeds no data, so it works for any pair of conformable vectors (vectors whose lengths match), whatever their contents.

f) It does not name its arguments; it can be applied to any pair of vectors without any procedure declaration or complex substitution rules.

What he is describing is what we now call point-free (or tacit) style: functions are combined without ever naming their arguments.

g) It employs housekeeping forms and functions that are generally useful in many other programs; in fact, only + and x are not concerned with housekeeping. These forms and functions can combine with others to create higher level housekeeping operators.

For me, what he describes as “housekeeping” covers all the constructs we repeat endlessly in mainstream languages: loops, if-else branching, and so on. How many times have you written a loop of almost the same shape without realizing what a poor abstraction it is? It is rooted so deeply in our way of programming that we don’t even notice we’re still writing code in the von Neumann “word-at-a-time” style. In functional programming these patterns were captured long ago, so we don’t need to write them over and over: instead of looping we have higher-order functions, and if-else chains can be replaced with powerful pattern-matching decomposition.
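To tie these properties together, here is a hedged modern rendering in Scala of the paper’s inner-product program, Def InnerProduct ≡ (Insert +) ∘ (ApplyToAll ×) ∘ Transpose:

```scala
// Backus's inner product, property by property: Transpose pairs up the two
// vectors, ApplyToAll multiplies each pair, Insert (a fold) sums the results.
// No named state, no indices, no loop: just composed building blocks.
def innerProduct(a: Vector[Double], b: Vector[Double]): Double =
  a.zip(b)                         // Transpose
    .map { case (x, y) => x * y }  // ApplyToAll ×
    .foldLeft(0.0)(_ + _)          // Insert +

// innerProduct(Vector(1, 2, 3), Vector(6, 5, 4)) == 28.0
```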

Backus goes on to describe his vision of a functional programming system in much more detail, with mathematical proofs that I won’t attempt to explain here, but I hope I have captured the core idea of the paper. It is definitely worth reading.

However, you may ask: if functional programming is so good, why…

…FP didn’t take over the world?

Not yet! But it’s getting more and more popular, for good reasons. Part of the answer can be found in the paper:

Our fixation on von Neumann languages has continued the primacy of the von Neumann computer, and our dependency on it has made non-von Neumann languages uneconomical and has limited their development. The absence of full scale, effective programming styles founded on non-von Neumann principles has deprived designers of an intellectual foundation for new computer architectures.

and then:

Moreover, most applicative systems employ the substitution operation of the lambda calculus as their basic operation. This operation is one of virtually unlimited power, but its complete and efficient realization presents great difficulties to the machine designer.

It seems that functional architectures failed not because they were unworkable but because there was no support from a market rooted in the von Neumann style, nor from industry or much of academia. Mainstream languages have steamrolled ahead since then, and the Object Oriented paradigm imposed itself as the only valid way of writing programs at scale, getting more and more popular. But popularity is not the same as quality. Not everything popular is bad, but definitely not everything popular is good. In the end, popularity might only be evidence of the minimum level of quality acceptable to the majority of people.

Object Oriented programming is good enough, and that is what allows it to be so widely used. People are happy to choose good-enough solutions that seem easy, no matter whether better solutions exist. OOP is relatively accessible to the average programmer, even if in the end it is more complex than functional programming; the root cause may be the way programming is taught in universities. Functional programming is often associated with maths, which indeed may help, but bear with me: you don’t need to know category theory or the lambda calculus to write composable functional software.

We’ve also touched upon the lack of verifiable mathematical properties of Object Oriented languages, and the fact that there is no formal model for proving the correctness of Object Oriented programs, but this didn’t stop their wide adoption and growing popularity. Indeed, OOP is still adopted mostly for business applications, where ultimate correctness isn’t necessary. You don’t need 100% correctness most of the time: from the business point of view you need software that is good enough, so that you earn more money than you lose. Very often, time to market is a more important factor than the ultimate correctness and stability of your program, and you have to make do with the skills available on the market. That’s how business works! In my opinion, though, you can write software in an FP language quicker, and more reliably, than in OOP; it depends on which paradigm the team is familiar with.

40 years later, I have a feeling that we’re still fighting the same problems Backus described in this paper. It is a visionary article, ahead of its time. The biggest problem is not the mainstream languages but how most people collectively think inside the box without realizing that other paradigms of thought exist. Many people solve problems by locking them into the mental structures of the von Neumann style, as if it were the only way. Even though functional programming has been getting more popular for some time, it will take a while before we can call it mainstream, or another paradigm may emerge in the meantime (maybe quantum computing will change everything we know about programming). On the positive side, many concepts from functional programming are leaking into mainstream languages. For some people this may be good or bad, but what I see is an opportunity to learn to think, and to solve problems, differently.

And you, what do you think? Are we still trapped in von Neumann style?



Founder of luteceo.com. My current interests are functional programming, event-sourced and distributed architectures, domain-driven design, philosophy and music.