Back to the Future — the Historical Context of Low-Code and the Return of HOPL

Carlos Silva
OutSystems Engineering
May 28, 2021 · 12 min read

It is hard for computer scientists and developers to think about the history of computation without immersing themselves in the history of programming languages. Computers and programming languages are concepts so interwoven that they appear to make little sense apart.

Nevertheless, on any given day, billions of humans use computers without possessing anything remotely resembling knowledge about their main programming languages. Such a rift between operating a computer and speaking its language may be a reality, but it is not an inevitability. It certainly wasn’t the plan at the dawn of computation.

As we will see in this article, computation’s original intent was to enable effortless interaction and communication between computers and the broadest possible user base. For an industry that has grown so exponentially in speed, capability, and impact on society, it is something of a mystery that, seven decades later, writing software has not become exponentially easier as well.

Even for those working on making programming and software development more accessible, like OutSystems, the connection to the original goal of computing pioneers is sometimes lost or unknown.

So let’s refocus and reflect on our place in the history of programming languages. Far from betraying tradition, we will find that those pursuing simpler and more accessible languages are going back to the roots of computer science and its purpose. When revisiting the past, it becomes clear that the future is low-code.

To better understand this crazy journey, let us contemplate what was once regarded as the future of computation — our past future — and how history actually unfolded — our present past. Maybe this exercise can better prepare us for what lies ahead.

Our Past Future

Room-size computers calculating planetary trajectories became engraved in our collective memory as the onset of digital computation. These hulking machines were physically intimidating and complex to operate. Computers such as the ENIAC (1946) were impossible to overlook because they took up a whole room, not because they blended into a particular space.

Although that was the reality at the time, when we look at what computation pioneers envisioned for the role of computers in society, we find numerous visions of a symbiotic relationship between humans and machines that would fundamentally change the way we work, communicate, and entertain ourselves.

As early as 1945, American engineer Vannevar Bush published a detailed description of the Memex — a system for storing and retrieving scientific information (think of it as an early Wikipedia) that was to become the everyday tool of scientists. His description focused mostly on how one would interact with the Memex through mediums that included screens for visual output, cameras to register and catalog new information, and speech-recognition systems for digital stenography.

Although Bush’s vision was purely theoretical, it inspired other pioneers who, following the integrated circuit revolution and the consequent shrinking in size and increase in performance, started building the tools that enabled us to think about user-centric computation.

One of these pioneers of the second age of computation was the American engineer and inventor Douglas Engelbart, who in the 1950s began the work that grew into a major project at the Stanford Research Institute (SRI). Engelbart’s mission focused chiefly on the development of new user interfaces for computer systems.

The “Mother of All Demos”

In 1968, in what has since been known as the “mother of all demos,” Engelbart’s team presented an array of new user interfaces — such as videoconferencing, word processing, collaborative text editing, outlining tools, and hyperlinks — all under a unified system known as the oN-Line System (NLS). More than 50 years ago, he was already laying the groundwork for some of today’s most widespread work tools.

In Engelbart’s words, the NLS would contribute to “augmenting the human intellect.” His vision was to increase the effectiveness of individuals through digital computer systems that were easy and effective to use. Almost prophetically, Engelbart argued that these systems could fundamentally change the way people collaborate, work, and, ultimately, think.

Fig. 1: The image on the right shows one of the several desktop configurations for the NLS — note how similar this arrangement is to today’s desktop computers. The image on the left shows what became the most popular mock-up design for the NLS (here operated by Engelbart), the Herman Miller design. Source: Stanford.edu

While Engelbart was pursuing the goal of effortless interaction with computers through novel HCI systems that are now all too familiar, other computer scientists dedicated themselves to effortless communication with computers by developing novel programming languages. Spearheading this pursuit was the iconic computer scientist Jean E. Sammet.

Sammet created FORMAC, the first language of significant use for symbolic mathematics. She was also one of the main contributors to the development of another hallmark language: COBOL, the “common business-oriented language.”

With statements such as ADD, SUBTRACT, COMPUTE, PERFORM, or DISPLAY, it is probably safe to regard COBOL as a first natural-language approach to programming, as it purposely used English-like syntax to bridge the developer’s intent and the computer’s execution. Tailoring programming languages to foster ease of use was such a hot topic at the time that the Association for Computing Machinery (ACM) created a somewhat periodic conference: the ACM SIGPLAN History of Programming Languages (HOPL).
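
To make that English-like quality concrete, here is a minimal, illustrative COBOL sketch (the program and data names are invented for this example, and it is written in free format). Even a reader who has never seen COBOL can follow the intent:

    IDENTIFICATION DIVISION.
    PROGRAM-ID. PAYROLL-SAMPLE.
    DATA DIVISION.
    WORKING-STORAGE SECTION.
    01 HOURS-WORKED PIC 9(3)    VALUE 40.
    01 HOURLY-RATE  PIC 9(3)V99 VALUE 25.50.
    01 GROSS-PAY    PIC 9(5)V99 VALUE 0.
    PROCEDURE DIVISION.
        *> Multiply hours by rate and print the result to the console.
        COMPUTE GROSS-PAY = HOURS-WORKED * HOURLY-RATE.
        DISPLAY "GROSS PAY IS " GROSS-PAY.
        STOP RUN.

Read aloud, the PROCEDURE DIVISION is close to a plain English instruction, which is precisely the bridge between intent and execution that Sammet and her colleagues were after.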

In the words of Tim Bergin, a computer science historian, HOPL became a place to “examine the early development of selected programming languages — with an emphasis on the technical aspects of language design and creation,” thus becoming the main venue where programming languages and their design stories are presented. Most importantly, it would be the place to discuss their design and figure out their potential impact on how humans communicate with machines.

Computers Are for Everybody

In the first HOPL conference (1978), the keynote speaker — the legendary Grace Hopper — identified the goal of programming languages as twofold:

  1. To create better programs with less effort.
  2. To allow non-specialists to use computers to solve problems.

If there were any doubts about the original intent and vision of programming languages, Commodore Hopper made sure to dispel them, stating that computer scientists should always aim to program with less effort while broadening the pool of those capable of doing it.

Of course, at the time, by “non-specialists” she meant the likes of engineers and mathematicians outside computer science, but the foundational idea was laid out: as technology evolves, programming should become increasingly easy and accessible to anyone who wants to develop computer programs. Ring a bell?

So why is it that, more than 40 years after the first HOPL, we have reached a point where, despite an unimaginable technological leap, the large majority of humans still see programming as some kind of magic? Why do we face this apparent disconnect, where computers have become increasingly powerful and embedded in our daily lives, but the means to talk to them directly remain cryptic, much like an odd foreign language?

Fig. 2: The Tower of Babel of programming languages. An illustration that became famous after appearing on the cover of Sammet’s book Programming Languages: History and Fundamentals (1969). Source: craftofcoding.wordpress.com

Our Present Past

In his foreword to the book Masterminds of Programming, computer scientist Sir Tony Hoare points out that although “[…] there are so many programmers [and researchers] who think they can design a programming language better than one they are currently using […] very few designs do ever leave the designer’s bottom drawer.”

Known for his focus on the verifiable correctness of implementations, Hoare understandably attributes the difficulty of creating new programming languages to the seriousness of the business — i.e., minor errors in language design can bubble up into large errors with potentially catastrophic consequences. This highlights one of the main differences between programming and human languages: while a human listener is quite tolerant of imprecision or even grammatical errors, a computer compiling a program is unable to guess what you mean, to fill in the blanks, or even to help you finish a sentence.

There is also no prosody in a programming language: none of those little elements of human speech that carry acoustic and rhythmic cues and let you decipher the speaker’s meaning. So the computer doesn’t know whether you said something confidently or whether you are a little unsure. Computers are gullible: by default, they take you at your word.

Thus, developing a language that is flexible enough to accommodate some “speech creativity,” yet structured strictly enough to facilitate verification and improve security, is a challenging balancing act. And this is probably the main reason why, to the untrained eye, programming languages today look pretty similar to those presented at the first HOPL more than 40 years ago.

Targeted Domains vs. General Purpose

Another critical factor in the slow progress of making software writing easier is the old law of supply and demand. For many years, software was confined mainly to scientific, military, and medical applications, all fields that promote a specialist rather than a generalist approach. As such, languages that grew to serve a specific field tended to become increasingly intelligible to specialists in that area while becoming increasingly (or at least remaining) cryptic to the general public.

For many years, there was little incentive to aim at Hopper’s goals when designing a new language because, for the most part, the target user would already be somewhat familiar with the jargon. At one point, we witnessed the divide between domain-specific languages (DSLs — languages tailored for and targeting specific application domains) and general-purpose languages (GPLs — languages broadly applicable across different application domains).

Although a good idea in principle, this was a missed opportunity to focus on usability and accessibility within the scope of GPLs while leaving specialist jargon confined to the domain of DSLs. Once again, it is safe to say that there was not sufficient incentive to keep simplifying GPLs.

And, perhaps ironically, some of the first genuine efforts at making programming languages more user-friendly came from DSLs with a broad audience of non-specialist users, such as those targeting gaming and 3D modeling. Think of the GameMaker Language, UnrealScript, or even the logic-brick-style programming found in game engines such as Blender.

Finally, a third and non-negligible factor has to do with technical problems, including hardware, architectural, and infrastructural ones. A hypothetical graph of programming language evolution could resemble a step function, with long treads and with risers associated with technical innovations.

A fundamentally different language paradigm often requires harnessing the power unleashed by a groundbreaking technological innovation. Summarizing the last 70 years in a very simplistic way, we can pinpoint the integrated circuit revolution as the trigger that moved us from room-size, punch-card-operated computers to script-interpreting digital desktop machines.

Later on, compilers created a layer above assembly that allowed for third-generation (3GL) or high-level languages, such as FORTRAN, C, Java, and Visual Basic. And more recently, the maturity of web infrastructure, advanced software architectures, and hardware improvements have made cloud computing possible, accelerating the viability of low-code and visual languages capable of reaching larger audiences.

Catching a Slow-Moving Train

What is noteworthy is that each of these technical innovations preceded an increase in the abstraction level of programming languages — from machine language to assembly language, from imperative to functional, from low-code to visual languages.

These increments have consistently opened the door to a new set of developers who, either because of their background or because of the effort needed to master previous languages, had been kept outside the pool of highly valued professionals who know how to develop software.

So, let us summarize the journey so far. We know that the pioneering computer scientists envisioned programming languages evolving to a level where non-specialists could rapidly create computer programs. We also know that this evolution has been slower than initially predicted.

We saw that there were, arguably, three main reasons for that:

  1. The complexity reason — creating a well-implemented, secure, and robust computer program is much more complicated than just creating a computer program.
  2. The economic reason — for many years, software applications were confined to specific domains, and only recently have we witnessed widespread access to, and the near-complete ubiquity of, digital tools, and thus a gargantuan need for new developers.
  3. The technological reason — we can only abstract a language as far as our technical knowledge allows, so a hypothetical graph of programming language evolution could resemble a step function, with long treads and with risers associated with technical innovations.

To close this section on a high note, the good news is that we are getting better at escaping each of these three constraints. So let’s have a look at what the future holds.

A Possible Future

In his book The Future, a concise history of futuristic visions and how they got things right or wrong, Professor of Digital Media Nick Montfort states the following about fortune-telling endeavors: “It’s impossible to look at positive, productive visions of the future without seeing the context of despair out of which they arise or into which they sink.”

This sentence is a concise, elegant, and brutally honest way of uncovering the fundamentals of devising visions of the future. And make no mistake: a little bit of futurology is at the core of innovation. Above all, what Montfort is saying is that envisioning exercises are helpful because they can be productive and result in actions in the present.

However, to be of any use, these visions of the future have to carefully consider both the tortuous past that brought us here and the multiple possible paths that may lead us astray going forward. We need this information to chart a course towards one of the numerous potential outcomes.

In the industry, we all feel that the three constraints on the evolution of programming languages mentioned in the last section are on the verge of being, if not overcome, at least loosened. An array of tools tailored to automatically verifying your code, ensuring good practices, and adding security to your applications addresses the formalism, implementation, and protection issues.

We are currently facing a major problem in this industry: the ever-increasing need for developers to support digital transformation. General-purpose languages, still designed around and (primarily) relying on the same principles as in past decades, are not suited to solving this issue. A true leap forward in abstraction is required to bring a new set of developers aboard and close this professional shortage.

And finally, technology has reached a point that allows us to harness the computing power of virtual machine clusters while using the web as the main gateway for developing and managing your work.

Low-Code: Breaking Away From the Harder to Master

Undeniably, one of the leading agents of technological disruption in this context is low-code development platforms. These platforms evolved from the RAD (Rapid Application Development) tools that appeared in the early 2000s and became all the rage with their promise of accelerating and easing development by freeing developers from the heavy lifting of writing 3GL code.

The main characteristic of low-code development platforms is that they intentionally hide some complexity, which is safely automated, under the hood of simpler, often primarily visual, user interfaces. This is a game-changer because it is the first effective attempt to break away from harder-to-master script-based languages.

The new abstraction layer promoted by low-code platforms emphasizes “designing” your logic instead of “writing” your code. Arguably, designing is more universal than writing in a specific language. As such, with this new abstraction layer we have finally opened the door to transforming an operation in an advanced user interface — a drag-and-drop action, drawing a connector, or toggling a switch — into a set of commands that trickle down, quickly piercing through the traditional abstraction layers.

Have we finally found the missing piece of the puzzle: a programming language that genuinely answers the goals Hopper laid out at the first HOPL? Steps up in the abstraction level of programming languages have always generated some doubts, sometimes justified concerns, and often knee-jerk reactions opposing change. But now that we have briefly dived into some of the history of programming languages, we have a better sense of the big picture, in which evolving towards easier means of communicating with machines appears to be the central theme.

Fig. 3: The left image shows a snapshot of the GRAIL (Graphical Input Language) system developed at the RAND Corporation in the late 1960s and famously demonstrated by Alan Kay. The image on the right shows the same principle of graphical manipulation of a logic flow in OutSystems (2021).

We have all had the experience of talking about OutSystems to developers and witnessing one of two common reactions: 1) they love and advocate for it, or 2) they dismiss it for its simplicity and visual focus. Those who fall into the second category usually forget how difficult it is to bridge the gap between operating a computer and speaking its language, and are often unaware of what our computing ancestors envisioned at the onset of our field.

Now that we know a little more about our history and the forces that shaped it, we can confidently say that the future is low-code. And perhaps this future is just around the corner: according to Gartner, low-code will be responsible for more than 65% of application development activity by 2024. At OutSystems, one of the pioneers of low-code, we are constantly striving to tick all the boxes that will make us the de facto answer to the complexity, economic, and technological challenges that hinder the evolution of programming languages towards an effortless and accessible way of communicating with computers.

This year marks the return of the HOPL conference, a rare, comet-like event — past editions were held in 1978, 1993, and 2007. At HOPL IV, we will see, for the first time, presentations on fourth-generation languages (4GL), such as LabVIEW, MATLAB, and R. Looking at how HOPL has been tracing the evolution of programming languages, we need to embrace the challenge. As the archetype of a low-code language, we hope to see OutSystems take its rightful place among the languages presented at HOPL V.

Originally published at https://www.outsystems.com.


Carlos Silva
OutSystems Engineering

I’m a researcher in the fields of Human-Computer Interaction (HCI), Human Factors & Ergonomics, and Experimental Psychology.