Programming in 2037

Hello!

I’m deeply honored to give this keynote talk here at the All Things Computational conference in the year 2037.

In this talk I want to take a look at the recent history of software development and remind you, my dear audience, of the changes in the software engineering profession during the last twenty years, so approximately since 2017. The late 2010s provide an interesting reference point because the greatest changes in our profession since perhaps the times of Turing and von Neumann were just beginning to manifest themselves then.

Rise of semantic computing

Perhaps the most remarkable change during the past two decades has occurred in how programs are built. A large fraction of a programmer’s time used to be taken by the grunt work of translating human-level concepts into machine instructions. If you wanted to create a service with user logins, recommendations, or shopping carts, you had to break the concepts down into small pieces and ultimately express them as assignment statements, loops, and other low-level constructs.

The rise of semantic computing has changed that. A programmer is now able to describe a solution roughly in natural language, and a compiler AI turns it into program code, using semantic understanding to disambiguate vague descriptions and soliciting more feedback from the programmer when it doesn’t understand something. Programming is now more of a collaboration (some would even say a discussion) with the computer.

Today programmers specify goals rather than procedures and show examples rather than write instructions. The shift is reflected even in job titles as the term software teacher has recently become more popular than the more traditional software developer.
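To make the idea of showing examples rather than writing instructions concrete, here is a minimal sketch of example-driven synthesis, a precursor of this style of programming that already existed in the 2010s. The operation names and the tiny search space are purely illustrative assumptions, not any real system’s API: the “teacher” supplies input/output pairs, and the machine searches for a pipeline of primitive operations consistent with them.

```python
# Toy programming-by-example: search a small space of function pipelines
# for one that reproduces all of the teacher's input/output examples.
import itertools

# Hypothetical primitive operations the searcher may compose.
OPS = {
    "add1": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
    "negate": lambda x: -x,
}

def synthesize(examples, max_depth=3):
    """Return a list of op names whose composition (applied left to right)
    matches every (input, output) example, or None if none is found."""
    for depth in range(1, max_depth + 1):
        for names in itertools.product(OPS, repeat=depth):
            def run(x, names=names):
                for name in names:
                    x = OPS[name](x)
                return x
            if all(run(i) == o for i, o in examples):
                return list(names)
    return None

# Teach by example: the hidden goal is f(x) = (x + 1) * 2.
examples = [(1, 4), (2, 6), (5, 12)]
print(synthesize(examples))  # → ['add1', 'double']
```

A real semantic compiler would of course search a vastly richer space and combine the examples with a natural-language goal description, but the teacher/searcher division of labor is the same.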

There has been a tremendous push to advance semantic understanding during the last decade because computers should adapt to humans rather than the other way around. While far from perfect, semantic understanding has already substantially expanded the scale and scope of problems that we are able to attack. Without it, we surely would not have virtual entertainment holodecks that produce simulations of any scenario a user describes.

Another reason why the role of a programmer has shifted towards a collaboration with the machine is that machines are far better at handling complex algorithms. In the past, it was the human programmer who had to discover an algorithm for solving a task. As tasks grew ever more complicated, it became obvious that most interesting computation is too complex for a human mind to grasp. Computers, on the other hand, can mine the space of algorithms to find one that fulfills the goals. In retrospect, it is obvious that convincing simulation of humans and nature requires intractable algorithms, because nature itself is intractable.

Of course, there still are traditional software engineers who write carefully hand-crafted code in legacy languages such as JavaScript — JavaScript will be here forever — but their numbers are minuscule relative to all programmers.

Disappearance of computers

Another noticeable change has happened in equipment: laptops and mobile phones have disappeared. Laptops (the gray oblong boxes you see in old movies) used to act as both a computational unit and a tool for editing source code. Nowadays you rarely see anyone carrying around their own computer unit, because all computation can seamlessly switch between a personal computing core woven into the fabric of personal clothing, local-area computing resources embedded in buildings, and the cloud, depending on the needs of the moment. Augmented reality glasses and retinal displays provide a way to view content, and high-accuracy finger position tracking turns any surface into a keyboard.

Direct brain-machine interfaces have been rapidly superseding other systems during the past few years. When content can be projected directly onto the visual cortex, there is no need even to wear glasses. The brain-machine interface is based on medical implants originally designed to assist paralyzed and blind people.

Expansion of the profession

The number of programmers has exploded during the recent decades. This is partly because the advancements in natural language understanding have made programming more accessible, but also because progress in modern professions, such as virtual reality environment design or 3D-printable food chemistry, depends crucially on our ability to instruct computers to perform complex tasks. Even the professions that existed before computers have been profoundly changed: the ability to understand complex models by running simulations has opened up a whole new toolkit for researchers in all fields. Computer programmer used to be a separate profession; today every professional is a programmer.

Programming used to be a generalist skill set that one could apply in different application areas after acquiring some domain knowledge. This is not true anymore. As the number of programmers has exploded, they have also specialized. It is rare to see, say, a blood nanobot programmer moving into law data mining because, even though both in principle program computers, the necessary skills have diverged so much.

Another force driving the specialization is the emergence of new computing technologies. The problems facing someone working on self-assembling nano computers are quite different from those encountered in massively parallel DNA computing.

A lot has changed in software development during the past two decades. Still, the core remains: programmers solve problems and automate the solutions.

This is a textual transcript of the keynote presentation. To access the full-sensory recording activate your brain-machine interface and think the words “Aloha Altavista! Show me the All Things Computational 2037 keynote.”