When the terms “traditional programming” or simply “programming” are used, we usually mean using programming languages such as Python, Java, C#, etc. to reshape or re-express a programmer’s solution to a problem in a form that a computer system can execute. We all remember courses such as Fundamentals of Programming, Advanced Programming, and Data Structures and Algorithms, which we had to pass in our bachelor’s degrees.

[credit] Leonardo.ai, prompt: “van gogh style painting of a man resting while a robot building a car”

As I clearly remember, our teachers emphasized how important these courses were for succeeding in the field. The primary goal was to gain the skill of converting an idea into a program, written by a programmer, that is human-readable and human-comprehensible. In the early days of computer science, people also had to understand processors to develop programs, because they were programming in very low-level languages such as machine code or assembly. As someone with a hardware background, I often run into miscommunication with software and database researchers over the correct terminology for hardware and system concepts. I used to wonder how they dared not appreciate how delicately complicated a CPU is, for example, how its pipeline works while encompassing a branch prediction unit, a scoreboard unit, and so on. The reason was that I am enthusiastic about these things and expected everybody else to be too. But those people don’t need that level of detail about processors for their work, thanks to the abstraction layers.

Moreover, I remember that during my bachelor’s, a research team was focused on machine vision. They were developing algorithms and mechanisms to enable machines to see, doing a great deal of programming and mathematical analysis to conclude that something specific had been detected or seen. However, now that I look back, I can see how machine learning changed the name of that line of research to “traditional computer vision”!

I think that in the future we won’t have programming as we know it today. For complicated applications where we cannot even imagine how to devise an algorithm to solve the problem, we will train deep learning models with data. And for simple programs, like the scripts we write in our everyday routine, there will be a model or application that generates them for us! I have believed this since witnessing the emergence of GitHub Copilot, ChatGPT, and image generators such as DALL-E and Leonardo.ai, which produce code, text, and graphics, respectively. The following graphic was generated by a deep learning model from the description “van gogh style painting of two loving dinosaurs”.

[credit] OpenAI DALL.E, prompt: “van gogh style painting of two loving dinosaurs”

Then we may wonder: OK, so what will programmers do in the future? Will programmers exist at all? It seems that programmers will be responsible for checking whether a training dataset is suitable for training (that the data is representative and balanced) and for evaluating the training process with appropriate metrics, as the sketch below illustrates. The trend shows that programmers are moving further away from the low-level abstraction layers. In the beginning, they knew exactly how a specific processor executed their code. These days, they only know that their high-level code is translated by several software layers into something a processor can execute. In the future, they won’t even define the rules for turning input into output. They will monitor and supervise what AI models are learning and performing!
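To make this concrete, here is a minimal sketch of what that supervisory work can look like: inspecting class balance before training, then judging the result with metrics instead of reading the code. The dataset, model, and metrics here (scikit-learn’s toy Iris data and a logistic regression) are illustrative assumptions, not a prescription.

```python
# A minimal sketch of "programming" as data supervision: instead of
# hand-coding rules, we check the data and evaluate what a model learned.
from collections import Counter

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Step 1: check that the training data is representative and balanced.
print("Class distribution:", Counter(y))

# Step 2: train a model instead of writing explicit rules.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Step 3: evaluate with appropriate metrics rather than reading "the code".
print(classification_report(y_test, model.predict(X_test)))
```

The interesting part is what is missing: nowhere do we spell out the rules that map an input to an output; we only curate the data and judge the outcome.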

[credit] OpenAI DALL.E, prompt: “van gogh style painting of a man resting while a robot building a car”

Please share your ideas.
