Until just yesterday, I thought AI would never replace me as a programmer.
Well, programming as a profession has been waltzing with its own death from the very beginning. I imagine that when someone came up with the assembler for the first time, a lot of people thought that this was the end.
What? A program that turns human-readable scribbles into real machine code? So every manager can write code now? Are we obsolete? Have we been automated? Is it time to pack our things and go?
Then, soon enough, high-level languages came along: languages such as FORTRAN and COBOL. Now this definitely renders real programmers unnecessary, doesn’t it? You can be a mechanical engineer or a business analyst and be proficient with a computer. You don’t need a programmer to write your code for you anymore.
And then inductive programming came: functional as in Haskell, or logic as in Prolog. The idea of inductive programming is that you don’t write code; you only set the constraints within which a program should operate and, if possible, the language will write the code itself.
This became language-agnostic as it progressed into computer algebra systems. Now you can write some equations in a Python notebook and turn their solution into code for any other language. So not only do you not need a programmer to write your code, you don’t even have to write the code yourself.
And while all of these things were threatening to kill programming as a profession, the sheer number of programmers rose, and keeps rising. Exponentially, too: every five years the number doubles, way faster than mankind grows its population. At this rate, all ~10 billion of future us will have three programming jobs each by the year 2075.
Of course, this growth has to stop at some point. There is simply not enough cocaine in the world to keep us all programming non-stop for days and days. But it wouldn’t stop because of some disruptive technology; it would only stop when the demand for other professions outweighs the demand for programmers.
Disruptive technologies such as assembly, high-level languages, inductive programming, or AI can only change the way we do programming but not kill programming as an occupation.
Or so I thought before yesterday.
You might think I had a chat with GPT-3 and it finally convinced me to pursue the glamorous and prosperous career of a male prostitute. But no. In fact, yesterday was just another day; I was just doing my job. Well, not quite my job. I was doing a job C++ should have done for me but failed to. I was porting a piece of high-performance code from MSVC to GCC.
What makes programs run fast in 2021? Memory is still relatively slow and comes in huge cache lines, so explicit memory management is a must. We have many cores per CPU now, so we need parallel computation. Also, processors are not ticking much faster than some 20 years ago, but they do have a lot of pipelines now, so if we want fast code, we have to superscalarize everything. So, these three things.
C++ sucks at all three.
- Yes, there is a standard way to allocate aligned memory in C++ (std::aligned_alloc, since C++17), but MSVC doesn’t support it.
- Out-of-the-box concurrency is so anemic that you have to rely on third-party libraries such as Intel’s TBB.
- Compilers try to exploit SIMD when possible, but they fail to do it effectively, so you have to write your code in intrinsics to get all the benefits.
Which is all fine if you’re stuck with Microsoft and Intel for life.
But when you try to port things from one platform to another, you see that C++ quietly lost the portability game over the years. Yes, you can fight the alignment inconsistencies with defines. You can technically write a TBB clone for ARM, since Intel understandably has no interest in supporting its rivals. But with intrinsics, you hit a full stop. Intrinsics are processor-specific, so your code is either underoptimized or unportable.
Which is ironic, since C was originally invented to port what would become UNIX from the PDP-7 to the PDP-11. Its sole purpose was to enable portability. Now, 50 years later, we face the uncomfortable truth: to maximize performance, you have to use processor instructions better than the compiler does. So, essentially, to code just like the very first programmers did.
Ok, but what does it have to do with AI?
Glad you asked. I feel that while the demand for programmers still follows the global market, the demand for innovation peaked in the 70s and has slowly declined over the years.
There will not be another disruptive technology as brilliant as inductive programming, or even high-level languages, since there is zero demand for one. I’m obviously unhappy with C++’s lack of development in high-performance computing, but guess what: I’m not unhappy enough to invent another FORTRAN. Apparently, no one is.
C++ kind of works. It’s not ideal but it’s not that bad either. That’s why it stays.
Yes, I spent a day rewriting the code from MSVC C++ to GCC C++, and it was boring, and I wrote this rant, but this changes nothing. A day is just a day. A rant is just a rant.
Now for the disruptive technology. While AI can’t yet do everything my clients want from me, it has every capability to do the boring part of my work. It can fix cross-compiler inconsistencies or optimize code at the intrinsics level. Even re-implementing a parallel-for with std::thread looks tedious enough to be delegated to the machine.
AI can even write a rant for me afterwards.
This is all possible, profitable, and probably fun for someone who has more experience with AI than I do. We already use SymPy to write code in C++; exploiting AI to optimize this code would be just one small step further, and people will gladly pay for that step. It has startup potential, so I’d expect it to appear any day now.
There is a major chance that AI will eventually replace me, maybe not as a programmer but as a C++ programmer. But there is little chance that some new technology will kill off C++ itself.