Jensen Huang is Wrong about Coding

Nathan Anecone
8 min read · Feb 28, 2024

Image credit: NVIDIA

The CEO of NVIDIA thinks children should no longer bother to learn coding because in the future AI will do it. He also says AI makes everyone a programmer now, and that in the near future the only thing you will need to know to program is your native language. All of this is misguided on many levels and perpetuates several pernicious misconceptions about AI, about the value of programming knowledge, and about the value of possessing knowledge for ourselves generally.

Mr. Huang, a businessman with a background in electrical engineering who runs a hardware (not a software) company, and who is neither a programmer nor an AI scientist, made his comments at the World Government Summit in Dubai to an audience of powerful people whose decisions ripple out and affect the lives of millions. This makes his comments all the more irresponsible. So it’s important for those who know better to call out these misconceptions and correct the record.

All Aboard the Brain Drain Train

The idea that you shouldn’t bother to learn coding because AI can do it fits into a genre of unintentionally disempowering attitudes that promote ignorance and a loss of intellectual autonomy. Attitudes which, if fully embraced, could lead to a future “ignorance explosion” or “knowledge collapse” caused by AI. It’s the same reason Socrates warned that it’s a mistake to think you possess knowledge just because you own a library of books you haven’t yet read but might some day, except worse. Even if you don’t yet possess the knowledge in a book you own, the potential is still there. With AI over-reliance, you never actually learn anything.

Such an attitude puts you in a state of epistemic dependence on an external source, in this case a machine, much as our intuitive sense of navigation has declined as more people have grown accustomed to using their smartphone’s GPS to get around. Old-school taxi drivers who rely on memory to pick routes have been shown to have bigger hippocampi, brain structures implicated in memory processes. We’ve all bought into a Faustian bargain, exchanging a diminished brain region for the (admittedly quite handy) convenience of GPS. Perhaps our navigation skills are a worthy sacrifice, but what about logical thinking skills and the ability to control computers at a root level?

It’s true that, first and foremost, coding is a practical skill that involves assembling instructions to build the software that runs on computers. This is the side of programming, and often the only side, that non-programmers such as Mr. Huang see. The other side is that learning to program teaches you how to think precisely and critically and acquaints you deeply with the inner workings of computers. This is the side that only programmers see, because only they experience it. This is the true side, the soul of programming if you will, and it would disappear from the face of the earth if only AI did coding.

Learning a programming language also immerses you in deep computer science concepts, which helps you to “think like a computer” and understand what’s going on in it. An ignorance of programming languages means an ignorance of computer function, and that’s disempowering.
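To make this concrete, here is one small illustration (a Python sketch, chosen purely for illustration) of the kind of machine-level understanding at stake. Anyone who has never peeked under the hood is surprised by floating-point arithmetic; anyone who has learned how computers represent numbers is not:

```python
import math

# Decimal fractions like 0.1 cannot be represented exactly in binary
# floating point, so a naive equality check fails in a surprising way.
a = 0.1 + 0.2
print(a)         # 0.30000000000000004, not 0.3
print(a == 0.3)  # False

# Someone who understands the machine compares with a tolerance instead.
print(math.isclose(a, 0.3))  # True
```

Nothing about this behavior is visible from the natural-language description “add one tenth and two tenths”; it only makes sense once you know how the computer actually works.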

It’s no wonder that an increase in the use of AI-generated code is leading to a decrease in code quality, according to research. Code that nobody has thought about or understands, that was just spat out by an LLM without any guiding principles and then carelessly pasted into a larger codebase with no attention paid to the overall pattern, is obviously going to be more watered down than code that was thoughtfully and intentionally written with the bigger picture in mind. AI-generated code is leading to a vast sea of fragmented “orphan code” that nobody really claims ownership over or invests effort into improving. A world in which nobody understands coding but blindly wields code sounds like a nightmare in which bugs arise that are beyond our comprehension. And if your attitude is just “well then AI will fix the bugs,” then frankly you’re part of the problem.

On a different level, there is such a thing as recreational programming. For some, there is a joy in devising a logical solution to a problem, or a feeling of magic when you get the computer to work. Every programmer knows that pure dopamine kick of feeling like a genius after putting a lot of effort into devising some clever code. It’s extremely gratifying and perhaps even a character-building experience, especially after overcoming a great amount of frustration and conquering a tough challenge. Learning programming is great for children’s brains. Do we wish to deprive children of this whole area of potential human experience? Okay, fine I guess.

(A related analogy to underscore the point: why bother to learn to paint or draw if AI can do art? Should we just deprive ourselves of the joy of creation just because a machine can sort of do it?)

So a world in which we depend too much on AI to code is one where we’re more ignorant and the code is of lower quality, with one less engaging, meaningful activity for us to do. It’s true that trends could change and the quality of AI-generated code could improve, but that would only make the first consequence, the ignorance induced by AI dependence, worse if we fall into the trap I’m warning against.

For the record, I’m not a luddite when it comes to AI as a coding or learning tool. I think it can be a great enhancement to coding and education, but using it as a substitute for possessing your own knowledge of subjects is wrongheaded. Instead, I foresee a future in which programmers code with AI more often, but still read, write, and discuss code at a high level.

Natural Language is Underpowered and Overrated

The second error in Mr. Huang’s speech is the claim that in the future all coding will be, or should be, done in natural language. This is misguided on its face. If natural language could do everything a programming language can, why invent the latter?

One possible answer is that we simply didn’t have natural-language-enabled AI yet, so we needed specialized languages to interface more directly with machines. But that gets it backwards, because calling programming languages “languages” is something of a misnomer. Programming languages are not languages in the conventional sense of a communication method that strings together nouns and verbs through a grammar. They are collections of tools that perform precise and predictable changes in computer behavior. I highly doubt we can or should dispense with these tools and replace them with a much fuzzier, differently purposed natural language.

The utility of technical languages is that they are denotative: they say one thing and one thing only. Natural language is connotative: a statement can mean different things depending on the context. Its strengths are also its weaknesses, because that ambiguity is anathema to the precision you need to program well. English is good for poetry, not for programming.

The utility of speech-to-code hinges entirely on the degree to which the AI can translate natural language into the corresponding technical language. Plenty of programmers can attest to the difficulty of describing the intricacies of code in natural language. Arguably, there are certain technical problems that cannot be adequately described verbally at all, any more than mathematicians can do math in English. Besides, if you never learned programming because you deferred to AI to do it for you, how are you even going to find the right words to talk about it? How are you going to evaluate or make use of the results without understanding them? By asking the AI to understand them for you? Any programmer will tell you that struggling to explain what they need in words is often far more time-consuming and inefficient than just coding it.

I think I understand the psychology behind Mr. Huang’s thinking: “I don’t know how to do a lot of advanced math, but I know English decently well. Wouldn’t it be nice if I could skip all the hard work of learning advanced math and just do it all in English?” Sadly, if you want the results, you have to do the work of learning the specialized tools for the job. Any shortcut will be imperfect and less fulfilling.

Let Them Eat Code

As an obligatory political aside/rant, I find it rather irksome that Mr. Huang, already a multibillionaire and surely being made even wealthier by the AI boom his company enormously benefits from, is so ready to throw under the bus the programmers whose livelihoods depend on coding and whose work helped make him rich. I don’t attribute any malice to his words, but there’s a degree of carelessness about them for which he should be held accountable.

Even well-intentioned statements deserve scrutiny, and Mr. Huang isn’t exactly the most impartial party here. His job is to drum up hype about AI and woo shareholders, to keep the AI chips his company produces flying off the shelves and its stock price soaring. Because he is an influential person, and because we as a society continue to insist that being rich and successful qualifies one to speak authoritatively on any subject, it behooves us to call out the irresponsible or incorrect statements of such people rather than blindly lend them credence. Such comments have a human cost that is too often ignored.

Preventing Drynet

In some science fiction scenarios, the AI apocalypse isn’t a scary Skynet scenario in which the AI uses an army of killer robots to exterminate humanity. Instead, AI leads to a “new dark age,” in which superstition prevails despite a high level of technological development, and human reason slips into recession as future generations forget what made their ancestors’ technology tick. People come to worship AI as idols or even as gods, having grown reliant on it for everything and knowing nothing themselves. Call it “Drynet,” because it’s a much less thrilling and action-packed dystopia than the one depicted in the Terminator movies.

Drynet might not even be a dystopia in the classical sense; a world run by the machines might even be more carefree for us and offer a high standard of living. But to me, sacrificing knowledge for convenience is ultimately damning. Certainly it makes us very vulnerable in situations where we can’t rely on AI and have no knowledge or skills of our own to fall back on. While it’s hyperbolic to say this scenario is a given if we come to rely too much on AI, allow our own skills to atrophy, and abandon teaching programming, it’s a future we should avoid precisely by not doing those things. Better for AI to help elevate and advance our knowledge than to give us a reason to know nothing.
