Prompts as the new coding language: The abstraction battleground for jobs and purpose

Bobby Mantoni
Published in Brass For Brain
3 min read · Sep 7, 2023

It’s often said that AI will replace us or take our jobs. Maybe. But which jobs first? I’ll argue that Gen AI will replace most work that lives in the middle ground of abstraction (the first battleground), leaving us humans to toil on the value-based strategic work and the specialist work.

An example of a typical application’s layers, with the right side increasingly implemented by reusable packages, and the developer working on the left and sometimes in the middle.

Here I’m visualizing how a typical project is implemented, moving from an idea on the left to the low-level implementation on the right. Across the bottom are some examples of languages that are used at each level (roughly). I see Gen AI taking over the middle area first, bridging the gap between well-defined hardware capabilities on the right and well-defined feature descriptions on the left.

On the far left side of the picture above there is a dependency on unformalized, human-centric, value-driven goals and ideas. These may not even be expressed in words yet, so they can’t be syntactically processed or automated until they are sufficiently formalized. The right side of the diagram depends on physical hardware, which also limits what can be automated (the hardware may not exist yet, or its capabilities may not be fully exposed, or even conceived of, in software). So I’d argue that the middle is where Gen AI will first have its biggest impact in replacing human work, technical and otherwise.

In the same way that Python can read, parse, and transform a JSON file in two lines of code where C++ would take a few more and NASM far more, what remains is the coder’s expressed intention: “call this API, parse the response, extract these fields, and add them together.” That intention still needs to be expressed somehow (and the result needs to be verified). Prior to that, there is an even more fundamental intention, which might be, for example, to use a given data source to look for specific kinds of trends in past market performance.
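That intention maps almost directly onto Python. A minimal sketch, with a made-up inline response standing in for the API call (the field names here are invented for illustration):

```python
import json

# A hypothetical API response, inlined so the sketch is self-contained.
response = '{"results": [{"price": 10.5, "fee": 1.25}, {"price": 20.0, "fee": 2.0}]}'

# The "two lines" of intent: parse the JSON, then extract the fields and add them.
data = json.loads(response)
total = sum(item["price"] + item["fee"] for item in data["results"])

print(total)  # 33.75
```

Everything below those two lines — memory management, string scanning, number conversion — is intention that a C++ or NASM programmer would have had to spell out.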

In the picture above, much of the right side is already not implemented by the average developer. As history has marched on, these lower levels have come to be implemented in reusable packages that developers rely on. Gen AI will continue that trend, pushing into the sphere of natural-language feature requests.

Each higher-level language, from machine code to C++ or Python, allows people to formalize those intentions with increasing brevity. LLMs expose an even higher-level language to the user: the prompt. Prompt engineering is already a field unto itself, with interesting techniques such as Chain-of-Thought prompting. In-context learning could be thought of as an analogue of the coder’s main debug loop: code, run, test, repeat.
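To make the “prompt as a higher-level language” idea concrete, here is an illustrative sketch comparing the same intention formalized at two levels. The prompt text is hypothetical, not tied to any particular model:

```python
# Level 1: the intention formalized in Python.
def total_cost(prices, fees):
    # Pair each price with its fee and sum everything.
    return sum(p + f for p, f in zip(prices, fees))

# Level 2: the same intention formalized as a prompt, with a
# Chain-of-Thought-style nudge ("think step by step") appended.
prompt = (
    "Call the pricing API, parse the JSON response, extract the price and "
    "fee fields, and return their total. Think step by step."
)

print(total_cost([10.5, 20.0], [1.25, 2.0]))  # 33.75
```

The prompt is shorter and looser than the code, which is exactly the trade-off each jump up the abstraction ladder has made: more brevity, less precision, and a greater need to verify the result.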

I see this more as an evolution than a revolution. Rather than copying and pasting bits and pieces of code from myriad sources on the web, the LLM can do that for you. A self-driving car can get you from point A to point B, but you still need to decide where you want to go (for now).

So, where do we want to go? How will we know when we’ve arrived? These are the truly difficult questions, and LLMs cannot help with them, since they are merely transforming symbols they’ve gobbled up from the internet.

There are some interesting things to explore further:

  • How does testing keep up? Maybe there’s an equivalent path for generative testing to develop along with the generated features.
  • And I do see a way for AI to encroach on the far right side of this diagram, but I’ll save that for next time.
  • Will the intermediate layer of libraries and frameworks become inscrutable (if they exist at all), having been generated by AI for use by AI?


Bobby Mantoni

Parallel programming, CUDA, AI and Philosophy. Degrees in both. Software engineering veteran. Father, Proud American.