Why AI Will Never Take Our Jobs

Michael Toback
3 min read · Mar 24, 2024

--

“If you please– draw me a sheep!” — The Little Prince

From “The Little Prince” by Saint-Exupéry

For thousands of years, technology has moved the bar of what work means.

For most of human history, that meant machines: mechanical devices that converted physical labor into guiding an instrument, accomplishing the same thing faster, more easily, and at lower cost.

Since the invention of the telegraph in the 1830s, that has changed.

Communicating and transforming information has become what we call work.

And now, the very substance that we use to do this is gaining the intelligence to do it itself.

Or is it? Or can it?

I have a very simple response to that. It can’t do it, because we are always asking it the wrong questions.

People are always being asked to do things. Sometimes those things are well understood, and a machine will almost always get them right.

But where real work is required, we don’t know in advance what the task actually is. The worker has to realize that the person asking got the question wrong.

How? Well, there are many answers.

First, because the question wasn’t asked correctly. An imperfect human prompts a program that was written by an imperfect human.

There is always information missing. When the Little Prince asked the Aviator to draw a sheep, there was no context. A big sheep? A little sheep? Where should the sheep be drawn? On the Prince’s home planet (which the Aviator had never seen)? In the desert?

Or the person asking the question didn’t understand the question in the first place. Marketing never understands engineering, and vice versa. So most of the time there needs to be an iterative process.

In my experience as a former software engineer, patent attorney, and family law attorney, the client (the person asking the question, making the request, etc.) almost always asks the wrong question.

For example, a manager might ask for a report on fraudulent loans at a particular bank over the last 30 days. After looking at the data, I realized there was no single indicator that a loan was fraudulent. So I went back, because the real question was “Which loans show one or more of the following indicators in the last 30 days?” And of course, I then needed to go back to the people who defined those indicators and find out what they were.
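To make the refined question concrete, here is a minimal sketch in Python. The indicator names and thresholds are hypothetical, invented for illustration; the real definitions would have to come from the people who defined the indicators, which is exactly why the iteration happens.

```python
from datetime import date, timedelta

# Hypothetical fraud indicators. In practice, each of these definitions
# would come from a conversation with whoever defined the indicator.
INDICATORS = {
    "income_mismatch": lambda loan: loan["stated_income"] > 3 * loan["verified_income"],
    "rapid_resubmission": lambda loan: loan["prior_applications_30d"] >= 3,
}

def flag_loans(loans, as_of=None, window_days=30):
    """Return (loan id, triggered indicators) for recent loans that trip
    one or more indicators -- the manager's *real* question."""
    as_of = as_of or date.today()
    cutoff = as_of - timedelta(days=window_days)
    flagged = []
    for loan in loans:
        if loan["date"] < cutoff:  # outside the 30-day window
            continue
        hits = [name for name, check in INDICATORS.items() if check(loan)]
        if hits:
            flagged.append((loan["id"], hits))
    return flagged
```

The naive question (“which loans are fraudulent?”) has no direct answer in the data; the code can only answer the reformulated one, “which loans show one or more indicators?” — and that reformulation took a human.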

One thing AI lacks is life experience. Say you train an AI to have the competencies of a graduate-level engineer. To accomplish your task, you still need to give it so much detailed information that you have to know how to do the work yourself, and you pretty much end up doing it anyway.

Here is a real-life example of the life-experience problem. A few years ago, code camps sprang up all over the world. The idea was that you could teach anyone the basics of coding in Python, HTML, and JavaScript, plus perhaps another language like Rust, and they would magically become junior developers.

Sure, maybe if they have seen code before. But if they are coming into this with zero experience? It just can’t work. Yes, they have done a few mock projects of one to two weeks and learned to work in teams. But remember, their team leads will ask:

  • incorrect questions,
  • incomplete questions,
  • questions they didn’t understand in the first place, and
  • the wrong question entirely.

This is the problem with AI. This is why AI won’t ever take our jobs.

Wait, but AI does work. It does a LOT!

Well, so did assembler code, once we stopped having to wire computers by hand.

Then programming languages, once we stopped having to understand machine instructions.

Then API calls and command-line scripts we could glue together, once we stopped having to understand programming languages.

So our jobs have changed and will continue to change, but we will still need engineers, programmers, etc.

What are your thoughts?


Michael Toback

I have a lot to tell you. I have been a software/bio/electrical engineer, a cybersecurity analyst, and a lawyer at various times.