Is AI Going to Take Away Our Jobs? How Smart is Artificial Intelligence?

Lakshmi Prakash
Design and Development
7 min read · Apr 12, 2023

I’d like to start by assuming that since we’re here discussing this subject, we’re all fans of artificial intelligence. I know I certainly am. And in my experience, every time I tell someone who works outside the AI industry but has picked up on all the hype around these developments (thank you, social media!) what I do for a living, one of the first few questions is either “Do you think AI will take away our jobs and leave all of us unemployed in the next 5 to 10 years?” or “How smart is AI? Can AI defeat humans?”. It reminds me of those times when I would tell people that I was a therapist and many of them would ask, “Could you read my mind right now?”.

Will AI Take Away Your Job?

How Smart is AI?

During times like these, when we say “AI”, many of us are talking about ChatGPT, for the simple reason that it has garnered so much attention recently and is available for most of us to use, which has made it popular among people across the world.

But how often does ChatGPT get things right?

My friend/colleague was recently joking about how some Python code generated by ChatGPT failed. In turn, I honestly shared my own experience with code written by ChatGPT: it fails more often than it works.

ChatGPT-generated Code

This is not to say that the code is absolutely incorrect or complete rubbish. The AI gets the context and meaning right most of the time, and that is what you would expect of a good language model: that the understanding is clear. So the natural language understanding (NLU) part is mostly good with ChatGPT.


One of the most common problems, though, is that the answers generated by ChatGPT are not always reliable and/or correct! The same applies to Google Search, Bing, or any other NLP system.

I believe these are very common problems with any NLP system. These are just language models; they try their best to answer questions and solve problems, but chances are they can easily fail. They would need to be updated every single day, every single minute, and can you imagine how much it would cost to maintain such updates on every little thing there is?

I think that people do not understand these problems and expect artificial intelligence to behave like a know-it-all, for AI to be a god! And that is amusing; it is hilarious! But I know enough not to expect the average person to understand these problems, or rather these blockers in the development process, because they have little idea of how these artificial intelligence systems are built.

As of now, the best language models can do the following:

Understanding users’ queries (NLU) — advanced-level understanding — 5/5, or at least 4.5/5 (mostly in English and in the other languages the models are trained well in)

Translation in several different languages — 4/5 for most of the languages with large training material, and say, 3/5 for languages with less training data (like Indian languages, for example)

Speech-to-text and text-to-speech — 3.5/5; to be honest, speech-to-text is really difficult for a machine to understand because spoken language does not work in the same way written language does (but we will save this for another day!)

Speed of Response — 5/5, this is what computers have always been praised for, and we have technology at its best now, so this should not be a surprise!

Relevance of Responses — 4/5 because we have a wealth of information these days, but it could also be bad depending on how much information there is on a certain topic. This part is closely tied to NLU, but the response dataset also plays a significant role. A typical machine learning or deep learning model tries hard to find the best answers, and when there are no answers or not enough answers, it picks from the existing pool of answers and gives you whichever ones carry the highest probability values (there is a small sketch of this highest-probability selection right after this list). This could be a miss many a time, and you should not be surprised if you are looking for something extremely specific.

Does Google know Everything???

Accuracy of Responses — anywhere from 5/5 to 1/5; this again depends on the availability and accessibility of information. Can Bing, now powered by ChatGPT, tell me about the military secrets of a country? No. Just because information exists and is used for training, does that mean it was fact-checked? No. A language model is just a language model; it does not do fact-checking, and that is not an easy task either.

Reliability and Bias — 3/5 perhaps? This has been and will continue to be a problem in training AI because it is human nature to have bias, and combined with limited knowledge and several different perspectives and approaches, this is a challenge we will have to work on regularly.

For your amusement, you can watch this: https://www.instagram.com/p/CnzXcszjEBZ/
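To make that “highest probability” idea concrete, here is a minimal, purely illustrative Python sketch. It is not how ChatGPT actually works internally; the candidate answers, their scores, the `pick_answer` helper, and its threshold are all made up for illustration.

```python
import math

def pick_answer(candidate_scores, threshold=0.4):
    """Softmax the candidates' raw scores and return the highest-probability
    answer, or admit defeat when even the best candidate is weak."""
    total = sum(math.exp(s) for s in candidate_scores.values())
    probs = {ans: math.exp(s) / total for ans, s in candidate_scores.items()}
    best = max(probs, key=probs.get)
    if probs[best] < threshold:
        return "Sorry, I could not find a good answer for that."
    return best

# Made-up candidate answers with made-up relevance scores
candidates = {
    "Use pandas.DataFrame.append()": 1.2,  # close, but outdated advice
    "Use pandas.concat()": 2.1,            # most relevant candidate
    "Write a for loop": 0.3,               # weak match
}
print(pick_answer(candidates))  # -> "Use pandas.concat()"
```

The point of the sketch is simply that the model always picks the best of whatever it has; if the pool itself is thin or outdated on your topic, the “best” answer can still be a miss.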

The same logic and similar problems apply to artificial intelligence systems in other domains as well, I believe.

“As of May 15, 2022, 12 reporting entities have submitted incident reports for 392 Level 2 ADAS-equipped vehicle crashes,” says this report by the National Highway Traffic Safety Administration of the U.S. Department of Transportation.

Problems can be either silly or really serious, but it is a fact that none of these systems are anywhere close to perfect.

Silly problems could include what we humans might find amusing or even those little failures that make us lose our patience, like chatbot failures or failure to identify faces or thumb impressions. On the other hand, extremely serious concerns could involve human and/or animal lives being put at risk, loss of life, loss of large amounts of money or property, or anything with respect to safety.

So coming to the question, …

Can Artificial Intelligence Take Over, Will AI Steal Your Job?

The simplest answer is this: if your job can be automated, if there is not much to lose when things go wrong here and there, and if your employer can afford an AI, then even if AI does not totally take away your job, it can clearly do most of it, and you will most probably be required to step in only to handle situations that have gone wrong.

Take customer support, for example. This is a field that has always been about human beings answering questions raised by human beings. Language skills, understanding questions and terms, knowing the process to follow when a concern is raised, and knowing what answers to give are pretty much all that this field of work requires (not to mention tons of patience, but that is not a factor when we are talking about technology!). And large language models have now evolved enough to easily do most of this on their own! But still, AI can’t always be perfect, so when automation fails at the initial levels, before the user or customer gets really annoyed, a human being intervenes. This is the current-day reality of this field. This is why food delivery apps like Zomato and Swiggy, cab request apps like Uber and Ola, and many such apps across different domains still have human assistance for users. 😉
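Here is a minimal sketch of that bot-first, human-fallback pattern. It does not reflect any particular company’s system; the `bot_reply` stub, the canned answers, the confidence threshold, and the escalation rule are all assumptions made purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class BotReply:
    text: str
    confidence: float  # 0.0 to 1.0: how sure the bot is about its answer

def bot_reply(message: str) -> BotReply:
    # Stand-in for a real chatbot/LLM call; canned answers for the demo.
    canned = {
        "where is my order": BotReply("Your order is out for delivery.", 0.92),
        "refund for cold food": BotReply("I can offer you a coupon.", 0.35),
    }
    return canned.get(message.lower(), BotReply("Sorry, I didn't get that.", 0.1))

def handle_ticket(message: str, threshold: float = 0.6) -> str:
    """Let the bot answer when it is confident; otherwise escalate to a human
    before the customer gets really annoyed."""
    reply = bot_reply(message)
    if reply.confidence >= threshold:
        return f"BOT: {reply.text}"
    return "Connecting you to a support agent..."  # a human takes over

print(handle_ticket("Where is my order"))     # the bot handles it
print(handle_ticket("Refund for cold food"))  # escalated to a human
```

The design choice is the interesting part: the automation handles the easy, high-confidence cases, and the human is kept around precisely for the moments when it fails.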

As mentioned earlier, most of the code written by ChatGPT fails, at least in my experience. I frequently see deprecated functions being used, or the code is just plain wrong.
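To give one concrete (and hypothetical) example of the kind of thing I mean: pandas deprecated and then removed `DataFrame.append()` in version 2.0, yet it still appears all over older tutorials, so a model trained on that material can easily suggest it. The first line below is the sort of suggestion that crashes on a current install; the `pd.concat()` version is what actually works.

```python
import pandas as pd

df = pd.DataFrame({"name": ["Asha"], "score": [91]})
new_row = {"name": "Ravi", "score": 87}

# The kind of suggestion an older-trained model might make:
# df = df.append(new_row, ignore_index=True)   # AttributeError on pandas >= 2.0

# What works on current pandas:
df = pd.concat([df, pd.DataFrame([new_row])], ignore_index=True)
print(df)
```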

Imagine: if AI needs backing up and is still learning in such simple fields, how can it be relied on to handle some of the more complicated tasks, like building and launching a satellite, defending a person in court, or driving a car in dangerous areas?

Relax, your jobs are safe. As of now, AI can only assist humans in what we do. Your actual concern should be how good you are at what you do, because fellow humans beating you at what you do is much, much more likely, and it can and will happen if you take your job for granted; there are a lot of talented people these days!

(Recollecting that scene from The Big Bang Theory)

Sheldon: “Penny, the technology that went into this arm will one day make unskilled food servers such as yourself obsolete.”

Penny: “Really? They are going to make a robot that spits on your hamburgers?”

Coming close to human intelligence, let alone outsmarting humans, is not happening anytime soon! Tell me what you think in the comments! 😊

And for Indian millennial men and their moms, living in the day and age of stunning technology, if you still want a wife or a daughter-in-law who would make “perfectly round rotis”, marry a bot! I can’t tell you about the chemistry, but I can promise you that the geometry will be accurate! 😇
