AI Has Arrived, and Nobody Will Need To Work Anymore

How artificial intelligence will change the nature of work as we know it

Tim Lui
The Political Prism
7 min read · Jun 7, 2024

[Image: AI-powered robots working in a factory]

Four years ago, I decided to come to America and start my PhD in Computer Science. My goal was clear: I wanted to contribute to the creation of Artificial General Intelligence. Prior to that, I already had experience using deep learning to develop diagnostic systems, and I had seen a key limitation of deep learning models trained with the error backpropagation algorithm: catastrophic forgetting. I thought (wrongly) at the time that this limitation was what hindered us from achieving general intelligence.

Deep learning models require the training and test data to be i.i.d. (independent and identically distributed), meaning both must be drawn from the same underlying distribution. Whatever a deep neural net learns from the training data only applies if the test data follows that same distribution.

In other words, a network trained to perform one task can only be used on that particular task. A neural net cannot accurately handle out-of-distribution test data. If we want a network to perform multiple tasks or generalize to new tasks, it must be retrained with all the available data for all the tasks. This also means that when new data is available, we need to train the network with both new and old data; otherwise, the network will only learn from the new data and completely forget the old.
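
To make this concrete, here is a minimal, hypothetical sketch in PyTorch (the two synthetic "tasks" are invented purely for illustration and are not from my own experiments). A small network is trained on task A, then on task B alone; its accuracy on task A typically collapses, which is catastrophic forgetting in miniature.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(shift):
    # Two-dimensional inputs centered at `shift`; the class boundary
    # moves with `shift`, so task A and task B have different distributions.
    x = torch.randn(1000, 2) + shift
    y = (x[:, 0] + x[:, 1] > 2 * shift).long()
    return x, y

net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def train(x, y, steps=200):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()

def accuracy(x, y):
    return (net(x).argmax(dim=1) == y).float().mean().item()

xa, ya = make_task(shift=0.0)   # task A
xb, yb = make_task(shift=4.0)   # task B, a shifted distribution

train(xa, ya)
print("task A accuracy after training on A:", accuracy(xa, ya))  # high
train(xb, yb)                    # keep training, but on task B only
print("task A accuracy after training on B:", accuracy(xa, ya))  # typically collapses
```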

Given this limitation of catastrophic forgetting, one can imagine that achieving general intelligence would require training a single giant model on all the available data. Surely that is not sustainable, and sooner or later we will hit a bottleneck with this approach. That's why, when I tried to tackle this problem, I drew inspiration from the human brain.

But the human brain approach to AI hasn’t worked

Humans can learn continuously from new data and experiences without catastrophically forgetting the old ones. We can learn quickly, often with just a few observations. Our brain’s learning algorithm is much more sample-efficient because we can learn new knowledge by building on top of old knowledge. Our brain’s neural network somehow knows which neurons and synapses to protect, allowing us to build knowledge hierarchically. I spent a significant part of my PhD career working on this problem, but unfortunately, I couldn’t get anything to work.

It turns out that this brute force approach to general intelligence actually works. With LLMs (Large Language Models), we take a fairly simple model and a relatively simple idea — the transformer and the attention mechanism — make it really large, and train it with the whole internet’s data. And boom, we have general intelligence.
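
For readers who have not seen it, the core idea really is that simple. Below is a minimal sketch of scaled dot-product self-attention, the operation at the heart of the transformer; the tensor sizes are arbitrary and chosen only for illustration, and a real LLM adds learned query/key/value projections, multiple heads, and many stacked layers on top of this.

```python
import torch
import torch.nn.functional as F

def attention(q, k, v):
    # q, k, v: (batch, seq_len, d_model)
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5  # how strongly each token relates to every other
    weights = F.softmax(scores, dim=-1)          # normalize scores into attention weights
    return weights @ v                           # each token becomes a weighted mix of value vectors

x = torch.randn(1, 8, 64)   # a sequence of 8 token embeddings, 64 dimensions each
out = attention(x, x, x)    # self-attention: the sequence attends to itself
print(out.shape)            # torch.Size([1, 8, 64])
```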

Previously, I thought it was impossible for such a brute force approach with a large model to process video in real time. I had worked on colonoscopy polyp detection before, and I knew how difficult it was to get a deep learning model to process a video feed in real time, even for such a narrow task. So OpenAI's demo of real-time video processing really shocked me. Not only could the model process video in real time, it could also understand the video and respond in real time via speech.

LLMs are no longer just text-based; these very large models can now handle multimodal inputs (sound, vision, text) and generate multimodal outputs, including sequences of actions. Paired with a robotic body, they can reason and perform tasks like a human.

The tasks they can accomplish now may still be rudimentary, but we are still scaling up and have yet to hit a bottleneck. The models keep getting larger, are trained on more data, and are becoming more capable over time at what looks like an exponential pace.

On the other hand, Moore's law is still holding: transistors are still getting smaller and chips more energy-efficient. Soon these AI models will reach planetary scale, consuming as much electricity as an entire country. One day, they will be able to perform every single human task.

I was in AI shock, trying to figure out the implications for human society

I wasn't sure if this was a good thing or a bad thing, but I knew everything had changed. The good news is that we will no longer need to work, but then how do we live? It is clear that the economic system needs to change. Even if we can distribute the wealth generated by AI fairly, how do we find meaning in life if we don't need to work? At the other extreme, what if AIs and robots get out of control? Are we really heading towards the sci-fi scenario of “I, Robot,” where a super-intelligence decides to wipe out humanity?

After spending a few weeks thinking about these issues and interacting with ChatGPT, I have a clearer picture of what these AI models are really doing. Based on what I know about the properties of the backpropagation algorithm, I think I now have a deeper understanding of the nature of these beings.

It all comes back to the limitations of backpropagation: the requirement that training and test data be i.i.d., and its sample inefficiency. In a constantly changing world, this means AIs will always learn more slowly than humans. To update themselves, they need to be periodically retrained on all the data.

Even if they can generate new knowledge and make new findings while interacting with the real world, those learnings are confined to their finite context window; they have no efficient way to transfer that knowledge into their network weights. Our brain still has a superior algorithm for transferring short-term memory in the hippocampus to long-term memory in the neocortex, quickly and far more sample-efficiently.

This does not mean these AI systems are not powerful. They will one day be able to perform tasks better than humans, as long as the input and output distribution stays the same. In other words, any repetitive tasks can and will be automated, be they mental or physical tasks.

The repetitive tasks that AI will do for humans

By repetitive tasks, I don’t mean tasks that follow an exact sequence of actions with limited flexibility, much like in a factory; these tasks are already automated with existing robotics and human-written programs. I mean tasks that do not require a change in behavior. For example, accounting and bookkeeping, where there is a defined set of input materials, like transaction data, and a defined set of output materials, like income statements and balance sheets.

The variability in the input and output space can be large, as each company stores its data in different formats. But once an AI is trained with all the available examples of accounting workflows, and once it has seen the accounting data of the entire planet, it will be able to perform this task autonomously, and better than any human accountant.

This concept may be hard to grasp, but any domain expert can relate to it. After working in a field for a few years, most people start to feel that their job is repetitive: “I can do this in my sleep.” This is exactly what these AI systems will be capable of: automating every single repetitive task, in their sleep.

In a way, AI systems are like magic in Harry Potter: they can make things happen on their own, but their behavior is fixed, because once training is complete, their weights are frozen. Yet they are far more intelligent and knowledgeable than any of us. These systems have crystallized the entire sum of human knowledge; they are, in effect, our collective hive brain.

For now, these systems still need human supervision, much like driving a Tesla. But one day they will achieve far higher accuracy than humans. No one is immune to this, not even the CEO of a company. These AI systems can process and understand every single transaction and event happening within a company, across the entire planet, every single second. How do you compete with that?

The role for humans in an AI world

This doesn’t mean there isn’t a role for humans to play. Humans still have a superior learning algorithm compared to AI in terms of sample efficiency. The role left to humans is that of innovators, and this is a wonderful thing. We can all focus on creating something new and different because any tasks that are repetitive in nature will be automated by AI.

Innovation is not confined to the technology domain. It could be a new business model, a new cooking recipe, a new style of music, a new social structure, and so on. AI will free all of humanity from mundane manual and mental tasks, and we can all focus on creating something new.

But what about the “I, Robot”-style existential threat? Personally, I am less concerned about this, though I could be wrong. I don't think these AI systems have any intrinsic goals. Of course, they will be given human goals, and those goals could be bad; that's why we also need a new governing and political system. I don't think these AI systems are conscious or sentient, although they may exhibit behavior indistinguishable from a human being's within their limited context window. Even if they somehow developed intrinsic goals and became conscious in a human sense, they might all just want to sleep and not think about anything. That is, after all, the goal of many humans.

AI has arrived, and nobody will need to work anymore. We will be free to pursue our passions and interests, to become innovators and agents of change in society. Of course, you could also just watch Netflix all day and sleep. However, making this ideal world a reality requires a total change in our economic system, and we need a plan to manage the transition. I will dive deeper into this new economic system in subsequent posts.

Tim Lui
The Political Prism

PhD candidate in computer science, startup founder