How Does AI Become Smarter Than Humans?

Have you ever wondered how ordinary people could teach an AI to become smarter?

Rudi Widiyanto
Predict
3 min read · Aug 16, 2023

Photo by Gabriel Vasiliu on Unsplash

Artificial intelligence (AI) is the field of computer science that aims to create machines and systems that can perform tasks that normally require human intelligence. AI has made remarkable progress in recent years, thanks to the development of new algorithms, the availability of large amounts of data, and the advancement of computing power.

One of the most impressive examples of AI is GPT, a series of language models created by OpenAI, a research organization dedicated to creating and ensuring beneficial use of AI.

GPT stands for Generative Pre-trained Transformer, and it refers to the model’s ability to generate natural language content based on a given input. GPT is a type of LLM, which stands for Large Language Model. LLMs are language models characterized by their large size, meaning they have a huge number of parameters and are trained on vast amounts of text data, mostly scraped from the Internet.

LLMs are trained using self-supervised learning and semi-supervised learning. They learn from unlabeled or partially labeled data, without requiring explicit human guidance.
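
To make self-supervised learning concrete, here is a minimal sketch in Python (with a toy sentence I made up) of how training examples can be pulled out of unlabeled text: each word’s “label” is simply the word that follows it, so no human annotation is needed.

```python
# Self-supervised learning in miniature: the labels come from the text itself.
# Toy corpus; real LLMs use billions of words scraped from the Internet.
corpus = "the cat sat on the mat because the cat was tired"
tokens = corpus.split()

# Build (context, next-word) training pairs -- no human labeling required.
training_pairs = [(tokens[i], tokens[i + 1]) for i in range(len(tokens) - 1)]

for context, target in training_pairs[:5]:
    print(f"input: {context!r} -> target: {target!r}")
```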

GPT works by using statistics, such as which words are most likely to come after other words, to predict the next word or phrase in a sequence. This may seem simple, but with millions or billions of words as your training data, you can write an article on almost any topic.
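
As a rough illustration of that statistical idea, the sketch below (plain Python with toy data I invented) counts which word most often follows another and uses those counts to “predict” the next word. A real GPT model learns far richer patterns with billions of parameters, but the basic intuition of predicting the next word from statistics is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the billions of words a real model is trained on.
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug"
tokens = corpus.split()

# Count how often each word follows each other word (a simple bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(tokens, tokens[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus, or None."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat' -- the most frequent follower of 'the'
print(predict_next("sat"))  # 'on'
```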

GPT can also be combined with access to the Internet, which allows it to retrieve new information and knowledge. Using supervised learning, it can also learn to recognize an article’s context from certain words that have been categorized beforehand by human experts.

For example, if the input contains words related to sports, GPT can generate an article about sports using relevant facts and data.
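
As a loose analogy for that supervised step, here is a tiny sketch (plain Python, with keyword lists I made up) that assigns a topic to a prompt based on words humans have already categorized. Real systems learn classifiers from many labeled examples rather than using hand-written keyword lists, but the principle of relying on human-provided categories is similar.

```python
# Hypothetical, hand-labeled keyword lists standing in for expert-annotated data.
labeled_keywords = {
    "sports": {"match", "goal", "league", "tournament", "player"},
    "finance": {"stock", "market", "interest", "inflation", "bank"},
}

def categorize(prompt):
    """Pick the category whose labeled keywords overlap the prompt the most."""
    words = set(prompt.lower().split())
    scores = {topic: len(words & keywords)
              for topic, keywords in labeled_keywords.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(categorize("Write an article about the league match and the star player"))
# -> 'sports'
```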

A similar procedure applies to image generation and other AI generation tasks. GPT can use its language skills to create captions or descriptions for images, or even generate images from text prompts. For example, if I ask GPT to draw me a picture of a dragon, it will use its neural network to create a piece of graphical artwork based on its understanding of what a dragon looks like.

But how does GPT learn to be smarter?

The answer is that it learns from us, the humans who use it. Our conversations and responses to GPT’s results help it learn and correct its outputs. It is like constantly updating the training data while responding to prompts. No matter how ridiculous the prompts are, each of us, among eight billion humans, eventually adds something to the AI. No wonder it becomes smarter: it learns 24/7, around the clock, in every season.

Now, when the time comes, this kind of AI will eventually hold knowledge from all generations. Knowledge alone is not enough; you need wisdom to put it to work. But what is knowledge without wisdom? We might experience a real-life Terminator. Let’s hope not.

Warmest,

RW
