Understanding Generative AI

Vjeran Buselic
In Search for Knowledge
5 min read · Aug 16, 2024

In our context, knowledge refers to the accumulation of facts, information, and skills acquired through experience, education, or research. It’s the awareness or familiarity with specific concepts, subjects, or practices. Presumably, everyone reading this already has some knowledge of Generative AI.

But do we truly understand what it is?

A hard task calls for hardy people. Who better to volunteer?

One of my favorite quotes is, “Any fool can know. The point is to understand.” This quote underscores the difference between merely acquiring information and genuinely comprehending it.

The irony is that on the Internet it is attributed to Einstein, which is not true, but that is completely irrelevant, because… the point is to understand it!
Not to know it, or to attribute it!

Understanding goes beyond knowledge; it involves grasping the meaning, significance, and implications of that knowledge. It’s about connecting different pieces of information, recognizing patterns, and applying them in various contexts.

This doesn’t necessarily mean it’s a more complex concept to grasp.

On the contrary, we can grasp its pure meaning, significance, and implications and use it in our context of acquiring any knowledge we crave. Not by understanding all the details of how it works, which is far beyond my (and probably your) abilities.

And to leverage the principle of recursion, even while learning the best possible usage of Generative AI, we will help ourselves by (properly) using any available (and free) Generative AI tools.

We’ll focus solely on free, user-accessible text generation (language model) tools like OpenAI’s ChatGPT, Google’s Gemini, or Microsoft’s Copilot. Our goal is to understand fundamental principles pertinent to our quest for knowledge.

As we witness rapid and constant changes — some minor, some major — we’ll adopt a principles-based approach to safeguard our understanding, adjusting as necessary with future developments.

And please remember, this is my understanding prone to my mistakes. Don’t take my word as gospel or blame me if something doesn’t work for you. Conduct your own research and form your own understandings, especially if your goals differ from ours.
However, many of the proposed principles will still be beneficial.

LLM — The Brain

If I have to use allegories, and we often do, as they are one powerful learning principle, one might compare Generative AI to a human.

When we converse, we typically think before speaking, though sometimes our mouths move faster than our brains, leading to miscommunication. The mouth, obviously, serves as the output interface, while ears and eyes are input interfaces. And while mouth-brain communication can be prone to error, the input side can lead to much more severe problems. Not listening, or not listening well enough, leads to poor understanding, the most important issue to address!

In a nutshell, this brain-mouth-eyes allegory is my simplification of the basic Generative AI principles, one I am quite proud of 😊.

The Large Language Model (LLM), our “brain,” is designed to create new content, such as text (also images, music, or code, to which we will not pay attention), based on the data and language patterns it has been trained on.

How clever it is (what it knows and how it thinks, if it thinks at all) will be the subject of further columns; for now, let’s further develop our input-output paradigm.

Chatbot — The I/O Device

It’s crucial to differentiate the Chatbot (the I/O function) from the LLM.

One LLM can serve multiple Chatbots, acting as interfaces to users. For example, both ChatGPT and Copilot are different interfaces to the same LLM — OpenAI’s GPT-3.5 or GPT-4. They communicate differently with users, each offering unique environments and capabilities. Copilot, in particular, integrates seamlessly with Microsoft’s 365 environment, enhancing productivity and creativity across various applications — a clear advantage for heavy Microsoft users.

Chatbots and other applications connect to LLMs primarily via APIs provided by the LLM vendor, such as OpenAI.
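To make that connection a bit more concrete, here is a minimal sketch of how a chatbot-style application might call an LLM through a vendor API. It assumes the official openai Python package and an API key in the OPENAI_API_KEY environment variable; the model name and the helper function are just illustrative choices, not anything prescribed in this column.

```python
# Minimal sketch: a chatbot layer calling an LLM via the vendor's API.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_llm(question: str) -> str:
    """Send one user question to the LLM and return its text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; use whichever is available to you
        messages=[
            {"role": "system", "content": "You are a patient tutor."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask_llm("Explain, in two sentences, what a Large Language Model is."))
```

The chatbot you use in a browser does essentially this, wrapped in a user interface, conversation memory, and the safety checks discussed below.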

This connectivity has ushered in a new era of AI-based applications, flooding the market with myriad solutions that promise quick results.

While this is a legitimate market competition with numerous opportunities (and potential scams), our focus remains on the quest for knowledge.

Important, but also Not in Our Scope

Another vital aspect of Generative AI, which we’ll not address in our columns, pertains to security, privacy, and ethical concerns managed through the Chat interface.

In essence, while an LLM might “know” how to build a nuclear bomb or provide harmful information, it’s the responsibility of the communication layer to refuse such requests for obvious reasons. It also ensures that responses avoid racist, sexist, or culturally inappropriate content.
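As an illustration only, that communication layer can be pictured as a thin wrapper that screens requests before they ever reach the model. The filter below is deliberately naive and the keyword list and function names are hypothetical; real products rely on dedicated moderation models and policies, not simple string matching.

```python
# Toy illustration of a "communication layer" screening requests before the LLM.
# The keyword list and function names are hypothetical placeholders.
BLOCKED_TOPICS = ["build a nuclear bomb", "synthesize a nerve agent"]


def is_request_allowed(prompt: str) -> bool:
    """Very naive policy check performed by the chat layer, not by the LLM itself."""
    lowered = prompt.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)


def chat_layer(prompt: str) -> str:
    if not is_request_allowed(prompt):
        return "Sorry, I can't help with that request."
    return ask_llm(prompt)  # reuses the hypothetical helper from the earlier sketch
```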

Knowing More

· Generative AI: Beyond Creation. Generative AI possesses the capability to produce novel content based on learned patterns, encompassing text, images, music, and even code. It represents a subset of artificial intelligence focused on generating new outputs derived from its training data. Unlike traditional AI, which primarily classifies or predicts data, Generative AI crafts new, creative outputs by assimilating vast datasets (knowledge).

· Large Language Models (LLMs): The Transformer Backbone. LLMs are grounded in the Transformer neural network architecture. Introduced in the seminal paper “Attention is All You Need” by Vaswani et al. (2017), the Transformer model employs a self-attention mechanism, allowing the model to assess the significance of different words in a sentence and comprehend their interrelationships. Typically, LLMs undergo pretraining on extensive text corpora using unsupervised learning, during which they grasp general language patterns. Subsequent fine-tuning on smaller, task-specific datasets tailors the model for specific applications like sentiment analysis or question answering. Models such as GPT, BERT, and T5 each build upon this foundation, adjusting their architecture and training methodologies to excel in diverse tasks.
Exploring academic papers, open-source implementations, and real-world applications of these models can offer deeper insights into their strengths and limitations; a small numerical sketch of the self-attention idea follows this list.

· Chat Interfaces: Bridging Humans and AI. The chat interface in Generative AI serves as a user-centric platform enabling interaction with AI models through natural language. This medium allows users to pose questions, provide prompts, and receive human-like responses from the AI, effectively simulating a conversation. Central to the user experience, the chat interface demystifies complex AI technologies, rendering them accessible via straightforward text-based interactions. This seamless human-computer interaction leverages the prowess of LLMs to deliver real-time, context-aware, and coherent dialogues. Appreciating this interface entails understanding not only its underlying technology but also its pivotal role in democratizing AI for a broad audience.

· AI effectively “hacked” the operating system of human civilization. In a 2023 article, Yuval Noah Harari argues that AI has “hacked” the core of human civilization by mastering language and communication, which are central to how societies function. He warns that AI’s ability to generate and manipulate content could profoundly impact politics, culture, and personal relationships. Unlike previous technologies that distributed human-created content, AI can autonomously create new narratives, potentially influencing society in unpredictable ways. Harari stresses that if not carefully managed, AI could reshape human civilization in ways that may be irreversible.
Harari’s insights underscore the urgent need for thoughtful integration of AI into society. While AI offers tremendous potential, its unchecked power to shape human culture and thought poses significant risks that must be addressed.
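To make the self-attention mechanism mentioned in the Transformer bullet slightly less abstract, here is a tiny NumPy sketch of scaled dot-product attention, the core operation from “Attention Is All You Need”. The random matrices stand in for learned projections of the input tokens; this is a teaching sketch, not a real Transformer.

```python
# Tiny sketch of scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
# Random matrices stand in for learned projections of the input tokens.
import numpy as np


def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how strongly each token "attends" to the others
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V  # each output is a weighted mix of the value vectors


rng = np.random.default_rng(0)
tokens, d_k = 4, 8  # e.g. a 4-word sentence, 8-dimensional vectors
Q, K, V = (rng.normal(size=(tokens, d_k)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8): one updated vector per token
```

Each word’s updated representation becomes a weighted mix of all the others, which is exactly how the model captures the interrelationships described above.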

Hence the importance of understanding Generative AI, particularly its potential for self-education. Such understanding is crucial, as self-education is also a key pillar of human civilization, and our best defense against the risks posed by AI.

By mastering these tools, we empower ourselves to harness AI for constructive purposes rather than being passively shaped by it.


Vjeran Buselic
In Search for Knowledge

30 years in IT, 10+ in Education teaching life changing courses. Delighted by GenAI abilities in personalized learning. Enjoying and sharing the experience.