Invert, always invert

Vjeran Buselic
In Search for Knowledge
6 min read · Sep 5, 2024

The last article was long and brutal, but comprehensive. We managed to model Generative AI with just five concepts. Some are bigger than the rest, some very intuitive and understandable, but that is it. If you want to understand Generative AI (as a tool for gaining knowledge), just reread the last article. And try to understand.

I told you that understanding is the catch! 😊

That article is short, an 11-minute read says Medium, but it will take much more than that if you want to understand it.

I am glad you are still reading, that you did not quit (yet), because we will start to use some valuable methods to help us along. The first one is Charlie Munger's tactic: use models, a latticework of models, and apply them to your problem or task in order to gain understanding, or, as he says, just to lower uncertainty.

And as smart as he was, he would often start with the Invert Model.

So, let’s start inverting

Whenever we cope with a very complex and difficult issue (a problem, a model, people, …), it is probably the best approach. The oldest trick in the book, if you will.

Let me use an example:
Most of you are married, or at least in a steady relationship, or inclined toward one. So, if I ask you how to have a great marriage, what would you say?

Well, it is a complex question. It depends on you, your partner, circumstances, society; many, many factors are involved, so it is not easy at all.

Even if you personally have good experience, even if you have a good marriage, it is still difficult to advise anybody.

But what if I ask you the exact opposite: do you know how NOT to have a good marriage?

You will be instantly relieved:

Just come home drunk, beat your partner, never be present, supportive, or helpful, never give any money, just take whatever you can from your partner. And constantly be rude, aggressive, violent.

Easy peasy; everyone will answer in a second.

So, the essence of the Invert model is to swap a complex problem, one that cannot be resolved by following a set of rules, for its opposite. And then just do the opposite!

Of course, this solution is not sufficient, but it is necessary!

I hope you are familiar with those (logic) terms. It means that it may not be enough, no guarantee, but avoiding certain behaviors (like violence, disrespect, and neglect) is necessary for a successful marriage.

If you engage in these behaviors, a great marriage is impossible.

On the other hand, it is still not sufficient. If you consistently behave in a supportive, loving, and respectful manner, you are on a good path toward a good marriage.

However, it’s important to recognize that even this doesn’t guarantee success, as other factors also play a role.

But in the best of Munger's spirit: it lowers uncertainty and helps you make better decisions instantly.

This model, popularized by Charlie Munger, is much older than him; it is probably the oldest model in human history!

Do you know who applied it first?

Direct vs. Invert: the home team got housed last night, losing 2:8 (in sports lingo)

Almighty One!

When giving the Ten Commandments to Moses, at Mount Sinai.

I guess you are familiar enough with the Ten Commandments to notice that eight (out of ten) use the Invert model: You shall not … 😊

But, do you know why?

Because those eight are very complex, it was much simpler to tell the people what they should NOT do!
Not only simpler, it is also much easier to comprehend where those soft boundaries are, you know: crossing the line, the point of no return.

Otherwise HE would have had to write a blog, or even a series of blogs, explaining in detail, giving examples, hiring influencers, …

The other two are very simple and straightforward: honor your parents and celebrate the Sabbath.

Can machines think?

Let's invert at least the LLM, the most complex component of our five-concept model from the last article: LLMs, Tokens, Transformers, Probability, and Context.

To be honest, all five of them belong to the LLM itself, but I decided (for the sake of understanding) to treat them as five separate components.

So, if because of complexity, insufficient knowledge, lack of attention, or anything else, you did not understand what an LLM is, let's apply the Invert model and try to understand what an LLM is not!

And then try to avoid that, hoping it will be sufficient (it may be; it probably is not; but it is necessary).

So, summarizing the LLM paragraph from the last article (which any Generative AI can do for you as well):

A Large Language Model (LLM) is a type of artificial intelligence that uses deep learning, specifically neural networks, to process and generate human-like text. It is trained on vast amounts of text data, allowing it to learn language patterns and, by using probability models and strategies, predict the next word in a sequence, enabling the creation of coherent and contextually relevant text.
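To make "predict the next word from probabilities" concrete, here is a minimal toy sketch in Python. It is a simple bigram word counter, not how a real LLM works internally (real models use neural networks over tokens), but the core idea, sampling the next word from learned frequencies, is the same; the training text and all names are made up for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy "language model": count which word follows each word in the
# training text, then sample the next word from those frequencies.
training_text = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat saw the dog"
).split()

follows = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    follows[current_word][next_word] += 1

def generate(start_word: str, length: int = 8) -> str:
    words = [start_word]
    for _ in range(length):
        counts = follows[words[-1]]
        if not counts:  # no observed continuation for this word
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug and the dog"
```

Notice there is no grammar, no meaning, and no logic anywhere in this sketch, only counted co-occurrences.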

So, obviously it is NOT designed to apply or use any form of logical reasoning.

From this description we can easily conclude that LLMs really suck at logical reasoning, but that does not mean their output, the generated sentences, will lack logical coherence.

On the contrary, most of them are coherent, not because the LLM understands the logic behind them, but because they inherited it from the vast corpus of (logically coherent) training material.

It also does not understand math at all, just statistics and probability, which is easily illustrated by an example:

If you ask for a simple computation, like 2+2, the LLM will provide the result 4, because probably 99.9% of its training sources have 4 as the solution.

But what if you ask for a complex calculation (or math problem), like dividing 2342458909351 by 88793211, numbers I just invented for this occasion so that they are not written down anywhere?

Surprise!

It will give you the exact solution, to five-decimal-place accuracy, in a second!

So, something is fishy about my conclusion that LLMs do not know math!

No, it is not!

The chatbot is the one that recognized the (serious) math problem, wrote a simple Python procedure, passed in the numbers, and provided us with the result!
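The generated code itself is not shown here, so this is only an assumption of what it plausibly looks like: a one-line Python computation instead of a statistical guess.

```python
# A sketch of what the chatbot plausibly runs behind the scenes.
result = 2342458909351 / 88793211
print(f"{result:.5f}")  # 26381.05868
```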

GPT-4 explanation

And the LLM has not even been informed! WoW!!!

This is a very useful principle, one we will rely on heavily from here on:

Rely on appropriate communication with the chatbot in order to compensate on topics where the LLM itself is not reliable!
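As a sketch of this principle, here is a toy router in Python. The `ask_llm` callable is a hypothetical stand-in for a call to a real model; the routing idea, not any particular API, is the point.

```python
import operator
import re

# Map the four basic operators to exact Python arithmetic.
OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def answer(question: str, ask_llm) -> str:
    """Toy router: send arithmetic to Python, everything else to the LLM."""
    match = re.fullmatch(r"\s*(\d+)\s*([-+*/])\s*(\d+)\s*", question)
    if match:
        a, op, b = match.groups()
        return f"{OPS[op](int(a), int(b)):.5f}"  # computed, not guessed
    return ask_llm(question)  # free-form text goes to the model

# answer("2342458909351 / 88793211", ask_llm=None) -> "26381.05868"
```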

And the problem of logical reasoning (and thus hallucination) is high on the to-do list, so many chatbots are building such "adapters", and the next generation of LLMs will resolve it. I am (pretty) sure.

Another principle: past behavior does not last forever.

People, and LLMs, learn and resolve their challenges: either the (easy) way (chatbots) or the difficult one (logical reasoning embedded in the model).

We will not drill further into what an LLM is NOT; the point was to demonstrate that even ONE characteristic obtained from the Invert model is powerful enough to change, or at least heavily influence, our understanding of LLMs.

They are (logically) dumb; they have to rely on chatbots, or on our own logical reasoning capabilities.

So, it looks like machines still cannot think!

Even if it looks like a duck, walks like a duck, and even talks like a duck, it is still something else. Very useful, but not a duck. Yet!

Knowing more

"Can machines think?" is the famous question Alan Turing posed in his article "Computing Machinery and Intelligence," published in the philosophical journal Mind in 1950. It is considered one of the foundational texts in the field of artificial intelligence.

Instead of directly answering this, he proposed an imitation game (now known as the Turing Test) as a measure of machine intelligence.

The test involves a human judge engaging in conversation with both a human and a machine. If the judge cannot reliably distinguish between the two, the machine is said to have passed the test.

The paper was groundbreaking because it shifted the focus from defining “thinking” in abstract, philosophical terms to measuring a machine’s ability to imitate human intelligence.

Today, Yuval Noah Harari warns that by successfully imitating human conversation, AI has hacked the operating system of human civilization (language), and he expects significant cultural changes ahead.

Maybe even a paradigm shift.

On the positive, progressive side, as modern conversational agents have come close to fooling humans in live interactions, the bar for what is expected of machines has been raised even higher.

So, the Turing test itself is passé, but the question still stands!

In Search for Knowledge publication
Mastering Insightful Dialogue with Gen AI

<PREV Appreciate What You Have
NEXT> Dual system of reasoning


Vjeran Buselic
In Search for Knowledge

30 years in IT, 10+ in Education teaching life changing courses. Delighted by GenAI abilities in personalized learning. Enjoying and sharing the experience.