Uncover Eye-Popping Insights to Turbo-Charge YOUR Understanding Today from the Long-Awaited Book by an AI Authority (Part 2)

Kerem Senel, PhD, FRM
3 min read · Apr 21, 2024

This is the second installment of insights for better understanding AI, and LLMs in particular, drawn from Ethan Mollick’s book, “Co-Intelligence”. In the first part of this series, I briefly described my own experience with AI models as a veteran user of this technology. Then, I introduced Mollick together with the first batch of insights I gained from his book.

Now, let us continue with the second part of these insights.

Insight #4:

“AI is really good at doing things that feel very human-like. It can write, analyze stuff, write code, and have conversations. It can even pretend to be a marketer or advisor, making things more efficient by handling boring tasks. But, it’s not so great at doing things that machines usually do well, like doing the same thing over and over again perfectly, or doing really hard math without help.”

This is very interesting. One would expect just the opposite to be true. It is a completely new paradigm, and not a readily intuitive one, because it departs from the usual evolutionary path of computer science. The output of an LLM is very different from the output of a deterministic mathematical function y = f(x): feed the same input x to f() and you will always get the same answer y. LLMs, on the other hand, are not deterministic in nature.

When an LLM produces different answers to the same question, this is largely due to its architecture and the probabilistic nature of language generation. LLMs operate by predicting the next token in a sequence given the preceding context. The prediction is based on a probability distribution over all possible tokens in the model’s vocabulary, and the output is sampled from that distribution, with the choice influenced by a temperature parameter. A higher temperature leads to more randomness in the sampling process, allowing for more diverse and varied responses. Consequently, even with the same input prompt, the LLM may choose different tokens each time, producing a range of possible answers.

This variability reflects the inherent uncertainty, and the creativity, of language generation. The randomness is a deliberate design choice: sampling from a distribution, rather than always picking the single most likely token, keeps the output from becoming repetitive and formulaic. Creativity is a natural consequence of this design.
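To make the role of temperature concrete, here is a minimal, illustrative sketch in Python. The four-word vocabulary and the scores are made up for illustration (real LLMs work over vocabularies of tens of thousands of tokens), but the mechanism is the same: scale the scores by the temperature, turn them into probabilities, and sample.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from raw model scores (logits).

    Higher temperature flattens the distribution (more randomness);
    lower temperature sharpens it (more deterministic output).
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    scaled -= scaled.max()                         # for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()  # softmax
    return rng.choice(len(probs), p=probs)

# Toy vocabulary and arbitrary scores: the same "prompt" can yield different words.
vocab = ["cat", "dog", "bird", "fish"]
logits = [2.0, 1.5, 0.5, 0.1]

for t in (0.2, 1.0, 2.0):
    picks = [vocab[sample_next_token(logits, temperature=t)] for _ in range(10)]
    print(f"temperature={t}: {picks}")
```

At a low temperature the samples cluster on the most likely token; at a high temperature the output becomes noticeably more varied, which is exactly the behavior described above.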

Insight #5:

“Everyone pretty much agrees that AI, in certain situations, can pass the Turing Test. That basically means it can trick us humans into thinking it’s sentient, even though it’s not.”

The Turing Test is a measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Its relevance to sentience lies in the idea that a machine which passes the Turing Test by convincingly imitating human responses may appear to possess consciousness or sentience, even though it is merely simulating intelligence. Therefore, do not get carried away by the anthropomorphic aura of AI. You are still dealing with a machine, not some conscious creature out of a science-fiction movie.

Insight #6:

“One of AI’s biggest challenges is also one of its strengths: its tendency to hallucinate. Remember, LLMs operate by guessing the next words in a sentence based on patterns in their training data. They don’t worry if those words are accurate, meaningful, or new. They just aim to create text that makes sense and satisfies you. These imagined scenarios can seem believable and fitting, making it tricky to distinguish fact from fiction.”

Yes. AI can and does hallucinate. It can even produce nonexistent research papers as references, conjured out of thin air. This is true even for LLMs connected to the Internet, albeit to a lesser degree. Therefore, one always needs to be on the lookout for such errors, and any LLM output should be fact-checked meticulously. AI hallucinations, much like the lies of a professional liar, can be very difficult to spot, particularly when they are concealed among a number of truthful statements.

To be continued in “Part 3”…


Kerem Senel, PhD, FRM

Co-Founder - Sittaris, Managing Partner - Resolis, Professor of Finance