Supercharge your AI learning (or any learning) with these creative techniques

Maddie Lupu
7 min read · Feb 13, 2024


Photo by Dstudio Bcn on Unsplash

Keeping up with the latest on Generative AI and Large Language Models (LLMs) can feel overwhelming at times. From new research papers to complex-sounding techniques, it’s easy to feel lost. Yet making sense of it all does not have to be a daunting endeavour.

As a self-taught AI thinker, I’m always on the hunt for ways to accelerate my learning. A great source of inspiration recently came from Barbara Oakley’s book A Mind for Numbers. Now, I know the title sounds focused on math and science, but I promise these learning techniques can help anyone untangle complex concepts of ALL kinds, not just numbers.

Let’s start with some of my favourites, which include Chunking, Recall, and the use of Metaphors and Analogies. Yes, even math can be better understood with a little story mixed in. Let’s unpack these!

Chunking — one at a time

Photo by Ashkan Forouzani on Unsplash

Chunking can be used to break down big topics and concepts into bite-sized pieces of information.

Here is how to form chunks:

  1. Focus your attention on the concept you want to study. Avoid time confetti, as Adam Grant calls it — those interruptions that break your concentration and disrupt your flow.
  2. Understand the basic idea in its simplest form.
  3. Practice it in different contexts to get an intuition of when to apply it.

Let’s take an example. Perhaps you want to master the concept of LLMs. Well, this can involve a LOT. But what if you start with the basic concept of what an LLM is? The core idea you might come up with is that LLMs are big statistical calculators that generate text based on probabilities. To practice this in different contexts, you might want to step back a little and understand how they differ from their elder, popular sibling: Recurrent Neural Networks. Once you have mastered this chunk, you can move on to the next one, such as understanding how LLMs work or how their architecture is put together.
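To make the “big statistical calculator” idea concrete, here is a minimal Python sketch (the prompt, the candidate words and their probabilities are all made up for illustration, not taken from a real model): given the text so far, the model assigns a probability to every candidate next word and samples one of them.

import random

# Toy next-word distribution after the prompt "The cat sat on the".
# These numbers are invented for illustration; a real LLM computes them
# over its entire vocabulary using its learned parameters.
next_word_probs = {"mat": 0.6, "sofa": 0.3, "moon": 0.1}

words = list(next_word_probs.keys())
weights = list(next_word_probs.values())

# Sample the next word according to its probability.
next_word = random.choices(words, weights=weights, k=1)[0]
print("The cat sat on the", next_word)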

While our short-term memory may only hold about 4 chunks of information, those chunks can be surprisingly complex and interconnected. To build intuition for the things we learn, we need to shift these chunks from short-term to long-term memory. The more we practice, the deeper and stronger the neural pathways in our memories become, making them easier to recall.

Forming chunks, as illustrated in A Mind for Numbers. The more you practice, the firmer, darker, and stronger the mental patterns become.

Recall — practice makes progress

The bad news is that just rereading the material is not enough for newly formed chunks to move from short-term to long-term memory. Think of short-term memory as a juggler keeping a few balls in the air at once: you can actively focus on a limited amount of information, but if you add too much or get distracted, things start to drop. Long-term memory is more like a warehouse or a hard drive, holding far more than you can juggle at once and available whenever (or almost whenever) you need it.

The illusion of competence is real. Just reviewing notes or a worked-out solution might trick you into feeling like you understand a concept. The feeling can be amplified by aha moments, but it does not necessarily mean you will be able to solve the problem in a different context. To combat the illusion of competence and truly grasp the material, make your learning more active.

Rehearsing is crucial to make information stick, and recall is one type of practice that makes learning more effective. Recalling the key ideas by writing them down, talking to yourself, or explaining them to someone else is far more helpful than passively rereading. The trick here is to alternate focused and diffuse thinking. Focused thinking happens when you are deliberately working on a problem. Diffuse thinking happens when you relax your attention and let your mind wander. That’s why it’s important to take breaks or engage in low-intensity activities like exercising, walking, or napping to refresh your mind and enhance your learning. These breaks let your diffuse mode keep working on the problem in the background, which is often where your best ideas come from.

Save the chunk — use spaced repetition and don’t wait too long between recall sessions. When learning new concepts, a good rule of thumb is to not let things go untouched for longer than a day. Otherwise, vampires will come by and suck the life out of that little chunk.

“If you don’t make a point of repeating what you want to remember, your ‘metabolic vampires’ can suck out the neural pattern related to that memory before it can strengthen or solidify.” — Barbara Oakley

Metaphors and Analogies — make it memorable

Here is the catch — we are not built to memorise numbers or complex concepts. Our memories are rather spatial and visual. Why, you might ask? Remember the last time you entered a room and passively paid attention to your surroundings: you can probably still recall many of the objects and how they were placed. It is an evolutionary thing, as you might have already anticipated. Our ancestors did not need to memorise numbers, but they did have to pay attention to their surroundings to find their way back to camp after a hunt.

We can unlock this superpower through powerful, or dare I suggest silly, visualisations and metaphors. These might not be perfect and can sometimes take a while to build, but they can make a world of difference!

Let’s put this into practice. Imagine you want to memorise a linear function, f, defined by its slope (w), input variable (x), and y-intercept (b):

f(x) = wx + b

To memorise this more efficiently, picture in your mind’s eye a wise (w) letter X (x) drinking a boba tea (b).

Perhaps this X might not look that wise to you, but you get the point: sometimes a goofy image is all your brain needs to memorise abstract concepts.
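And if you ever need to go from remembering the formula to actually using it, a minimal sketch is all it takes (the slope and intercept values below are arbitrary numbers I picked for illustration):

# A tiny Python sketch of f(x) = wx + b with made-up values.
def f(x, w=2.0, b=1.0):
    # w is the slope (the wise one), b is the y-intercept (the boba tea)
    return w * x + b

print(f(3.0))  # 2.0 * 3.0 + 1.0 = 7.0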

Did you know that Einstein’s theory of relativity did not arise primarily from his mathematical skills? He sometimes needed help from other mathematicians to make progress. What Einstein had was a great ability to pretend: he imagined himself as a photon, and then imagined how a second photon might perceive him. What would the second photon see and feel?

Let’s take another example: the architecture for fine-tuning LLMs with Reinforcement Learning from Human Feedback (RLHF).

RLHF by deeplearning.ai

At times, LLMs can generate toxic, harmful or unhelpful text. The objective of RLHF is to align LLMs with human preferences and values. The diagram depicts the agent (the LLM) and its objective: to generate aligned text. The environment is the current context window, the prompt space. The LLM examines the existing conversation (the state) to decide what to write next (the action), choosing words based on the statistical representations it learned during training. Humans then examine the LLM’s completions and rank them according to how well they align with a pre-agreed metric. For positive outputs the LLM receives a reward, and over time the agent learns and improves through this reward signal. In practice, a reward model trained on human judgments provides that signal, but for now we will focus on direct human evaluations.
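If it helps to see those moving parts together, here is a deliberately toy Python sketch of the loop described above. Every function in it is a stand-in I made up for illustration; in a real setup a trained reward model scores the completions and an RL algorithm such as PPO updates the LLM’s weights.

import random

prompts = ["Explain photosynthesis to a child.", "Reply politely to an angry customer."]

def generate_completions(prompt, n=3):
    # Stand-in for the agent (the LLM) acting on the current state (the prompt).
    return [f"{prompt} -> draft completion {i}" for i in range(n)]

def human_rank(completions):
    # Stand-in for human labellers ordering completions from best to worst
    # according to the pre-agreed metric (e.g. helpfulness, harmlessness).
    return sorted(completions, key=lambda _: random.random())

def update_policy(completion, reward):
    # Stand-in for the learning step: in practice the model's weights are
    # nudged towards completions that earned higher rewards.
    print(f"reward {reward:+.1f} for: {completion}")

for prompt in prompts:
    candidates = generate_completions(prompt)        # the action: generate text
    ranked = human_rank(candidates)                  # the human feedback
    for position, completion in enumerate(ranked):
        reward = float(len(ranked) - 1 - position)   # best rank earns the highest reward
        update_policy(completion, reward)            # the reward signal drives learning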

Let’s work with an analogy to make this process stickier. Imagine a young child learning what’s good and bad behaviour. This is similar to how a Large Language Model (LLM) starts out — exploratory and a little mischievous. The parent providing guidance is like the human feedback in the fine-tuning process. Sharing a toy earns a smile, like a positive rating for a helpful LLM output. Hitting another child brings a frown and an explanation, mirroring negative feedback given to the LLM.

Over time, just like the child learns to understand why some actions get smiles and others don’t, the LLM starts figuring out which types of text humans prefer. With this understanding, it becomes more likely to produce human-aligned and appropriate responses, much like the child learns to choose actions that get a positive reaction.

Metaphors and analogies can be super powerful, and not just for memorising. They can also help you overcome the Einstellung effect: getting stuck on the first idea that comes to mind when solving a problem, which is rarely the best solution. My favourite part about using metaphors and analogies is that the process makes you more creative, while also helping you memorise better and explain concepts more easily to others.

Whatever learning journey you’re on, embrace the Law of Serendipity and the power of perseverance. As Barbara Oakley states in her book:

“Lady luck favours the one who tries”

Whether you consider yourself a slow or a fast learner, perseverance is often more important than intelligence. Approaching material with a goal of truly understanding it creates a unique path to mastery.

The next time you’re wrestling with challenging concepts, long functions, or complex equations, try some of these techniques and observe how your learning progresses. You might be surprised!

Enjoyed the read? Show some love with a few claps below 🙌
