DEEP TECH OR OLD HAT?

I Do Not Understand This Apple LLM Breakthrough — Or Do I?

The important parts anyway

Anthony (Tony/Pcunix) Lawrence 👀
Published in Mac O’Clock · 3 min read · Dec 27, 2023


[Image: a technical diagram of SSDs feeding data to a CPU, which moves it into faster RAM modules. ChatGPT’s conceptualization.]

So Apple says they are using flash memory as a smart cache for LLM data, or at least that’s how I interpreted the paper and the video I found.
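To make the caching idea concrete, here’s a minimal sketch in Python of what “flash as a smart cache” could look like: weight chunks live on the SSD and only get pulled into a small RAM budget when a layer actually needs them, with the least recently used chunk evicted. The class name, file layout, and eviction policy here are my own illustration, not the actual mechanism from Apple’s paper.

```python
from collections import OrderedDict

import numpy as np

# Hypothetical sketch: keep only the most recently used weight chunks in RAM,
# loading anything else from flash (here, .npy files on an SSD) on demand.
# Chunk names, file layout, and cache size are illustrative, not Apple's design.

class FlashWeightCache:
    def __init__(self, max_chunks_in_ram: int):
        self.max_chunks = max_chunks_in_ram
        self.ram = OrderedDict()  # chunk name -> weights currently held in RAM

    def get(self, chunk_name: str) -> np.ndarray:
        if chunk_name in self.ram:
            self.ram.move_to_end(chunk_name)    # fast path: already in RAM
            return self.ram[chunk_name]
        weights = np.load(f"{chunk_name}.npy")  # slow path: read from flash/SSD
        self.ram[chunk_name] = weights
        if len(self.ram) > self.max_chunks:
            self.ram.popitem(last=False)        # evict least recently used chunk
        return weights

# Usage: a model with 40 layers' worth of weights on disk, RAM for only 8 of them.
# cache = FlashWeightCache(max_chunks_in_ram=8)
# layer_weights = cache.get("layer_17_ffn")
```

The point is just the shape of the idea: RAM holds the hot subset, flash holds everything, and the slow path only fires on a miss.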

This is heavy geek stuff, so unless you already have at least a basic understanding of caching and LLM AIs, scroll right on to the next paragraph, where we can talk like normal people.

Okay, well, somewhat like normal people. Kind of like a Brit trying to explain cricket to an American baseball fan. Not really normal talk, but more so than this video. Go ahead, scroll down, I would.

So I said you’d need a basic understanding of caching and LLM AIs to grok that video or the research paper it’s based on. I have more than a basic understanding of caching and multiprocessor computing because I had to pass a Sun Expert Level certification test on their Unix kernel years ago, but I know pretty much nothing about LLM AIs, and at this time in my life I have no desire…


Retired Unix Consultant. I write tech and humor mostly but sometimes other things. See my Lists if your interests are specific.