DEEP TECH OR OLD HAT?
I Do Not Understand This Apple LLM Breakthrough — Or Do I?
The important parts anyway
So Apple says they are using flash memory as a smart cache for LLM data, or at least that’s how I interpreted the paper and the video I found.
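To make "flash as a smart cache" concrete: the general idea (my reading, not Apple's actual implementation) is that the full set of model weights lives in big-but-slow flash storage, while a small, fast DRAM cache keeps only the chunks used most recently, evicting the coldest ones when it fills up. Here's a minimal Python sketch of that classic LRU pattern; the names `FLASH`, `WeightCache`, and the chunk layout are all made up for illustration.

```python
from collections import OrderedDict

# Hypothetical stand-in for flash storage: it holds every weight chunk,
# but reading from it is the "slow path."
FLASH = {f"chunk_{i}": f"weights_{i}" for i in range(100)}

class WeightCache:
    """Keep a small working set of weight chunks in fast memory (DRAM),
    falling back to slow flash reads on a miss. Evict least-recently-used."""

    def __init__(self, capacity=8):
        self.capacity = capacity
        self.dram = OrderedDict()  # chunk name -> weights, in LRU order
        self.flash_reads = 0       # count of slow loads from "flash"

    def get(self, name):
        if name in self.dram:
            self.dram.move_to_end(name)    # hit: mark as recently used
            return self.dram[name]
        self.flash_reads += 1              # miss: slow read from flash
        weights = FLASH[name]
        self.dram[name] = weights
        if len(self.dram) > self.capacity:
            self.dram.popitem(last=False)  # evict least-recently-used chunk
        return weights

cache = WeightCache(capacity=2)
cache.get("chunk_0")      # miss: read from flash
cache.get("chunk_1")      # miss: read from flash
cache.get("chunk_0")      # hit: served from DRAM, no flash read
cache.get("chunk_2")      # miss: evicts chunk_1 (least recently used)
print(cache.flash_reads)  # → 3
```

The paper's actual contribution is much more sophisticated than this (it exploits sparsity and arranges data to match flash's read characteristics), but the cache-hit-versus-slow-fetch trade-off above is the basic mechanic everything else builds on.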
This is heavy geek stuff, so unless you already have at least a basic understanding of caching and LLM AIs, scroll right on to the next paragraph, where we can talk like normal people.
Okay, well, somewhat like normal people. Kind of like a Brit trying to explain cricket to an American baseball fan. Not really normal talk, but more so than this video. Go ahead, scroll down, I would.
So I said you’d need a basic understanding of caching and LLM AIs to grok that video or the research paper it’s based on. I have more than a basic understanding of caching and multiprocessor computing, because years ago I had to pass a Sun Expert Level certification test on their Unix kernel. But I know pretty much nothing about LLM AIs, and at this time in my life I have no desire…