Pinned — Wei Lu: Intel Core Ultra 5 vs. Apple M1 on LLM inference (May 24)
I bought an Intel “AI PC” equipped with a Core Ultra 5 125H and 96GB of memory, which means a graphics card with 48GB of VRAM. This mini PC…
Wei Lu: If Language ≠ Thinking, what about the Large Language Model? (Jun 26)
Recently, Nature published a paper written by the Massachusetts Institute of Technology (MIT) and other institutions, “Language is…
Wei Lu: Llama-3 inference on Intel® Core™ Ultra 5: DirectML and ONNX vs. IPEX-LLM (Jun 17)
As mentioned in the previous article, Intel provides hardware acceleration for ONNX + DirectML. We have run some experiments with these.
Wei Lu: Llama-3 8B & 70B inference on Intel® Core™ Ultra 5: Llama.cpp vs. IPEX-LLM vs. OpenVINO (Jun 12)
As mentioned in the previous article, Llama.cpp might not be the fastest among the various LLM inference mechanisms provided by Intel. This…
Wei Lu: Llama-3 70B inference on Intel Core Ultra 5 125H (May 27)
As mentioned in the prior blog, I’ve got a mini PC with an Intel Core Ultra 5 125H and 96GB of DDR5-5600 DRAM. Today I tried Llama-3 70B in…
Wei Lu: Next Generation of UI (May 21)
The release of AI PCs by Intel and AMD didn’t cause much of a stir, but Microsoft’s release of the AI PC concept, “Copilot+ PC”, has brought…
Wei Lu: LLM for Coding, the State and Initiatives, Part 2 (May 20)
Continuing from where we left off.
Wei Lu: LLM for Coding, the State and Initiatives (May 17)
At the AI & Agents Forum of the GOSIM Europe 2024 conference on May 6th in Delft, the Netherlands, I, as a guest speaker, delivered a speech…
Wei Lu: Devin and More Software Development Agents (Mar 15)
This is an early view of an unfinished part of the AI Code Assistant Internals series. If you’re curious about what the recently hyped Devin…