KZ Lim
Infinite Context Length in LLMs: Is It Possible?
Infini-attention seems to hold the key. Google recently introduced Infini-attention. Is that a significant milestone? Many of you asked.
Apr 28
Gemini 1.5 Pro: Google’s Game-Changing Multimodal Large Language Model with a 1M Token Context…
Mixture-of-Experts (MoE) is key. Mistral, did you hear that?
Feb 23
OpenAI’s Sora: A Quantum Leap in AI Video Creation and Its Potential in Various Industries
From Text to Reality: Unpacking OpenAI’s Visual Spacetime Patches
Feb 20
Google’s Lumiere: The Dawn of the AI-Generated Video Revolution
From Text to Cinematic Masterpieces
Feb 15
AI Showdown: The Rise of the Gemini Series and the Challenge to GPT-4’s Throne
Not too long ago, Google dropped some pretty big news with their AI model, Gemini, and before we knew it, the online version, Gemini Pro…
Feb 7
The Future of Prompt Engineering: Will It Extend Its Success with Large Language Models to…
In the realm of artificial intelligence, a new art form is emerging, one that blends creativity, technical prowess, and a touch of magic…
Dec 28, 2023
Robotaxi Revolution: Insights into the Global Deployment Landscape
As 2023 draws to a close, let’s look at how far we have progressed in the autonomous journey across the continents.
Dec 21, 2023
Mixtral 8×7B: The New Open-Source King
Mistral introduces its latest model, Mixtral 8×7B. This groundbreaking model is the first open-source Sparse Mixture-of-Experts foundation…
Dec 20, 2023