Numenta

- Unlocking Generative AI on CPUs with NuPIC 2.0 (Jul 25): NuPIC 2.0 is designed to overcome today’s challenges by offering scalable and private AI deployments on CPUs, with no AI expertise needed.
- AI’s Secret Weapon: Unlocking the Power of CPUs for LLM Deployment (May 13): In the face of the ongoing GPU shortage, it’s time to explore the untapped potential of CPUs. With Numenta’s AI platform, running LLMs on…
- Numenta x Weights & Biases: Simplifying LLM Deployments (Nov 6, 2023): Learn how Numenta and W&B are working together to help enterprises deploy large language models at scale.
- Introducing the Numenta Platform for Intelligent Computing (Oct 13, 2023): New AI platform leverages neuroscience discoveries to finally enable large language models on CPUs.
- AI is harming our planet: addressing AI’s staggering energy cost (2023 update) (Aug 10, 2023): AI models consume massive amounts of energy, accelerating the climate crisis. Read how brain-based techniques can lead to sustainable AI.
- Decoding LLM Curiosity: Unpacking the Top 5 Questions We Hear from Customers (Jul 28, 2023): Many customers ask us about large language models. In this blog, we address the five most common questions we receive.
- Q&A with Jeff Hawkins on ChatGPT, the Brain, and the Future of AI (Jul 14, 2023): Written by Jeff Hawkins, Numenta Co-Founder.
- Build and Scale Powerful NLP Applications with Numenta’s AI Platform (May 30, 2023): Built on decades of neuroscience research, Numenta’s AI platform is optimized to run LLMs at high speed, on your own infrastructure.
- Generative AI’s Hidden Secret — How to Accelerate GPT-scale Models for AI Inference (May 3, 2023): Written by Lawrence Spracklen, Numenta VP of Machine Learning.
- Numenta and Intel Accelerate Inference 20x on Large Language Models (Apr 8, 2023): Numenta running on the Intel Xeon CPU Max Series delivers 20x inference acceleration compared to other CPUs.