shashank Jain in GoPenAI · Residual Networks Explained: Deep Learning with Residual Networks · Introduction · 6d ago
shashank Jain in GoPenAI · Understanding Kolmogorov-Arnold Networks (KANs) and Their Application in Variational Autoencoders · Today, we'll be diving into Kolmogorov-Arnold Networks, or KANs for short. We're going to explore how KANs can potentially… · Jun 28
shashank Jain · Understanding Cambrian-1: A Deep Dive into Advanced Visual-Language AI · Introduction · Jun 26
shashank Jain · Unlocking the Power of LoRA: Efficient Fine-Tuning with Low-Rank Adaptation · Imagine having a pre-trained model that's great at identifying different breeds of dogs. It's been trained on a vast dataset, capturing… · Jun 15
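The LoRA teaser above can be illustrated with a minimal numeric sketch of the core idea: keep the pre-trained weight matrix frozen and learn only a low-rank update. The dimensions and variable names below are illustrative assumptions, not taken from the post itself.

```python
import numpy as np

# LoRA in one picture: instead of updating a full d x k weight matrix W,
# learn a rank-r update B @ A with r much smaller than d and k.
rng = np.random.default_rng(0)
d, k, r = 64, 64, 4

W = rng.normal(size=(d, k))          # frozen pre-trained weights
A = rng.normal(size=(r, k)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                 # zero-initialized, so W' == W at the start

W_adapted = W + B @ A                # effective weights after adaptation

# Trainable parameter count drops from d*k to r*(d + k).
full_params = d * k                  # 4096
lora_params = r * (d + k)            # 512
print(lora_params, full_params)
```

Because B starts at zero, the adapted model is exactly the pre-trained model before fine-tuning begins, and only the small A and B matrices need gradients.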
shashank Jain · AutoEncoders and Variational AutoEncoders: An Introduction · In this blog, I will try to formulate AE and VAE from a probabilistic framework point of view and explain how they work and what are the… · Jun 9
shashank Jain · Exploring Java Virtual Threads · Virtual threads are a new feature in Java that aims to dramatically reduce the effort of writing, maintaining, and observing… · Feb 25
shashank Jain · Creating Storyboard Sketches Using SDXL · I experimented with the SDXL base model, using LoRA weights from https://huggingface.co/blink7630/storyboard-sketch to create sketches. · Jan 15
shashank Jain · Navigating the Future: Exploring the Advanced Capabilities of GPT-4V and MM-Navigator in Smartphone… · In this blog I will briefly explain the paper https://arxiv.org/pdf/2311.07562.pdf on GPT-4V and its applicability in navigating and… · Nov 15, 2023
shashank Jain in GoPenAI · Using Transformers for Mixture of Experts · The Transformers library recently released a way to add LoRA adapters dynamically to the base model. This can provide us a way… · Sep 4, 2023
shashank Jain · Quantizing Flan Alpaca XL · I was looking at libraries that could quantize Flan Alpaca XL, a model of around 3B parameters and around 9 GB in size. I looked at GPTQ but… · Aug 31, 2023