Pinned — Bypassing the V100: Train an LLM on a Single 3060 Card — Aria, Nov 23, 2023
Introduces the LoRA and quantization methods. Using these two approaches, we can significantly reduce the model's GPU memory usage.
Enhancing Retrieval-Augmented Generation with Knowledge Graphs — Aria, Jul 8
Retrieval-Augmented Generation (RAG) is a technique designed to enhance large language model (LLM) outputs by incorporating real-world…
The Pitfalls of Mocking: Enhancing Python Code Test Quality! — Aria, Jun 19
Using mocking in testing is often criticized because it can lead to a false sense of security and does not adequately test interactions…
In the evolving landscape of artificial intelligence, the synergy between Large Language Models… — Aria, Nov 26, 2023
The GitHub repository at https://github.com/WLiK/LLM4Rec-Awesome-Papers serves as an excellent starting point. It is highly recommended…
From Pixels to Pencils: Your Personal GPTs with OpenAI — Aria, Nov 13, 2023
In the previous blog post, I introduced how to build your first GPTs. In this article, we'll work through a simple example: building a bot that can…
Make Your First GPTs from Scratch — Aria, Nov 12, 2023
Part 2: From Pixels to Pencils: Your Personal GPTs with OpenAI
Weakly Supervised Instance Segmentation Using Class Peak Response — Aria, Oct 26, 2019
Object Counting and Instance Segmentation With Image-Level Supervision