Injecting reasoning capabilities into small language models helps them outperform LLMs with reduced…
Combining quantized pre-trained model weights with LoRA adapters without compromising…
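A minimal sketch of the idea behind combining quantized frozen weights with LoRA adapters: the pre-trained matrix is stored in low precision, while a small full-precision low-rank update remains trainable. The layer sizes, rank, and the simple symmetric 4-bit scheme below are illustrative assumptions; real QLoRA-style methods use block-wise quantization (e.g. NF4), not this toy round-to-nearest scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pre-trained weight matrix (hypothetical 64x64 layer).
W = rng.normal(size=(64, 64)).astype(np.float32)

# Toy symmetric 4-bit quantization of the frozen weights
# (a stand-in for block-wise schemes such as NF4).
scale = np.abs(W).max() / 7.0           # map values into the int4 range [-7, 7]
W_q = np.clip(np.round(W / scale), -7, 7).astype(np.int8)
W_deq = W_q.astype(np.float32) * scale  # dequantized copy used in the forward pass

# LoRA adapters stay in full precision and are the only trained tensors.
r, alpha = 8, 16
A = rng.normal(scale=0.01, size=(r, 64)).astype(np.float32)
B = np.zeros((64, r), dtype=np.float32)  # B starts at zero, so the update is zero at init

def forward(x):
    # Quantized frozen path plus full-precision low-rank update.
    return x @ W_deq.T + (x @ A.T) @ B.T * (alpha / r)

x = rng.normal(size=(4, 64)).astype(np.float32)
y = forward(x)
print(y.shape)  # (4, 64)
```

At initialization the adapter contributes nothing (B is zero), so the quantized model's behavior is preserved; training then moves only A and B.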
Updating only a fraction of the parameters during fine-tuning, rather than all of them,…
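To make the "fraction of the parameters" concrete, here is a back-of-the-envelope comparison for one projection layer; the 4096x4096 shape and rank 8 are assumed values, not taken from the source.

```python
# Trainable-parameter count: full fine-tuning vs. a LoRA-style
# low-rank update on one hypothetical 4096x4096 projection layer.
d_in, d_out, r = 4096, 4096, 8

full = d_in * d_out        # every weight is updated
lora = r * (d_in + d_out)  # only the two low-rank factors are updated

print(full)                 # 16777216
print(lora)                 # 65536
print(f"{lora / full:.4%}")  # 0.3906%
```

At rank 8, the adapter trains well under 1% of the layer's weights, which is what makes this class of methods cheap in optimizer state and gradient memory.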