"Boost the productivity: Advanced copilot solutions for free" by Timo Laine (Jun 20)
Various AI assistants and copilots have become increasingly popular in programming and in creating all kinds of material. For example, GitHub…
"GPTQ Quantization of Poro-34B LoRA fine-tuned LLM with S Group data" by Timo Laine (Mar 19)
This article continues the theme of creating and implementing company-specific LLM models. Earlier we showed how to fine-tune the Poro 34B…
"Poro-34B’s LoRA fine-tuning with publicly available S Group data" by Timo Laine (Mar 1)
Customization of Large Language Models (LLMs) has become increasingly popular. With a small amount of additional training or fine-tuning, their…
"Install quantized Poro LLM on Google GCP and GPU: Step-by-Step Guide" by Timo Laine (Feb 21)
Poro LLM is a 34B-parameter decoder-only transformer pretrained on Finnish, English, and code. The latest fully trained checkpoint was…
"LLM architectures and model merging are key to creating new and better performing models" by Timo Laine (Feb 12)
LLMs are getting bigger and bigger, and training base models costs a massive amount of money, yet the models still lack many…