Pinned · Balazs Kocsis · Mar 21
LLM for German language on Ollama: Port VAGOsolutions/SauerkrautLM-SOLAR-Instruct from Huggingface to Ollama and run it locally
Pinned · Balazs Kocsis in Python in Plain English · Mar 11
Home Surveillance with LLMs? Ollama using LLaVA 1.6: Maximum privacy by running an LLM on your own hardware
Balazs Kocsis · Aug 7
How good is Groq? LLM inference on groq.com costs about ten times less than OpenAI's pricing for GPT-4o
Balazs Kocsis in Python in Plain English · Jul 29
Function calling in Ollama 0.3.0: Tool use with any model, privately, with Llama 3.1
Balazs Kocsis · Jun 19
Is perplexity.ai the future of search engines? The benefit of combining LLMs with internet and document search
Balazs Kocsis · May 1
Run Ollama on Nvidia Jetson devices, part 2: Comparing quantized Llama3-8B and Phi3-8B on the Nvidia AGX Xavier GPU with Ollama
Balazs Kocsis · Apr 7
Run Ollama on Nvidia Jetson devices: Comparison of small LLMs on Nvidia Jetson developer boards, running locally with maximum privacy, low cost, and low power consumption
Balazs Kocsis · Mar 23
How to connect compute instances with storage buckets on Google Cloud: Use gcsfuse to reduce copy-pasting between a storage bucket and a VM instance