Balazs Kocsis

Function calling in Ollama 0.3.0 (in Python in Plain English, 4d ago): Tool use with any model privately with Llama 3.1
Is perplexity.ai the future of search engines? (Jun 19): The benefit of combining LLMs with internet and document search
Run Ollama on Nvidia Jetson devices — part 2 (May 1): Comparing quantized LLaMa3-8b and phi3-8b on the Nvidia AGX Xavier GPU with Ollama
Run Ollama on Nvidia Jetson devices (Apr 7): Comparison of small LLMs on Nvidia Jetson developer boards: running locally, maximum privacy, low cost, and low power consumption
How to connect compute instances with storage buckets on Google Cloud (Mar 23): Use gcsfuse to reduce copy-paste between a storage bucket and a VM instance on Google Cloud
LLM for German language on Ollama (Mar 21): Port VAGOsolutions/SauerkrautLM-SOLAR-Instruct from Huggingface to Ollama and run it locally
Home Surveillance with LLMs? Ollama using LLaVA 1.6 (in Python in Plain English, Mar 11): Maximum privacy running an LLM on your own hardware
How to run Stable Diffusion on your GPU and use models from civitai.com (Mar 3): Generate Midjourney-quality images, and beyond, for free on your GPU
Gemma with Ollama — small and private LLM (Feb 25): Run Google's latest LLM with Ollama offline