Articles by Balazs Kocsis

- Run Ollama on Nvidia Jetson devices — part 2 (May 1). Comparing quantized LlaMa3-8b and phi3-8b on the Nvidia AGX Xavier GPU with Ollama.
- Run Ollama on Nvidia Jetson devices (Apr 7). Comparison of small LLMs on Nvidia Jetson developer boards — running locally, with maximum privacy, low cost, and low power consumption.
- How to connect compute instances with storage buckets on Google Cloud (Mar 23). Use gcsfuse to reduce copy-paste between a storage bucket and a VM instance on Google Cloud.
- LLM for German language on Ollama (Mar 21). Port VAGOsolutions/SauerkrautLM-SOLAR-Instruct from Huggingface to Ollama and run it locally.
- Home Surveillance with LLMs? Ollama using LLaVa 1.6 (Mar 11, in Python in Plain English). Maximum privacy running an LLM on your own hardware.
- How to run Stable Diffusion on your GPU and use models from civitai.com (Mar 3). Generate Midjourney-quality images — and beyond — for free on your GPU.
- Gemma with Ollama — small and private LLM (Feb 25). Run Google’s latest LLM with Ollama offline.
- 10M token window length Gemini 1.5 — what does this mean? (Feb 18). Exploring Google DeepMind’s Gemini 1.5 technical report.
- Scikit-learn evenings — more on OutputCodeClassifier (Feb 11). Series in Classical Machine Learning #1 — OutputCodeClassifier.
- Chat with your images — privately, with Ollama and LlaVa 1.6 from the CLI (Feb 6). Caption your private images and ask follow-up questions.