Balazs Kocsis, "Run Ollama on Nvidia Jetson devices — part 2" (3 min read, May 1, 2024): Comparing quantized Llama3-8b and Phi3-8b on the Nvidia AGX Xavier GPU with Ollama.
Balazs Kocsis, "Run Ollama on Nvidia Jetson devices" (5 min read, Apr 7, 2024): Comparison of small LLMs on Nvidia Jetson developer boards — running locally, with maximum privacy, low cost, and low power consumption.
Balazs Kocsis, "How to connect compute instances with storage buckets on Google Cloud" (4 min read, Mar 23, 2024): Use gcsfuse to reduce copying back and forth between a storage bucket and a VM instance on Google Cloud.
Balazs Kocsis, "LLM for German language on Ollama" (3 min read, Mar 21, 2024): Port VAGOsolutions/SauerkrautLM-SOLAR-Instruct from Huggingface to Ollama and run it locally.
Balazs Kocsis in Python in Plain English, "Home Surveillance with LLMs? Ollama using LLaVA 1.6" (6 min read, Mar 11, 2024): Maximum privacy, running an LLM on your own hardware.
Balazs Kocsis, "How to run Stable Diffusion on your GPU and use models from civitai.com" (5 min read, Mar 3, 2024): Generate Midjourney-quality images, and beyond, for free on your GPU.
Balazs Kocsis, "Gemma with Ollama — small and private LLM" (3 min read, Feb 25, 2024): Run Google’s latest LLM offline with Ollama.
Balazs Kocsis, "10M token window length Gemini 1.5 — what does this mean?" (4 min read, Feb 18, 2024): Exploring Google DeepMind’s Gemini 1.5 technical report.
Balazs Kocsis, "Scikit-learn evenings — more on OutputCodeClassifier" (6 min read, Feb 11, 2024): Series in Classical Machine Learning #1, OutputCodeClassifier.
Balazs Kocsis, "Chat with your images — privately, with Ollama and LLaVA 1.6 from the CLI" (2 min read, Feb 6, 2024): Caption your private images and ask follow-up questions.