Using Ollama with Mistral and Apache NiFi

Tim Spann
3 min read · Feb 28, 2024


LLM, OLLAMA, Apache NiFi, Generative AI, Machine Learning, Local

Ollama is a great option for local (and free) execution of large (or medium) language models for generative AI.

I installed it on macOS (Apple Silicon) very easily; the first run triggers a long, multi-gigabyte model download.

ollama run mistral

To experiment from the command line, you can call the Ollama REST API directly before building anything in NiFi.
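As a quick sketch (assuming Ollama is running on its default port, 11434, with the mistral model already pulled; the prompt is just an example), the request uses the same endpoint and JSON body the flow builds below:

curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

The NiFi flow itself is four processors: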

  1. ReplaceText — Regex Replace — builds the JSON request body (the replaceAll calls strip quotes and newlines from the incoming Slack text so the JSON stays valid): {"model": "mistral", "prompt": "${inputs:trim():replaceAll('"',''):replaceAll('\n','')}", "stream": false}
  2. InvokeHTTP — we POST that body to http://localhost:11434/api/generate (configuration below).
  3. EvaluateJSONPath — extract the response fields as attributes (see the example response after this list).
  4. PublishSlack — we send the FlowFile as an attachment, plus this message text:
==== OLLAMA: ${Date} : ${created_at}
Eval Count/Duration: ${eval_count} ${eval_duration}
Load Duration: ${load_duration} Total Duration: ${total_duration}
Prompt Eval Count: ${prompt_eval_count} Prompt Eval Duration: ${prompt_eval_duration}
Model: ${model}
Prompt:
${inputs}

Response:
${response}

==== Meta Info:
Slack msg id: ${messageid}
${messagerealname}
${messageusername}
${messageusertz}
UUID: ${uuid}
=====
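
For context, with "stream": false the /api/generate call returns a single JSON object. The field names below are from the Ollama API and are exactly what the Slack template above references; the values here are illustrative, not real output:

{
  "model": "mistral",
  "created_at": "2024-02-28T14:00:00.000000Z",
  "response": "...",
  "done": true,
  "total_duration": 5589157167,
  "load_duration": 3013701500,
  "prompt_eval_count": 46,
  "prompt_eval_duration": 1160282000,
  "eval_count": 113,
  "eval_duration": 1325948000
}

In EvaluateJSONPath (with Destination set to flowfile-attribute), each attribute maps to a simple JSONPath expression, e.g. response to $.response, model to $.model, eval_count to $.eval_count, and so on.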

This is the most important piece: you need to call the Ollama REST API via InvokeHTTP.
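A minimal configuration sketch for that processor (assuming a local Ollama; note the URL property is named Remote URL in older NiFi releases and HTTP URL in newer ones):

HTTP Method: POST
HTTP URL: http://localhost:11434/api/generate
Content-Type: application/json

The request body is simply the FlowFile content produced by ReplaceText, so the remaining properties can stay at their defaults.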

