LLM, Ollama, Apache NiFi, Generative AI, Machine Learning, Local
Ollama is a great option for local (free) execution of Large (or medium) Language Models for Generative AI.
I installed it on macOS (Apple Silicon) very easily, though the initial model download is several gigabytes.
ollama run mistral
To experiment from the command line, check out the example here. In NiFi, the flow consists of these processors:
- ReplaceText — Regex Replace — {"model": "mistral", "prompt": "${inputs:trim():replaceAll('"',''):replaceAll('\n', '')}", "stream": false}
- InvokeHTTP — we call the Ollama REST API at http://localhost:11434/api/generate
- EvaluateJSONPath — extract fields from the JSON response into FlowFile attributes.
- PublishSlack — we send the FlowFile as an attachment, plus this message text:
==== OLLAMA: ${Date} : ${created_at}
Eval Count/Duration: ${eval_count} ${eval_duration}
Load Duration: ${load_duration} Total Duration: ${total_duration}
Prompt Eval Count: ${prompt_eval_count} Prompt Eval Duration: ${prompt_eval_duration}
Model: ${model}
Prompt:
${inputs}
Response:
${response}
==== Meta Info:
Slack msg id: ${messageid}
${messagerealname}
${messageusername}
${messageusertz}
UUID: ${uuid}
=====
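The ReplaceText step above can be sketched in Python. This is a minimal sketch, not the flow itself: the prompt string is a hypothetical example, and the cleanup mirrors what the NiFi expression does (trim, strip double quotes, drop newlines) before wrapping the result in the request body that InvokeHTTP sends to /api/generate.

```python
import json

def build_generate_request(raw_prompt: str) -> str:
    """Mimic the NiFi ReplaceText step: trim whitespace, strip double
    quotes, remove newlines, then build the /api/generate request body."""
    cleaned = raw_prompt.strip().replace('"', '').replace('\n', '')
    return json.dumps({"model": "mistral", "prompt": cleaned, "stream": False})

# Hypothetical Slack-sourced input containing quotes and a newline:
body = build_generate_request(' What is "Apache NiFi"?\n ')
# In the flow, InvokeHTTP POSTs this body to http://localhost:11434/api/generate
print(body)
# → {"model": "mistral", "prompt": "What is Apache NiFi?", "stream": false}
```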
This is the most important piece: calling the Ollama REST API via InvokeHTTP.
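The EvaluateJSONPath step can be sketched the same way. The field names below follow Ollama's documented /api/generate response; the sample values are made up for illustration. These are the attributes referenced in the Slack message template above.

```python
import json

# A made-up (abbreviated) /api/generate response; the field names are the
# ones EvaluateJSONPath pulls into FlowFile attributes.
sample = json.loads("""
{"model": "mistral", "created_at": "2024-01-01T00:00:00Z",
 "response": "NiFi is a dataflow tool.", "done": true,
 "total_duration": 5000000000, "load_duration": 1000000000,
 "prompt_eval_count": 12, "prompt_eval_duration": 200000000,
 "eval_count": 40, "eval_duration": 3800000000}
""")

# Extract the attributes used in the Slack message template:
attributes = {key: sample[key] for key in (
    "model", "created_at", "response", "eval_count", "eval_duration",
    "load_duration", "total_duration", "prompt_eval_count",
    "prompt_eval_duration")}
print(attributes["model"], attributes["eval_count"])  # → mistral 40
```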