5 tools to improve your terminal productivity with LLMs

Maciej Mazur · Published in Ubuntu AI · 5 min read · Aug 23, 2023


There are tons of new tools popping up that rely on the OpenAI API and improve our workflow as ML engineers, DevOps engineers, and developers. The productivity boost is mind-blowing, so I wanted to share my experience and some hints about which tools and models might be best for you.

The ones I use the most to improve my workflow are:

K8sGPT.ai

Which allows you to:

  • Improve your DevOps skills by explaining kubectl outputs
  • Cut through the noise and summarize tons of log files
  • Lower triage time for issues
  • Get a quick overview of the health of your K8s cluster
from https://k8sgpt.ai/

This is a tool that enhances your work with Kubernetes. So many times I have tried to figure out an issue on my K8s cluster and had to go through an unbearable amount of logs and long kubectl outputs. K8sGPT can automatically scan your cluster for the most common issues, explain why each one is a problem, and suggest how to fix it. In addition to that, Alex recently added an amazing CVE review feature (thanks to the Trivy integration).
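As a sketch of how that looks in practice (the backend and model flags below are the usual k8sgpt CLI options; adjust them to your own setup):

```shell
# Guarded so the sketch is a no-op on machines without k8sgpt installed.
if command -v k8sgpt >/dev/null 2>&1; then
  # Register an AI backend once (OpenAI here; a local endpoint also works).
  k8sgpt auth add --backend openai --model gpt-3.5-turbo
  # Scan the cluster and ask for plain-English explanations of each finding.
  k8sgpt analyze --explain
  # Narrow the scan to a single resource type.
  k8sgpt analyze --filter=Pod
fi
```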

Butterfish

Which allows you to:

  • Ask questions about your shell history.
  • Generate and autocomplete shell commands.
  • Give GPT a goal and let it execute commands.
  • Summarize a local file.
  • Generate and search embeddings of local files.
  • See and edit the actual AI prompts.
from https://github.com/bakks/butterfish/

Butterfish allows you to use the power of LLMs without leaving the terminal. This is a huge improvement over the commonly available access via a web UI or a REST API. If your workflow is built on vim+tmux, if you prefer the Emacs route like me, or if you work over SSH on a remote machine somewhere in a data center, you spend most of your time in front of a terminal. Butterfish integrates smoothly with that workflow. It is most helpful when you know exactly what you need to do, like “I want to change the file format from .xyx to .py in all subfolders of /home/code/projectY that start with the letter A and have 3 numbers in the folder name”, and you don’t want to spend 5 minutes browsing the docs and figuring out the right syntax.
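For that rename task, the kind of command Butterfish might hand back can be sketched in plain shell (the path, the A-plus-three-digits pattern, and the .xyx extension come from the example above; the function name is mine):

```shell
# Rename *.xyx to *.py in every subfolder of $1 named "A" plus 3 digits.
rename_xyx_to_py() {
  root="$1"
  for dir in "$root"/A[0-9][0-9][0-9]/; do
    for f in "$dir"*.xyx; do
      [ -e "$f" ] || continue          # skip when the glob matched nothing
      mv -- "$f" "${f%.xyx}.py"        # swap the extension, keep the name
    done
  done
}

rename_xyx_to_py /home/code/projectY
```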

Autocomplete

Pops up subcommands, options, and contextually relevant arguments in your existing terminal

  • Easy install with brew
  • Context-aware hints
  • Wide area of tools supported
  • Fast
from https://github.com/withfig/autocomplete

This is yet another tool that helps you with command syntax in the terminal. It also improves your skills: I learned many new capabilities of the tools I use daily just by seeing the possible options in the autocompletion window while typing. The arguments you see are suggested based on the context you are working in, so roughly 80% of the time they are super useful.

ChatGPT Emacs shell

  • Use LLM directly in Emacs
  • Integrates with org-mode to superpower your notes and TODO
  • Generate images directly to a file you are editing
  • MELPA package available for easy installation
from https://xenodium.com/

I think this one is my favourite, but I’m biased as a long-time Emacs and org-mode user. If this is not your path, you can find similar plugins for Obsidian, which is another great note-taking app. With the Emacs integration, you can supercharge your productivity even further. Writing a blog post and need to generate a diagram or an image? No problem. Writing code and need to generate a nice docstring? You are covered as well. What is even better, from a data scientist’s perspective, is that it also works when you are editing a Jupyter notebook. That integration really makes your notebooks much better, especially if you want to share them with more junior team members: you can generate a detailed description of what each cell is actually doing.

LocalAI for privacy and accuracy

This is all awesome, but looking at it from a privacy angle, you need to ask yourself: “Do I want to share all my terminal prompts with OpenAI?”

There are also doubts about the degrading quality of the answers and the continuously added limitations. In the early days you could ask “What are the most popular ways to rob a bank that rely on IT?” (as a CISO building a threat map), or “How do I write an exploit for WordPress vX.Y?” (while writing tests for a new environment).

These days we get meme-level answers like: “Sorry, as a self-taught language model, I cannot give you instructions on how to cook rice. Cooking is an extremely dangerous process that could result in harm to yourself or others.”

So what is the solution? Hosting your own LLM with a standard API on your own machine. And the best way to achieve that is LocalAI, which gives you:

  • Local, OpenAI drop-in alternative API.
  • Works offline
  • GPU acceleration for llama.cpp-compatible LLMs.
  • Embeddings generation for vector databases
  • Download Models directly from Huggingface

And it’s super simple to run:

git clone https://github.com/go-skynet/LocalAI
cd LocalAI
docker compose up -d --pull always

A more detailed setup tutorial written by Tyler from Spectro Cloud, which shows the K8sGPT integration, is HERE.
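Once the container is up, LocalAI exposes the same wire format as the OpenAI API, so existing clients just need a new base URL. A minimal sketch of a chat request (the model name ggml-gpt4all-j is an assumption — use whichever model you downloaded; port 8080 is the compose default):

```shell
# Build the request body; the JSON shape mirrors OpenAI's chat completions API.
cat > request.json <<'EOF'
{
  "model": "ggml-gpt4all-j",
  "messages": [
    {"role": "user", "content": "Explain this kubectl error: ImagePullBackOff"}
  ],
  "temperature": 0.2
}
EOF

# Point the call at the local server instead of api.openai.com.
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d @request.json
```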

What is important is choosing the model that is best for the job, and I want to share with you some insights after using such a workflow for a couple of weeks:

[model comparison chart]

So, all in all, I would highly recommend that you start experimenting with embedding LLMs into your terminal workflow. There are tons of tools that let you do that in a secure and convenient manner on a local setup. It is also not a very resource-hungry setup; any decent developer laptop like an XPS 13, a MacBook Pro, … can handle it easily. If you are looking for a productivity boost, you should definitely try it.

If you would like to learn more about generative AI, large language models, or open source MLOps, meet me and my team during the Canonical AI Roadshow.
