Designing My Custom AI Chatbot

A Private Network of 5 or 6 Servers

Jay Greathouse
Published in Hyperobjects · 2 min read · Nov 13, 2023


UPDATE: I’ve decided to start with an Orange Pi 5 Plus for the AI Backend and another Orange Pi 5 Plus for the Front End instead of the Raspberry Pis. Of course, the 32 GB option seems superior to the 16 GB option, but the 16 GB option is available now and the 32 GB option is not.

I'm considering using 5 Raspberry Pi 5s as servers for my custom AI chatbot. I assume the first one to set up would be the AI Backend, maybe with something like LLaMA or Alpaca.
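Whatever model ends up on it, the AI Backend could start out as nothing more than a small HTTP service in front of the model. Here's a minimal sketch, assuming a JSON `/generate` endpoint — the route, port, and `generate_reply` placeholder are my own inventions; the real call would go out to LLaMA/Alpaca via something like llama.cpp bindings:

```python
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

def generate_reply(prompt: str) -> str:
    """Placeholder for the real model call (e.g. llama.cpp bindings)."""
    return f"[model reply to: {prompt}]"

class GenerateHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and run the (stubbed) model.
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length))
        payload = json.dumps({"reply": generate_reply(body["prompt"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep the demo quiet

def make_server(port: int = 0) -> ThreadingHTTPServer:
    """port=0 lets the OS pick a free port — handy for local testing."""
    return ThreadingHTTPServer(("127.0.0.1", port), GenerateHandler)
```

The point of putting an HTTP boundary here is that the Back-End server never has to care which model (or which board) is behind it.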

Once that is working, would setting up the Back-End server come next, followed by the WebSocket layer and the Front-End server? Would I need Redis?
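My understanding is that Redis would sit between the WebSocket layer and the AI Backend as a message broker (Django Channels, for instance, uses it as its channel layer). The decoupling it buys can be sketched with Python's `queue.Queue` standing in for Redis — the queue names and reply format here are made up for illustration:

```python
import queue
import threading

requests = queue.Queue()   # a Redis list/stream would play this role
responses = queue.Queue()  # and a Redis pub/sub channel this one

def ai_backend_worker():
    """Consumes prompts, produces replies (the real model call is elided)."""
    while True:
        prompt = requests.get()
        if prompt is None:  # shutdown sentinel
            break
        responses.put(f"reply to {prompt}")

threading.Thread(target=ai_backend_worker, daemon=True).start()

requests.put("hello")            # the back-end enqueues a chat message
print(responses.get(timeout=5))  # the WebSocket layer pushes this to the client
```

With Redis in that slot, the front/back-end pair and the AI Backend can live on separate boards and restart independently.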

If all but one server were run headless, which would be the best one to control them all from? Maybe the Front-End server? The Back-End server? The AI Backend?

Or would it be better to set up the Database server and the Vector Database server(s) before the Back-End and Front-End servers? If I did this, could it generate code to complete the installations and configurations? Could it analyze and iterate on all accessible code?

If I wished to run both Vector databases, I assume that would require a 6th server.
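For a sense of what those vector-database servers would actually be doing — nearest-neighbour search over embeddings — here's a toy pure-Python version. The 3-dimensional "embeddings" are invented for the example; Pinecone or Chroma would store real model embeddings with hundreds of dimensions:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical documents with hypothetical 3-d embeddings.
store = {
    "raspberry pi specs": [0.9, 0.1, 0.0],
    "django setup notes": [0.1, 0.9, 0.2],
    "redis config":       [0.0, 0.3, 0.9],
}

def query(vec, k=1):
    """Return the k stored documents most similar to the query vector."""
    ranked = sorted(store, key=lambda doc: cosine(store[doc], vec), reverse=True)
    return ranked[:k]

print(query([0.8, 0.2, 0.1]))  # → ['raspberry pi specs']
```

Either product wraps this same idea in indexing, persistence, and an API — which is why running both mostly costs RAM and disk rather than anything architecturally new.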

On which server is LangChain installed?
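Since LangChain is just a Python library, my guess is it lives wherever the orchestration code runs — most naturally the Back-End server, calling out to the AI Backend. Its core pattern (prompt template → LLM → output parser) can be sketched without the library at all; `fake_llm` below is a stand-in I made up for the real model call:

```python
def fill_template(template: str, **kwargs) -> str:
    """Prompt template step: substitute variables into the prompt."""
    return template.format(**kwargs)

def fake_llm(prompt: str) -> str:
    """Stands in for a real model call routed to the AI Backend."""
    return prompt.upper()

def parse(raw: str) -> str:
    """Output parser step: clean up the raw model text."""
    return raw.strip()

def chain(question: str) -> str:
    # template -> LLM -> parser, the loop LangChain formalizes
    prompt = fill_template("Answer briefly: {q}", q=question)
    return parse(fake_llm(prompt))

print(chain("what is redis?"))  # → 'ANSWER BRIEFLY: WHAT IS REDIS?'
```

Swapping `fake_llm` for an HTTP call to the AI Backend is the only change the real deployment would need.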

TIA :)

Could the initial AI Backend, perhaps with the Database servers connected, generate the HTML, CSS, and JavaScript for the Front End? The Python? Configure Django, WebSocket, and Redis? Could it also configure Postgres, NLTK, Pinecone, and Chroma? Just a thought LOL.

TYVM for the article.

I’m looking at 5 additional LLM options for the RPi.

  • phi-1.5
  • BTLM-3B-8K
  • Metharme_1.3B
  • StableBeluga 7B
  • Orca Mini v2 7B
