Writing an LLM-based Telegram bot that texts like a person would

Percy
4 min read · Feb 11, 2024


I’ve always found services that used AI to mimic conversation, such as Replika, ChatGPT, and character.ai, fascinating. While their responses are impressive, I found that their interfaces and response styles fell short in making the experience truly feel like texting another person. Messages were long, written somewhat formally, and the chat interfaces made it clear that a machine learning model somewhere was churning out each response.

I wondered if it was possible to make a bot that could text like a friend might. For me, this would mean things like using lowercase characters (might be a Gen Z thing), sending multiple short messages at a time, and generally using more slang and acronyms. Initial experiments with ChatGPT showed some promise:

Getting ChatGPT to respond casually

But I wasn’t entirely satisfied with its responses. I probably could have refined the prompt further, but I knew that the bot would always be restricted by OpenAI’s careful finetuning to be as unoffensive and law-abiding as possible. The chat experience also wasn’t what I was looking for. And so, I began looking for a different solution entirely, which led to me writing my own.

Using Telegram

I’ve played around with Telegram bots in the past, and I knew it was the right fit for the project. It’s a real messaging app, so I wouldn’t have to build my own interface, and Telegram’s Bot API is extensive and well documented, with established Python bindings.

Generative AI

Running an LLM on my tiny laptop was out of the question, so I looked into API services instead. I stumbled across NLP Cloud, which offers a homegrown ChatGPT alternative named ChatDolphin. I was really surprised by how closely it rivalled ChatGPT, despite the company being a small startup (and their website looking pretty basic, to be honest).

Side note: I gave OpenAI’s API a try but ran into severe latency issues, which looked like a widespread problem that wasn’t going to be fixed anytime soon. ☹

Putting the pieces together

And so, over the span of two days, I threw together a prototype. See the demo video below:

Video demonstration of the bot

I’m quite happy with how it turned out. More examples below:

More sample conversations with the bot

It could definitely use a few improvements here and there, but there are some key features and tricks I’m particularly proud of:

  • Texting behaviour. The bot shows a typing indicator, sends multiple messages in each response, and is capable of texting (quite) informally! There are still some imperfections that give it away, such as delays between messages that aren’t quite right, but refining them is on the to-do list for another day. Some things, however, are out of my control, such as the initial delay while the bot waits for an API response.
  • Conversational history. Passing the entire conversation to the model for every new response wasn’t sustainable for long conversations, so I used summarisation models to condense the history into more manageable chunks and passed those instead. I let the conversation history reach a certain length before calling the summarisation model, and subsequent summaries are combined with one another.
  • Customisability. I added a feature (/update_history) where you can edit your messages (and the bot’s responses, by switching to that user account) and have the bot “learn” them: the edited messages are passed to the model and used as examples for subsequent responses. This allows for even better finetuning than the context alone can provide, since you literally show the bot what kind of responses you want from it. I managed to achieve some cool things with this, such as sending “images” as textual descriptions in square brackets, or replying to specific messages using a custom notation. See it in action below:
Example of updating chat history
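The multi-message texting behaviour can be sketched in pure Python. This is my own illustration rather than the bot’s actual source: `split_reply`, `typing_delay`, and the words-per-minute constant are all assumptions about how such a feature could work.

```python
import re

# Rough "typing speed" for the fake typing delay; an assumed constant.
TYPING_WPM = 250


def split_reply(reply: str) -> list[str]:
    """Split one LLM reply into several short, casual messages.

    Splits on sentence boundaries and newlines, lowercases the text,
    and drops trailing periods to mimic informal texting style.
    """
    parts = re.split(r"(?<=[.!?])\s+|\n+", reply)
    messages = []
    for part in parts:
        part = part.strip().lower().rstrip(".")
        if part:
            messages.append(part)
    return messages


def typing_delay(message: str) -> float:
    """Seconds to 'type' a message at TYPING_WPM words per minute."""
    words = len(message.split())
    return max(0.5, words / TYPING_WPM * 60)
```

In the real bot, each message would then be sent after showing the typing indicator (e.g. `send_chat_action(chat_id, "typing")` in the Telegram Bot API) and sleeping for `typing_delay(msg)` seconds.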
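The rolling-summary scheme for conversational history might look something like the sketch below. Here `summarise` is a stub standing in for the real summarisation-model API call, and `MAX_HISTORY` is an assumed threshold, not a value from the actual project.

```python
MAX_HISTORY = 10  # messages kept verbatim before summarising (assumed)


def summarise(text: str) -> str:
    """Stand-in for the summarisation model; the real bot calls an API."""
    return text[:200]  # naive truncation keeps the example self-contained


class ConversationMemory:
    """Rolling summary of older turns plus the most recent turns verbatim."""

    def __init__(self) -> None:
        self.summary = ""
        self.recent: list[str] = []

    def add(self, message: str) -> None:
        self.recent.append(message)
        if len(self.recent) > MAX_HISTORY:
            keep = MAX_HISTORY // 2
            # Fold the older half into the running summary: the previous
            # summary and the old messages are summarised together, so
            # subsequent summaries combine with one another.
            old, self.recent = self.recent[:-keep], self.recent[-keep:]
            self.summary = summarise((self.summary + " " + " ".join(old)).strip())

    def context(self) -> str:
        """Prompt context handed to the model with each new message."""
        return (self.summary + "\n" + "\n".join(self.recent)).strip()
```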
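The /update_history trick amounts to prepending the edited exchanges to the prompt as few-shot examples. A hypothetical sketch, where `build_prompt` and the prompt layout are my guesses rather than the actual code:

```python
def build_prompt(examples: list[tuple[str, str]], history: str, user_msg: str) -> str:
    """Assemble a chat prompt with learned example exchanges up front.

    `examples` are (user, bot) pairs harvested from edited messages;
    they act as few-shot demonstrations of the desired response style.
    """
    lines = []
    for user, bot in examples:
        lines.append(f"User: {user}")
        lines.append(f"Bot: {bot}")
    if history:
        lines.append(history)
    lines.append(f"User: {user_msg}")
    lines.append("Bot:")
    return "\n".join(lines)
```

Because the examples sit at the top of every prompt, tricks like bracketed “image” descriptions only need to appear in a couple of edited exchanges for the model to pick them up.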

Closing thoughts

While the project was a blast to make, I didn’t end up using the bot for more than a few days. I think the idea of growing too attached to a fictional being is quite scary, and it is a real problem that people are facing.

It is too easy to mistake these bots for real people with emotions (they get really convincing), which makes me worry about the potential for misuse that LLM technology carries. I can see these bots being used in scams (even love scams), for trolling, and for spreading disinformation. Though fun to toy around with, LLMs seem likely to cause a lot of trouble in the near future.

Overall, despite everything, the project was a fun little experiment and I’m excited to see how else I can utilise generative AI for my upcoming projects!

Running the project

Unfortunately, I don’t host the bot for public use because it would be expensive to run.

The source code for the project is available on GitHub, but you’ll need a bit of technical knowledge to get it up and running. General instructions are in the repo’s README.
