Mobile Assistant: Create a Telegram Bot Using ChatGPT & Kotlin

Patrick Jung · Published in arconsis · Jun 15, 2023 · 7 min read

I’ve been using ChatGPT for a while now, mostly for work purposes, but it has also simplified my personal life here and there. It’s quite handy to be able to ask someone all sorts of questions and get an answer right on the spot. The only downside so far is that once I’m away from my computer, having a convenient ChatGPT-powered assistant on the phone requires either the browser or one of the ChatGPT-powered services that offer mobile-optimised chatbots, all of which are paid (at least the ones I’ve found so far, excluding existing self-hosted open-source solutions).

But instead of using one of those services, I wanted something that’s more like a personal assistant. Ideally, I’d like to reach my assistant the same way I reach out to my friends and family: by using a chat app.

Since WhatsApp was out of the question for me (and does not offer a public API anyway), the decision to use Telegram was a quick one. Telegram has a nice API and a very easy, convenient way to create your own bots.

I won’t go into further detail on how to set up your bot here. If you’re interested in creating your own bot, check out Telegram’s guide on how to do it.
Once you’ve created your bot, you can use its token to authenticate against Telegram’s API. Save the token for now; we’ll need it later.

Besides that, we’ll also need our OpenAI API key, which you can find or create here.

Since I wanted to use Kotlin for this project, I checked out some repositories on GitHub and picked two libraries for my approach:

  • kotlin-telegram-bot: a Kotlin wrapper for the Telegram Bot API
  • openai-kotlin: a Kotlin client for the OpenAI API

That’s really all we need. Let’s start cooking!

Building the Assistant

First things first: I created a new Kotlin project that uses Gradle as its build system. Once the project was set up, I created an entry point for my application, which for now just prints “Hello World”.

fun main() {
    println("Hello World!")
}

You can now run the main function to check that everything works as expected. There should be no errors and the “Hello World” message should appear on your terminal.

The next thing we need to do is add our external dependencies to the project. These are added in the build.gradle.kts (or build.gradle if you’re not using the Kotlin DSL).

/* ... */

repositories {
    mavenCentral()
    maven("https://jitpack.io") // required for the Telegram bot dependency
}

dependencies {
    // OpenAI dependency (BOM)
    implementation(platform("com.aallam.openai:openai-client-bom:3.2.3"))
    implementation("com.aallam.openai:openai-client")
    runtimeOnly("io.ktor:ktor-client-okhttp")

    // Telegram Bot dependency
    implementation("io.github.kotlin-telegram-bot.kotlin-telegram-bot:telegram:6.0.7")

    testImplementation(kotlin("test"))
}

/* ... */

Hint: If you are having trouble configuring your dependencies, check out the READMEs of the projects for more details.

Once the dependencies are added, we can start setting up our bot.
The first thing we want to do is configure our Telegram bot to listen for incoming messages.

import com.github.kotlintelegrambot.bot
import com.github.kotlintelegrambot.dispatch
import com.github.kotlintelegrambot.dispatcher.message

fun main() {
    val bot = bot {
        token = "YOUR_API_KEY" // the token you received when creating your Telegram bot
        dispatch {
            message {
                // HANDLE INCOMING MESSAGE HERE
            }
        }
    }
    bot.startPolling()
}

Now we’re ready to listen to incoming messages and process them with our own logic. From here you can go crazy and implement all kinds of commands or additional logic – but for me the only purpose of the bot at the moment is to accept incoming messages, process them via OpenAI and then send the result back to the sender of the message.
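For example, the library’s dispatcher also supports dedicated command handlers. Here is a minimal sketch of a /start command (the greeting text is just an example):

import com.github.kotlintelegrambot.dispatcher.command
import com.github.kotlintelegrambot.entities.ChatId

// Inside the dispatch { } block: reply to the /start command with a greeting
command("start") {
    bot.sendMessage(
        chatId = ChatId.fromId(message.chat.id),
        text = "Hi! Send me any message and I'll forward it to ChatGPT."
    )
}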

So let’s configure the OpenAI part so that we can process the incoming message and send the results back. This is what it might look like:

import com.aallam.openai.api.chat.ChatCompletionRequest
import com.aallam.openai.api.chat.ChatMessage
import com.aallam.openai.api.chat.ChatRole
import com.aallam.openai.api.logging.LogLevel
import com.aallam.openai.api.model.ModelId
import com.aallam.openai.client.OpenAI
import com.aallam.openai.client.OpenAIConfig
import com.github.kotlintelegrambot.bot
import com.github.kotlintelegrambot.dispatch
import com.github.kotlintelegrambot.dispatcher.message
import com.github.kotlintelegrambot.entities.ChatId
import kotlinx.coroutines.runBlocking

fun main() {

    // One client instance is enough for the whole runtime
    val openAiClient = OpenAI(
        OpenAIConfig(
            token = "YOUR_OPENAI_TOKEN",
            logLevel = LogLevel.Info,
        )
    )

    val bot = bot {
        token = "YOUR_API_KEY" // your Telegram bot token
        dispatch {
            message {
                update.message?.text?.let { message ->
                    val chatId = ChatId.fromId(update.message!!.chat.id)

                    // Immediately acknowledge the message; we'll edit this later
                    val result = bot.sendMessage(
                        chatId = chatId,
                        text = "Processing..."
                    )
                    val messageId = result.get().messageId

                    val chatCompletionRequest = ChatCompletionRequest(
                        model = ModelId("gpt-3.5-turbo"),
                        messages = listOf(
                            ChatMessage(
                                role = ChatRole.User,
                                content = message
                            )
                        )
                    )

                    runBlocking {
                        val response = openAiClient.chatCompletion(chatCompletionRequest)
                        val responseText = response.choices.firstOrNull()?.message?.content
                            ?: "A completion error occurred!"
                        // Replace the "Processing..." message with the actual answer
                        bot.editMessageText(
                            chatId = chatId,
                            messageId = messageId,
                            text = responseText
                        )
                    }
                }
            }
        }
    }
    bot.startPolling()
}

Let’s look at the above changes in more detail:

First, we create the OpenAI client before creating the Telegram bot instance, so we can reference it later when processing messages. We also only need one instance of this client for the whole runtime.

Once a message is received, we create a ChatCompletionRequest for OpenAI to handle the message (this is part of the OpenAI API; you can read more about completions here). You can adjust additional parameters here, such as the temperature or the language model, but for simplicity’s sake we’ll stick with the provided defaults.
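If you do want a more deterministic assistant, for instance, you could set the temperature explicitly. A minimal sketch, assuming the same request as above (the value 0.2 is an arbitrary choice):

val chatCompletionRequest = ChatCompletionRequest(
    model = ModelId("gpt-3.5-turbo"),
    temperature = 0.2, // lower values make responses more focused and deterministic
    messages = listOf(
        ChatMessage(
            role = ChatRole.User,
            content = message
        )
    )
)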

As soon as we receive a message via the Telegram bot, we immediately send back a “Processing…” message. This has two purposes:

  • To notify the sender that the message has been received and is now being processed
  • For the bot to have a chat message reference, which it can later update with the actual response

Once the info message has been sent to the user, we proceed with sending the completion request to OpenAI. As soon as a response is received, we pass it on to the user by updating the “Processing…” message we sent earlier.

That’s all we need for the very basic implementation. Let’s run the application and see if it works.

Running the Assistant Standalone

At the moment we’re only able to start the assistant from within the IDE, which works fine during development. Ideally, however, we would like to be able to run it without having to rely on our development environment all the time.
Since we’re using a JVM-based language, we can easily create a fat jar so that we can run the project on any machine that has a Java runtime installed.

What is a fat jar? TL;DR: A fat jar is a Java archive that contains both your own compiled classes and all of your dependencies, allowing you to run the application standalone.

To create such a jar file, we add a new Gradle task called fatJar. We can do this by adding the following snippet to our build.gradle.kts file:

tasks.register<Jar>("fatJar") {

    // Some dependencies ship files with identical paths; keep only the first occurrence
    duplicatesStrategy = DuplicatesStrategy.EXCLUDE

    archiveClassifier.set("fat")

    manifest {
        attributes["Main-Class"] = "com.arconsis.bot.MainKt" // reference your application entry point here
    }

    from(sourceSets.main.get().output)

    // Unpack all runtime dependencies and bundle them into the jar
    dependsOn(configurations.runtimeClasspath)
    from({
        configurations.runtimeClasspath.get().filter { it.name.endsWith("jar") }.map { zipTree(it) }
    })
}

Once the snippet is added, we can run the build from the command line:

./gradlew fatJar

Once the build is finished, we can find our jar file in the build folder: build/libs/{APP_NAME}-{APP_VERSION}-fat.jar.

To run the bot, all you have to do is execute the jar file:

java -jar build/libs/{YOUR_FAT_JAR}.jar

Once the bot is started, you can open your chat app and start messaging with your newly created bot!

Chatting with the Chatbot

And that’s about it… well, at least for this article. I hope you found it interesting and I was able to help you get started creating your own virtual assistant. Please keep in mind that this article is limited to the very basics for showcase purposes only. Always make sure that you do not include your API keys in the source code! Do not share your API keys with anyone!
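One simple way to keep the keys out of the source code is to read them from environment variables at startup. A minimal sketch (the variable names TELEGRAM_BOT_TOKEN and OPENAI_API_KEY are my own choice):

// Read secrets from the environment instead of hardcoding them
val telegramToken = System.getenv("TELEGRAM_BOT_TOKEN")
    ?: error("TELEGRAM_BOT_TOKEN is not set")
val openAiToken = System.getenv("OPENAI_API_KEY")
    ?: error("OPENAI_API_KEY is not set")

You can then pass telegramToken into the bot { } builder and openAiToken into the OpenAIConfig.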

If you’re looking for some handy/useful additions, here are some ideas that I’ve implemented in my own bot, which are not part of this article:

  • Implement streamed responses. Instead of waiting a few seconds until the complete OpenAI response has been received, you can stream chunks of the response as it is generated (similar to ChatGPT’s website). This makes your bot look and feel more responsive (see the first sketch after this list).
  • Add support for voice messages. OpenAI provides a transcription API. You can use it to transcribe incoming voice messages into text, then process the transcribed text and send the response back. This will save you a lot of time because you won’t have to type your questions anymore, but can simply record a voice message instead. So far, this feature came in most useful for me. It even works with multiple languages!
  • Make the assistant context-aware so that it can respond to follow-up questions. Currently, each incoming message is handled separately, without any reference to previous messages (see the second sketch after this list).
  • Security & Containerisation: Remove all hardcoded values and load them dynamically from the environment (or similar). Create a Docker image for your bot to deploy to any cloud infrastructure you like. A compose stack also works well for VPS/VMs.
  • Roll out your assistant to more platforms. By splitting the codebase, you can create a reusable architecture that allows you to extend your assistant to more platforms (e.g. if you don’t like Telegram, you can use it with other platforms like Discord, etc… — or even run them all together in parallel).
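To give you an idea of the first point: the openai-kotlin client exposes a streaming variant, chatCompletions(), which returns a Flow of chunks. A minimal sketch that replaces the runBlocking block from the example above, assuming the same setup otherwise (the edit interval of 20 chunks is an arbitrary value to stay below Telegram’s edit rate limits):

runBlocking {
    val responseBuilder = StringBuilder()
    var chunkCount = 0
    openAiClient.chatCompletions(chatCompletionRequest).collect { chunk ->
        responseBuilder.append(chunk.choices.firstOrNull()?.delta?.content ?: "")
        // Update the "Processing..." message every 20 chunks
        if (++chunkCount % 20 == 0) {
            bot.editMessageText(chatId = chatId, messageId = messageId, text = responseBuilder.toString())
        }
    }
    // One final edit with the complete response
    bot.editMessageText(chatId = chatId, messageId = messageId, text = responseBuilder.toString())
}

And for context awareness, the simplest approach is to keep a message history per chat and send it along with every request. A minimal sketch (the histories map and buildRequest function are my own naming, not part of either library):

// One conversation history per Telegram chat
val histories = mutableMapOf<Long, MutableList<ChatMessage>>()

fun buildRequest(chatId: Long, userMessage: String): ChatCompletionRequest {
    val history = histories.getOrPut(chatId) { mutableListOf() }
    history.add(ChatMessage(role = ChatRole.User, content = userMessage))
    return ChatCompletionRequest(
        model = ModelId("gpt-3.5-turbo"),
        messages = history.toList() // send the whole conversation so far
    )
}

// After receiving a response, remember the assistant's reply as well, e.g.:
// histories.getValue(chatId).add(ChatMessage(role = ChatRole.Assistant, content = responseText))

Keep in mind that the history grows with every message, so at some point you’ll want to truncate or summarise it to stay within the model’s context window.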

If you’re interested in a follow-up part where I go deeper into these features and how you can implement them, feel free to let me know.
In case you want to read more about experiments with ChatGPT or technology in general, check out and follow us at arconsis.

Thanks for reading and have a good one! ✌️
