Building Telegram Bot Infrastructure as Code
Continuing the series of articles on developing Telegram bots on the AWS platform, this article focuses on automating the deployment of infrastructure for our bots. In previous articles (#1, #2), we explored the basic ideas of using serverless infrastructure.
During active development, we often need to update the code in test or stage environments. Doing this manually every time is not only labor-intensive, but also prone to many errors. Therefore, we will simplify our lives a bit by deploying our code in a semi-automatic mode.
AWS CDK (Cloud Development Kit) provides a great opportunity to describe infrastructure as regular code. The source code for the prototype solution we’ll discuss is published on GitHub.
In the repository, you can find two folders:
- src — the simplest implementation of a lambda function
- infra — CDK code that forms our infrastructure.
Let’s see what’s changed compared to previous versions:
Changes in the Code
Separation of Bot Logic and Infrastructure Code
The bot logic has been moved to the bot_logic module. The main entry point is the handle_event(msg) method, which forms the Update object and passes it to pyTelegramBotAPI for processing. This module also contains standard handlers for Telegram commands and messages.
Future extensions of our Telegram bot's functionality will be made within the bot_logic module.
The infrastructure layer (the integration with AWS, i.e. the lambda function) is greatly simplified. Its only task is to receive a message from the HTTP API and pass it on to bot_handler. The lambda function code looks like this:
import json

from bot_logic import bot_handler


def lambda_handler(event, context):
    body = json.loads(event['body'])
    bot_handler.handle_event(body)
    return {
        'statusCode': 200,
        'body': json.dumps('Message processed successfully')
    }
Local and AWS Environments
Our bot’s logic can become quite complex, and we need a way to develop and debug it locally without constantly updating the code in AWS. The simplest solution at this stage is to allow the bot to run locally. For this, there are two entry-point files in the bot:
- src/lambda_entry_point.py — the entry point for the lambda function
- src/console_entry_point.py — the entry point for local development and debugging
The console_entry_point.py code looks like this:
import os
import json

import lambda_entry_point
import console_esm_simulator

CHAT_ID = 11224242
USER_NICK = "user321"


def main():
    aws_event = console_esm_simulator.get_aws_event(CHAT_ID, "/start", user_nick=USER_NICK)
    print(json.dumps(aws_event))
    lambda_result = lambda_entry_point.lambda_handler(aws_event, {})
    print(json.dumps(lambda_result))


if __name__ == "__main__":
    main()
The main function "emulates" AWS behavior: using get_aws_event(…), it generates a message similar to those produced by the real infrastructure. The generated event is passed to the lambda function handler and processed by its code, so the local behavior closely matches what it will be in AWS.
This is also a point for future extensions: later, we can automate testing using the same approach.
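For illustration, a minimal event simulator might look like the sketch below. This is an assumption, not the repository's actual console_esm_simulator: only the fields the handler needs are filled in, whereas a real API Gateway event carries many additional keys.

```python
import json


def get_aws_event(chat_id, text, user_nick="user"):
    """Build a minimal HTTP API event wrapping a Telegram Update.

    Field names follow the Telegram Bot API; the surrounding event
    is reduced to the single `body` key the handler reads.
    """
    update = {
        "update_id": 1,
        "message": {
            "message_id": 1,
            "date": 0,
            "chat": {"id": chat_id, "type": "private"},
            "from": {"id": chat_id, "is_bot": False, "username": user_nick},
            "text": text,
        },
    }
    # The HTTP API delivers the update as a JSON string in `body`.
    return {"body": json.dumps(update)}
```

Because the event shape matches what API Gateway delivers, the same lambda_handler code runs unchanged both locally and in AWS.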
Dynamic Determination of Telegram Token
For proper integration with Telegram, we have to use a Telegram bot token. Storing it in the code is a very bad idea for at least two reasons: 1) it is not safe; 2) it is very difficult to change it between environments.
We will use the BOT_TOKEN environment variable to store the token. The lambda function's environment variable will be set at infrastructure deployment time and can vary depending on our deployment environments.
import os

import telebot

BOT_TOKEN = os.getenv("BOT_TOKEN")
if not BOT_TOKEN:
    print("BOT_TOKEN environment variable undefined")
    exit(1)

bot = telebot.TeleBot(
    token=BOT_TOKEN,
    threaded=False
)
...
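On the infrastructure side, the variable can be wired in at deployment time. A minimal sketch of how this might look inside the CDK stack (the construct id, runtime, and asset path are illustrative; the real definition lives in prototype_stack.py):

```python
import os

from aws_cdk import aws_lambda as _lambda

# Inside the stack class (`self` is the Stack). The token is taken
# from the deployer's shell environment, so each deployment
# environment can supply its own value.
webhook_function = _lambda.Function(
    self, "WebhookFunction",
    runtime=_lambda.Runtime.PYTHON_3_11,
    handler="lambda_entry_point.lambda_handler",
    code=_lambda.Code.from_asset("../src"),
    environment={"BOT_TOKEN": os.environ["BOT_TOKEN"]},
)
```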
Lambda Function and Its Layers
In a previous article, we learned how to create layers for lambda functions. In the layers, we will store both the bot logic code (bot_logic) and the external libraries. During deployment, we will build two layers:
- bot-logic-layer — packed with the contents of the ./src/bot_logic folder; it can change quite often.
- requirements-layer — a layer containing the external libraries, built from the ./src/requirements-lambda.txt file; it will only change if the list or versions of external libraries change.
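In CDK, each layer can be built from a local asset. A hedged sketch (the construct ids and asset paths are assumptions; note that for Python layers the modules must end up under a python/ directory inside the asset):

```python
from aws_cdk import aws_lambda as _lambda

# Inside the stack class (`self` is the Stack).
bot_logic_layer = _lambda.LayerVersion(
    self, "BotLogicLayer",
    # Asset directory prepared so bot_logic sits under python/
    code=_lambda.Code.from_asset("layer_assets/bot_logic"),
    compatible_runtimes=[_lambda.Runtime.PYTHON_3_11],
)

requirements_layer = _lambda.LayerVersion(
    self, "RequirementsLayer",
    # Directory populated beforehand, e.g. with:
    # pip install -r ../src/requirements-lambda.txt -t layer_assets/requirements/python
    code=_lambda.Code.from_asset("layer_assets/requirements"),
    compatible_runtimes=[_lambda.Runtime.PYTHON_3_11],
)
```

Keeping the two layers separate means a change to the bot logic does not force a rebuild of the (much heavier) dependencies layer.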
Infrastructure
As mentioned above, we will use CDK for deployment. The CDK stack is described in the prototype_stack.py file. The stack creates the two layers, the lambda function, and an API gateway, and integrates the lambda into the API.
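The API part of the stack might look roughly like this, a sketch using the stable apigatewayv2 constructs from aws-cdk-lib (the construct ids are illustrative, and `webhook_function` stands for the lambda function defined elsewhere in the stack):

```python
from aws_cdk import aws_apigatewayv2 as apigw
from aws_cdk.aws_apigatewayv2_integrations import HttpLambdaIntegration

# Inside the stack class; `webhook_function` is the webhook lambda.
http_api = apigw.HttpApi(self, "TelebotHttpApi")
http_api.add_routes(
    path="/",
    methods=[apigw.HttpMethod.POST],
    integration=HttpLambdaIntegration("WebhookIntegration", webhook_function),
)
```

A single POST route is enough here, since Telegram delivers all updates to one webhook URL.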
To deploy the infrastructure prototype in your AWS account, you need to do the following:
$ cd ./infra
$ cdk deploy
The cdk deploy command will create a CloudFormation stack named TelebotPrototypeStack containing the resources described above: the two layers, the lambda function, and the API gateway.
Integration with Telegram
The resulting API Gateway is integrated with the webhook lambda and is ready for operation. The URL at which the API is exposed to the public internet can be found in the stack outputs.
To integrate with Telegram, you need to:
- Specify the Telegram token in the lambda function settings.
- Register the webhook URL in Telegram.
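Registering the webhook is a single call to the Bot API's setWebhook method. A small helper to build the request URL (the token and API URL printed below are placeholders, not real values):

```python
import urllib.parse


def build_set_webhook_url(token: str, webhook_url: str) -> str:
    """Build the Telegram Bot API setWebhook request URL."""
    query = urllib.parse.urlencode({"url": webhook_url})
    return f"https://api.telegram.org/bot{token}/setWebhook?{query}"


# Placeholders; substitute your real bot token and the API URL
# taken from the stack outputs.
print(build_set_webhook_url("<BOT_TOKEN>", "<API_GATEWAY_URL>"))
```

The resulting URL can then be opened with curl or a browser; on success Telegram answers with an "ok": true response.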
And voilà! The bot prototype is deployed 😊
In the next articles, we will explore the bot’s logic and infrastructure security. In the meantime, feel free to experiment with the code.
Good luck and have a great day!