Building a translator assistant with the ChatGPT API

Izabel Barros
Indicium Engineering
7 min read · Jul 12, 2024


In today’s globalized world, effective communication across different languages is more important than ever. That makes a tool that can translate accurately and efficiently essential.

Whether you’re a student, a professional, a traveler, or simply someone passionate about languages, the ability to translate with quality can open up a world of opportunities.

With that and our exciting expansion into the U.S. in mind, I decided to build a quick and simple tool to help achieve that goal of effective communication in English from Portuguese (our native language here in Brazil): a translator assistant!

The ChatGPT API, developed by OpenAI, enables developers to integrate its language models into various applications, such as chatbots, virtual assistants (like the one we’re going to build), content creation, and more.

With this API, it is possible to create interactive and intelligent systems that understand context, answer questions, generate coherent and contextually relevant text, and perform tasks based on prompts.

The translator assistant is going to provide immediate translations and help users who are learning English better understand the vocabulary. It will also offer them tips and insights.

Finally, it will facilitate communication for people who are still learning the language, breaking down barriers and promoting better understanding between our contributors.

Step 1: setting up

First, we need to set up our Python environment. I suggest working inside a virtual environment to keep dependencies isolated and easy to manage. So, let’s start by setting that up.

python3 -m venv .venv
source .venv/bin/activate

We also need to install the openai package to access the API and the python-dotenv library to load our key in the next step.

pip install openai
pip install python-dotenv

Now, we need to set up our OpenAI key so we can use the API. If you don’t have one ready, create an account on the OpenAI website and generate one.

Create a new Python file (I’m naming mine “translator_assistant.py”), import the tools, and define the OpenAI key.

# translator_assistant.py
import os
import openai
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv()) # read local .env file
openai.api_key = os.environ['OPENAI_API_KEY']
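
For reference, the key is read from a .env file sitting next to the script. A minimal .env (the value below is just a placeholder, not a real key) looks like this:

# .env
OPENAI_API_KEY=sk-your-key-here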

Step 2: creating the translator

With the environment set up, we are going to create a function that will call the API and send the query to the model, which we are calling “get_translation”. The OpenAI function “create” helps us tune our response with its parameters:

  • The “messages” parameter is a list of Python dictionaries that helps us shape the model’s response and feeds the user’s query to it. I’m going to talk more about it in the next step!
  • The “model” parameter specifies which version of OpenAI’s language model you want to use.
  • The “temperature” parameter controls the randomness of the text generated by the model. Lower values (closer to 0) make the output more deterministic and focused, while higher values (closer to 1) introduce more randomness, leading to more creative and varied responses.
  • The “max_tokens” parameter sets the maximum length of the output text by limiting the number of tokens the model is allowed to generate.

def get_translation(messages, model="gpt-3.5-turbo", temperature=0, max_tokens=500):
    response = openai.chat.completions.create(
        model=model,
        messages=messages,
        temperature=temperature,
        max_tokens=max_tokens,
    )
    return response.choices[0].message.content
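
If you want to smoke-test the function before wiring up the full assistant, a minimal call (a throwaway example, not part of the final script) could look like this:

# hypothetical one-off test of get_translation
messages = [{'role': 'user', 'content': 'Translate "Bom dia" from Portuguese to English.'}]
print(get_translation(messages))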

Step 3: putting it all together

Let’s build our main function step by step together.

The languages are kept in variables so they can be easily changed whenever needed.

def main():
    source_lang = "Portuguese"
    target_lang = "English"

To make this script more user-friendly, let’s print a welcome message that briefly explains what it does. I wanted the assistant to have a friendly English teacher tone to it, so let’s use that welcome snippet to showcase the tone to the user.

print(f"Welcome! I'm your {target_lang} teacher for today! I'll help you translate messages in {source_lang} to {target_lang}.")

Now, we collect the user’s query.

To keep it user-friendly, let’s show an input prompt with a guiding question and save the user’s answer in a variable called “user_message”.

user_message = input("How can I help you today? ")

Saving the user input in a variable is going to help us build the “messages” list, a key component for structuring the input to the model.

Each message dictionary contains two keys: “role” and “content”. The “role” key specifies the role of the message sender, which can be system, user, or assistant. The “content” key holds the actual text of the message for that role.

The user message content represents the input from the person interacting with the model, so its content will be the “user_message” variable.

messages = [
    {'role': 'system', 'content': system_message},
    {'role': 'user', 'content': user_message},
    {'role': 'assistant', 'content': assistant_message}
]

The system message content defines the behavior and sets the context for the assistant, like instructions or background information.

Our system message here not only sets up how we want the assistant to interact with and respond to the user, but also asks it to answer with the translation, tips, and insights. The better you explain what you want to the model, the better it will interact with your user.

system_message = f"""
You're a {target_lang} teacher. Your task is to correctly translate the message the user gives you in {source_lang} to {target_lang}, in a cheerful tone. You have to provide the user some helpful tips and further explanation on the translation, giving some more insight and tips on {target_lang}. Respond in a friendly and helpful tone, with concise answers. Don't congratulate the user on the successful translation. Don't make up things you don't know.
"""

The assistant message content is used to provide responses from the assistant, either to continue a dialogue or to give context in ongoing interactions. Here, we can use it to further shape our assistant’s behavior by providing some examples.

example_input_1 = """
traduzir "Vamos marcar uma reunião"
"""
example_output_1 = """
"Let's schedule a meeting."
In English, the verb "to schedule" is commonly used when setting up appointments, meetings, or events. It's a versatile word that can be used in various contexts. Remember to use "let's" before the verb to suggest doing something together. Keep up the good work!
"""

This is what our final “messages” list is going to look like:

messages = [
    {'role': 'system', 'content': system_message},
    {'role': 'user', 'content': example_input_1},
    {'role': 'assistant', 'content': example_output_1},
    {'role': 'user', 'content': user_message}
]

Now, we are ready to call our “get_translation” function.

translation = get_translation(messages)

And print out the result to the user!

Let’s use a label to mark the output to keep our assistant as user-friendly as possible; it also helps keep the cheerful English teacher tone.

print("\nTeacher:")
print(translation)

Last but not least, let’s add moderation to our input and output.

Moderation helps ensure that the assistant complies with safety guidelines and legal requirements by filtering out harmful, inappropriate, or illegal content.

Since this tool is meant to be used within our company, moderation will also help maintain a positive brand image by preventing the assistant from generating offensive or controversial responses that could damage the brand’s reputation or conflict with its culture and guidelines.

moderation_input = openai.moderations.create(input=user_message)
if moderation_input.results[0].flagged:
    return print("Sorry, we cannot process this request.")

moderation_output = openai.moderations.create(input=translation)
if moderation_output.results[0].flagged:
    return print("Sorry, we cannot process this request.")
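
If you want to see why a request was blocked, the moderation response also exposes per-category flags. Here is a minimal sketch, assuming the openai 1.x Python SDK, where each result is a Pydantic model with a “categories” field:

# hypothetical helper: list which moderation categories were flagged
moderation = openai.moderations.create(input=user_message)
result = moderation.results[0]
if result.flagged:
    flagged = [name for name, value in result.categories.model_dump().items() if value]
    print(f"Flagged categories: {flagged}")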

Now that we have all the pieces set up, this is what the main function will look like:

def main():
    source_lang = "Portuguese"
    target_lang = "English"
    print(f"Welcome! I'm your English teacher for today! I'll help you translate messages in {source_lang} to {target_lang}.")
    user_message = input("How can I help you today? ")
    moderation_input = openai.moderations.create(input=user_message)
    if moderation_input.results[0].flagged:
        return print("Sorry, we cannot process this request.")
    system_message = f"""
    You're a {target_lang} teacher. Your task is to correctly translate the message the user gives you in {source_lang} to {target_lang}, in a cheerful tone. You have to provide the user some helpful tips and further explanation on the translation, giving some more insight and tips on {target_lang}. Respond in a friendly and helpful tone, with concise answers. Don't congratulate the user on the successful translation. Don't make up things you don't know.
    """
    example_input_1 = """
    traduzir "Vamos marcar uma reunião"
    """
    example_output_1 = """
    "Let's schedule a meeting."
    In English, the verb "to schedule" is commonly used when setting up appointments, meetings, or events. It's a versatile word that can be used in various contexts. Remember to use "let's" before the verb to suggest doing something together. Keep up the good work!
    """
    messages = [
        {'role': 'system', 'content': system_message},
        {'role': 'user', 'content': example_input_1},
        {'role': 'assistant', 'content': example_output_1},
        {'role': 'user', 'content': user_message}
    ]
    translation = get_translation(messages)
    moderation_output = openai.moderations.create(input=translation)
    if moderation_output.results[0].flagged:
        return print("Sorry, we cannot process this request.")
    print("\nTeacher:")
    print(translation)

Step 4: running your translator assistant

Now, all you need to do is run the following command in your terminal to call our assistant!

python3 translator_assistant.py

Remember to change the name of the file on the command line if you named it something else.
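
One detail worth checking: the command above only works if the script actually calls main() when executed. If your file doesn’t already do that, add an entry point at the bottom of translator_assistant.py:

# run the assistant when the script is executed directly
if __name__ == "__main__":
    main()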

Conclusion

Using the ChatGPT API to create a translator assistant offers a powerful and flexible solution for real-time language translation needs.

By building a highly responsive translation tool, we can quickly bridge the gap between contributors who speak different languages and make valuable knowledge accessible, helping them further their skills.

Acknowledgment

This blog post was inspired by the insights and knowledge gained from the Building Systems with the ChatGPT API course offered by deeplearning.ai.

This blog post is also accompanied by a GitHub repository, where you can find the script showcased here and interact with it!

https://github.com/belbarros/translator-assistant
