Build a Twitter Bot with Mantium, Tweepy, and Heroku

Learn how to create an application that leverages a large language model with Python!

Svitlana Glibova
Analytics Vidhya
10 min readFeb 2, 2022


Photo by Chris Ried on Unsplash

In this tutorial, we will explore a quick way to create an automated application to post a fun piece of text to Twitter. The example text we will use is a list of unique ice cream flavors — each day, the bot will generate a new flavor by using the example text as the baseline pattern. Feel free to use your own example text if you’d like to focus on a topic that isn’t ice cream flavors.

Check out how it works with this example prompt! Simply click “Execute” and see what flavor you get back.

This will be a simple Twitter bot whose sole job will be to post at a regularly-scheduled interval with text generated by a large language model — its only tasks will be to generate a piece of creative text, connect to the Twitter API, and publish the text in a Tweet. In later tutorials, we will explore how to add functionality such as liking and replying to Tweets!

Note: Twitter is very strict about requiring disclosure of where posted content comes from. Please be sure to make it clear that your application is a bot.

Account Setup

If you already have accounts set up with these services, feel free to skip any steps you’ve already completed. You’re also welcome to choose your own scheduler if you would prefer to not use Heroku’s.

Mantium Account

  • If you do not already have an account, register on the Mantium platform.
  • Choose your provider. You can use Mantium’s large language model or, if you want to use another provider, navigate to Integrations > AI Providers, then paste your API key into the corresponding provider form. Click here for details on obtaining API keys by provider.

Twitter Developer Account with Essential Access

  • In order to access the Twitter API, you must have a Twitter account with developer access. For a bot, you may want to create a separate account from your own so it can Tweet on its own behalf. Sign in with the Twitter account here, then click “Sign Up” and follow the instructions for Essential Access. Once you have confirmed your account, save your API Key, API Key Secret, and Bearer Token in a secure location for later use.
  • API access comes in three flavors: Essential, Elevated, and Academic Research. To enable automated posting, you must have Elevated access. In your Dashboard, you will see a Project 1 link. Click that link, then select ‘Apply for Elevated’. Fill in the application and be as thorough as possible in the descriptions — you should be granted Elevated access almost immediately.
  • Go back to your project dashboard, click on your project, then click on the key icon by your application. This will allow you to generate an Access Token and Secret — save these in a secure location.
  • Note: Occasionally Twitter will use an automated system to remove API access from applications that seem suspicious. If this happens to you, it may have been random. Submit a support ticket and be sure to follow up if your access isn’t re-granted. Be mindful of your intentions when creating a user-facing application. We support developing with integrity and safety in mind — applications that facilitate trolling, spamming, or harassment of any kind do not align with our community standards.

Heroku Account

  • If you want to set your bot to Tweet at regularly-scheduled intervals, Heroku has a simple option for automatically running your scripts with a scheduler.
  • Sign up for a Heroku account. This is where you will be pushing your application up.
  • Install the Heroku CLI for easier deployment of your code.

Prompt Engineering

The simplest way to prototype a prompt to create your desired output is to go through the Mantium UI.

Prompt Creation

  • Navigate to your Mantium account and click AI Manager > Prompts > Add new prompt.
  • Add a Security Policy
    Adding a security policy to your prompt allows for controlling input and output content quality — you can configure your own or use Mantium’s default policy, which is included with every account. Click Add Security Policies under Security Policies and drag Default Policies to Selected Policies. Click Done to save. You will know the policy has been applied when its name is visible under Security Policies.
  • Choose a Provider: Co:here
    Here you can select any provider whose API key you have integrated. You are welcome to choose a different provider.
  • Choose an Endpoint: Compose — Generate
    All providers have an endpoint that supports generating text. Select the endpoint that corresponds to the provider you chose. The generative-text endpoints by provider are the following:
    OpenAI: Completion
    Co:here: Compose — Generate
    Mantium: Completion
    Ai21: Complete
  • Model: Large
    Models vary by provider and are optimized to perform better on different tasks. You are able to test out different models during this configuration process, so feel free to experiment with the different options.
  • Prompt Line
    Add the text that the model will follow to generate output. Be mindful of formatting — language models are sensitive to punctuation, symbols, white space, etc.
    For this example, I have attached a text file with a list of creative (and sometimes not-so-delicious) ice cream flavor ideas. Feel free to use this text to paste into the prompt line, or to create your own. Be sure to include one blank line at the end of the prompt body — this is crucial in order for the model to complete the next text pattern correctly.

ice_cream_flavors.txt

Prompt Settings

For this simple prompt, most of the settings can be left as default values. The most important settings to configure are Max Tokens, Temperature, and Stop Sequence.

  • Max Tokens: 12
    A token is approximately 0.75 of an average English word. Because we are generating short strings, this value can be fairly low.
  • Temperature: 0.95
    Temperature controls “creativity”: higher values produce more varied and surprising outputs, but they are also more likely to drift into nonsense. For creative responses that don’t need to be factually accurate, a high temperature is totally fine.
  • Stop Sequence: \n
    Stop sequences are a method of controlling model output — they allow you to define any text sequences that force the model to stop. Because we want the model to generate only one line of text, it should stop the next time it creates a line break (“\n”).
  • This prompt does not require an input to function. Click Test Run to test the model’s output! Clicking Test Run multiple times will likely yield different results each time — you can use this as an opportunity to test and tweak prompt settings. Once you are consistently happy with the results, click Save.
  • Back in the Prompts menu, click on the prompt you just configured — this will open the prompt’s drawer view. In the Deploy App URL, you’ll find the prompt ID between “https://” and “.share.mantium.com”. Copy and save this value for setting in your project’s environment file.

Directory Setup

This tutorial assumes that you are somewhat familiar with initializing repositories and version control. Set up your project’s top-level directory however you are most comfortable, or use the examples provided below. Make sure you have your virtual environment active before you start installing packages.

Example command for project and virtual environment setup:
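For example, with a project named twitter_bot (the directory and environment names here are just illustrative):

```shell
# Create the project directory and a virtual environment inside it.
mkdir twitter_bot && cd twitter_bot
python3 -m venv venv
source venv/bin/activate
```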

Initialize the files you’ll need for this project:
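The following creates the empty files used in the rest of this guide:

```shell
# Create the application modules and supporting files.
touch app.py load_env.py mantium_call.py
touch .env .gitignore requirements.txt
```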

Your directory structure should now look like this:
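A minimal layout looks like this (the venv/ entry matches whatever you named your virtual environment):

```
twitter_bot/
├── app.py
├── load_env.py
├── mantium_call.py
├── .env
├── .gitignore
├── requirements.txt
└── venv/
```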

Add the following libraries to your active virtual environment:
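The project needs Tweepy, python-dotenv, and the Mantium client. Note that the Mantium package name used below, mantiumapi, is an assumption; check Mantium’s documentation if the install fails:

```shell
pip install tweepy python-dotenv mantiumapi
```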

Then, use pip freeze to add your dependencies to the requirements file:
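This single command overwrites requirements.txt with your currently installed packages:

```shell
pip freeze > requirements.txt
```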

If you end up adding more libraries to your dependencies using pip install, you can always re-run this command to update your requirements file.

File Contents

  • .gitignore
    I use this template as a catch-all for many Python projects. You can trim it down to exclude the files you don’t need, but always make sure your .env file is listed in your .gitignore file. Never ever push personal credentials to a publicly-visible repository.
  • .env
    Here you will initialize all of the credentials you’ll need to be able to interact with Mantium and Twitter. This is where you will paste that Prompt ID you saved earlier in the guide.
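A .env file for this project might look like the following. The variable names are illustrative; use whatever names you like, as long as they match what your code reads. The values are placeholders for your own credentials:

```
MANTIUM_USER=you@example.com
MANTIUM_PASSWORD=your-mantium-password
PROMPT_ID=your-prompt-id
TWITTER_API_KEY=your-api-key
TWITTER_API_KEY_SECRET=your-api-key-secret
TWITTER_ACCESS_TOKEN=your-access-token
TWITTER_ACCESS_TOKEN_SECRET=your-access-token-secret
```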

Let’s Code!

app.py

In the main body of the application, we accomplish several things:

  • Load Twitter credentials using a load_twitter_env() function that we define in load_env.py
  • Save prompt output in the prompt_output variable by calling prompt_results() from the mantium_call.py module. Calling this function is what will execute the prompt we configured earlier in the tutorial.
  • Authenticate, connect, and verify login to the Twitter API.
  • Using Tweepy’s api.update_status method, we post prompt_output to the bot’s Twitter timeline as a status update!
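Put together, app.py might look like the sketch below. The credential ordering and helper function signatures are assumptions made to match the helper modules described in this guide:

```python
# app.py: a sketch of the main script. The helper function names match the
# modules described below; the credential ordering is an assumption.
import tweepy

from load_env import load_twitter_env
from mantium_call import prompt_results


def main():
    # Load the Twitter credentials set in the .env file.
    api_key, api_key_secret, access_token, access_token_secret = load_twitter_env()

    # Execute the Mantium prompt and capture its text output.
    prompt_output = prompt_results()

    # Authenticate and connect to the Twitter API via Tweepy.
    auth = tweepy.OAuthHandler(api_key, api_key_secret)
    auth.set_access_token(access_token, access_token_secret)
    api = tweepy.API(auth)
    api.verify_credentials()  # raises an error if the login failed

    # Post the generated text to the bot's timeline as a status update.
    api.update_status(prompt_output)


if __name__ == "__main__":
    main()
```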

Now let’s take a look at the helper modules: load_env.py and mantium_call.py

load_env.py

Using the python-dotenv library, we call load_dotenv() to load the credentials that we set in the .env file above. I separated the process into two functions: one that returns the Mantium credentials and one that returns the Twitter credentials.
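A sketch of load_env.py; the environment variable names are assumptions and should match whatever you put in your .env file:

```python
# load_env.py: a sketch. The environment variable names are illustrative
# and should match whatever you used in your .env file.
import os

from dotenv import load_dotenv

load_dotenv()  # read the variables defined in .env into the environment


def load_mantium_env():
    """Return the Mantium credentials and the saved prompt ID."""
    return (
        os.getenv('MANTIUM_USER'),
        os.getenv('MANTIUM_PASSWORD'),
        os.getenv('PROMPT_ID'),
    )


def load_twitter_env():
    """Return the four Twitter credentials Tweepy needs."""
    return (
        os.getenv('TWITTER_API_KEY'),
        os.getenv('TWITTER_API_KEY_SECRET'),
        os.getenv('TWITTER_ACCESS_TOKEN'),
        os.getenv('TWITTER_ACCESS_TOKEN_SECRET'),
    )
```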

mantium_call.py

Here, we use the load_mantium_env() function from load_env.py to grab the Mantium credentials, then confirm a successful login by calling client.BearerAuth().get_token() and checking that a mantium_token was returned.

The prompt_id value will be used to retrieve the specific prompt you configured earlier in this guide.

The prompt_results() function executes the prompt with an empty string as input, since this prompt does not require any. We then check the prompt’s status, refreshing it until the status is ‘COMPLETED’, and verify that the result is not an empty string. Once the status is complete and the output is non-empty, the function returns the prompt output as a string.
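A sketch of mantium_call.py following that flow. The mantiumapi calls here (BearerAuth, Prompt.from_id, execute, refresh) are assumptions based on the description above, so check the Mantium client documentation for the exact class and method names:

```python
# mantium_call.py: a sketch. The mantiumapi calls below are assumptions
# based on the flow described in this guide; check the Mantium client
# docs for the exact class and method names.
import time

from mantiumapi import client, prompt

from load_env import load_mantium_env

# Grab the Mantium credentials (loaded into the environment by load_env)
# and the ID of the prompt configured earlier in this guide.
mantium_user, mantium_password, prompt_id = load_mantium_env()

# Confirm a successful login by fetching a bearer token.
mantium_token = client.BearerAuth().get_token()


def prompt_results():
    """Execute the configured prompt and return its text output."""
    configured_prompt = prompt.Prompt.from_id(prompt_id)

    # This prompt takes no input, so execute it with an empty string.
    result = configured_prompt.execute('')

    # Refresh until the prompt run reports COMPLETED.
    while result.status != 'COMPLETED':
        time.sleep(1)
        result.refresh()

    # Return the output only if it is not an empty string.
    if result.output != '':
        return result.output


# Allows testing this module on its own, without posting to Twitter.
if __name__ == '__main__':
    print(prompt_results())
```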

The code at the bottom allows for testing the individual script to make sure that it successfully authenticates to Mantium and can retrieve a prompt result. To test this script on its own (without worrying about app.py posting to Twitter), you can run the following command from inside the twitter_bot directory:
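Assuming your virtual environment is active:

```shell
python mantium_call.py
```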

Test Out Your Script!

In your terminal, you can test your script by running the following from the twitter_bot directory:
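Assuming your virtual environment is active, that command is:

```shell
python app.py
```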

If successful, you should have a fresh Tweet on your bot’s Timeline!

Deploy & Schedule on Heroku

Note: If you use Python Poetry, you’ll need a Poetry Buildpack — follow the instructions to create a requirements file for Heroku to use.

Deploy

To deploy using the Heroku CLI, first log in to your Heroku account from your web browser, click New > Create new app in the top right corner of your main dashboard, and name your app.

Using the CLI, you can log in to your Heroku account and follow the prompts to set up an SSH key.
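For example:

```shell
heroku login      # opens a browser window to authenticate
heroku keys:add   # uploads your SSH public key, offering to generate one if needed
```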

Once you’re set up, use the following commands while in your twitter_bot directory. If you’ve already initialized a git repository, skip running git init.
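Replace your-app-name with the name you chose in the Heroku dashboard:

```shell
git init  # skip this if the directory is already a git repository
heroku git:remote -a your-app-name
git add .
git commit -m "Initial commit"
git push heroku master
```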

To change your main deploy branch name to the modern practice and re-deploy, run the following:
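Something like:

```shell
git branch -m master main
git push heroku main
```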

Your main branch is now named main, and any subsequent pushes will be done using git push heroku main.

Set Config Variables for Heroku

Because we are not pushing up the .env file to Heroku, we can set the configuration variables in the Heroku CLI with the following commands:
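The variable names must match the ones your code reads from the environment (the names and values below are placeholders):

```shell
heroku config:set TWITTER_API_KEY=your-api-key
heroku config:set TWITTER_API_KEY_SECRET=your-api-key-secret
heroku config:set TWITTER_ACCESS_TOKEN=your-access-token
heroku config:set TWITTER_ACCESS_TOKEN_SECRET=your-access-token-secret
heroku config:set MANTIUM_USER=you@example.com
heroku config:set MANTIUM_PASSWORD=your-mantium-password
heroku config:set PROMPT_ID=your-prompt-id
```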

You can also set multiple variables at the same time by including the declarations in one command if you prefer, or you can configure them in the Heroku UI. If you ever want to remove any of your config variables, you can run a command that looks like:
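For example, to remove one variable:

```shell
heroku config:unset PROMPT_ID
```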

More on Heroku configuration variables here.

Schedule

To add a scheduler, click on your application in your main dashboard and locate the “Configure Add-ons” link in the left half of the menu. Click there and search for “Heroku Scheduler” in the “Add-ons” search bar. From there, you will be able to configure a command and time for the command to run. Click “Add Job” in the top right corner, select the timing and paste in the following command:
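The job command is the same one used for local testing:

```shell
python app.py
```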

Click “Save Job” and you are all set!

Check Your Logs

Once your application has been run by the Heroku Scheduler, you should be able to access your most recent logs by typing in the following command in your command line:
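The --tail flag streams new log lines as they arrive:

```shell
heroku logs --tail
```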

To view only your logged prompt outputs in the Mantium interface, you can log into your Mantium account and click on Monitoring > Logs.

Conclusion

With this code, you now have a template for creating other fun applications that leverage text generation with a large language model! The setup is relatively straightforward and has plenty of room for additional creativity. If you’d like to take this project further, or have any questions about this tutorial, please don’t hesitate to get in contact with me on Medium, LinkedIn, or join us in the Mantium Discord!
