Building a RabbitMQ Consumer with Python & Connecting to Cassandra DB

Süleyman Aydoslu
Wingie / Enuygun Tech
Jan 25, 2021

Nowadays, speed is one of the most important things in mobile projects. Users don’t want to wait for long periods while they’re using an application. To satisfy our users’ needs, we have to avoid long, blocking operations. There are several techniques for this, and one of them is using message queues to run some jobs in parallel and shorten the main flow of our projects. If you have a job that takes a long time, or one that delays a booking process, you can push a message to a queue and handle it later; your consumer then fetches and processes that message according to your configuration.

In this article, I will cover sending messages to a RabbitMQ queue and consuming them with a Python consumer. Our consumer will connect to Cassandra and store our data.

Firstly, you can use this consumer independently in your project. The important part is pushing your messages to RabbitMQ, which you can do with any framework and any language. Once you push your message to the queue, our consumer simply reads the message and runs its own logic.

The following integration uses Python 3.x and assumes RabbitMQ and Cassandra are available in your application environment.

Firstly, we’ll create a keyspace named “book_store” and a table named “books”:

CREATE KEYSPACE "book_store" WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor': 1};

CREATE TABLE "book_store"."books" (
    "id" UUID,
    "title" VARCHAR,
    "page_count" INT,
    PRIMARY KEY (id)
);

After creating our table, we need to push some messages to the queue and consume them with our Python consumer.

We’ll use the json, datetime, pika and cassandra-driver libraries for this project. json and datetime are part of Python’s standard library, but we need to install pika and cassandra-driver with:

pip install pika
pip install cassandra-driver

In this example I declare the queue and push my message with Python, but we could do this in any other language. I created a Python file to push the message, like this:
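The publisher was originally embedded as a gist; below is a minimal sketch of what it might look like. The file name publisher.py, the localhost connection, the durable/persistent settings and the sample book values are my assumptions; the queue name books_queue comes from the consumer section.

# publisher.py - a minimal sketch of the publisher script (names and values assumed)
import json
import uuid

import pika

# Connect to a local RabbitMQ broker with the default credentials
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Declare the queue our consumer will read from
channel.queue_declare(queue="books_queue", durable=True)

# Build the payload; every key matches a column of the "books" table
message = {
    "id": str(uuid.uuid4()),  # Cassandra has no auto-increment, so we generate a UUID
    "title": "Sample Book",
    "page_count": 100,
}

# Publish the message as a JSON string
channel.basic_publish(
    exchange="",
    routing_key="books_queue",
    body=json.dumps(message),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)

print("Pushed message with id %s" % message["id"])
connection.close()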

With the pika library we connect to RabbitMQ and declare a queue. After declaring the queue, we publish our message as a JSON string in which every key matches a column name of our table. In our consumer script, we take that JSON payload and insert it into Cassandra.

NOTE: I’ve used the uuid library because Cassandra has no auto-increment primary key like MySQL; the common practice is to use a UUID as the unique primary key.

Let’s run the publisher script and push our message to the queue:

python publisher.py

After running the script we can check our messages on the RabbitMQ management dashboard, which is available at http://localhost:15672/

NOTE: 15672 is the default port; if you’ve installed RabbitMQ on another port, adjust the URL accordingly. If you don’t have RabbitMQ yet, you can check: https://www.rabbitmq.com/install-homebrew.html

As we can see, the UUID was generated and our payload was built successfully.
So how will we consume this message?

Our consumer script looks like this:
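As with the publisher, the consumer was embedded as a gist; here is a minimal sketch under the same assumptions (local RabbitMQ and Cassandra, queue name books_queue, keyspace book_store). For simplicity it opens a Cassandra connection inside the callback, as the article describes; in production you would normally create the session once, outside the callback.

# consumer.py - a minimal sketch of the consumer script (connection details assumed)
import pika
from cassandra.cluster import Cluster


def callback(ch, method, properties, body):
    # body is the JSON string we published earlier; its keys match the "books" columns
    payload = body.decode("utf-8")

    # Connect to the "book_store" keyspace and insert the payload with INSERT ... JSON
    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect("book_store")
    session.execute("INSERT INTO books JSON %s", [payload])
    cluster.shutdown()

    # Acknowledge the message so RabbitMQ removes it from the queue
    ch.basic_ack(delivery_tag=method.delivery_tag)
    print("Processed!")


# Connect to RabbitMQ and start consuming from books_queue
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="books_queue", durable=True)
channel.basic_consume(queue="books_queue", on_message_callback=callback)

print("Waiting for messages...")
channel.start_consuming()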

We use the cassandra-driver library to connect to Cassandra. The pika library’s basic_consume method works with a callback. In the callback, we receive the message from books_queue and then connect to Cassandra. The most important parts are connecting to the right keyspace, which is “book_store” in our case, and inserting the payload with “INSERT … JSON”.

So let’s start our consumer and see what happens:
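Assuming the consumer sketch above is saved as consumer.py, we start it with:

python consumer.py

It will block and wait for messages on books_queue.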

It says “Processed!”, so let’s check the DB:
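A quick query from cqlsh is enough to verify (the row values will match whatever payload you published):

SELECT * FROM "book_store"."books";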

Finally, our data has been inserted into Cassandra successfully.

All you need now is a supervisor configuration to keep this consumer running properly; after that, every message pushed to the queue will be consumed automatically.

NOTE: If you want to learn more about supervisor, you can check http://supervisord.org/. An example supervisor.conf looks like this:
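The embedded supervisor.conf isn’t shown here; a minimal sketch might look like the following (the program name, script path and log paths are placeholders):

[program:books_consumer]
command=python /path/to/consumer.py
autostart=true
autorestart=true
stderr_logfile=/var/log/books_consumer.err.log
stdout_logfile=/var/log/books_consumer.out.log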

In this article we learned how to manage a process with RabbitMQ and Python; with their help we connected to Cassandra and finished our job. If you have big data to work with and too many transactions to process, you can use these practices to save time and resources.

If you want to join our team, share your CV with us: kariyer@enuygun.com
