Apache Spark Structured Streaming Via Docker Compose

ZEKERİYYA DEMİRCİ
Published in Bentego Teknoloji
Apr 21, 2022 · 6 min read

Building a data pipeline can be challenging, especially when you have to take portability, flexibility, scalability, etc. into account. Docker is one of the well-known solutions to these challenges. In this article, we are going to build a data pipeline with a docker-compose file.

System Architecture

0. Installation Processes

You can install all the components required for this project by following the steps below.

Installation of ROS

We won’t cover the whole ROS installation process here, but you can find all the required information in ROS Noetic & Ubuntu 20.04 Installation.

Installation of Docker on Ubuntu

You can follow the instructions at this URL.

Installation of the kafka-python library, used to publish data received from ROS to Kafka

❗ If you haven’t installed kafka-python yet, install it with the command below before running the given files.

pip install kafka-python

1. Prepare a robotic simulation environment

ROS (Robot Operating System) allows us to design a robotic environment. In this project, we will use ROS as a data provider. “odom” is a message type that represents the position of a vehicle. We use the given code, which generates arbitrary “odom” data and publishes it.

Run the given code and analyze the data we will use.

This script publishes odometry data on the ROS “odom” topic, so we can see the published data with the given commands:

# run the publisher script
python3 odomPublisher.py
# check the topic to see data
rostopic echo /odom

In this use case, we are only interested in the given part of the data:

position:
  x: -2.000055643960576
  y: -0.4997879642933192
  z: -0.0010013932644100873
orientation:
  x: -1.3486164084605e-05
  y: 0.0038530870521455017
  z: 0.0016676819550213058
  w: 0.9999911861487526
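Later in the pipeline, this nested message will end up in a flat Cassandra table, so each message has to be flattened into one record. A minimal sketch of that mapping (the column names posex…orientw anticipate the table defined below; keying rows by a running message id is an assumption):

```python
def flatten_odom(msg_id, position, orientation):
    # Map the nested position/orientation dicts onto the flat column names
    # used by the Cassandra table and the Spark schema.
    return {
        "id": msg_id,
        "posex": position["x"], "posey": position["y"], "posez": position["z"],
        "orientx": orientation["x"], "orienty": orientation["y"],
        "orientz": orientation["z"], "orientw": orientation["w"],
    }
```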

2. Prepare Docker-Compose File

First of all, we create a network called datapipeline for the architecture. The architecture consists of four services, each with a static IP address and using its default port, as given below:

  • Spark: 172.18.0.2
  • Zookeeper: 172.18.0.3
  • Kafka: 172.18.0.4
  • Cassandra: 172.18.0.5

We use “volumes” to import our scripts into the containers.

❗ You have to adapt the “../streamingProje:/home” part to your own system.

You can access the docker-compose file and replace the configs with your own.
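As an illustration, the network and static-IP setup described above might look like the fragment below in the docker-compose file (the image tag, subnet, and volume path are assumptions; adapt them to your setup):

```yaml
version: "3"

networks:
  datapipeline:
    driver: bridge
    ipam:
      config:
        - subnet: 172.18.0.0/16   # assumed subnet matching the IPs above

services:
  spark:
    image: bitnami/spark:3.0.0    # assumed image/tag
    volumes:
      - ../streamingProje:/home   # adapt this path to your system
    networks:
      datapipeline:
        ipv4_address: 172.18.0.2
```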

3. Running docker-compose file

Open your workspace folder, which includes all the files provided, and run the given command:

# run docker-compose file
docker-compose up

You will have a view like:

After all containers are running, you can set up your environment.

Prepare Kafka for Use Case

First of all, we will create a new Kafka topic named odometry for the ROS odom data using the given commands:

# Execute kafka container with container id given above
docker exec -it 1c31511ce206 bash
# Create Kafka "odometry" topic for ROS odom data
kafka$ bin/kafka-topics.sh --create --topic odometry --partitions 1 --replication-factor 1 --bootstrap-server localhost:9092

Check Kafka setup through Zookeeper

# Execute zookeeper container with container id given above
docker exec -it 1c31511ce206 bash
# run command
/opt/bitnami/zookeeper/bin/zkCli.sh -server localhost:2181
# list all brokers topic
ls /brokers/topics

You will have a view like:

Prepare Cassandra for Use Case

Initially, we will create a keyspace and then a table in it using the given commands:

# Execute cassandra container with container id given above
docker exec -it 1c31511ce206 bash
# Open the cqlsh
cqlsh -u cassandra -p cassandra
# Run the command to create 'ros' keyspace
cqlsh> CREATE KEYSPACE ros WITH replication = {'class':'SimpleStrategy', 'replication_factor' : 1};
# Then, run the command to create the 'odometry' table in 'ros'
cqlsh> CREATE TABLE ros.odometry(
        id int PRIMARY KEY,
        posex float,
        posey float,
        posez float,
        orientx float,
        orienty float,
        orientz float,
        orientw float);
# Check your setup is correct
cqlsh> DESCRIBE ros.odometry;

⚠️ The schema of the table has to match the Spark schema: be very careful here!

4. Prepare Apache Spark structured streaming

You can write the analysis results either to the console or to Cassandra.

(First Way) Prepare Apache Spark Structured Streaming Pipeline Kafka to Cassandra

We will write a streaming script that reads the odometry topic from Kafka, analyzes it, and then writes the results to Cassandra. We will use streamingKafka2Cassandra.py to do it.

First of all, we create a schema identical to the one we already defined in Cassandra.

⚠️ The schema has to match the Cassandra table: be very careful here!

odometrySchema = StructType([
    StructField("id", IntegerType(), False),
    StructField("posex", FloatType(), False),
    StructField("posey", FloatType(), False),
    StructField("posez", FloatType(), False),
    StructField("orientx", FloatType(), False),
    StructField("orienty", FloatType(), False),
    StructField("orientz", FloatType(), False),
    StructField("orientw", FloatType(), False)
])

Then, we create a SparkSession and specify our config here:

spark = SparkSession \
    .builder \
    .appName("SparkStructuredStreaming") \
    .config("spark.cassandra.connection.host", "172.18.0.5") \
    .config("spark.cassandra.connection.port", "9042") \
    .config("spark.cassandra.auth.username", "cassandra") \
    .config("spark.cassandra.auth.password", "cassandra") \
    .config("spark.driver.host", "localhost") \
    .getOrCreate()

In order to read the Kafka stream, we use readStream() and specify the Kafka configuration as given below:

df = spark \
    .readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "172.18.0.4:9092") \
    .option("subscribe", "odometry") \
    .option("startingOffsets", "earliest") \
    .load()

Since Kafka sends data as binary, we first need to convert the binary value to a string using selectExpr(), as given below:

df1 = df.selectExpr("CAST(value AS STRING)") \
    .select(from_json(col("value"), odometrySchema).alias("data")) \
    .select("data.*")
df1.printSchema()
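For intuition, what selectExpr() plus from_json() do per record is roughly what the plain-Python sketch below does (assuming the producer sends flat JSON with exactly these field names):

```python
import json

SCHEMA_FIELDS = ["id", "posex", "posey", "posez",
                 "orientx", "orienty", "orientz", "orientw"]

def parse_value(raw: bytes) -> dict:
    # CAST(value AS STRING): decode the Kafka value bytes to text.
    text = raw.decode("utf-8")
    # from_json(..., odometrySchema): parse the JSON and keep only the
    # schema fields; missing fields become None, as from_json yields null.
    data = json.loads(text)
    return {field: data.get(field) for field in SCHEMA_FIELDS}
```

This also shows why the field names in the producer, the Spark schema, and the Cassandra table must match exactly: anything that doesn't line up comes through as null.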

Although Apache Spark isn’t yet capable of writing stream data directly to Cassandra with writeStream(), we can do it by using foreachBatch(), as given below:

def writeToCassandra(writeDF, _):
    writeDF.write \
        .format("org.apache.spark.sql.cassandra") \
        .mode('append') \
        .options(table="odometry", keyspace="ros") \
        .save()

df1.writeStream \
    .foreachBatch(writeToCassandra) \
    .outputMode("update") \
    .start() \
    .awaitTermination()

Finally, we get the given script streamingKafka2Cassandra.py:

(Second Way) Prepare Apache Spark Structured Streaming Pipeline Kafka to Console

There are a few differences between writing to the console and writing to Cassandra. With writeStream() we can write the stream data directly to the console.

df1.writeStream \
    .outputMode("update") \
    .format("console") \
    .option("truncate", False) \
    .start() \
    .awaitTermination()

The rest of the process is the same as in the previous way. Finally, we get the given script streamingKafka2Console.py:

5. Demonstration & Results

If you are sure that all preparations are done, you can start the demo by following the given steps.

Start ROS and publish odom data to Kafka.

  • roscore: starts the ROS master
  • odomPublisher.py: generates random odom data and publishes it over the network
  • ros2Kafka.py: subscribes to the odom topic and writes the odom data into the Kafka container
# all of these run on your local machine
# open a terminal and start roscore
$ roscore
# open another terminal and run odomPublisher.py
$ python3 odomPublisher.py
# open another terminal and run ros2Kafka.py
$ python3 ros2Kafka.py
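ros2Kafka.py is not reproduced in full here, but its core is a ROS subscriber whose callback forwards each message to Kafka. A rough sketch under stated assumptions (the JSON layout, the message counter, and the broker address are illustrative; the wiring in main() needs a live ROS and Kafka environment, so it is only defined, not run):

```python
import json

def odom_to_json(msg_id, position, orientation):
    # Flatten a ROS Odometry pose into the flat JSON record the Spark schema expects.
    return json.dumps({
        "id": msg_id,
        "posex": position.x, "posey": position.y, "posez": position.z,
        "orientx": orientation.x, "orienty": orientation.y,
        "orientz": orientation.z, "orientw": orientation.w,
    })

def main():
    # Requires a running roscore and Kafka broker; assumed environment.
    import rospy
    from kafka import KafkaProducer
    from nav_msgs.msg import Odometry

    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    state = {"count": 0}

    def callback(msg):
        # nav_msgs/Odometry nests the pose as msg.pose.pose
        pose = msg.pose.pose
        payload = odom_to_json(state["count"], pose.position, pose.orientation)
        producer.send("odometry", payload.encode("utf-8"))
        state["count"] += 1

    rospy.init_node("ros2kafka")
    rospy.Subscriber("/odom", Odometry, callback)
    rospy.spin()
```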

(Option-1) Start Streaming to Console

# Execute spark container with container id given above
docker exec -it e3080e48085c bash
# go to /home and run given command
spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.0 streamingKafka2Console.py

(Option-2) Start Streaming to Cassandra

# Execute spark container with container id given above
docker exec -it e3080e48085c bash
# go to /home and run given command
spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.0,com.datastax.spark:spark-cassandra-connector_2.12:3.0.0 streamingKafka2Cassandra.py

After the Spark job has started, you can see the schema on the screen.

If you run Option-1, you will have a view as given below on your terminal screen.

After the whole process is done, we get the data in our Cassandra table as given below:

You can run the given query to see your table:

# Open the cqlsh 
cqlsh
# Then write select query to see content of the table
cqlsh> select * from ros.odometry;

