Implement Deep-CNN on Quanser Qcar for Self Driving

Talha Ejaz
3 min read · Sep 25, 2022


This article briefly introduces the implementation of a CNN for obstacle avoidance, an essential function in a self-driving vehicle: as the car moves from its starting point toward a target position, it must avoid both static and dynamic obstacles. The article gives a brief overview of how I created the dataset, trained the model using TensorFlow, and exported it to TensorFlow Lite (TFLite), an optimized FlatBuffer format stored with a .tflite extension.
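As a rough sketch of that export step (the exact training script is not shown in this post, so the model and file names here are illustrative), a trained Keras model can be converted to the .tflite FlatBuffer format like this:

```python
import tensorflow as tf

# Illustrative stand-in for the trained obstacle / no-obstacle classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(180, 180, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary: obstacle vs. none
])

# Convert to the optimized FlatBuffer (.tflite) format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("SelfDriving.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting file is small enough to load quickly on embedded hardware like the Jetson TX2.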

A small-scale model car (QCar) from Quanser, equipped with state-of-the-art sensors and an Nvidia Jetson TX2, was used to verify the effectiveness of the proposed network.

The Quanser Python API provides three main modules:

  • Hardware (HIL)
  • Multimedia
  • Communication

For implementation and experimentation, the QCar is equipped with an Nvidia Jetson TX2. The figure below illustrates the end-to-end deep learning system, its functions, and its interactions. The input to the proposed architecture is raw RGB frames collected from a camera at 30 fps. The QCar was driven manually via joypad through different environments and lighting conditions, recording a video on each run; the images were then extracted from these videos. Frames containing obstacles (humans or objects) form one class, and frames without obstacles form the second. A total of 1818 images were collected and split in an 80:20 ratio: 1455 images for training and 363 for validation. The images captured by the QCar's camera are 224x224 and are scaled down to 180x180 to serve as input to the CNN.
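The dataset split described above can be sketched as follows. The folder names and helper function are illustrative, not taken from the original code, and integer rounding here gives 1454/364 rather than the 1455/363 split used in the article:

```python
import random

def split_dataset(image_paths, train_fraction=0.8, seed=42):
    """Shuffle image paths and split them into train/validation lists."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    n_train = int(len(paths) * train_fraction)
    return paths[:n_train], paths[n_train:]

# 1818 frames extracted from the recorded runs, in two classes:
# "obstacle" (humans/objects) and "no_obstacle".
frames = [f"dataset/obstacle/img_{i:04d}.png" for i in range(909)] + \
         [f"dataset/no_obstacle/img_{i:04d}.png" for i in range(909)]

train_paths, val_paths = split_dataset(frames)
```

Shuffling before splitting ensures both classes and all lighting conditions appear in both the training and validation sets.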

END-TO-END DEEP LEARNING ARCHITECTURE

The network architecture consists of Conv2D layers, starting with 32 and 64 filters and growing through 128, 256, 512, and 728 filters, with padding and ReLU activations.
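A sketch of that stack in Keras is shown below. The filter progression (32, 64, 128, 256, 512, 728) follows the description above, but the strides, pooling, and classification head are assumptions, not the exact model from the post:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Conv2D blocks growing from 32 and 64 filters into 128/256/512/728,
# with "same" padding and ReLU activations throughout.
inputs = tf.keras.Input(shape=(180, 180, 3))
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
for filters in (128, 256, 512, 728):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # obstacle vs. no obstacle
model = tf.keras.Model(inputs, outputs)
```

A single sigmoid output suits the binary obstacle/no-obstacle classification described in this article.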

System Design Implementation

For implementation and experimentation, the QCar acts as the robot and communicates with a computer server. Code is sent over the wireless network using PuTTY, and XLaunch provides real-time visual data on the laptop/computer by communicating with the QCar. Here are the steps to access the QCar's remote desktop over SSH:

  • cd /Documents/testing (change into the target directory)

then run the script:

  • sudo python3 SelfDriving.py
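A minimal sketch of what a script like SelfDriving.py might do on the Jetson is shown below. The camera capture and motor commands are omitted (and `camera.read()` / `stop_or_steer_away()` are hypothetical placeholders); the interpreter calls are TensorFlow Lite's standard Python API:

```python
import numpy as np
import tensorflow as tf

def preprocess(frame):
    """Resize a 224x224 RGB frame to the 180x180 float batch the model expects."""
    img = tf.image.resize(tf.cast(frame, tf.float32) / 255.0, (180, 180))
    return tf.expand_dims(img, axis=0)

def obstacle_probability(interpreter, frame):
    """Run one TFLite inference; returns P(obstacle) from the sigmoid output."""
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], preprocess(frame).numpy())
    interpreter.invoke()
    return float(interpreter.get_tensor(out["index"])[0, 0])

# On the QCar this would run in a loop over camera frames, e.g.:
# interpreter = tf.lite.Interpreter(model_path="SelfDriving.tflite")
# interpreter.allocate_tensors()
# while True:
#     frame = camera.read()          # 224x224 RGB frame (hypothetical camera API)
#     if obstacle_probability(interpreter, frame) > 0.5:
#         stop_or_steer_away()       # placeholder for the avoidance maneuver
```

Running one inference per 30 fps frame is well within the Jetson TX2's capability for a network of this size.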

Results:

After executing the command, the QCar runs the model and we get the result.

Future Work:

This is not an optimal solution, since inconsistent false positives from object detectors would also cause false positives in our system. Here I only implemented a deep convolutional network for binary classification as an initial test. The QCar is capable of bigger tasks, such as a fully autonomous navigation system built with techniques like deep reinforcement learning. My aim is to implement a genetic algorithm for fully autonomous navigation, covering path planning, perception, control, and coordination.

Thanks for reading, and I hope you found this post helpful! Feedback and questions are welcome!

For code and supporting material, please check my GitHub @ https://github.com/talhaejazh


Talha Ejaz

Robotics Researcher (Machine Learning, Computer Vision, Autonomous Navigation, Big Data), Traveler, Nature, Portrait