Bob is following triangles … Hello OpenCV / ESP8266 / MQTT!

Bob is following another Bob!

It’s been a while since my last update on Bob the robot. In this post I’ll share my progress, which is very exciting!

I’ve decided to drop the IR communication and multi-Arduino implementation, as it was way too complex and shifted my attention away from the main goal, which is to build a group of robots capable of identifying each other and communicating.

Source code available here.

Hello OpenCV + Python + ESP8266 + MQTT

After realizing that my solution was over-complicated, I’ve decided to redesign and re-architect it, and came up with the following components:

  1. robot mechanics — I’ve made some modifications to the robot chassis and frame: a lower-profile design, a lower center of gravity, smaller wheels, and a USB power bank as the power supply.
  2. robot control/logic — I’ve decided to use an ESP8266 instead of an Arduino Mini; the main reason for this change is the on-board WiFi. In addition, I’ve replaced the L293D chip with an L9110S due to its lower voltage requirements (2.5V–12V).
  3. vision — Instead of using IR sensors and encoded messages, I’ve decided to use an Android phone camera as the robot’s eyes. A phone attached to the robot broadcasts video to the video-processing server, i.e. the Android phone acts as an IP camera. This design allows me to use an Android phone as a camera or connect any IP camera for video transmission.
  4. image processing — OpenCV is used to detect shapes, and some Python code makes decisions on what the robot should do; more on that later.
  5. communication — MQTT serves as the communication layer, passing messages between robots and software components. I’m using mosquitto as my MQTT broker (a minimal connection sketch follows this list).
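
To make the communication layer concrete, here is a minimal connection sketch using the paho-mqtt client (1.x-style callbacks) against the mosquitto broker. The broker address and the “bob1” thing id are placeholders for illustration, not the exact code from the repository:

```python
# Minimal sketch: connecting a component to the mosquitto broker with paho-mqtt.
# "bob1" and the broker address are illustrative placeholders.
import paho.mqtt.client as mqtt

THING_ID = "bob1"          # hypothetical robot id
BROKER = "192.168.1.10"    # assumed LAN address of the mosquitto broker

def on_connect(client, userdata, flags, rc):
    # each component subscribes only to the channels it cares about
    client.subscribe(f"/{THING_ID}/sensors/eyes")

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, 60)
client.loop_forever()
```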

Vision Script

The following clips demonstrate how the video processing works:

Basically, I’m running a Python script which connects to the IP/Android camera and handles each frame. The script runs a ‘detection’ routine on every frame, which returns a result object with the following data:

[ screen resolution, detected objects ]

For each detected object, the following information is returned:

[ x, y, size, x center offset, y center offset ]

x/y center offset — how ‘far’ the object is from the image center.

Each result object is published to the following MQTT channel: /thing_id/sensors/eyes
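
The real detection code lives in the repository linked above; the sketch below only shows the shape of the flow: grab a frame from the phone’s stream, approximate contours to find triangle-like shapes, build the result object and publish it. The stream URL, broker address, thresholds and OpenCV 4.x return values are my assumptions:

```python
# Sketch of the per-frame detection flow: read a frame from the phone's video
# stream, look for triangle-like contours, and publish a result object.
# Assumes OpenCV 4.x and paho-mqtt; URLs, ids and thresholds are illustrative.
import json
import cv2
import paho.mqtt.client as mqtt

THING_ID = "bob1"                               # hypothetical robot id
STREAM_URL = "http://192.168.1.20:8080/video"   # assumed Android/IP-camera stream

client = mqtt.Client()
client.connect("192.168.1.10", 1883, 60)        # assumed broker address
client.loop_start()

def detect(frame):
    """Return the frame resolution plus [x, y, size, x offset, y offset] per target."""
    h, w = frame.shape[:2]
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    objects = []
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.04 * cv2.arcLength(c, True), True)
        if len(approx) == 3 and cv2.contourArea(c) > 200:   # triangle, ignore noise
            x, y, bw, bh = cv2.boundingRect(approx)
            cx, cy = x + bw // 2, y + bh // 2
            objects.append([cx, cy, bw * bh, cx - w // 2, cy - h // 2])
    return {"resolution": [w, h], "objects": objects}

cap = cv2.VideoCapture(STREAM_URL)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    client.publish(f"/{THING_ID}/sensors/eyes", json.dumps(detect(frame)))
```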

Robot Control Script

The vision script mentioned above is responsible for target detection; the robot control script is responsible for reacting to the events triggered by the vision script:

For each vision message, the robot control script performs some validations (is it a new message? are there any targets? …) and decides how to act: stop, turn left, turn right, move forward or move backward, then broadcasts this decision to the robot on the following MQTT channel: /thing_id/hw_control.

Before actually submitting the message, some additional validations are made to avoid sending the same message twice.
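
A rough sketch of that decision loop, reusing the message format from the vision sketch above; the offset threshold, broker address and “bob1” id are illustrative rather than the repository’s exact logic:

```python
# Rough sketch of the control loop: react to vision messages, pick an action,
# and only publish it when it differs from the previous one.
# The threshold, broker address, "bob1" id and message format mirror the
# vision sketch above and are illustrative.
import json
import paho.mqtt.client as mqtt

THING_ID = "bob1"
last_command = None        # remember the last command to avoid re-sending it

def decide(result):
    """Map a vision result to a movement command."""
    if not result["objects"]:
        return "stop"                                     # no targets -> stop
    x, y, size, x_off, y_off = result["objects"][0]       # follow the first target
    if x_off < -40:
        return "left"
    if x_off > 40:
        return "right"
    return "forward"

def on_connect(client, userdata, flags, rc):
    client.subscribe(f"/{THING_ID}/sensors/eyes")

def on_message(client, userdata, msg):
    global last_command
    command = decide(json.loads(msg.payload))
    if command != last_command:            # don't send the same message twice
        client.publish(f"/{THING_ID}/hw_control", command)
        last_command = command

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("192.168.1.10", 1883, 60)   # assumed broker address
client.loop_forever()
```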

Robot Low-Level Control Script

This is basically low-level ESP8266 code written in Arduino which does the following: subscribes and listens to the hardware commands channel (i.e. /thing_id/hw_control).

accepted commands (a quick way to send them by hand is sketched after the list):

  • forward
  • backward
  • left
  • right
  • set movement speed
  • set turn speed
  • set command timeout
  • help
  • status
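
Since the protocol is just plain MQTT messages, the robot (or the simulator described below) can also be driven by hand from a desktop. A rough sketch, reusing the same illustrative broker address and thing id as above:

```python
# Driving the robot (or the simulator) by hand over the same command channel.
# Broker address and thing id are the same illustrative placeholders as above.
import time
import paho.mqtt.client as mqtt

THING_ID = "bob1"
client = mqtt.Client()
client.connect("192.168.1.10", 1883, 60)
client.loop_start()

for command in ["forward", "left", "right", "backward", "status"]:
    client.publish(f"/{THING_ID}/hw_control", command)
    time.sleep(1)     # give the previous command time to run / time out
```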

Testing and simulation

It didn’t take me long to understand that testing my code on a ‘live’ robot was going to be challenging; the damn thing has a will of its own! And I really don’t like getting up and moving it back to the testing location every 3 minutes…

In addition, I had some challenges with the vision script, especially in poor lighting conditions.

To overcome this, and to allow myself to enjoy my cup of coffee while coding, I’ve decided to write a small Unity 3D app to simulate the robot.

Working principle:

  • App listens to the same MQTT channel as the real robot
  • Using Unity’s FPS controller, I can move the player (i.e. the robot) when a robot control command is received

Now, the only missing part was connecting the vision script to the simulator. To achieve this I’ve written a Python script which can capture video from a window. This way, testing was easy!
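
I won’t claim this is exactly how the capture script in the repository works, but the idea can be sketched with the mss screen-capture library: grab the region of the screen where the simulator window sits and feed the frames into the same detection flow (the region values below are placeholders):

```python
# Sketch: capture the Unity simulator window with the mss library and feed the
# frames into the same detection flow as the camera. The screen region values
# are placeholders for wherever the simulator window sits.
import numpy as np
import cv2
import mss

REGION = {"top": 100, "left": 100, "width": 800, "height": 600}

with mss.mss() as sct:
    while True:
        shot = sct.grab(REGION)                           # raw BGRA screenshot
        frame = cv2.cvtColor(np.array(shot), cv2.COLOR_BGRA2BGR)
        # from here on, `frame` goes through the same detect()/publish steps
        # as the vision script above
        cv2.imshow("simulator feed", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):             # quit with 'q'
            break
```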

Next Steps

Currently I’m very happy with myself, and my robots are able to chase each other. Next, I want to add more in-depth detection abilities so each robot will be able to know which robot it’s facing.