Autobots, Roll IoT!

Building an Arduino-based edge analytics demo

This past summer, I had the opportunity to intern at the Accenture Tech Labs in San Jose. As a member of the Systems and Platforms team, my role was to implement an application to serve as a proof of concept for an Internet of Things and edge analytics platform. Project code can be found here.

What the app does:

A user picks a set of locations to visit, and a fleet of robot cars dispatch to every location in that set.

Phase I — Building the Car

First things first, I needed an edge device to carry out tasks. For this project, I used the ELEGOO Smart Robot Car Kit. The kit comes with a fairly straightforward building guide, so putting the car together was not too much of a problem. That said, the car’s overall build is somewhat shoddy; for every piece added, two fall off.

Phase II — Programming the Arduino

The kit provides a few Arduino files, written in C++, that define basic movements (forward, back, left, right) and sensing functions utilizing the Arduino’s I/O pins.

Movements

Unfortunately, the kit does not include any sensor that can be used to identify absolute location. Instead, I calculated distance using dead reckoning, which requires consistent units of measurement for the car’s movement. The car’s basic movements are too erratic for the device to accurately rely on rate and time to calculate distance. Thus, the best available solution was to make use of the very primitive line tracking sensors. Three of these are mounted on the bottom of the vehicle, each returning the value ‘1’ when facing a black surface and ‘0’ otherwise. One of ELEGOO’s provided files implements basic line tracking as follows:

  1. The car starts moving straight along a black line
  2. If the leftmost sensor stops reporting a black surface, the car has drifted left and should adjust right until all three sensors are once again on the black line.
  3. If the rightmost sensor stops reporting a black surface, the car has drifted right, and should adjust left.
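The adjustment rule above boils down to a small decision function. Here’s a Python sketch (the actual firmware is Arduino C++, and the function and sensor names here are mine, not the kit’s):

```python
def tracking_adjustment(left, mid, right):
    """Decide a steering correction from the three line sensors.

    Each sensor reports 1 over a black surface and 0 otherwise.
    Returns 'right' when the car has drifted left, 'left' when it
    has drifted right, and 'straight' when the car is on the line.
    """
    if left == 0 and right == 1:
        return "right"   # leftmost sensor lost the line: drifted left
    if right == 0 and left == 1:
        return "left"    # rightmost sensor lost the line: drifted right
    return "straight"
```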

On the physical topography, I laid out a grid of black lines reaching every possible destination location. Anywhere two perpendicular lines met, I placed a white space.

The final product.

Then I established discrete, uniform movements:

  • Track: Use the provided line tracking method to move forward along a black line until the middle line tracking sensor sees a white surface, indicating the car has reached the end of the line. Then continue moving forward for a fraction of a second to ensure the car is far enough forward to either detect the next available line or make an accurate turn.
  • Right Turn: Turn right until all three sensors have reported a white surface, and then continue turning until the sensors simultaneously report a black surface. This process ensures that the car fully leaves its current line and aligns properly with the line immediately to its right.
  • Left Turn: The same process as the discrete right turn, but turning left.
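The two-phase turn can be sketched as a loop over simulated sensor readings. This is a Python stand-in for the Arduino logic, driven by an iterable of `(left, mid, right)` tuples rather than real hardware:

```python
def right_turn(sensor_stream):
    """Consume (left, mid, right) readings while the car turns right.

    Phase 1: keep turning until all three sensors report white (0),
    meaning the car has fully left its current line.
    Phase 2: keep turning until all three simultaneously report
    black (1), meaning the car is aligned with the next line.
    Returns the number of readings consumed, a stand-in for how
    long the motors ran.
    """
    steps = 0
    left_old_line = False
    for left, mid, right in sensor_stream:
        steps += 1
        if not left_old_line:
            if left == 0 and mid == 0 and right == 0:
                left_old_line = True      # fully off the old line
        elif left == 1 and mid == 1 and right == 1:
            return steps                  # aligned with the new line
    raise RuntimeError("turn never completed")
```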

With these movements defined, the car can travel between any two locations on the board using some sequence of discrete instructions. To improve accuracy, I also optimized and adjusted a few functions:

  • Turns taken to realign during the “Track” action move slower than normal turns to avoid over-adjustment.
  • After the “Track” action’s ending forward lurch, the car realigns if any sensor sees a black line.
  • If, at the start of a “Track” action, no sensor is over a black line, the car turns in the direction opposite the most recent tracking adjustment. (Note: This correction is an educated guess, as the cars tend to over-adjust during tracking.) If there have been no recent tracking adjustments, the car reports that it is off its intended track, and does not carry out the task.
  • U-Turn: A new discrete action only to be used at a dead end. The car performs a full 180° turn in the same direction as its most recent tracking adjustment. (Note: This is another educated guess to improve accuracy, but the car can usually turn in either direction at a dead end to perform a 180° turn.) If the car detects any black surface at the start of the function, it is not at a dead end and must be off its intended path, so it reports an error.

Collision Avoidance

The kit includes one more useful sensor — an HC-SR04 ultrasonic distance sensor — which I implemented to increase the overall utility and adaptability of the car and to prevent multi-car pileups. Remember how every piece added causes two more to fall off? Still true here. Here are a few fun problems caused by the addition of the distance sensor:

Problem: Because the Arduino has a single processor, only one function (such as reading the ultrasonic sensor, line tracking, or movement) can run at any given time. Unfortunately, the ultrasonic sensor needs an indefinite amount of time to get a good reading, which causes other functions to suffer. For example, if the car drifts off a line while calculating distance, it can’t adjust for a potentially fatal amount of time.

Solution: The ultrasonic sensor takes an amount of time roughly proportional to the distance of the object it’s sensing (plus some constant lag time). I set a timeout so the sensor only reports the distances of very close objects, which can be determined quickly and are all the cars really need anyway.
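The timeout math works because the HC-SR04’s echo pulse duration is proportional to the round-trip distance of the sound. A minimal sketch of that conversion (the 30 cm cap is an illustrative assumption, not the project’s actual constant):

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # at roughly room temperature
MAX_DISTANCE_CM = 30               # assumption: only near objects matter

# Echo time grows with distance, so capping the wait bounds the read time.
TIMEOUT_US = 2 * MAX_DISTANCE_CM / SPEED_OF_SOUND_CM_PER_US

def distance_from_echo(echo_us):
    """Convert an echo pulse duration (microseconds) to distance in cm.

    Returns None when the pulse exceeded the timeout, i.e. the nearest
    object is farther than we care to measure.
    """
    if echo_us > TIMEOUT_US:
        return None
    # Sound travels to the object and back, so halve the round trip.
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2
```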

Problem: Some of the sensors repeatedly report a nearby object even when none exists. This false reading causes the cars to stop unnecessarily throughout their runs.

Solution: The faulty value reported is usually 5 cm, so I just added a condition to ignore that particular value if it occurs. Actual obstructions should never be that close anyway, since the threshold for stopping the car is 25 cm.
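Combining the timeout and the phantom-value filter, the stop decision reduces to a few lines (a sketch; the thresholds come straight from the text, the function name is mine):

```python
STOP_THRESHOLD_CM = 25   # halt the car when an obstacle is this close
PHANTOM_CM = 5           # the sensors' recurring false reading

def should_stop(distance_cm):
    """Decide whether a distance reading should halt the car.

    Readings of exactly 5 cm are discarded as sensor noise; real
    obstacles trigger a stop well before they get that close.
    """
    if distance_cm is None:          # timed out: nothing nearby
        return False
    if distance_cm == PHANTOM_CM:    # known phantom reading
        return False
    return distance_cm < STOP_THRESHOLD_CM
```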

Problem: The sensor faces directly forward, so its field of view is too far above other cars to detect them.

Solution: I MacGyver’d the sensor so it tilts downward.

My engineering career is coming along nicely.

Phase III — Programmatic Communication

All communication between the car and the application occurs through a Bluetooth serial connection.

From Python to the Device

Single-character instructions are sent to the Arduino using Python’s pySerial library. The microcontroller then translates these instructions into discrete actions.
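The sending side is small. Here’s a sketch; the command characters are hypothetical (the real project’s mapping may differ), and `port` is anything with a pySerial-style `write()`, so a stub can stand in for the real connection:

```python
# Hypothetical single-byte command map; the project's actual
# characters may differ.
COMMANDS = {"track": b"T", "left": b"L", "right": b"R", "uturn": b"U"}

def send_action(port, action):
    """Write the one-byte instruction for `action` over a serial port.

    `port` needs only a pyserial-style write() method, so the real
    connection (e.g. serial.Serial("/dev/rfcomm0", 9600)) and a test
    stub are interchangeable.
    """
    port.write(COMMANDS[action])
```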

From the Device to Python

At the beginning and end of any process, the Arduino emits a specially formatted serial message containing a response code, a status message, and any other desired sensor information, such as the distance to a nearby obstacle. This message is then parsed and stored within the Python application.
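Parsing that message is a one-liner once a delimiter is fixed. This sketch assumes a hypothetical `code|status|extra` layout; the project’s actual format may differ:

```python
def parse_response(line):
    """Parse one status line emitted by a car.

    Assumes a hypothetical 'code|status|sensor-info' layout,
    e.g. b'0|done|d=17'.
    """
    code, status, extra = line.decode().strip().split("|", 2)
    return {"code": int(code), "status": status, "extra": extra}
```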

Multiple Cars

During startup, the application binds each paired Bluetooth device to an individual communication port. For each car, it then opens an independent thread dedicated to receiving incoming serial messages. Instructions are sent sequentially to every device during the application’s main update loop.
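One reader thread per car can be sketched with the standard library: each thread pumps raw lines from its port onto a queue that the main update loop drains. Again, `port` here is anything with a pySerial-style `readline()`:

```python
import threading
import queue

def start_reader(port, inbox):
    """Spawn a daemon thread that forwards lines from one car's port.

    Each raw line from `port.readline()` is pushed onto `inbox`, a
    queue.Queue drained by the main update loop. An empty read
    (timeout or closed port) ends the thread.
    """
    def pump():
        while True:
            line = port.readline()
            if not line:
                break
            inbox.put(line)
    t = threading.Thread(target=pump, daemon=True)
    t.start()
    return t
```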

Phase IV — Pathfinding

When a task is assigned to a car, that task contains a specific destination. To fulfill the task, the car needs a route from its current position to that destination. To calculate these routes, the application uses graph search algorithms.

State Space

Rather than searching a graph modeled exactly after the nodes and edges of the board, I defined a state space and searched a graph of states. In this case, each state is a set of two coordinates denoting a location on the board, as well as a direction (North, South, East, West). Each directional edge represents, and is labeled with, the single discrete step that moves the car from the edge’s source state to its destination state.
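The transition rule generating those edges can be sketched directly: ‘track’ advances one cell along the current heading, while ‘left’ and ‘right’ rotate in place. Board bounds and walls are omitted here, and the action names are mine:

```python
# A state is ((x, y), heading).
HEADINGS = ["N", "E", "S", "W"]
MOVES = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}

def successors(state):
    """Yield (action, next_state) pairs reachable in one discrete step."""
    (x, y), h = state
    i = HEADINGS.index(h)
    dx, dy = MOVES[h]
    yield ("track", ((x + dx, y + dy), h))            # advance one cell
    yield ("left", ((x, y), HEADINGS[(i - 1) % 4]))   # rotate in place
    yield ("right", ((x, y), HEADINGS[(i + 1) % 4]))
```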

A small portion of the state space graph. The red circles represent nodes, and the black/grey lines represent edges.

NetworkX

To execute this phase, I utilized Python’s NetworkX package. I meticulously added each required vertex and edge for my board to the graph, which I then serialized to avoid having to rebuild on every run of the project. NetworkX’s shortest_path function can run on that graph to quickly find a path between a given pair of states. This path of edges is then translated into an ordered list of discrete commands for the car.
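The graph-building and planning steps look roughly like this (function names and the edge-triple input format are my own sketch, not the project’s code):

```python
import networkx as nx

def build_graph(edge_triples):
    """Build the labeled state graph.

    `edge_triples` yields (src, action, dst), where each state is
    ((x, y), heading) and the discrete action labels the edge.
    """
    G = nx.DiGraph()
    for src, action, dst in edge_triples:
        G.add_edge(src, dst, action=action, weight=1)
    return G

def plan(G, start, goal):
    """Return the ordered list of discrete commands from start to goal."""
    path = nx.shortest_path(G, start, goal, weight="weight")
    return [G.edges[u, v]["action"] for u, v in zip(path, path[1:])]

# The built graph can be serialized once (e.g. with the pickle module)
# and reloaded on later runs instead of being rebuilt.
```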

Collision Prevention

I decided to increase the project’s reliability by preventing any intersection between assigned paths. Here’s how I implemented that:

  • Add a weight value to every edge, initialized at 1.
  • After a car is assigned a path, set the weight of every edge adjacent to a vertex whose coordinate lies on that path to a value larger than the total number of edges in the graph.
  • Whenever a car traverses and exits a coordinate on its path, reset to 1 the weights of all edges adjacent to vertices containing that coordinate.
  • Any newly requested shortest path will either avoid any locations on a previously assigned and untraversed path, or have a weight greater than the number of edges in the graph, which indicates the absence of a non-intersecting path.
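The lock/release cycle above can be sketched on the NetworkX graph. Since each state is `(coordinate, heading)`, `u[0]` below is a vertex’s coordinate; the function names are mine:

```python
import networkx as nx

def lock_path(G, path_coords):
    """Inflate weights around every coordinate on an assigned path.

    Any edge touching a vertex whose coordinate lies on the path gets
    a weight larger than the graph's edge count, so a new shortest
    path only crosses reserved cells when no disjoint route exists.
    """
    penalty = G.number_of_edges() + 1
    blocked = set(path_coords)
    for u, v in G.edges:
        if u[0] in blocked or v[0] in blocked:
            G.edges[u, v]["weight"] = penalty

def release_coord(G, coord):
    """Reset weights once a car has traversed and exited `coord`."""
    for u, v in G.edges:
        if u[0] == coord or v[0] == coord:
            G.edges[u, v]["weight"] = 1
```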

Phase V — Assigning and Executing Tasks

Tasks

From a user’s point of view, a task is simply a desired destination, represented as a distinct integer value that corresponds to a box on the topography. The application translates this integer into a particular state in the state space and uses it directly for pathfinding.

Car and Task Queues

Tasks are queued in the order in which they are received. During the main update loop, available cars are assigned to the first tasks they can fulfill.

Action Queues

When a particular car is assigned a task, all the required actions and their corresponding states are placed into an action queue for that car. When the device finishes any discrete movement, it emits a response code indicating its readiness for a new instruction. The application then pops the next movement instruction from the car’s queue and sends it to the edge device. This method has the advantage of allowing the cars to be stopped or rerouted at any time.
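The pop-and-send handshake is what makes mid-route stops and reroutes cheap: only one action is ever “in flight” per car. A minimal sketch (the ready code and class layout are assumptions):

```python
from collections import deque

READY_CODE = 0  # hypothetical response code: "awaiting instruction"

class Car:
    """Minimal per-car state: a port and a queue of pending actions."""
    def __init__(self, port):
        self.port = port
        self.actions = deque()

def on_response(car, code):
    """Send the next queued action once the car reports readiness.

    Popping one action at a time, rather than sending the whole plan
    up front, is what allows a car to be stopped or rerouted at any
    point along its route.
    """
    if code == READY_CODE and car.actions:
        car.port.write(car.actions.popleft())
```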

All Together Now

The current product in motion. The cars are first dispatched to the locations with pens, then to the locations with markers, and finally to the three leftmost squares.

Remaining Problems

There are still a few known bugs that break the functionality of this iteration of the project.

Endless Standoff

If two cars face each other in close proximity at any point in a run, they will stop moving entirely due to the collision prevention efforts of the distance sensor. This may occur even when the cars are not assigned intersecting paths, such as when cars are in adjacent squares exactly one column apart. Possible solutions include relegating distance sensor reads to occur only during tracking motions, or adding an additional command for the car to ignore distance sensor reads in particular instances.

Initiation

The cars must be started in specific positions and in the correct order. Otherwise, the dead reckoning will not succeed.

Movement Inaccuracy

Despite my best efforts to maximize movement accuracy, the cars occasionally fail actions. Since location is determined entirely through dead reckoning, a single failed action usually causes complete failure in all subsequent tasks. The best way to solve this issue would be the implementation of additional, more reliable sensors, such as a camera or advanced line tracker setup. Since I did not have these options, I simply added a function to stop all cars and print their intended locations, allowing them to be manually adjusted and restarted without having to kill the entire run.

Possible Future Improvements

Here are ways in which the project could be improved in future iterations.

Add more/better sensors to improve car accuracy

  • Camera: Could scan lines or markers on the board to accurately determine location. Could also automatically adjust the car to move it in accurate discrete increments.
  • Better line trackers: Could more accurately adjust the car to keep it on the line. Color detection could allow for stops that provide more informative location information and differentiate themselves from off-road areas.

Utilize AI to improve car accuracy

  • Reinforcement learning: Rather than manually programming solutions for every movement edge case, allow each car to run online reinforcement learning on tracking and turning motions to determine its own individual tendencies and adjust accordingly.
  • Downside to RL: Likely difficult to implement given the lack of information to automatically detect rates of success or failure. Also too many external variables, such as car battery level, which cannot be determined with the provided sensors.

Automatic Cartography

Using a camera, detect a grid layout and automatically map a corresponding state space graph.

Intelligent Task Assignment

In the project’s current implementation, tasks are assigned to the first available car. As one might imagine, this method often results in cars traveling unnecessary distances. A few possible alternate methods of task assignment include:

  • Optimal Solution: For a given set of cars and tasks, find the shortest total set of instructions for each car such that each task location is touched exactly once by any car. This is exactly the Open Vehicle Routing Problem, which is NP-hard.
  • Proximity Search: Initially search only for available cars within a close range of a given task destination. Gradually increase the search radius until an available car is discovered.
  • Greedy: Assign each available car to the task with the closest destination.
  • Heuristic: A number of Vehicle Routing Problem heuristic functions are available online and in research papers, which could allow for near-optimal task assignments.
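Of these, the greedy option is the simplest to sketch. Here each available car takes the nearest unclaimed task; `path_len` stands in for the planned route length (e.g. from the shortest-path search), and all names are hypothetical:

```python
def greedy_assign(cars, tasks, path_len):
    """Assign each available car to its nearest pending task.

    `path_len(car, task)` returns the planned route length between a
    car's position and a task's destination. Each task is claimed by
    at most one car; leftover cars stay idle.
    """
    assignments = {}
    pending = list(tasks)
    for car in cars:
        if not pending:
            break
        best = min(pending, key=lambda t: path_len(car, t))
        assignments[car] = best
        pending.remove(best)
    return assignments
```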

Special thanks to Teresa Tung, Nivas Yelisetty, Anu Chintalapally, Chris Kang, Desmond Duggan, Michael Giba, and Andre Askarinam, who were enormous sources of help and support in this project and throughout my time at Accenture.