I’ve been very busy since my last post several months ago. I packed away the Lego sorter and took a break to think about my next project.
I had the opportunity to visit a plastic injection molding factory, and I was inspired by one of the automated picking robots I saw. I thought:
That robot is doing a simple movement along the X, Y, Z axis and it should be easy to replicate.
I was wrong. It’s not easy. At all.
But after a few iterations, I've made progress and ended up with a reasonably capable robot that can be put to practical use.
A couple of things will happen in this blog series:
- I learned a lot, and I'll explain it all in detail.
- This time, I invested a lot of effort to ensure my work can be replicated; keep reading and you'll find the work end to end.
- If you have kids, toys are a fantastic way of sharing your work.
All right, let’s go deep.
The Machine in Action
I want one!
You will also find step-by-step instructions, and you'll have the opportunity to order a kit so you can go faster.
I'm working on getting the code published and shared, so hang tight…
The Source of Inspiration and the Iteration Process
I know that at this point, the story of how I got to this machine is probably a secondary topic.
If you are interested in reading more about where I got my inspiration and the process I went through on each iteration, you can read this post:
Why is this a Toy Picker? (Family Advertisement)
Just as with the Lego Sorter, my wife allowed me to work on this project only if I could build it in a “family-accessible” place. We chose the dining room and when Iteration 4 went “into production”, I started getting fantastic help and feedback from my kids and they started bringing in ideas (and energy) to test the machine.
This is my first time working with a robot, and I have to admit that I’m fascinated by this world.
By design, I tried to keep this robot as simple as possible, and I was able to work out most of the math with simple trigonometry and vectors.
The capabilities of this machine are well illustrated in the diagram below:
Physical setup showcasing X-axis range (in red):
With this robot, I’m using a Raspberry Pi and Arduino, and the setup is fairly straightforward (aside from the huge amount of cables).
The Y- and Z-axis servos are controlled directly by the Arduino, with the help of an LM2596 voltage regulator, as the servos I used require 6.8 V (much higher than what the Arduino can supply).
For the X-axis, I am using a NEMA 17 stepper motor driven by an A4988 stepper driver, with power coming directly from the ATX power supply I'm using.
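To give a feel for how the X-axis stepper translates into linear motion, here is a minimal Python sketch converting a target distance into A4988 step pulses. All the numbers (200 full steps per revolution, 1/16 microstepping, a 20-tooth GT2 pulley) are illustrative assumptions, not measurements from this build:

```python
# Sketch: converting a linear X-axis move into A4988 step counts.
# Assumed values (placeholders, not from this build): 200 full steps/rev
# NEMA 17, 1/16 microstepping on the A4988, 20-tooth GT2 pulley (40 mm/rev).
FULL_STEPS_PER_REV = 200
MICROSTEPS = 16
MM_PER_REV = 40.0  # 20-tooth GT2 pulley, 2 mm belt pitch

STEPS_PER_MM = FULL_STEPS_PER_REV * MICROSTEPS / MM_PER_REV  # 80 steps/mm

def mm_to_steps(distance_mm: float) -> int:
    """Round a linear X move to the nearest whole microstep."""
    return round(distance_mm * STEPS_PER_MM)

print(mm_to_steps(100.0))  # 8000 steps for a 100 mm move
```

With these assumed values, one microstep is 12.5 µm of carriage travel, which is far finer than the mechanical play in the frame.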
The real world setup looks like this:
- Arduino, A4988 and LM2596
- Raspberry Pi and ATX breakout
- Camera Pi
The setup I built enabled me to leverage OpenCV and the Pi Camera to detect the location of objects in the real world.
This task took me a fair bit of effort, and I eventually found a reliable way to do it, which I will be sharing in its own post. In a nutshell, the Pi Camera can detect an object and pinpoint its real-world X, Y coordinates.
At its core, the task requires camera calibration, as well as transforming the projected image into real-world coordinates.
From the outside, almost all of our mobile devices perform these kinds of operations as a matter of course, but achieving this myself proved quite challenging.
The basics of the approach are found in the OpenCV docs, but if you are like me, you will find a ton of benefit in the blog post I wrote on how to effectively succeed at this task.
You can deep-dive into how to achieve this in your project through my documentation (https://www.fdxlabs.com/calculate-x-y-z-real-world-coordinates-from-a-single-camera-using-opencv/)
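To give a taste of what the full write-up covers, here is a minimal sketch of the final back-projection step, assuming an idealized overhead pinhole camera with no lens distortion. The intrinsics (fx, fy, cx, cy) would come from a prior OpenCV calibration; the numbers below are placeholders, not my actual calibration values:

```python
# Sketch: back-projecting an image pixel to table-plane coordinates with a
# pinhole camera model. Assumes the camera looks straight down at a table a
# known distance Z away, with no distortion. fx, fy, cx, cy would come from
# an OpenCV calibration; these values are placeholders for illustration.
FX, FY = 620.0, 620.0   # focal lengths in pixels (placeholder)
CX, CY = 320.0, 240.0   # principal point for a 640x480 frame (placeholder)
Z_MM = 500.0            # camera height above the table in mm (placeholder)

def pixel_to_world(u: float, v: float) -> tuple:
    """Map an image pixel (u, v) to table-plane (X, Y) in millimetres."""
    x = (u - CX) * Z_MM / FX
    y = (v - CY) * Z_MM / FY
    return (x, y)

print(pixel_to_world(320.0, 240.0))  # (0.0, 0.0): principal point maps to origin
```

The real pipeline also has to undistort the image and handle a camera that is not perfectly perpendicular to the table, which is where most of the calibration effort goes.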
As in any modern solution, the magic typically is driven by the software.
For this project, I realized there is as much work in the mechanical and electronic components as there is in the software.
I have more experience on the software side, and personally, I found myself spending time at a ratio of about 4:1 (mechanical/electrical : software).
This is a high-level view of the software components. Given that software is my area of expertise, I consider it to be fairly simple, but I realize that someone who comes from the mechanical/electrical side may find their personal "ratio" flipped from mine.
I consider that I've done a "fairly good job" of writing readable code this time, and I will be publishing a series of posts on what I've found to be the most challenging parts of the software.
By Difficulty Rank, 1 being the highest:
- Image to Real World Coordinate Translation
- Camera Calibration (using OpenCV)
- Image Detection (using OpenCV)
- X, Y, Z Coordinate to Servo angle Conversion (I’ve learned this is the world of Kinematics)
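For the last item, a flavor of what the coordinate-to-angle conversion looks like: the inverse kinematics of a two-link planar arm reduce to the law of cosines and `atan2`, which is the kind of trigonometry mentioned earlier. The link lengths here are placeholders, not the dimensions of this arm:

```python
# Sketch: two-link planar inverse kinematics -- the trigonometry that the
# X, Y, Z -> servo angle conversion boils down to for one plane of the arm.
import math

L1, L2 = 120.0, 120.0  # upper-arm and forearm lengths in mm (placeholders)

def ik_2link(x: float, y: float) -> tuple:
    """Return (shoulder, elbow) angles in degrees for a reachable (x, y)."""
    d2 = x * x + y * y
    cos_elbow = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)  # law of cosines
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)  # elbow-down solution
    shoulder = math.atan2(y, x) - math.atan2(L2 * math.sin(elbow),
                                             L1 + L2 * math.cos(elbow))
    return (math.degrees(shoulder), math.degrees(elbow))
```

For example, a target at full reach straight ahead, `ik_2link(240.0, 0.0)`, yields both angles at 0 degrees, and every reachable point has a mirrored "elbow-up" solution as well.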
Things I would like to change
I'm very happy with the results of this project, but I have a bit more ambition and am thinking about doing a sixth iteration.
These are a couple of things I believe that can be improved:
Grip, Grip, Grip.
The mechanical grip design has to be the first thing to improve on:
- Detecting a successful grip of an object.
- Rotating the grip from 0 to 90 degrees (relative to the X-axis), for the flexibility to pick up an object in any orientation.
- Rotating the grip from 0 to 90 degrees (relative to the Z-axis), so the arm can place things "into a cabinet".
There is tangible opportunity to improve the arm:
- Swapping the servos for stepper motors to improve accuracy. The design is already fairly modular, so this should only take a bit of time.
- Self-calibration along the X, Y, and Z axes, especially on the axes driven by stepper motors.
- Increasing the length of the arms, in conjunction with incorporating strings to increase the payload the arm can handle.
I was able to get the camera and real-world coordinate translation working, but it requires a significant amount of calibration and effort.
I don't have a "straight-line" plan for this, but there is a significant opportunity to make the object detection much easier.
The overall frame has its own share of opportunities. For one, it has a bit of play in my current setup, and I'm hitting a hard limit in sourcing smooth rods longer than 1 meter. There are a lot of opportunities to make longer and more stable movements possible across the X-axis.
The cable count on the arm is roughly 7, and as I think about adding more sensors, it is becoming a limiting factor to expanding the arm's movement.
There is a huge opportunity to simplify the cabling of the arm, and there are many ways of solving it. One of the leading solutions I'm considering is mounting the Arduino on the arm, and perhaps adding wireless communication to it, which would cut the wiring down to 2 power wires.
I can go on and probably write a book here, but I’ll leave this post at a high level and split this project into several blocks.
If you are interested in this project, you should expect the following from me:
- Ability to Replicate this kit (link here if you don't want to wait for me to post it all on Medium)
- Ability to Understand the Software Components
I am ready to be very agile and will publish the most important documentation for each of the steps I consider crucial. You will be able to see the progress here, so stay tuned!
Thank you for reading and stay tuned for deep-dives into this!