Object Recognition and Object Location using UR5e and Kinect V2 (Part 1/3)
Nov 23, 2019
In this project, we will be building a ROS environment to detect, locate, and capture objects using a UR5e and a Kinect V2. The UR5e will start approaching the object as soon as it is detected.
The main focus of this project is to have the robot perform a pick & place in the real world. The reason I decided to experiment with this is that it will benefit my current and future research, which involves using a UR5e and a vision camera with ROS.
Packages that will be used:
- IAI Kinect2 — Used to drive the Kinect V2 RGB-D camera (see the subscriber sketch after this list).
- Universal Robot — Used to control the 6-DOF robot.
- Creating your own object detector — Used to train a detector on the image data for object recognition.
- Object detection from images/point cloud using ROS — Used as a reference for applying object detection within the ROS environment.
- easy_handeye: TF / VISP Hand-Eye Calibration — This package will be used to perform the robot-camera (hand-eye) calibration.
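To confirm the camera side is working before wiring everything together, a minimal rospy subscriber to the kinect2_bridge color stream could look like the sketch below. The topic name /kinect2/qhd/image_color_rect is the usual kinect2_bridge default, but it is worth verifying with rostopic list on your setup.

```python
#!/usr/bin/env python
# Minimal sketch: subscribe to the color stream published by kinect2_bridge
# and convert each frame to an OpenCV image for the detector.
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def on_image(msg):
    # Convert the ROS Image message to a BGR numpy array.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    rospy.loginfo_throttle(5, "got frame %dx%d" % (frame.shape[1], frame.shape[0]))

if __name__ == "__main__":
    rospy.init_node("kinect2_listener")
    # Assumed topic name; check `rostopic list` after launching kinect2_bridge.
    rospy.Subscriber("/kinect2/qhd/image_color_rect", Image, on_image, queue_size=1)
    rospy.spin()
```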
Plan for the next two weeks:
Week 1:
- Set up the packages
- Train the object detector on the image data
- Calibrate the camera and the robot
Week 2:
- Write a move.py to move the robot after the target is detected (a rough sketch is shown after this list)
- Move the robot when the object is found
- Stop the robot when the object is captured
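As a preview of what move.py might look like, here is a rough sketch using moveit_commander. The topic name /detected_object_pose is a placeholder for whatever the detector node ends up publishing, and "manipulator" is the planning-group name used by the Universal Robot MoveIt configs; both would need to be adjusted to the actual setup.

```python
#!/usr/bin/env python
# Rough sketch of move.py: receive a detected object pose and send the UR5e
# toward it with MoveIt.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

def on_target(pose_msg):
    # Plan and execute a motion toward the detected object's pose.
    group.set_pose_target(pose_msg.pose)
    success = group.go(wait=True)
    group.stop()                  # halt any residual motion once done (or on failure)
    group.clear_pose_targets()
    rospy.loginfo("move %s", "succeeded" if success else "failed")

if __name__ == "__main__":
    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("move_ur5e")
    # "manipulator" is the usual planning group in the UR MoveIt configs.
    group = moveit_commander.MoveGroupCommander("manipulator")
    # /detected_object_pose is an assumed topic published by the detection node.
    rospy.Subscriber("/detected_object_pose", PoseStamped, on_target, queue_size=1)
    rospy.spin()
```

Stopping the robot once the object is captured (the last item above) would hook in around the group.stop() call, for example by ignoring further target messages after a successful grasp.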