MYNT EYE D1200 3D Camera first look
A small-sized stereo camera
Today I’d like to share my first results from experimenting with the MYNT EYE D1200 stereo camera. This camera is designed for mobile devices and ships with an Android SDK, but for me it is interesting mostly as a sensor for small-sized mobile robots. As a test platform I will use the NVIDIA Jetson Nano Developer Kit.
Specs
The MYNT EYE D1200 is lightweight and compact: it weighs just 44 g, measures 76x35x13 mm, and has a 40 mm baseline (the separation between the cameras). The maximum depth image resolution is 1280x720. The RGB image can be provided at a maximum resolution of 2560x720 in stereo mode or 1280x720 in single-camera mode. The depth sensing range is 0.2–3 m with a field of view of D:66° H:59° V:35°.
For better depth sensing the camera has an IR structured-light emitter. It improves depth map quality in darkness and on flat surfaces without texture. The intensity of the emitter can be controlled in software.
The camera consumes less than 0.3 A with the IR emitter enabled while streaming RGB and depth data simultaneously. This makes it a good fit for battery-powered drones, small ground vehicles, and handheld devices.
The MYNT EYE D1200 is recognized by the OS as two video devices (/dev/videoX): one for the RGB stream and another for the depth map. This makes integration with various computer vision frameworks easy, even without the SDK installed. On the Jetson Nano I got 24–30 fps for the RGB stream and 5 fps for the depth stream at 1280x720.
SDK
For the D-series cameras there is an SDK which contains the camera driver sources, a set of examples (samples), and a ROS wrapper. But it looks like the SDK is intended mostly for the D1000 cameras: neither the samples nor the ROS wrapper works with the D1200 out of the box. Fortunately, this is easy to fix (at least for the SDK code samples).
The installation process is simple enough and works fine on both x64 and ARM platforms: clone the SDK source code from GitHub, install the additional libraries, then initialize and install the driver:
$ git clone https://github.com/slightech/MYNT-EYE-D-SDK
$ sudo apt install build-essential cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
$ cd MYNT-EYE-D-SDK
$ make init
$ make install
There are several additional make targets that can be useful (you can get the full list by running make help):
- make samples — build the samples (SDK examples). Binaries will be located in MYNT-EYE-D-SDK/samples/_output/
- make ros — build the ROS wrapper
As mentioned before, the SDK driver and samples do not work with the D1200 camera out of the box. Let’s fix that.
To check whether the driver works, I will use two applications from the SDK samples: get_device_info and get_points2.
First, let’s run get_points2 and see if it works. The application will return an error containing this line:
This error is thrown by the Device::CompatibleMJPG() method. The method works well with the D1000 camera but does not account for the modes of the MYNT EYE D1200. For our case we can simply disable this check by commenting out line 478 in src/mynteyed/device/device.cc:
Another patch is also needed in src/mynteyed/device/device.cc. The issue is that the Depth mode is selected incorrectly based on the Color mode. If you run get_device_info, you will get the following mode list for the D1200:
And if we look at the code responsible for selecting the Depth mode index, we will notice that only the height is considered:
Thus, if during camera setup we set the Color and Depth mode to 1280x720 (Width x Height), the first Depth mode index selected will be 0 (640x720), and that Depth mode is not compatible with the selected Color mode.
Through experimentation I found that the only Color and Depth modes usable for my purposes are 640x480 and 1280x720. So simply adding a check for the width fixes the issue:
The last code change concerns the sample itself. In the camera parameters we should set the Color and Depth stream formats explicitly:
Now, after recompiling with the make samples command, the point cloud sample works:
In the texture you can see a specific pattern. This is light from the IR emitter, which helps produce a better-quality point cloud in dark environments or on textureless surfaces (a painted white wall, for example).
How to use with ROS
The existing ROS wrapper has lots of parameters and allows flexible configuration, but I wasn’t able to run the provided ROS node with the D1200 camera. As with the rest of the SDK, the ROS wrapper is intended for the D1000 cameras.
I didn’t want to dive deep into the MYNT EYE ROS wrapper code, so I decided to create a simple demo node from scratch.
My demo node is based on the get_points2 SDK sample code and publishes a PointCloud2 topic. You can find the full node on my GitHub page.
The node is simple and self-explanatory. The whole flow lives in the main() function:
int main(int argc, char** argv)
{
  // Initialize ROS
  ros::init(argc, argv, "mynteye_d1200_node");
  ros::NodeHandle nh;

  // Create a ROS publisher for the output point cloud
  ros::Publisher pointCloudPub = nh.advertise<sensor_msgs::PointCloud2>("output", 1);

  // Initialize MYNT EYE camera
  Camera cam;
  DeviceInfo devInfo;
  if (!util::select(cam, &devInfo)) {
    return 1;
  }

  // Set parameters for D1200 camera
  OpenParams params(devInfo.index);
  params.color_stream_format = StreamFormat::STREAM_MJPG;
  params.depth_stream_format = StreamFormat::STREAM_YUYV;
  params.color_mode = ColorMode::COLOR_RECTIFIED;
  params.stream_mode = StreamMode::STREAM_1280x720;
  // Set intensity of IR structured light emitter
  params.ir_intensity = 10;

  cam.Open(params);
  std::cout << std::endl;
  if (!cam.IsOpened()) {
    std::cerr << "Error: Open camera failed" << std::endl;
    return 1;
  }
  std::cout << "Opened " << devInfo.name << " device." << std::endl;

  // Publish the point cloud at the camera frame rate
  ros::Rate loop_rate(params.framerate);
  while (nh.ok()) {
    auto cloud = pcutil::get_point_cloud(&cam, 500.0);
    if (cloud) {
      sensor_msgs::PointCloud2 msg;
      pcl::toROSMsg(*cloud, msg);
      msg.header.stamp = ros::Time::now();
      msg.header.frame_id = "depth_frame";
      pointCloudPub.publish(msg);
    }
    loop_rate.sleep();
  }

  cam.Close();
  return 0;
}
I borrowed the get_point_cloud() utility function to create a PCL point cloud and then convert it to the ROS PointCloud2 format with the PCL function toROSMsg(). The ROS publish rate is set to the same value as the camera frame rate.
In the demo package I’ve added a convenient launch script which starts the demo node and a preconfigured RViz. If you compile the mynteye_d1200_example package and run:
$ roslaunch mynteye_d1200_example d1200_demo.launch
then you will get a result similar to the following:
As you can see, the point cloud looks dense and precise at short distances. I will test the precision and noise level next time.
Conclusion and further plans
The D1200 camera is an interesting sensor which, thanks to its small size, weight, and power consumption, can be used for 3D scanning of medium-sized objects or as a sensor for small ground vehicles and aerial drones.
The fact that the D1200 is recognized by the system as an ordinary video streaming device also makes it easy to start experimenting with, even without installing the SDK and drivers.
Next time I will check whether it is possible to use the MYNT EYE D1200 for 3D map building and navigation, and how it can be integrated into the 4WD CbBot robot’s system.
Stay tuned!
Links
YouTube video: https://youtu.be/CO-Z-ZmVwwU
Product page: https://www.mynteye.com/pages/d12000
SDK documentation: https://mynt-eye-d-sdk.readthedocs.io/en/latest/
Github: https://github.com/Andrew-rw/mynteye_d1200_example